Neural style transfer (NST) refers to a class of software algorithms that manipulate digital images or videos in order to adopt the appearance or visual style of another image. NST algorithms are characterized by their use of deep neural networks for image transformation. Common uses for NST are the creation of artificial artwork from photographs, for example by transferring the appearance of famous paintings to user-supplied photographs. Several notable mobile apps use NST techniques for this purpose, including DeepArt and Prisma. This method has been used by artists and designers around the globe to develop new artwork based on existing style(s).

NST is an example of image stylization, a problem studied for over two decades within the field of non-photorealistic rendering. The first two example-based style transfer algorithms were image analogies[1] and image quilting.[2] Both of these methods were based on patch-based texture synthesis algorithms. Given a training pair of images (a photo and an artwork depicting that photo), a transformation could be learned and then applied to create new artwork from a new photo, by analogy. If no training photo was available, it would need to be produced by processing the input artwork; image quilting did not require this processing step, though it was demonstrated on only one style.

NST was first published in the paper "A Neural Algorithm of Artistic Style" by Leon Gatys et al., originally released to arXiv in 2015,[3] and subsequently accepted by the peer-reviewed CVPR conference in 2016.[4] The original paper used a VGG-19 architecture[5] that had been pre-trained to perform object recognition using the ImageNet dataset. In 2017, Google AI introduced a method[6] that allows a single deep convolutional style transfer network to learn multiple styles at the same time. This algorithm permits style interpolation in real time, even on video media.
This section closely follows the original paper.[4]

The idea of Neural Style Transfer (NST) is to take two images, a content image $\vec{p}$ and a style image $\vec{a}$, and generate a third image $\vec{x}$ that minimizes a weighted combination of two loss functions: a content loss $\mathcal{L}_{\text{content}}(\vec{p}, \vec{x})$ and a style loss $\mathcal{L}_{\text{style}}(\vec{a}, \vec{x})$. The total loss is a linear sum of the two:

$$\mathcal{L}_{\text{NST}}(\vec{p}, \vec{a}, \vec{x}) = \alpha \, \mathcal{L}_{\text{content}}(\vec{p}, \vec{x}) + \beta \, \mathcal{L}_{\text{style}}(\vec{a}, \vec{x})$$

By jointly minimizing the content and style losses, NST generates an image that blends the content of the content image with the style of the style image.

Both the content loss and the style loss measure the similarity of two images. The content similarity is the weighted sum of squared differences between the neural activations of a single convolutional neural network (CNN) on the two images. The style similarity is the weighted sum of squared differences between the Gram matrices within each layer (see below for details). The original paper used a VGG-19 CNN, but the method works for any CNN.

Let $\vec{x}$ be an image input to a CNN, and let $F^l \in \mathbb{R}^{N_l \times M_l}$ be the matrix of filter responses in layer $l$ to the image $\vec{x}$, where $N_l$ is the number of filters in layer $l$ and $M_l$ is the number of spatial positions in each filter's response map. A given input image $\vec{x}$ is encoded in each layer of the CNN by the filter responses to that image, with higher layers encoding more global features but losing detail on local features.

Let $\vec{p}$ be an original image, and let $\vec{x}$ be an image that is generated to match the content of $\vec{p}$.
Let $P^l$ be the matrix of filter responses in layer $l$ to the image $\vec{p}$. The content loss is defined as the squared-error loss between the feature representations of the generated image and the content image at a chosen layer $l$ of the CNN:

$$\mathcal{L}_{\text{content}}(\vec{p}, \vec{x}, l) = \frac{1}{2} \sum_{i,j} \left( F_{ij}^l(\vec{x}) - P_{ij}^l \right)^2$$

where $F_{ij}^l(\vec{x})$ and $P_{ij}^l$ are the activations of the $i$-th filter at position $j$ in layer $l$ for the generated and content images, respectively. Minimizing this loss encourages the generated image to have similar content to the content image, as captured by the feature activations in the chosen layer. The total content loss is a linear sum of the content losses of each layer:

$$\mathcal{L}_{\text{content}}(\vec{p}, \vec{x}) = \sum_l v_l \, \mathcal{L}_{\text{content}}(\vec{p}, \vec{x}, l)$$

where the $v_l$ are positive real numbers chosen as hyperparameters.

The style loss is based on the Gram matrices of the generated and style images, which capture the correlations between different filter responses within each layer of the CNN:

$$\mathcal{L}_{\text{style}}(\vec{a}, \vec{x}) = \sum_{l=0}^{L} w_l E_l, \qquad E_l = \frac{1}{4 N_l^2 M_l^2} \sum_{i,j} \left( G_{ij}^l(\vec{x}) - G_{ij}^l(\vec{a}) \right)^2.$$

Here, $G_{ij}^l(\vec{x})$ and $G_{ij}^l(\vec{a})$ are the entries of the Gram matrices for the generated and style images at layer $l$.
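As a minimal NumPy sketch of the content loss above, assuming the feature maps have already been extracted from a CNN as arrays of shape $(N_l, M_l)$ (the toy arrays below are illustrative placeholders, not real CNN activations):

```python
import numpy as np

def content_loss(F, P):
    """Squared-error content loss (1/2) * sum_ij (F_ij - P_ij)^2 between
    the generated image's feature map F and the content image's feature
    map P at one layer, each of shape (N_l, M_l)."""
    return 0.5 * np.sum((F - P) ** 2)

# Toy feature maps standing in for layer activations (4 filters, 9 positions)
rng = np.random.default_rng(0)
F = rng.standard_normal((4, 9))
P = rng.standard_normal((4, 9))

print(content_loss(P, P))      # identical feature maps give zero loss: 0.0
print(content_loss(F, P) > 0)  # differing feature maps give a positive loss
```

In a full implementation this scalar would be computed at each chosen layer and combined with the per-layer weights $v_l$.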
Explicitly,

$$G_{ij}^l(\vec{x}) = \sum_k F_{ik}^l(\vec{x}) \, F_{jk}^l(\vec{x}).$$

Minimizing this loss encourages the generated image to have similar style characteristics to the style image, as captured by the correlations between feature responses in each layer. The idea is that the correlations between the activation patterns of filters in a single layer capture the "style" on the order of the receptive fields at that layer. As in the previous case, the $w_l$ are positive real numbers chosen as hyperparameters.

The original paper used a particular choice of hyperparameters. For the style loss, $w_l = 0.2$ for the outputs of layers conv1_1, conv2_1, conv3_1, conv4_1, and conv5_1 in the VGG-19 network, and zero otherwise. For the content loss, $v_l = 1$ for conv4_2, and zero otherwise. The ratio $\alpha/\beta \in [5, 50] \times 10^{-4}$.

The image $\vec{x}$ is initially approximated by adding a small amount of white noise to the input image $\vec{p}$ and feeding it through the CNN. The loss is then successively backpropagated through the network, with the CNN weights fixed, in order to update the pixels of $\vec{x}$. After several thousand iterations, an $\vec{x}$ (hopefully) emerges that matches the style of $\vec{a}$ and the content of $\vec{p}$.
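The Gram matrix and the per-layer style term $E_l$ can be sketched the same way, again assuming pre-extracted feature maps of shape $(N_l, M_l)$ (the toy arrays are illustrative):

```python
import numpy as np

def gram(F):
    """Gram matrix G^l_ij = sum_k F^l_ik F^l_jk for a feature map F
    of shape (N_l, M_l); the result has shape (N_l, N_l)."""
    return F @ F.T

def style_layer_loss(F_x, F_a):
    """Per-layer style term E_l = sum_ij (G(x)_ij - G(a)_ij)^2 / (4 N_l^2 M_l^2)."""
    N_l, M_l = F_x.shape
    diff = gram(F_x) - gram(F_a)
    return np.sum(diff ** 2) / (4.0 * N_l**2 * M_l**2)

rng = np.random.default_rng(1)
F_x = rng.standard_normal((4, 9))   # generated image's features: 4 filters, 9 positions
F_a = rng.standard_normal((4, 9))   # style image's features

print(style_layer_loss(F_x, F_x))      # identical features give zero style loss: 0.0
print(style_layer_loss(F_x, F_a) > 0)  # differing features give a positive loss
```

Because the Gram matrix sums over all spatial positions $k$, the style term is insensitive to where features occur in the image, which is what lets it capture texture-like "style" rather than layout.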
As of 2017, when implemented on a GPU, the method takes a few minutes to converge.[8]

In some practical implementations, it is noted that the resulting image has too many high-frequency artifacts, which can be suppressed by adding a total variation term to the total loss.[9] Compared to VGGNet, AlexNet does not work well for neural style transfer.[10]

NST has also been extended to videos.[11]

Subsequent work improved the speed of NST for images by using special-purpose normalizations.[12][8]

A paper by Fei-Fei Li et al. adopted a different regularized loss metric and an accelerated training method to produce results in real time (three orders of magnitude faster than Gatys).[13] Their idea was to use not the pixel-based loss defined above but rather a 'perceptual loss' measuring the differences between higher-level layers within the CNN. They used a symmetric convolution-deconvolution CNN. Training uses a loss function similar to the basic NST method but also regularizes the output for smoothness using a total variation (TV) loss. Once trained, the network may be used to transform an image into the style used during training, using a single feed-forward pass of the network. However, the network is restricted to the single style on which it has been trained.[13]

In a work by Chen Dongdong et al., they explored the fusion of optical flow information into feedforward networks in order to improve the temporal coherence of the output.[14]

Most recently, feature transform based NST methods have been explored for fast stylization that are not coupled to a single specific style and enable user-controllable blending of styles, for example the whitening and coloring transform (WCT).[15]
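The total variation penalty mentioned above can be sketched as follows, here in a squared form over a single-channel image stored as a 2-D NumPy array (the exact TV formulation varies between implementations, so this is one common variant, not the one fixed by the cited papers):

```python
import numpy as np

def tv_loss(img):
    """Squared total variation: sum of squared differences between
    vertically and horizontally adjacent pixels of a 2-D image.
    Penalizing this term discourages high-frequency artifacts."""
    dv = img[1:, :] - img[:-1, :]   # vertical neighbor differences
    dh = img[:, 1:] - img[:, :-1]   # horizontal neighbor differences
    return np.sum(dv ** 2) + np.sum(dh ** 2)

flat = np.ones((8, 8))
noisy = flat + 0.1 * np.random.default_rng(2).standard_normal((8, 8))

print(tv_loss(flat))                   # a constant image has zero total variation: 0.0
print(tv_loss(noisy) > tv_loss(flat))  # pixel noise increases the penalty
```

In practice this term is added to the NST objective with a small weight, trading a little fidelity for smoother output.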
https://en.wikipedia.org/wiki/Neural_style_transfer
Computer facial animation is primarily an area of computer graphics that encapsulates methods and techniques for generating and animating images or models of a character's face. The character can be a human, a humanoid, an animal, a legendary creature or character, etc. Due to its subject and output type, it is also related to many other scientific and artistic fields, from psychology to traditional animation. The importance of human faces in verbal and non-verbal communication, and advances in computer graphics hardware and software, have generated considerable scientific, technological, and artistic interest in computer facial animation.

Although the development of computer graphics methods for facial animation started in the early 1970s, major achievements in this field are more recent and have happened since the late 1980s. The body of work around computer facial animation can be divided into two main areas: techniques to generate animation data, and methods to apply such data to a character. Techniques such as motion capture and keyframing belong to the first group, while morph target animation (more commonly known as blendshape animation) and skeletal animation belong to the second. Facial animation has become well known and popular through animated feature films and computer games, but its applications include many more areas such as communication, education, scientific simulation, and agent-based systems (for example, online customer service representatives). With recent advancements in computational power in personal and mobile devices, facial animation has transitioned from appearing only in pre-rendered content to being created at runtime.

Human facial expression has been the subject of scientific investigation for more than one hundred years. The study of facial movements and expressions started from a biological point of view.
After some older investigations, for example by John Bulwer in the late 1640s, Charles Darwin's book The Expression of the Emotions in Man and Animals can be considered a major departure point for modern research in behavioural biology.

Computer-based facial expression modelling and animation is not a new endeavour. The earliest work with computer-based facial representation was done in the early 1970s. The first three-dimensional facial animation was created by Parke in 1972. In 1973, Gillenson developed an interactive system to assemble and edit line-drawn facial images. In 1974, Parke developed a parameterized three-dimensional facial model.

One of the most important attempts to describe facial movements was the Facial Action Coding System (FACS). Originally developed by Carl-Herman Hjortsjö[1] in the 1960s and updated by Ekman and Friesen in 1978, FACS defines 46 basic facial Action Units (AUs). A major group of these Action Units represents primitive movements of facial muscles in actions such as raising brows, winking, and talking. Eight AUs are for rigid three-dimensional head movements (i.e., turning and tilting left and right and going up, down, forward and backward). FACS has been successfully used both for describing desired movements of synthetic faces and for tracking facial activity.

The early 1980s saw the development of the first physically based muscle-controlled face model by Platt and the development of techniques for facial caricatures by Brennan. In 1985, the animated short film Tony de Peltrie was a landmark for facial animation: it marked the first time computer facial expression and speech animation were a fundamental part of telling the story. The late 1980s saw the development of a new muscle-based model by Waters, the development of an abstract muscle action model by Magnenat-Thalmann and colleagues, and approaches to automatic speech synchronization by Lewis and Hill.
The 1990s saw increasing activity in the development of facial animation techniques and the use of computer facial animation as a key storytelling component, as illustrated in animated films such as Toy Story (1995), Antz (1998), Shrek and Monsters, Inc. (both 2001), and computer games such as The Sims. Casper (1995), a milestone of this decade, was the first movie in which a lead actor was produced exclusively using digital facial animation.

The sophistication of the films increased after 2000. In The Matrix Reloaded and The Matrix Revolutions, dense optical flow from several high-definition cameras was used to capture realistic facial movement at every point on the face. The Polar Express used a large Vicon system to capture upward of 150 points. Although these systems are automated, a large amount of manual clean-up effort is still needed to make the data usable. Another milestone in facial animation was reached by The Lord of the Rings, where a character-specific shape base system was developed. Mark Sagar pioneered the use of FACS in entertainment facial animation, and FACS-based systems developed by Sagar were used on Monster House, King Kong, and other films.

The generation of facial animation data can be approached in different ways: 1) marker-based motion capture on points or marks on the face of a performer, 2) markerless motion capture techniques using different types of cameras, 3) audio-driven techniques, and 4) keyframe animation. The main techniques used to apply facial animation to a character are: 1) morph targets animation, 2) bone-driven animation, 3) texture-based animation (2D or 3D), and 4) physiological models.

Many face animation languages are used to describe the content of facial animation. They can be input to a compatible "player" software which then creates the requested actions. Face animation languages are closely related to other multimedia presentation languages such as SMIL and VRML.
Due to the popularity and effectiveness of XML as a data representation mechanism, most face animation languages are XML-based; the Virtual Human Markup Language (VHML) is one example. More advanced languages allow decision-making, event handling, and parallel and sequential actions. The Face Modeling Language (FML) is an XML-based language for describing face animation.[5] FML supports MPEG-4 Face Animation Parameters (FAPs), decision-making and dynamic event handling, and typical programming constructs such as loops. It is part of the iFACE system.[5]
https://en.wikipedia.org/wiki/Computer_facial_animation
Digital cloning is an emerging technology, involving deep-learning algorithms, that allows one to manipulate currently existing audio, photos, and videos that are hyper-realistic.[1] One of the impacts of such technology is that hyper-realistic videos and photos make it difficult for the human eye to distinguish what is real and what is fake.[2] Furthermore, with various companies making such technologies available to the public, they can bring various benefits as well as potential legal and ethical concerns.

Digital cloning can be categorized into audio-visual (AV), memory, personality, and consumer behaviour cloning.[3] In AV cloning, the creation of a cloned digital version of the digital or non-digital original can be used, for example, to create a fake image, an avatar, or a fake video or audio of a person that cannot be easily differentiated from the real person it is purported to represent. A memory and personality clone, like a mindclone, is essentially a digital copy of a person's mind. A consumer behavior clone is a profile or cluster of customers based on demographics. Truby and Brown coined the term "digital thought clone" to refer to the evolution of digital cloning into a more advanced personalized digital clone that consists of "a replica of all known data and behavior on a specific living person, recording in real-time their choices, preferences, behavioral trends, and decision making processes."[3]

Digital cloning first became popular in the entertainment industry. The idea of digital clones originated from movie companies creating virtual actors of actors who have died. When actors die during a movie production, a digital clone of the actor can be synthesized using past footage, photos, and voice recordings to mimic the real person in order to continue the movie production.[4]

Modern artificial intelligence has allowed for the creation of deepfakes.
This involves manipulation of a video to the point where the person depicted in the video is saying or performing actions he or she may not have consented to.[5] In April 2018, BuzzFeed released a deepfake video of Jordan Peele, which was manipulated to depict former President Barack Obama making statements he had not previously made in public, in order to warn the public about the potential dangers of deepfakes.[6]

In addition to deepfakes, companies such as Intellitar now allow one to easily create a digital clone of themselves by feeding in a series of images and voice recordings. This essentially creates digital immortality, allowing loved ones to interact with representations of those who have died.[7] Digital cloning not only allows one to digitally memorialize their loved ones; it can also be used to create representations of historical figures for use in educational settings.

With the development of the various technologies mentioned above, numerous concerns arise, including identity theft, data breaches, and other ethical concerns. One of the issues with digital cloning is that there is little to no legislation to protect potential victims against these possible problems.[8]

An Intelligent Avatar Platform (IAP) can be defined as an online platform supported by artificial intelligence that allows one to create a clone of themselves.[7] The individual must train his or her clone to act and speak like themselves by feeding the algorithm numerous voice recordings and videos of themselves.[9] Essentially, the platforms are marketed as a place where one 'lives eternally', as users are able to interact with other avatars on the same platform. IAPs are becoming a platform for one to attain digital immortality, along with maintaining a family tree and legacy for following generations to see.[7]

Some examples of IAPs include Intellitar and Eterni.me.
Although most of these companies are still in their developing stages, they are all trying to achieve the same goal of allowing the user to create an exact duplicate of themselves and store every memory they have in their mind in cyberspace.[7] Some include a free version, which only allows the user to choose their avatar from a given set of images and audio. However, with the premium setting, these companies will ask the user to upload photos, videos, and audio recordings of themselves to form a realistic version of themselves.[10] Additionally, to ensure that the clone is as close as possible to the original person, companies also encourage interacting with one's own clone by chatting and answering questions for it. This allows the algorithm to learn the cognition of the original person and apply that to the clone. Intellitar closed down in 2012 because of intellectual property battles over the technology it used.[11]

Potential concerns with IAPs include potential data breaches and not obtaining the consent of the deceased. An IAP must have a strong foundation and responsibility against data breaches and hacking in order to protect the personal information of the dead, which can include voice recordings, photos, and messages.[9] In addition to the risk of personal privacy being compromised, there is also the risk of violating the privacy of the deceased. Although one can give consent to creating a digital clone of themselves before his or her physical death, they are unable to give consent to the actions the digital clone may take.

As described earlier, deepfakes are a form of video manipulation in which one can change the people present by feeding in various images of a specific person. Furthermore, one can also change the voice and words the person in the video says by simply submitting a series of voice recordings of the new person lasting about one or two minutes. In 2018, a new app called FakeApp was released, allowing the public to easily access this technology to create videos.
This app was also used to create the BuzzFeed video of former President Barack Obama.[6][12] With deepfakes, industries can cut the cost of hiring actors or models for films and advertisements by creating videos and films efficiently at a low cost, just by collecting a series of photos and audio recordings with the consent of the individual.[13]

A potential concern with deepfakes is that access is given to virtually anyone who downloads the different apps that offer the same service. With anyone being able to access this tool, some may maliciously use the apps to create revenge porn and manipulative videos of public officials making statements they would never say in real life. This not only invades the privacy of the individual in the video but also raises various ethical concerns.[14]

Voice cloning is an audio deepfake method that uses artificial intelligence to generate a clone of a person's voice. Voice cloning involves a deep learning algorithm that takes in voice recordings of an individual and can synthesize a voice that faithfully replicates the human voice, with great accuracy of tone and likeness.[15]

Cloning a voice requires high-performance computers. Usually, the computations are done on a graphics processing unit (GPU), and they very often resort to cloud computing, due to the enormous amount of calculation needed. Audio data for training has to be fed into an artificial intelligence model; these are often original recordings that provide an example of the voice of the person concerned. Artificial intelligence can use this data to create an authentic voice which can reproduce whatever is typed (called text-to-speech) or spoken (called speech-to-speech). This technology worries many because of its impact on various issues, from political discourse to the rule of law.
Some of the early warning signs have already appeared in the form of phone scams[16][17] and fake videos on social media of people doing things they never did.[18]

Protections against these threats can be implemented primarily in two ways. The first is to create a way to analyze or detect the authenticity of a video. This approach will inevitably be an uphill battle as ever-evolving generators defeat these detectors. The second is to embed creation and modification information in software or hardware.[19][20] This would work only if the data were not editable; the idea would be to create an inaudible watermark that would act as a source of truth.[21] In other words, we could know whether a video is authentic by seeing where it was shot, produced, edited, and so on.[15]

15.ai, a non-commercial freeware web application that began as a proof of concept of the democratization of voice acting and dubbing using technology, gives the public access to such technology.[22] Its gratis and non-commercial nature (with the only stipulation being that the project be properly credited when used[23]), ease of use, and substantial improvements over current text-to-speech implementations have been lauded by users;[24][25][26] however, some critics and voice actors have questioned the legality and ethicality of leaving such technology publicly available and readily accessible.[22][27][28][29]

Although this application is still in the developmental stage, it is rapidly developing, as big technology corporations such as Google and Amazon are investing vast amounts of money in its development.[30]

Some of the positive uses of voice cloning include the ability to synthesize millions of audiobooks without the use of human labor.[31] Voice cloning has also been used to translate podcast content into different languages using the podcaster's voice.[32] In addition, those who have lost their voice can regain a sense of individuality by creating a voice clone from recordings of themselves
speaking before they lost their voices.[33]

On the other hand, voice cloning is also susceptible to misuse. An example of this is the cloning of the voices of celebrities and public officials, where the cloned voice may say something to provoke conflict even though the actual person has no association with what their voice said.[34]

In recognition of the threat that voice cloning poses to privacy, civility, and democratic processes, institutions including the Federal Trade Commission, the U.S. Department of Justice, the Defense Advanced Research Projects Agency (DARPA), and the Italian Ministry of Education, University and Research (MIUR) have weighed in on various audio deepfake use cases and methods that might be used to combat them.[35][36][37]

Digital cloning can be useful in an educational setting to create a more immersive experience for students. Some students may learn better through a more interactive experience, and creating deepfakes can enhance students' learning. One example of this is creating a digital clone of a historical figure, such as Abraham Lincoln, to show what problems he faced during his life and how he was able to overcome them. Another example of using digital clones in an educational setting is having speakers create digital clones of themselves. Various advocacy groups may have trouble with schedules as they tour various schools during the year. However, by creating digital clones of themselves, their clones can present the topic at places where the group could not physically go. These educational benefits can bring students a new way of learning as well as give access to those who previously were unable to access resources due to environmental conditions.[13]

Although digital cloning has already been in the entertainment and arts industry for a while, artificial intelligence can greatly expand the uses of these technologies in the industry.
The movie industry can create ever more hyper-realistic versions of actors and actresses who have died. Additionally, the movie industry can create digital clones for movie scenes that may require extras, which can help cut the cost of production immensely. However, digital cloning and other technologies can also be beneficial for non-commercial purposes. For example, artists can be more expressive if they are looking to synthesize avatars to be part of their video production. They can also create digital avatars to draft up their work and help formulate their ideas before moving on to the final work.[13]

Actor Val Kilmer lost his voice in 2014 after a tracheotomy due to his throat cancer. However, he partnered with an AI company that produced a synthetic voice based on his previous recordings. The voice enabled Kilmer to retake his "Iceman" role from the 1986 film Top Gun in the 2022 sequel Top Gun: Maverick.[38]

Although digital immortality has existed for a while, as social media accounts of the deceased continue to remain in cyberspace, creating a virtual clone that is immortal takes on a new meaning. With the creation of a digital clone, one can capture not only one's own visual presence but also one's mannerisms, including personality and cognition. With digital immortality, one can continue to interact with a representation of their loved ones after they have died. Furthermore, families can connect with the representations of multiple generations, forming a family tree, in a sense, to pass on the family legacy to future generations, providing a way for history to be passed down.[7]

With a lack of regulation of deepfakes, several concerns have arisen.
Deepfake videos that can bring potential harm include depictions of political officials displaying inappropriate behavior, police officers shown shooting unarmed black men, and soldiers murdering innocent civilians, which may begin to appear even though the events never occurred in real life.[39] With such hyper-realistic videos being released on the Internet, it becomes very easy for the public to be misinformed, which could lead people to take actions that contribute to a vicious cycle of unnecessary harm. Additionally, with the recent rise in fake news, there is also the possibility of combining deepfakes and fake news. This will make it even more difficult to distinguish what is real from what is fake. Visual information can be very convincing to the human eye; therefore, the combination of deepfakes and fake news can have a detrimental effect on society.[13] Strict regulations should be made by social media companies and other news platforms.[40]

Another way deepfakes can be used maliciously is for one person to sabotage another on a personal level. With the increased accessibility of technologies for creating deepfakes, blackmailers and thieves are able to easily extract personal information for financial gain and other reasons by creating videos of loved ones of the victim asking for help.[13] Furthermore, voice cloning can be used maliciously by criminals to make fake phone calls to victims. The phone calls will have the exact voice and mannerisms of the impersonated individual, which can trick the victim into giving private information to the criminal without knowing it.[41] Alternatively, a bad actor could, for example, create a deepfake of a person superimposed onto a video to extract blackmail payment and/or as an act of revenge porn.

Deepfakes and voice clones created for personal use can be extremely difficult to pursue under the law because there is no commercial harm.
Rather, the damages often come in the form of psychological and emotional harm, making it difficult for a court to provide a remedy.[5]

Although numerous legal problems arise with the development of such technology, there are also ethical problems that may not be protected under current legislation. One of the biggest problems that comes with the use of deepfakes and voice cloning is the potential for identity theft. However, identity theft by means of deepfakes is difficult to prosecute because there are currently no laws specific to deepfakes. Furthermore, the damage that malicious use of deepfakes can bring is more psychological and emotional than financial, which makes it more difficult to provide a remedy for. Allen argues that the way one's privacy should be treated is similar to Kant's categorical imperative.[5]

Another ethical implication is the private and personal information one must give up to use the technology. Because digital cloning, deepfakes, and voice cloning all use deep-learning algorithms, the more information the algorithm receives, the better the results.[42] However, every platform carries a risk of data breach, which could potentially lead to very personal information being accessed by groups the users never consented to. Furthermore, post-mortem privacy comes into question when family members of a loved one try to gather as much information as possible to create a digital clone of the deceased, without the deceased having given permission regarding how much information they were willing to give up.[43]

In the United States, copyright law requires some type of originality and creativity in order to protect the author's individuality. However, creating a digital clone simply means taking personal data, such as photos, voice recordings, and other information, in order to create a virtual person that is as close as possible to the actual person. In the decision of the Supreme Court case Feist Publications Inc. v.
Rural Telephone Service Company, Inc., Justice O'Connor emphasized the importance of originality and some degree of creativity. However, the required extent of originality and creativity is not clearly defined, creating a gray area in copyright law.[44] Creating a digital clone requires not only the person's data but also the creator's input on how the digital clone should act or move. In Meshwerks v. Toyota, this question was raised, and the court stated that the same copyright laws created for photography should be applied to digital clones.[44] With the current lack of legislation to protect individuals against potential malicious use of digital cloning, the right of publicity may be the best way to protect one in a legal setting.[4] The right of publicity, also referred to as personality rights, gives individuals autonomy in controlling their own voice, appearance, and other aspects that essentially make up their personality in a commercial setting.[45] If a deepfake video or digital clone of a person arises without their consent, depicting the individual taking actions or making statements out of character, they can take legal action by claiming that it violates their right of publicity. Although the right of publicity specifically protects the image of an individual in a commercial setting, which requires some type of profit, some argue that the legislation could be updated to protect virtually anyone's image and personality.[46] Another important note is that the right of publicity is implemented only in certain states, and those states may interpret the right differently. Digital clones and digital thought clones raise legal issues relating to data privacy, informed consent, anti-discrimination, copyright, and the right of publicity.
More jurisdictions urgently need to enact legislation similar to the General Data Protection Regulation in Europe to protect people against unscrupulous and harmful uses of their data and the unauthorised development and use of digital thought clones.[3] One way to avoid becoming a victim of the technology mentioned above is to develop artificial intelligence against these algorithms. Several companies have already developed artificial intelligence that can detect manipulated images by looking at the patterns in each pixel.[47] By applying similar logic, they are trying to create software that takes each frame of a given video and analyzes it pixel by pixel in order to find the pattern of the original video and determine whether or not it has been manipulated.[48] In addition to developing new technology that can detect video manipulation, many researchers are urging private corporations to create stricter guidelines to protect individual privacy.[30] As artificial intelligence begins to appear in virtually every aspect of society, including medicine, education, politics, and the economy, it becomes important to have laws that protect human rights. As the private sector gains more digital power over the public, it is important to set strict regulations and laws to prevent private corporations from using personal data maliciously. Additionally, the history of data breaches and violations of privacy policies should serve as a warning for how personal information can be accessed and used without a person's consent.[8] Another way to avoid being harmed by these technologies is to educate people on the pros and cons of digital cloning.
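The frame-by-frame, pixel-level analysis described above can be sketched as follows. This is a minimal illustration, not any company's actual detector: it scores each frame's high-pass noise residual and flags frames whose residual statistics deviate sharply from the rest of the video, on the assumption that manipulation disturbs the original pixel noise pattern. The function names and the z-score heuristic are illustrative choices.

```python
import numpy as np

def frame_residual_energy(frame):
    # High-pass residual: subtract a local average of diagonal neighbors
    # to expose pixel-level noise, which manipulation tends to disturb.
    blurred = (frame[:-2, :-2] + frame[:-2, 2:] +
               frame[2:, :-2] + frame[2:, 2:]) / 4.0
    residual = frame[1:-1, 1:-1] - blurred
    return float(np.mean(residual ** 2))

def flag_suspect_frames(frames, z_threshold=3.0):
    # Score every frame, then flag frames whose residual energy deviates
    # strongly from the video's own baseline statistics.
    scores = np.array([frame_residual_energy(f) for f in frames])
    mean, std = scores.mean(), scores.std()
    if std == 0:
        return []
    return [i for i, s in enumerate(scores) if abs(s - mean) / std > z_threshold]
```

Real detectors are far more sophisticated (learned features, compression forensics), but the structure is the same: a per-frame score followed by an outlier decision across the whole video.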
Doing so empowers each individual to make a rational decision based on their own circumstances.[49] Furthermore, it is also important to educate people on how to protect the information they put out on the Internet. By increasing the digital literacy of the public, people have a greater chance of determining whether a given video has been manipulated, as they can be more skeptical of the information they find online.[30]
https://en.wikipedia.org/wiki/Digital_cloning
Digital face replacement is a computer-generated imagery effect used in motion picture post-production.[1] It is commonly used to make an actor's body double or stunt double look as if they are the original actor. Possibly the earliest use of face replacement was in the 1993 movie Jurassic Park.[1] Digital face replacement has also been used to finish an actor's performance in the event of their death during shooting. Examples include the use of face replacement to double for Brandon Lee after his death during the shooting of The Crow (1994),[2] and the use of face replacement to complete Oliver Reed's performance in Gladiator (2000).[3] There are publicly accessible online platforms that enable users to perform digital face swapping. One example is dailyfakes.com, a website that offers face swap functionality directly in the browser.[4]
https://en.wikipedia.org/wiki/Digital_face_replacement
Facial motion capture is the process of electronically converting the movements of a person's face into a digital database using cameras or laser scanners. This database may then be used to produce computer graphics (CG) and computer animation for movies, games, or real-time avatars. Because the motion of CG characters is derived from the movements of real people, the result is a more realistic and nuanced computer character animation than if the animation were created manually. A facial motion capture database describes the coordinates or relative positions of reference points on the actor's face. The capture may be in two dimensions, in which case the capture process is sometimes called "expression tracking", or in three dimensions. Two-dimensional capture can be achieved using a single camera and capture software. This produces less sophisticated tracking, and is unable to fully capture three-dimensional motions such as head rotation. Three-dimensional capture is accomplished using multi-camera rigs or laser marker systems. Such systems are typically far more expensive, complicated, and time-consuming to use. Two predominant technologies exist: marker-based and markerless tracking systems. Facial motion capture is related to body motion capture, but is more challenging due to the higher resolution required to detect and track the subtle expressions possible from small movements of the eyes and lips. These movements are often less than a few millimeters, requiring even greater resolution and fidelity and different filtering techniques than are usually used in full-body capture. The additional constraints of the face also allow more opportunities for using models and rules. Facial expression capture is similar to facial motion capture. It is the process of using visual or mechanical means to manipulate computer-generated characters with input from human faces, or to recognize emotions from a user.
One of the first papers discussing performance-driven animation was published by Lance Williams in 1990. There, he describes "a means of acquiring the expressions of real faces, and applying them to computer-generated faces".[1] Traditional marker-based systems apply up to 350 markers to the actor's face and track the marker movement with high-resolution cameras. This has been used on movies such as The Polar Express and Beowulf to allow an actor such as Tom Hanks to drive the facial expressions of several different characters. Unfortunately, this is relatively cumbersome and makes the actor's expressions overly driven once the smoothing and filtering have taken place. Next-generation systems such as CaptiveMotion utilize offshoots of the traditional marker-based system with higher levels of detail. Active LED marker technology is currently being used to drive facial animation in real time to provide user feedback. Markerless technologies use features of the face, such as nostrils, the corners of the lips and eyes, and wrinkles, and then track them. This technology is discussed and demonstrated at CMU,[2] IBM,[3] the University of Manchester (where much of this started with Tim Cootes,[4] Gareth Edwards and Chris Taylor) and other locations, using active appearance models, principal component analysis, eigen tracking, deformable surface models and other techniques to track the desired facial features from frame to frame. This technology is much less cumbersome, and allows greater expression for the actor. These vision-based approaches also have the ability to track pupil movement, eyelids, occlusion of the teeth by the lips, and the tongue, which are obvious problems in most computer-animated features. Typical limitations of vision-based approaches are resolution and frame rate, both of which are becoming less of an issue as high-speed, high-resolution CMOS cameras become available from multiple sources.
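The PCA step underlying active-appearance-model-style trackers can be sketched as follows. This is a toy illustration under stated assumptions (synthetic landmark vectors, two retained modes), not any cited system: a linear shape model is learned from training landmark sets, and a new observation is projected onto the principal modes, which constrains tracked features to plausible face shapes.

```python
import numpy as np

def fit_shape_model(training_shapes, n_modes=2):
    # Stack flattened landmark sets, subtract the mean shape, and keep the
    # top principal modes of variation via SVD (the PCA behind AAM trackers).
    X = np.asarray(training_shapes, dtype=float).reshape(len(training_shapes), -1)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_modes]

def project_shape(shape, mean, modes):
    # Express an observed landmark set as mode coefficients, then reconstruct.
    # The reconstruction is restricted to shapes spanned by the training data,
    # which is what makes the tracker robust to noisy per-frame detections.
    coeffs = modes @ (np.asarray(shape, dtype=float).ravel() - mean)
    return coeffs, mean + coeffs @ modes
```

In a real tracker, each video frame's detected features would be projected this way before being passed to the next stage, so that implausible landmark configurations are filtered out.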
The technology for markerless face tracking is related to that of facial recognition systems, since a facial recognition system can potentially be applied sequentially to each frame of video, resulting in face tracking. For example, the Neven Vision system[5] (formerly Eyematics, now acquired by Google) allowed real-time 2D face tracking with no person-specific training; their system was also among the best-performing facial recognition systems in the U.S. Government's 2002 Facial Recognition Vendor Test (FRVT). On the other hand, some recognition systems do not explicitly track expressions, or even fail on non-neutral expressions, and so are not suitable for tracking. Conversely, systems such as deformable surface models pool temporal information to disambiguate and obtain more robust results, and thus could not be applied from a single photograph. Markerless face tracking has progressed to commercial systems such as Image Metrics, which has been applied in movies such as The Matrix sequels[6] and The Curious Case of Benjamin Button. The latter used the Mova system to capture a deformable facial model, which was then animated with a combination of manual and vision tracking.[7] Avatar was another prominent performance-capture movie; however, it used painted markers rather than being markerless. Dynamixyz is another commercial system currently in use. Markerless systems can be classified according to several distinguishing criteria. To date, no system is ideal with respect to all these criteria. For example, the Neven Vision system was fully automatic and required no hidden patterns or per-person training, but was 2D. The Face/Off system[8] is 3D, automatic, and real-time but requires projected patterns. Digital video-based methods are becoming increasingly preferred, as mechanical systems tend to be cumbersome and difficult to use.
Using digital cameras, the input user's expressions are processed to obtain the head pose, which allows the software to then find the eyes, nose and mouth. The face is initially calibrated using a neutral expression. Then, depending on the architecture, the eyebrows, eyelids, cheeks, and mouth can be processed as differences from the neutral expression. This is done, for instance, by looking for the edges of the lips and recognizing them as a unique object. Often contrast-enhancing makeup or markers are worn, or some other method is used to make the processing faster. Like voice recognition, the best techniques are only good 90 percent of the time, requiring a great deal of tweaking by hand, or tolerance for errors. Since computer-generated characters don't actually have muscles, different techniques are used to achieve the same results. Some animators create bones or objects that are controlled by the capture software and move them accordingly, which, when the character is rigged correctly, gives a good approximation. Since faces are very elastic, this technique is often mixed with others, adjusting the weights differently for skin elasticity and other factors depending on the desired expressions. Several commercial companies are developing products that have been used, but are rather expensive.[citation needed] It is expected that this will become a major input device for computer games once the software is available in an affordable format, but the hardware and software do not yet exist, despite research over the last 15 years producing results that are almost usable.[citation needed] The first application to gain wide adoption was communication: initially video telephony and multimedia messaging, and later 3D communication with mixed-reality headsets. With advances in machine learning, compute power, and sensors, especially on mobile phones, facial motion capture technology became widely available.
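The neutral-expression calibration described above can be sketched as follows. The landmark layout and the smile heuristic are illustrative assumptions, not a real system's rig mapping: displacements from a calibrated neutral frame are turned into a clamped [0, 1] control weight that a character rig could consume.

```python
import numpy as np

# Hypothetical landmark layout for illustration: indices 0-1 are mouth corners.
MOUTH_CORNERS = [0, 1]

def expression_deltas(neutral, current):
    # Per-landmark (x, y) displacement from the calibrated neutral pose.
    return np.asarray(current, dtype=float) - np.asarray(neutral, dtype=float)

def smile_weight(neutral, current, scale=0.1):
    # Upward motion of the mouth corners (negative y in image coordinates)
    # is read as a smile; clamp to [0, 1] so it can drive a rig control.
    dy = expression_deltas(neutral, current)[MOUTH_CORNERS, 1].mean()
    return float(np.clip(-dy / scale, 0.0, 1.0))
```

A production system would compute many such weights (brow raise, jaw open, blink) from the same delta field, then feed them to the bones or blendshapes mentioned above.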
Two notable examples are Snapchat's lens feature and Apple's Memoji,[9] which can be used to record messages with avatars or live via the FaceTime app. With these and many other applications, most modern mobile phones are capable of performing real-time facial motion capture. More recently, real-time facial motion capture combined with realistic 3D avatars was introduced to enable immersive communication in mixed reality (MR) and virtual reality (VR). Meta demonstrated their Codec Avatars, communicating via their MR headset Meta Quest Pro to record a podcast with two remote participants.[10] Apple's MR headset Apple Vision Pro also supports real-time facial motion capture that can be used with applications such as FaceTime. Real-time communication applications prioritize low latency to facilitate natural conversation and ease of use, aiming to make the technology accessible to a broad audience. These considerations may limit the achievable accuracy of the motion capture.
https://en.wikipedia.org/wiki/Facial_motion_capture
Fake nude photography is the creation of nude photographs designed to appear as genuine nudes of an individual.[1][2] The motivations for the creation of these modified photographs include curiosity, sexual gratification, the stigmatization or embarrassment of the subject, and commercial gain, such as through the sale of the photographs via pornographic websites.[1][3][4][5][6] Fakes can be created using image editing software or through machine learning. Fake images created using the latter method are called deepfakes. Magazines such as Celebrity Skin published non-fake paparazzi shots and illicitly obtained nude photos, showing there was a market for such images.[7] Subsequently, some websites hosted fake nude or pornographic photos of celebrities, which are sometimes referred to as celebrity fakes. In the 1990s and 2000s, fake nude images of celebrities proliferated on Usenet and on websites, leading to campaigns to take legal action against the creators of the images[8][9] and websites dedicated to determining the veracity of nude photos.[10] "Deepfakes", which use artificial neural networks to superimpose one person's face onto an image or video of someone else, were popularized in the late 2010s, leading to concerns about the technology's use in fake news and revenge porn.[11][12] Fake nude photography is sometimes confused with deepfake pornography, but the two are distinct. Fake nude photography typically starts with human-made non-sexual images, and merely makes it appear that the people in them are nude (but not having sex). Deepfake pornography typically starts with human-made sexual (pornographic) images or videos, and alters the actors' facial features to make the participants in the sexual act look like someone else.[citation needed] In June 2019, a downloadable Windows and Linux application called DeepNude was released, which used a generative adversarial network to remove clothing from images of women. The images it produced were typically not pornographic, merely nude.
Because more images of nude women than of nude men were available to its creator, the images it produced were all female, even when the original was male. The app had both a paid and an unpaid version.[13] A few days later, on June 27, the creators removed the application and refunded consumers, although various copies of the app, both free and paid, continue to exist.[14] On GitHub, the open-source version of this program, called "open-deepnude", was deleted.[15] The open-source version had the advantage that it could be trained on a larger dataset of nude images to increase the accuracy of the resulting nude image.[16] A successor free software application, Dreamtime, was later released, and some copies of it remain available, though some have been suppressed. In July 2019, a deepfake bot service was launched on the messaging app Telegram that used AI technology to create nude images of women. The service was free and enabled users to submit photos and receive manipulated nude images within minutes. The service was connected to seven Telegram channels, including the main channel that hosts the bot, technical support, and image sharing channels. While the total number of users was unknown, the main channel had over 45,000 members. As of July 2020, it is estimated that approximately 24,000 manipulated images had been shared across the image sharing channels.[17] By late 2024, most ways to produce nude images from photographs of clothed people were accessible at websites rather than in apps, and required payment.[18] The reasons for the creation of fake nude photos may include a need to discredit the target publicly, personal hatred for the target, or the promise of pecuniary gain for the creator of such photos.[1][3][4] Fake nude photos often target prominent figures such as businesspeople or politicians. "They are fake nudes, altered in Photoshop, and it is one of many tactics that has been used to silence me."
In 2010, 97 people were arrested in Korea after spreading fake nude pictures of the group Girls' Generation on the internet.[19] In 2011, a 53-year-old Incheon man was arrested after spreading more fake pictures of the same group.[20][21][22] In 2012, South Korean police identified 157 Korean artists of whom fake nudes were circulating.[23] Also in 2012, when fake nude photos of Liu Yifei were released online, her company, Red Star Land, announced a legal effort to find out who created and released the photos.[24][25] In the same year, nude photos of Chinese actor Huang Xiaoming were released and sparked public controversy, but they were ultimately proven to be real pictures.[26] In 2014, supermodel Kate Upton threatened to sue a website for posting fake nude photos of her.[27] Previously, in 2011, the same website had been threatened by Taylor Swift.[28] In November 2014, singer Rain was angered by a fake nude photo that spread across the internet, with reports claiming that "Rain's nude photo was released from Kim Tae-hee's lost phone." Rain's label, Cube Entertainment, stated that the person in the nude photo is not Rain, and that it would take strict legal action against those who posted the photo together with false comments.[29][30] In July 2018, Seoul police launched an investigation after a fake nude photo of President Moon Jae-in was posted on the website of the Korean radical feminist group WOMAD.[31] In early 2019, Alexandria Ocasio-Cortez, a Democratic politician, was berated by other political parties over a fake nude photo of her in the bathroom. The picture created a huge wave of media controversy in the United States.[32][33][34][35] Fake nude images can be created using image editing software or neural network applications.[12][36] There are two basic methods.[37] Images of this type may have a negative psychological impact on the victims and may be used for extortion purposes.[1][39][40]
https://en.wikipedia.org/wiki/Fake_nude_photography
Fifth-generation warfare (5GW) is warfare that is conducted primarily through non-kinetic military action, such as social engineering, misinformation, and cyberattacks, along with emerging technologies such as artificial intelligence and fully autonomous systems. Fifth-generation warfare has been described by Daniel Abbot as a war of "information and perception".[1] There is no widely agreed-upon definition of fifth-generation warfare,[2] and it has been rejected by some scholars, including William S. Lind, who was one of the original theorists of fourth-generation warfare.[3] The term "fifth-generation warfare" was first used in 2003 by Robert Steele. The following year, Lind criticised the concept, arguing that the fourth generation had yet to fully materialize.[4] In 2008, the term was used by Terry Terriff,[5] who presented the 2003 ricin letters as a potential example, but stated that he was not entirely sure if it was a fifth-generation attack, claiming "we may not recognize it as it resolves around us. Or we might look at several alternative futures and see each as fifth generation."[5] Terriff argued that while fifth-generation warfare allows "super-empowered individuals" to make political statements through terrorism, they lack the political power to actually have their demands met.[6] Alex P. Schmid said that fifth-generation warfare is typified by its "omnipresent battlefield", and by the fact that people engaged in it do not necessarily use military force, instead employing a mixture of kinetic and non-kinetic force.[7] In the 1999 book Unrestricted Warfare, colonels Qiao Liang and Wang Xiangsui of the People's Liberation Army noted that in the years since the 1991 Gulf War, conventional military violence had decreased, correlating with an increase in "political, economic, and technological violence", which they argued could be more devastating than a conventional war.[8] On the contrary, Thomas P. M.
Barnett believes that the effectiveness of fifth-generation warfare is exaggerated, as terrorism conducted by individuals, such as Timothy McVeigh or Ted Kaczynski, lacks the support of more organized movements. This was seconded by George Michael, who noted that in the United States, gang violence was responsible for far more deaths than lone-wolf terrorist attacks.[9] L.C. Rees described the nature of fifth-generation warfare as difficult to define in itself, alluding to the third law of science fiction author Arthur C. Clarke: "any sufficiently advanced technology is indistinguishable from magic."[10]
https://en.wikipedia.org/wiki/Fifth-generation_warfare
Hyperreality is a concept in post-structuralism that refers to the process of the evolution of notions of reality, leading to a cultural state of confusion between signs and symbols invented to stand in for reality, and direct perceptions of consensus reality.[1] Hyperreality is seen as a condition in which, because of the compression of perceptions of reality in culture and media, what is generally regarded as real and what is understood as fiction are seamlessly blended together in experiences, so that there is no longer any clear distinction between where one ends and the other begins.[2] The term was proposed by French philosopher Jean Baudrillard, whose postmodern work contributed to a scholarly tradition in the field of communication studies that speaks directly to larger social concerns. Postmodernism was established through the social turmoil of the 1960s, spurred by social movements that questioned preexisting conventions and social institutions. Through the postmodern lens, reality is viewed as a fragmented, complementary and polysemic system with components that are produced by social and cultural activity. Social realities that constitute consensus reality are constantly produced and reproduced, changing through the extended use of signs and symbols, which hence contribute to the creation of a greater hyperreality. The postmodern semiotic concept of hyperreality was contentiously coined by Baudrillard in Simulacra and Simulation (1981),[3] building on his earlier book Symbolic Exchange and Death. Baudrillard defined "hyperreality" as "the generation by models of a real without origin or reality".[4] Hyperreality is a representation, a sign, without an original referent.
According to Baudrillard, the commodities in this theoretical state do not have use-value as defined by Karl Marx but can be understood as signs as defined by Ferdinand de Saussure.[5] He believes hyperreality goes further than confusing or blending the 'real' with the symbol which represents it; it involves creating a symbol or set of signifiers which represent something that does not actually exist, like Santa Claus. Baudrillard borrows, from Jorge Luis Borges' "On Exactitude in Science" (itself borrowed from Lewis Carroll), the example of a society whose cartographers create a map so detailed that it covers the very things it was designed to represent. When the empire declines, the map fades into the landscape.[6] He says that, in such a case, neither the representation nor the real remains, just the hyperreal. Baudrillard's idea of hyperreality was heavily influenced by phenomenology, semiotics, and the philosophy of Marshall McLuhan. Baudrillard, however, challenges McLuhan's famous statement that "the medium is the message" by suggesting that information devours its own content.
He also suggested that there is a difference between the media and reality, and what they represent.[6] Hyperreality is the inability of consciousness to distinguish reality from a simulation of reality, especially in technologically advanced societies.[7] However, Baudrillard's hyperreality theory goes a step further than McLuhan's medium theory: "There is not only an implosion of the message in the medium, there is, in the same movement, the implosion of the medium itself in the real, the implosion of the medium and of the real in a sort of hyperreal nebula, in which even the definition and distinct action of the medium can no longer be determined".[8] Italian author Umberto Eco explores the notion of hyperreality further by suggesting that the action of hyperreality is to desire reality and, in the attempt to achieve that desire, to fabricate a false reality that is to be consumed as real.[9] Linked to contemporary Western culture, Umberto Eco and post-structuralists would argue that in current cultures, fundamental ideals are built on desire and particular sign-systems. Temenuga Trifonova of the University of California, San Diego notes that "[...] it is important to consider Baudrillard's texts as articulating an ontology rather than an epistemology."[10] Hyperreality is significant as a paradigm to explain current cultural conditions. Consumerism, because of its reliance on sign exchange value (e.g. brand X shows that one is fashionable, car Y indicates one's wealth), could be seen as a contributing factor in the creation of hyperreality or the hyperreal condition. Hyperreality tricks consciousness into detaching from any real emotional engagement, instead opting for artificial simulation and endless reproductions of fundamentally empty appearance.
Essentially (although Baudrillard himself may balk at the use of this word), fulfillment or happiness is found through simulation and imitation of a transient simulacrum of reality, rather than any interaction with any "real" reality.[11] While hyperreality is not a new concept, its effects are more relevant in modern society, which incorporates technological advancements like artificial intelligence, virtual reality and neurotechnology (simulated reality). This is attributed to the way the concept effectively captured the postmodern condition, particularly how people in the postmodern world seek stimulation by creating unreal worlds of spectacle and seduction and nothing more.[12] There are dangers in the use of hyperreality within our culture; individuals may observe and accept hyperreal images as role models when the images don't necessarily represent real physical people. This can result in a desire to strive for an unobtainable ideal, or it may lead to a lack of unimpaired role models. Daniel J. Boorstin cautions against confusing celebrity worship with hero worship: "we come dangerously close to depriving ourselves of all real models.
We lose sight of the men and women who do not simply seem great because they are famous but who are famous because they are great".[13] He bemoans the loss of old heroes like Moses, Julius Caesar and Abraham Lincoln, who did not have public relations (PR) agencies to construct hyperreal images of themselves.[14] The dangers of hyperreality are also facilitated by information technologies, which provide tools to dominant powers that seek to encourage it to drive consumption and materialism.[15] The danger in the pursuit of stimulation and seduction emerges not in the lack of meaning but, as Baudrillard maintained, in that "we are gorged with meaning and it is killing us."[16] Hyperreality, some sources point out, may provide insights into the postmodern movement by analyzing how simulations disrupt the binary opposition between reality and illusion, but it does not address or resolve the contradictions inherent in this tension.[17] The concepts most fundamental to hyperreality are those of simulation and the simulacrum, first conceptualized by Jean Baudrillard in his book Simulacra and Simulation. The two terms are separate entities with relational origin connections to Baudrillard's theory of hyperreality. Simulation is characterized by a blending of 'reality' and representation, where there is no clear indication of where the former stops and the latter begins. Simulation is no longer that of a territory, a referential being, or a substance; "It is the generation by models of a real without origin or reality: a hyperreal."[18] Baudrillard suggests that simulation no longer takes place in a physical realm; it takes place within a space not categorized by physical limits, i.e., within ourselves, technological simulations, etc.
The simulacrum is "an image without resemblance"; as Gilles Deleuze summarized, it is the forsaking of "moral existence in order to enter into aesthetic existence".[19] However, Baudrillard argues that a simulacrum is not a copy of the real, but becomes, through sociocultural compression, truth in its own right. There are four steps of hyperreal reproduction. The concept of "hyperstition", as expounded upon by the English collective Cybernetic Culture Research Unit, generalizes the notion of hyperreality to encompass the concept of "fictional entities that make themselves real". In Nick Land's own words:[21] Hyperstition is a positive feedback circuit including culture as a component. It can be defined as the experimental (techno-)science of self-fulfilling prophecies. Superstitions are merely false beliefs, but hyperstitions – by their very existence as ideas – function causally to bring about their own reality. The concept of hyperstition is also related to the concept of "theory-fiction", in which philosophy, critical theory and postmodern literature speculate on actual reality and engage with concepts for potentialities and virtualities. An oft-cited example of such a concept is cyberspace, originating in William Gibson's 1984 novel Neuromancer, which is a concept for the convergence between virtualities and actualities.[22] By the mid-1990s, the realization of this concept had begun to emerge on a mass scale in the form of the internet. Truth was already being called into question with the rise of media and technology, but the widespread embrace of hyperreality as a new technology brings a number of issues and consequences.
It is difficult enough to hear something on the news and choose not to believe it, but it is quite another thing to see an image of an event and use one's empirical senses to determine whether the news is true or false; this is one of the consequences of hyperreality.[23] The first is the possibility of various simulations being used to influence the audience, resulting in an inability to differentiate fiction from reality, which affects the overall truth value of the subject at hand. Another implication or disadvantage is the possibility of being manipulated by what we see. The audience can interpret different messages depending on the ideology of the entity behind an image. As a result, power equates to control over the media and the people.[24] Celebrities, for example, have their photographs taken and altered so that the public sees only the final result. The public then perceives celebrities based on what they have seen rather than how they truly are. It can progress to the point where celebrities appear completely different. As a result of celebrities' body modifications and edited images, there has been an increase in surgeries and a decrease in self-esteem during adolescence.[25] Because the truth is threatened, a similar outcome for hyperreality is possible. There is a strong link between media and the impact that the presence of hyperreality has on its viewers. Hyperreality has been shown to blur the lines between artificial realities and reality, influencing the day-to-day experiences of those exposed to it.[26] As hyperreality captures the inability to distinguish reality from a simulation of reality, common media outlets such as news, social media platforms, radio and television contribute to this misconception of true reality.[27] Descriptions of the impact of hyperreality can be found in popular media. They present themselves as becoming blended with reality, which influences the experience of life and truth for its viewers.
Baudrillard, like Roland Barthes before him, explained that these impacts have a direct effect on younger generations who idolize the heroes, characters or influencers found on these platforms. As media is a social institution that shapes and develops its members within society, exposure to the hyperreality found within these platforms has a lasting effect.[28] Baudrillard concludes that exposure to hyperreality over time will lead, from the conservative perspective of the institutions themselves, to confusion and chaos, in turn leading to the destruction of identity, originality and character while ironically remaining the mainstay of the institutions. The hyperreality environment on the internet shifted dramatically over the course of the COVID-19 pandemic, so much so that it influenced the Italian Stock Exchange in 2021.[29]

The Hollywood sign in Los Angeles, California, produces similar notions, but is more a symbol of a facet of hyperreality: the creation of a city whose main target is media production.[30]

Both Umberto Eco and Jean Baudrillard refer to Disneyland as an example of hyperreality. Eco believes that Disneyland, with settings such as Main Street and full-sized houses, has been created to look "absolutely realistic", taking visitors' imagination to a "fantastic past".[31] This false reality creates an illusion and makes it more desirable for people to buy this reality. Disneyland works in a system that enables visitors to feel that technology and the created atmosphere "can give us more reality than nature can".[32] The "fake nature" of Disneyland satisfies our imagination and daydream fantasies in real life. The idea is that nothing in this world is real; nothing is original, but all are endless copies of reality.
Since we do not imagine the reality of simulations, both the imagined and the real are equally hyperreal; for example, the numerous simulated rides, including the submarine ride and the Mississippi boat tour.[8] When entering Disneyland, consumers form lines to gain access to each attraction. They are then directed by people in special uniforms to follow rules such as where to stand or where to sit. If consumers follow each rule correctly, they can enjoy "the real thing" and see things that are not available to them outside of Disneyland's doors.[33]
https://en.wikipedia.org/wiki/Hyperreality
Identity replacement technology is any technology that is used to cover up all or part of a person's identity, either in real life or virtually. This can include face masks, face authentication technology, and deepfakes on the Internet that spread fake editing of videos and images.[1] Face replacement and identity masking are used by both criminals and law-abiding citizens. When operated by criminals, identity replacement technology enables heists and robberies; law-abiding citizens use it to prevent governments or other entities from tracking private information such as locations, social connections, and daily behaviors.

Online identity theft, information stealing, and deepfakes are all methods used by hackers to replace or alter the identity of a victim. Alongside these hacking methods are countermeasures: face liveness detection, obfuscation of crucial information, and location privacy obfuscation. More advanced obfuscation technology can cover up the location of a person through privacy protection. The main method of achieving this kind of obfuscation is to replace personal information, such as a person's location, with anonymous identities and operators or trackers.[2] There is also research on the effectiveness and use of biometric identity authentication, such as fingerprints and faces, to replace personal identifiers such as one's SSN.

In biotechnology, gene sequencing and identity adjustment are common areas of research for identity replacement. With cutting-edge technology, it is possible to change the identity of a person or of an offspring.[3] With the advancement of science come the ethical issues of cloning, identity change, and societal and organizational transformation.

Feature replacement technology is any technology that changes, alters, hides, or misrepresents a person's features.
This can include fingerprint replacement, face replacement, pupil authentication replacement, and similar techniques. The technology involved in feature replacement ranges from masking to the creation of 3D videos and images.[4]

A variety of technologies attempt to fool facial recognition software through the use of anti-facial-recognition masks.[5] 3D masks that replace body features, usually faces, can be made from materials such as plastic, cotton, and leather. These identity masks range from realistic imitations of a person to unrealistic characters that hide the wearer's identity. Criminals and hackers use a spectrum of masks depending on the intended objectives of the crime and other environmental factors; crimes involving more planning and execution generally involve more effort in creating the 3D masks.[6]

There are many intended purposes for feature replacement. Cybercriminals or real-life criminals use masks or 3D-generated images of a mask to hide from security systems or pass through security checks. They usually do this by finding the identity of a victim who has access to certain security systems; the criminal then wears a mask in public to conduct fraud and pass through security systems as the victim of the identity theft.[6] This usage of masks and 3D-printed items to cover certain body features while committing crime is illegal under laws such as anti-mask laws.[7]

Another use of face replacement technology is to hide one's identity from third-party trackers, monitors, and government officials.[8] Although uncommon among individuals, this method of hiding one's identity (either online or in person) is mainly used to evade government tracking, for entertainment purposes, and for religious purposes. People may, for example, wear a mask indoors to prevent government tracking.
Deepfakes, a type of identity replacement technology, are pictures or video edits that replace the identity of the person in the picture or video. This digital forgery is used to manipulate information, defame people, and blackmail individuals. Through editing techniques such as face replacement and pixel or coloration implants, deepfakes can closely resemble the real image.[1]

Deepfakes are classified into four types of identity manipulation: face synthesis, identity swap, attribute manipulation, and expression swap.[6] More specific examples include face swapping, lip syncing, motion transfer, and audio generation.[9] Although the better-known use of synthetic media and deepfakes is political disinformation, a less-known phenomenon is financial fraud committed by cybercriminals who use deepfakes to steal financial data and profit from doing so. Hackers and criminals use deepfakes to penetrate social media accounts, the security systems of banks, and the financial information of wealthy individuals.[1] Two scenarios used by hackers and manipulators are narrowcast and broadcast. Deepfake techniques include deepfake voice phishing, fabricated private marks, and synthetic social media profiles containing fake identities.

According to research,[1] deepfake prevention requires collaboration from key stakeholders such as internal firm employees, industry-wide experts, and multi-stakeholder groups. Possible methods of deterring deepfakes include early detection of face mismatches, individual feature analysis of the face, identity confirmation of images or videos, and multi-feature analysis that pinpoints face liveness. Further research is being done on deepfake techniques such as face morphing and face de-identification.
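As a loose illustration of the idea of face-mismatch detection, the toy sketch below compares two face patches with a perceptual "average hash"; it is not any of the cited detection systems, and real detectors use learned features rather than hashes:

```python
import numpy as np

def average_hash(gray_patch: np.ndarray, size: int = 8) -> np.ndarray:
    """Downsample a grayscale patch and threshold against its mean,
    producing a tiny binary fingerprint of its appearance."""
    h, w = gray_patch.shape
    # Crude box downsampling onto a size x size grid.
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    small = gray_patch[np.ix_(ys, xs)]
    return (small > small.mean()).astype(np.uint8)

def mismatch_score(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Fraction of differing hash bits between two patches (0.0 = identical)."""
    return float(np.mean(average_hash(patch_a) != average_hash(patch_b)))

# Toy usage: an unedited patch vs. a heavily altered one.
rng = np.random.default_rng(0)
face = rng.integers(0, 256, (64, 64)).astype(float)
tampered = 255 - face  # inversion stands in for a spliced-in face
print(mismatch_score(face, face))  # 0.0 for identical patches
print(mismatch_score(face, tampered) > 0.5)
```

A real pipeline would run a score like this between face regions of consecutive video frames and flag sudden inconsistencies for closer analysis.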
However, deepfake prevention techniques tend to perform worse against more advanced deepfakes, identification methods sometimes fail to recognize unseen conditions not related to facial analysis, and databases and technology must be kept up to date as deepfake techniques evolve.

Deepfakes can be used as weapons to spread misinformation and threaten democratic systems through identity replacement strategies. Because of their low cost and ease of use, some deepfakes can be used to replace identities and spread misinformation effectively across nations and the international landscape.[9] Hackers can use bots or deepfakes to spread propaganda and disinformation to adversaries, and these attempts can challenge democratic processes internationally. The public may grow distrustful due to the potential use of deepfakes by politicians or foreign countries.

Spoofing, a concept related to deepfakes, is a method of hacking and identity manipulation by impersonating a source trusted by the spoof target or security system. Spoofing attacks can be easily launched because face recognition systems are commonly used to unlock mobile devices.[10] One way hackers get into a system is by using a synthetically forged biometric that fools sensors and grants the hacker a different identity, allowing them to pass through as the real identity.[11] Spoofing can also involve fake physical artifacts, such as printed masks and fingers, that hackers use to manipulate biometric authentication technology.[8] Spoofing attempts on a mass scale create global political, ethical, and economic threats that go beyond a country's borders; mass crimes involving cybersecurity breaches, political hacking, and identity theft can damage the international landscape.[8]

Payment information, personal information, and biometric information are all potential sources of exploitation for hackers.[10] There are anti-spoofing techniques at both the feature level and the sensor level.
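One widely used feature-level cue is micro-texture: a photo or replay of a face tends to have flatter micro-texture than a live face. The sketch below computes per-channel local binary pattern (LBP) histograms as a simple color-texture feature vector; it is a generic illustration, not the specific method of any study cited here:

```python
import numpy as np

def lbp_histogram(channel: np.ndarray) -> np.ndarray:
    """8-neighbour local binary pattern histogram for one image channel."""
    h, w = channel.shape
    centre = channel[1:-1, 1:-1]
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        # Compare each pixel with one of its eight neighbours and set one bit.
        neighbour = channel[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= centre).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def color_texture_features(rgb: np.ndarray) -> np.ndarray:
    """Concatenate per-channel LBP histograms (a joint color-texture feature)."""
    return np.concatenate([lbp_histogram(rgb[..., k]) for k in range(3)])

# Toy usage on a random image; a classifier would be trained on such vectors.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, (32, 32, 3)).astype(np.uint8)
feats = color_texture_features(img)
print(feats.shape)  # (768,)
```

In practice these feature vectors from known live and spoofed samples are fed to a classifier such as an SVM, which then scores new face captures.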
The goal of anti-spoofing is to deter illegitimate users from accessing important and personal information. Four main groups of anti-spoofing techniques are widely used by cybersecurity experts: motion analysis, texture analysis, image quality analysis, and hardware-based analysis that integrates software components.[10] Another anti-spoofing technique uses color texture to analyze the joint color-texture density of facial features in images and videos.[12] By comparing across databases using replay videos of spoof attacks, many of these methods are able to detect the liveness of faces and facial symmetry under a controlled environment.[12]

Anti-spoofing and deepfake identification are prone to errors and attacks. For example, one model uses an attentional adversarial network to generate fake images that match the original pictures in terms of features, face strength, and semantic information.[13] One drawback of such an adversarial network model is that it analyzes only one attack target; however, research is being done on using various models to target multiple attacks.[13] Other shortcomings of anti-spoofing techniques include failure to detect spoofing across databases, failure to generalize to real-life scenarios, and performance issues related to the limits of the technologies involved.[11]

Gene sequencing and gene therapy are cutting-edge technologies used by biotech researchers to discover ways of altering the identities or genes of humans and their offspring. With gene alteration and feature enhancement, one can change the structural identity of an offspring. A related concept is cloning, a more futuristic idea of replicating human beings.[3]

On a broader level, identity change leads to social transformation.[14] Identity change and organizational transformation sometimes occur at the same time; for example, there has been profound socio-political change related to collective and individual identity change in Ireland.
Identity change is also associated with economic, political, and social factors in a changing environment. Individuals retain the right to make personal choices, but these choices are often affected by one's surroundings and immediate environment.[14] Enhancement and alteration of the human body and identity are thus connected to broader social transformations: if society changes and evolves, individuals may choose to evolve with it. Researchers also consider generational factors as biotechnology evolves and advances. Fundamentally, current objections to enhancement biotechnology include questions about the authenticity of biotech enhancement and about the fundamental attributes and values of being human. Key concerns include safety, ethical distribution, and violations of identity traits.[3] Current biotech research seeks to expand upon what human identity means, the connection between gene alteration and human enhancement, and generational offspring alterations. More research is needed in this realm for scientists to determine the viability and ethical issues surrounding advanced biotechnology.[3]

Biometric identification, including face authentication technology, is used by firms, governments, and various other organizations for security checks and personal identification.[8] This technology is especially important for protecting the private materials and information of a firm or government. As security technology evolves, biometric authentication methods are replacing physical IDs, numbers such as the SSN, and personal information written on paper.

3D cameras and depth analysis can be used to detect spoofing and fraudulent data.[4] Biometric identification with a wide range of depth and flexibility can aid the detection of spoofing attempts by hackers and identity thieves.
Liveness assurance and authentication of faces can help prevent face identity manipulation and forgery: liveness detection can use color, depth, the angle of facial features, and other factors to distinguish between fake and real faces. Because it is easy to make a 3D mask or create deepfakes online, fake identities are increasingly common in the tech industry. Common methods used for face authentication include SVM classifiers, image quality assessment, pupil tracking, and color texture analysis.[4] Biometric identification technology with higher flexibility leads to better detection of spoofing attacks.

3D face reconstruction and face alignment can aid biometric identification systems in authenticating the identities of individuals. An end-to-end method called Position Map Regression Network is used to reconstruct 3D facial features from 3D space, for example from an image of a person. Key metrics for measuring the effectiveness of alignment and reconstruction include face reconstruction speed, the runtime of alignment, and the accuracy of facial alignment compared to the original image.[15] By restructuring 3D facial structures and using density to align faces, position maps can convert a 3D face into a 2D image based on a UV-plane analysis. 3D shapes are acquired by 3D sensors, which retrieve specific features within the face shape.[16] Convolutional neural networks are trained to extract facial and semantic information from the 3D image to the 2D image through a process called regression.[15] Overall, this position-map method of facial reconstruction and alignment can be used in cybersecurity authentication, biometric verification, and identity matching.

Fingerprinting is another biometric identification method researched by cybersecurity firms and governments. Like face authentication, fingerprint verification can be used to counter identity theft and potential fraud.
One study uses a minutiae-extraction algorithm to develop an identity-authentication system based on the data and verifiable information it extracts from a fingerprint scan.[17] This model is based on alignment: it matches the input against a stored template to verify a person's identity faster and more accurately. The goal of all biometric authentication methods, including fingerprint identification, is accurate and fast authentication, and systems and alignment technologies are constantly updated to achieve better results. Drawbacks of fingerprint identification include large distortions in poor-quality images, straight-line deformations, vague transformations that affect authentication quality, and missing minutiae in some parts of an image.[17] However, multiple biometric authentication tools, such as face and fingerprint together, can be combined to obtain better and more accurate performance.

The components of 3D sensors, such as key electronic parts and sensor systems, are increasingly made smaller and better, with emphasis on compactness, effectiveness at detecting shapes, portability, and imaging strength.[16] 3D imaging and optical sensors can be expensive, but the cost decreases as manufacturers and suppliers make individual sensor components cheaper and more adaptable to a variety of sensors and cameras. Virtual renders and prototyping tools are integrated into 3D sensor and camera systems to aid facial reconstruction, identity search, and shape design. 3D sensors can also be combined into sensor systems that capture an image more effectively than single sensors or cameras.[16] Applications for 3D sensors include manufacturing, optics, and robotics; key industries that could utilize 3D cameras include robotics, law enforcement, automatic authentication systems, and product development.
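The alignment-based minutiae matching described earlier in this section can be sketched minimally as follows. This toy version aligns only by centroid translation and ignores angle wraparound and rotation, which real matchers must handle:

```python
import numpy as np

def match_score(template: np.ndarray, probe: np.ndarray,
                dist_tol: float = 10.0, ang_tol: float = 0.3) -> float:
    """Fraction of template minutiae (x, y, angle) that find a probe match
    after aligning the probe's centroid to the template's centroid."""
    # Crude alignment: translate probe so the minutiae centroids coincide.
    shift = template[:, :2].mean(axis=0) - probe[:, :2].mean(axis=0)
    aligned = probe.copy()
    aligned[:, :2] += shift
    matched = 0
    for m in template:
        d = np.hypot(aligned[:, 0] - m[0], aligned[:, 1] - m[1])
        da = np.abs(aligned[:, 2] - m[2])  # simplification: no angle wraparound
        if np.any((d < dist_tol) & (da < ang_tol)):
            matched += 1
    return matched / len(template)

# Toy usage: the same fingerprint, purely translated, should match fully.
tmpl = np.array([[10.0, 20.0, 0.1], [40.0, 35.0, 1.2], [25.0, 60.0, 2.0]])
probe = tmpl + np.array([5.0, -3.0, 0.0])
print(match_score(tmpl, probe))  # 1.0
```

A production system thresholds such a score to accept or reject an identity claim, trading off false accepts against false rejects.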
Identity theft occurs when a thief steals the identity of a victim and portrays themselves as that person. Identity theft has implications on both small and large scales. Individual identity theft can be limited to a single person whose identity the thief assumes.[18] Reasons for identity theft include entertainment, malicious hacking, revenge, or political sabotage. Mass-scale identity theft can involve political sabotage, financial and economic heists and crimes, and social changes for the worse.

Identity theft and identity replacement have shaped and affected consumer spending in the financial world over the past years. One method of analyzing identity theft is to map incidents to determine the geographical locations, environmental factors, and purposes of identity theft. The payment instruments used by different payment systems can affect how identity theft is used to obtain financial information and commit financial fraud.[18] Identity theft thus has implications for consumer payment behaviors and adoption. Although customers use different payment methods, geographical areas with more identity theft occurrences tend to see increased use of payment methods such as money orders, traveler's checks, prepaid cards, and credit card payments. Electronic payments are widely used by consumers given society's evolving landscape of payment technology.
However, these payment systems, including transactions by check, card, and cash, require periodic updates to keep up with evolving forms of identity theft.[19]

Given an economy of transactions built on customer data, more opportunities are created for fraudulent transactions, since more consumers are shopping and conducting financial transactions online.[19] A thief could hack data related to common financial items such as product payments, loans, mortgages, stocks, and options trading.[19] One way identity theft can happen is when the thief obtains a service or product but pays with someone else's financial data or account information; the cost of the fraudulent transaction is then attributed to the identity theft victim. A victim's identity can be used multiple times by different thieves using similar or different methods. Solutions to such problems include consumer protections, credit freezes when fraud occurs, credit verification, and penalties and enforcement.

Identity theft can also involve political manipulation and hacking on a large scale that is detrimental to international politics.[8] Identity thieves can use identity replacement methods such as biometric replacement, face masks, deepfakes, and the stealing of personal information to conduct political sabotage. For example, an identity thief could commit voter fraud by posing as one or more individuals who cast ballots, or could hack the social media account of a politician and post scandals or defamation about that politician.

Obfuscation, in its technical sense, means protecting code by making its patterns, structures, and lines anonymous to everyone but the programmer, thereby deterring incoming hacks and shell-injection attacks.
Another use of obfuscation is protecting a person's identity online, including privacy, location, and behavior.[2] Obfuscation operators can be used to determine distribution areas, privacy protections, and location preferences. Probabilistic tools such as the joint distribution function are used to test obfuscation operators and how they can protect the location privacy of individuals without sacrificing certain app features and efficiencies.[2] Obfuscation can thus render location and related information anonymous and useless to potential hackers trying to breach an individual's privacy. Adversary models can be used to form combinations of operators and test their viability based on adversary awareness, utility functions, and the robustness of operator families.[2]

Another obfuscation privacy-protection method protects images online and on social media.[13] The targeted-identity-protection-iterative method (TIP-IM) is used for this type of image-privacy protection: various adversarial models are fed into TIP-IM and the performance of the adversarial networks is examined. By simulating an identity-protection system, the method identifies an adversarial network that interacts with privacy protection results.[20] TIP-IM can thus prevent hackers' unauthorized access to images, accounts, and systems holding sensitive information. There is, however, a trade-off between the effectiveness and the naturalness of the protected face and identity images: the naturalness of faces decreases as image protection becomes more effective.[13]

Obfuscation can be divided into three categories: construction, empirical, and a combination of construction and empirical.
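A simple location-obfuscation operator of the kind discussed above can be sketched with planar Laplace noise, in the style of geo-indistinguishability; this is a generic illustration, not the operator families of the cited study:

```python
import numpy as np

def obfuscate_location(lat, lon, epsilon=0.05, rng=None):
    """Report a noisy location drawn from a planar Laplace distribution
    centred on the true point; smaller epsilon gives stronger privacy."""
    rng = rng or np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)
    # The planar Laplace density eps^2/(2*pi) * exp(-eps*r) has a radial
    # marginal proportional to r * exp(-eps*r), i.e. a Gamma(2, 1/eps) radius.
    r = rng.gamma(shape=2.0, scale=1.0 / epsilon)
    return lat + r * np.cos(theta), lon + r * np.sin(theta)

# Toy usage with a fixed seed for reproducibility.
noisy = obfuscate_location(48.8566, 2.3522, epsilon=0.5,
                           rng=np.random.default_rng(7))
print(noisy)  # a perturbed (lat, lon) near the true point
```

An app would report only the noisy coordinate to third parties, keeping location-based features usable while hiding the exact position; the expected displacement is 2/epsilon, so epsilon directly tunes the privacy-utility trade-off.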
Mapping obfuscation techniques involves analyzing the data, layout, control, and preventive structures of applications.[21] By diversifying systems and obfuscating data through system analysis, data scientists and security experts can make it harder for hackers to breach a system's security and privacy settings. Virtualization systems are used by cybersecurity experts to test the effects of various obfuscation techniques on potential cyber attacks. Different cyber attacks on private information require different diversification and obfuscation methods, so a combination of techniques such as code blocking, location privacy protection, and identity replacement can be used. Further studies in the field of obfuscation include analysis of diversification methods and tests in different virtual environments such as cloud and trusted computing.[21]

One study formed a system of obfuscation operators called Olympus, a system for managing data and protecting the privacy of individuals on applications.[22] Olympus's goal is to maintain the existing data structures and functionality of an application while also protecting the privacy of personal information uploaded to it. These data usually come from sensors and are uploaded to the application, where they are analyzed. Through obfuscation operators and certain combinations of them, an individual's private data can be protected while still being analyzed. Information categories that are sensitive to data stealing and identity theft, such as SSN, birth dates, home locations, age, gender, race, and income, are protected. Olympus is an attempt to apply privacy protection to real-world applications: by forming adversarial networks between utility requirements and privacy and weighing the trade-offs between them, the data's usability is preserved.[22]
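At its simplest, protecting the sensitive categories listed above means suppressing direct identifiers and generalizing quasi-identifiers before a record leaves the device. The sketch below illustrates that idea only; it is not the Olympus system, and the field names are hypothetical:

```python
# Illustrative field-level obfuscation: suppress direct identifiers and
# generalize quasi-identifiers so records stay analyzable but less linkable.
def obfuscate_record(record: dict) -> dict:
    SUPPRESS = {"ssn", "birth_date", "home_location"}
    out = {}
    for key, value in record.items():
        if key in SUPPRESS:
            out[key] = "<redacted>"          # direct identifier: drop entirely
        elif key == "age":
            low = (value // 10) * 10         # generalize into 10-year bands
            out[key] = f"{low}-{low + 9}"
        elif key == "income":
            out[key] = round(value, -4)      # coarsen to the nearest 10,000
        else:
            out[key] = value                 # non-sensitive fields pass through
    return out

print(obfuscate_record({"ssn": "123-45-6789", "age": 37,
                        "income": 83250, "gender": "F"}))
# {'ssn': '<redacted>', 'age': '30-39', 'income': 80000, 'gender': 'F'}
```

Aggregate statistics over such records remain roughly usable, which is the utility-privacy trade-off the Olympus work weighs explicitly.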
https://en.wikipedia.org/wiki/Identity_replacement_technology
A virtual assistant (VA) is a software agent that can perform a range of tasks or services for a user based on user input such as commands or questions, including verbal ones. Such technologies often incorporate chatbot capabilities to streamline task execution. The interaction may be via text, graphical interface, or voice, as some virtual assistants are able to interpret human speech and respond via synthesized voices. In many cases, users can ask their virtual assistants questions, control home automation devices and media playback, and manage other basic tasks such as email, to-do lists, and calendars, all with verbal commands.[1] In recent years, prominent virtual assistants for direct consumer use have included Apple's Siri, Amazon Alexa, Google Assistant, and Samsung's Bixby.[2] Companies in various industries also often incorporate some kind of virtual assistant technology into their customer service or support.[3] Into the 2020s, the emergence of artificial-intelligence-based chatbots, such as ChatGPT, has brought increased capability and interest to the field of virtual assistant products and services.[4][5][6]

Radio Rex was the first voice-activated toy, patented in 1916[7] and released in 1922.[8] It was a wooden toy in the shape of a dog that would come out of its house when its name was called.

In 1952, Bell Labs presented "Audrey", the Automatic Digit Recognition machine. It occupied a six-foot-high relay rack, consumed substantial power, had streams of cables, and exhibited the myriad maintenance problems associated with complex vacuum-tube circuitry. It could recognize phonemes, the fundamental units of speech, but was limited to accurate recognition of digits spoken by designated talkers.
It could therefore be used for voice dialing, but in most cases push-button dialing was cheaper and faster than speaking the consecutive digits.[9]

Another early tool for digital speech recognition was the IBM Shoebox voice-activated calculator, presented to the general public during the 1962 Seattle World's Fair after its initial market launch in 1961. This early computer, developed almost 20 years before the introduction of the first IBM Personal Computer in 1981, was able to recognize 16 spoken words and the digits 0 to 9.

The first natural language processing computer program, the chatbot ELIZA, was developed by MIT professor Joseph Weizenbaum in the 1960s. It was created to "demonstrate that the communication between man and machine was superficial".[10] ELIZA used pattern matching and substitution into scripted responses to simulate conversation, giving an illusion of understanding on the part of the program. Weizenbaum's own secretary reportedly asked him to leave the room so that she and ELIZA could have a real conversation. Weizenbaum was surprised by this, later writing: "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."[11] This gave its name to the ELIZA effect, the tendency to unconsciously assume computer behaviors are analogous to human behaviors, that is, anthropomorphization, a phenomenon present in human interactions with virtual assistants.

The next milestone in the development of voice recognition technology was achieved in the 1970s at Carnegie Mellon University in Pittsburgh, Pennsylvania, with substantial support from the United States Department of Defense, whose DARPA agency funded a five-year Speech Understanding Research program aiming to reach a minimum vocabulary of 1,000 words.
Companies and academia including IBM, Carnegie Mellon University (CMU) and Stanford Research Institute took part in the program. The result was "Harpy", which mastered about 1,000 words, the vocabulary of a three-year-old, and could understand sentences. It could process speech that followed pre-programmed vocabulary, pronunciation, and grammar structures to determine which sequences of words made sense together, thereby reducing speech recognition errors.

In 1986, Tangora, an upgrade of the Shoebox, was a voice-recognizing typewriter. Named after the world's fastest typist at the time, it had a vocabulary of 20,000 words and used prediction to decide the most likely result based on what had been said previously. IBM's approach was based on a hidden Markov model, which adds statistics to digital signal processing techniques. The method makes it possible to predict the most likely phonemes to follow a given phoneme. Still, each speaker had to individually train the typewriter to recognize his or her voice, and pause between each word.

In 1983, Gus Searcy invented the "Butler In A Box", an electronic voice home controller system.[12]

In the 1990s, digital speech recognition technology became a feature of the personal computer, with IBM, Philips and Lernout & Hauspie competing for customers. Much later, the market launch of the first smartphone, the IBM Simon, in 1994 laid the foundation for the smart virtual assistants we know today.[citation needed]

In 1997, Dragon's Naturally Speaking software could recognize and transcribe natural human speech without pauses between words into a document at a rate of 100 words per minute. A version of Naturally Speaking is still available for download, and it is still used today, for instance, by many doctors in the US and the UK to document their medical records.[citation needed]

In 2001, Colloquis publicly launched SmarterChild on platforms like AIM and MSN Messenger.
While entirely text-based, SmarterChild was able to play games, check the weather, look up facts, and converse with users to an extent.[13]

The first modern digital virtual assistant installed on a smartphone was Siri, introduced as a feature of the iPhone 4S on 4 October 2011.[14] Apple Inc. developed Siri following the 2010 acquisition of Siri Inc., a spin-off of SRI International, a research institute financed by DARPA and the United States Department of Defense.[15] Its aim was to aid in tasks such as sending a text message, making phone calls, checking the weather, or setting an alarm. Over time, it has developed to provide restaurant recommendations, search the internet, and provide driving directions.[citation needed]

In November 2014, Amazon announced Alexa alongside the Echo.[16] In April 2017, Amazon released a service for building conversational interfaces for any type of virtual assistant or interface.

In the 2020s, artificial intelligence (AI) systems like ChatGPT gained popularity for their ability to generate human-like responses to text-based conversations. In February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG), which was then the "largest language model ever published at 17 billion parameters."[17] On November 30, 2022, ChatGPT was launched as a prototype and quickly garnered attention for its detailed responses and articulate answers across many domains of knowledge. The advent of ChatGPT and its introduction to the wider public increased interest and competition in the space. In February 2023, Google began introducing an experimental service called "Bard", based on its LaMDA program, which generates text responses to questions using information gathered from the web.
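The hidden Markov model idea mentioned earlier in connection with Tangora, predicting the most likely phoneme to follow a given one, can be illustrated with a toy transition matrix. The three-phoneme inventory and the probabilities below are invented for illustration; real systems estimate transition statistics from large speech corpora:

```python
import numpy as np

# Rows give P(next phoneme | current phoneme) over an invented inventory.
phonemes = ["k", "ae", "t"]
transitions = np.array([
    [0.1, 0.7, 0.2],   # after "k"
    [0.2, 0.1, 0.7],   # after "ae"
    [0.5, 0.3, 0.2],   # after "t"
])

def most_likely_next(current: str) -> str:
    """Return the phoneme with the highest transition probability."""
    row = transitions[phonemes.index(current)]
    return phonemes[int(np.argmax(row))]

print(most_likely_next("k"))   # "ae"
print(most_likely_next("ae"))  # "t"
```

A full recognizer combines such transition probabilities with acoustic evidence (emission probabilities) and searches for the most likely whole sequence, e.g. with the Viterbi algorithm, rather than greedily picking one step at a time.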
While ChatGPT and other generalized chatbots based on the latest generative AI are capable of performing various tasks associated with virtual assistants, there are also more specialized forms of such technology designed to target more specific situations or needs.[18][4]

Many virtual assistants are accessible via multiple methods, offering versatility in how users can interact with them, whether through chat, voice commands, or other integrated technologies. Virtual assistants use natural language processing (NLP) to match user text or voice input to executable commands, and some continually learn using artificial intelligence techniques including machine learning and ambient intelligence. To activate a virtual assistant by voice, a wake word might be used: a word or group of words such as "Hey Siri", "OK Google" or "Hey Google", "Alexa", or "Hey Microsoft".[21] As virtual assistants become more popular, there are increasing legal risks involved.[22]: 815

Virtual assistants may be integrated into many types of platforms or, like Amazon Alexa, across several of them, and can provide a wide variety of services.[30] Conversational commerce is e-commerce via various means of messaging, including voice assistants[33] but also live chat on e-commerce websites, live chat on messaging applications such as WeChat, Facebook Messenger and WhatsApp,[34] and chatbots on messaging applications or websites. A virtual assistant can work with a business's customer support team to provide 24x7 support to customers, providing quick responses that enhance the customer experience. Amazon enables Alexa "Skills" and Google enables "Actions", essentially applications that run on the assistant platforms.

Virtual assistants have a variety of privacy concerns associated with them.
Features such as activation by voice pose a threat, as such features require the device to always be listening.[35] Modes of privacy such as the virtual security button have been proposed to create a multilayer authentication for virtual assistants.[36] The privacy policy of Google Assistant states that it does not store audio data without the user's permission, but may store conversation transcripts to personalise its experience. Personalisation can be turned off in settings. If a user wants Google Assistant to store audio data, they can go to Voice & Audio Activity (VAA) and turn on this feature. Audio files are sent to the cloud and used by Google to improve the performance of Google Assistant, but only if the VAA feature is turned on.[37] The privacy policy of Amazon's virtual assistant, Alexa, states that it only listens to conversations when its wake word (like Alexa, Amazon, Echo) is used. It starts recording the conversation after the call of a wake word, and stops recording after 8 seconds of silence. It sends the recorded conversation to the cloud. It is possible to delete the recording from the cloud by visiting 'Alexa Privacy' in 'Alexa'.[38] Apple states that it does not record audio to improve Siri. Instead, it claims to use transcripts. Transcript data is only sent if it is deemed important for analysis. Users can opt out anytime if they don't want Siri to send transcripts to the cloud.[39] Cortana is a voice-only virtual assistant with singular authentication.[40][41][42] This voice-activated device accesses user data to perform common tasks like checking weather or making calls, raising privacy concerns due to the lack of secondary authentication.[43][44] Added value of virtual assistants can come, among others, from the following: In 2019, Antonio A.
Casilli, a French sociologist, criticized artificial intelligence and virtual assistants in particular in the following way: At a first level, the fact that the consumer provides free data for the training and improvement of the virtual assistant, often without knowing it, is ethically disturbing. But at a second level, it might be even more ethically disturbing to know how these AIs are trained with this data. This artificial intelligence is trained via neural networks, which require a huge amount of labelled data. However, this data needs to be labelled through a human process, which explains the rise of microwork in the last decade. That is, remotely employing people worldwide to do repetitive and very simple tasks for a few cents, such as listening to virtual assistant speech data and writing down what was said. Microwork has been criticized for the job insecurity it causes, and for the total lack of regulation: the average salary was 1.38 dollars per hour in 2010,[50] and it provides neither healthcare, retirement benefits, sick pay, nor a minimum wage.
Hence, virtual assistants and their designers are controversial for spurring job insecurity, and the AIs they propose are still human in the way that they would be impossible without the microwork of millions of human workers.[49] Privacy concerns are raised by the fact that voice commands are available to the providers of virtual assistants in unencrypted form, and can thus be shared with third parties and be processed in an unauthorized or unexpected manner.[51] In addition to the linguistic content of recorded speech, a user's manner of expression and voice characteristics can implicitly contain information about his or her biometric identity, personality traits, body shape, physical and mental health condition, sex, gender, moods and emotions, socioeconomic status and geographical origin.[52] Notable developer platforms for virtual assistants include: In previous generations of text chat-based virtual assistants, the assistant was often represented by an avatar (a.k.a. interactive online character or automated character); this was known as an embodied agent. Digital experiences enabled by virtual assistants are considered to be among the major recent technological advances and most promising consumer trends. Experts claim that digital experiences will achieve a status-weight comparable to 'real' experiences, if not become more sought-after and prized.[57] The trend is verified by a high number of frequent users and the substantial growth of worldwide user numbers of virtual digital assistants.
In mid-2017, the number of frequent users of digital virtual assistants was estimated to be around 1 billion worldwide.[58] In addition, it can be observed that virtual digital assistant technology is no longer restricted to smartphone applications, but is present across many industry sectors (including automotive, telecommunications, retail, healthcare and education).[59] In response to the significant R&D expenses of firms across all sectors and an increasing implementation of mobile devices, the market for speech recognition technology is predicted to grow at a CAGR of 34.9% globally over the period of 2016 to 2024 and thereby surpass a global market size of US$7.5 billion by 2024.[59] According to an Ovum study, the "native digital assistant installed base" is projected to exceed the world's population by 2021, with 7.5 billion active voice AI–capable devices.[60] According to Ovum, by that time "Google Assistant will dominate the voice AI–capable device market with 23.3% market share, followed by Samsung's Bixby (14.5%), Apple's Siri (13.1%), Amazon's Alexa (3.9%), and Microsoft's Cortana (2.3%)."[60] Taking into consideration the regional distribution of market leaders, North American companies (e.g. Nuance Communications, IBM, eGain) are expected to dominate the industry over the next years, due to the significant impact of BYOD (Bring Your Own Device) and enterprise mobility business models. Furthermore, the increasing demand for smartphone-assisted platforms is expected to further boost the North American intelligent virtual assistant (IVA) industry growth. Despite its smaller size in comparison to the North American market, the intelligent virtual assistant industry from the Asia-Pacific region, with its main players located in India and China, is predicted to grow at an annual growth rate of 40% (above the global average) over the 2016–2024 period.[59] Virtual assistants should not be seen only as a gadget for individuals, as they could have a real economic utility for enterprises.
As an example, a virtual assistant can take the role of an always available assistant with encyclopedic knowledge, one that can organize meetings, check inventories, and verify information. Virtual assistants are all the more important because their integration into small and medium-sized enterprises often constitutes an easy first step toward the broader adoption and use of the Internet of Things (IoT). Indeed, IoT technologies are often perceived by small and medium-sized enterprises as being of critical importance, but too complicated, risky or costly to use.[61] In May 2018, researchers from the University of California, Berkeley, published a paper showing that audio commands undetectable to the human ear could be embedded directly into music or spoken text, thereby manipulating virtual assistants into performing certain actions without the user taking note of it.[62] The researchers made small changes to audio files, which cancelled out the sound patterns that speech recognition systems are meant to detect. These were replaced with sounds that would be interpreted differently by the system and command it to dial phone numbers, open websites or even transfer money.[62] The possibility of this has been known since 2016,[62] and affects devices from Apple, Amazon and Google.[63] In addition to unintentional actions and voice recording, another security and privacy risk associated with intelligent virtual assistants is malicious voice commands: an attacker who impersonates a user and issues malicious voice commands to, for example, unlock a smart door to gain unauthorized entry to a home or garage, or order items online without the user's knowledge. Although some IVAs provide a voice-training feature to prevent such impersonation, it can be difficult for the system to distinguish between similar voices.
Thus, a malicious person who is able to access an IVA-enabled device might be able to fool the system into thinking that they are the real owner and carry out criminal or mischievous acts.[64]
https://en.wikipedia.org/wiki/Interactive_online_characters
The Style Generative Adversarial Network, or StyleGAN for short, is an extension to the GAN architecture introduced by Nvidia researchers in December 2018,[1] and made source available in February 2019.[2][3] StyleGAN depends on Nvidia's CUDA software, GPUs, and Google's TensorFlow,[4] or Meta AI's PyTorch, which supersedes TensorFlow as the official implementation library in later StyleGAN versions.[5] The second version of StyleGAN, called StyleGAN2, was published on February 5, 2020. It removes some of the characteristic artifacts and improves the image quality.[6][7] Nvidia introduced StyleGAN3, described as an "alias-free" version, on June 23, 2021, and made source available on October 12, 2021.[8] A direct predecessor of the StyleGAN series is the Progressive GAN, published in 2017.[9] In December 2018, Nvidia researchers distributed a preprint with accompanying software introducing StyleGAN, a GAN for producing an unlimited number of (often convincing) portraits of fake human faces. StyleGAN was able to run on Nvidia's commodity GPU processors.
In February 2019, Uber engineer Phillip Wang used the software to create the website This Person Does Not Exist, which displayed a new face on each web page reload.[10][11] Wang himself has expressed amazement, given that humans are evolved to specifically understand human faces, that nevertheless StyleGAN can competitively "pick apart all the relevant features (of human faces) and recompose them in a way that's coherent."[12] In September 2019, a website called Generated Photos published 100,000 images as a collection of stock photos.[13] The collection was made using a private dataset shot in a controlled environment with similar light and angles.[14] Similarly, two faculty at the University of Washington's Information School used StyleGAN to create Which Face is Real?, which challenged visitors to differentiate between a fake and a real face side by side.[11] The faculty stated the intention was to "educate the public" about the existence of this technology so they could be wary of it, "just like eventually most people were made aware that you can Photoshop an image".[15] The second version of StyleGAN, called StyleGAN2, was published on February 5, 2020. It removes some of the characteristic artifacts and improves the image quality.[6][7] In 2021, a third version was released, improving consistency between fine and coarse details in the generator. Dubbed "alias-free", this version was implemented with PyTorch.[16] In December 2019, Facebook took down a network of accounts with false identities, and mentioned that some of them had used profile pictures created with machine learning techniques.[17] Progressive GAN[9] is a method for stably training a GAN for large-scale image generation, by growing the GAN generator from small to large scale in a pyramidal fashion. Like SinGAN, it decomposes the generator as G=G1∘G2∘⋯∘GN{\displaystyle G=G_{1}\circ G_{2}\circ \cdots \circ G_{N}}, and the discriminator as D=DN∘DN−1∘⋯∘D1{\displaystyle D=D_{N}\circ D_{N-1}\circ \cdots \circ D_{1}}.
During training, at first only GN,DN{\displaystyle G_{N},D_{N}} are used in a GAN game to generate 4x4 images. Then GN−1,DN−1{\displaystyle G_{N-1},D_{N-1}} are added to reach the second stage of the GAN game, to generate 8x8 images, and so on, until we reach a GAN game generating 1024x1024 images. To avoid discontinuity between stages of the GAN game, each new layer is "blended in" gradually (Figure 2 of the paper[9]): the output of the newly added, higher-resolution layers is linearly interpolated with an upsampled version of the previous stage's output, with the interpolation weight fading from the old pathway to the new one over the course of training. StyleGAN is designed as a combination of Progressive GAN with neural style transfer.[18] The key architectural choice of StyleGAN-1 is a progressive growth mechanism, similar to Progressive GAN. Each generated image starts as a constant[note 1] 4×4×512{\displaystyle 4\times 4\times 512} array, and is repeatedly passed through style blocks. Each style block applies a "style latent vector" via an affine transform ("adaptive instance normalization"), similar to how neural style transfer uses the Gramian matrix. It then adds noise and normalizes (subtracts the mean, then divides by the variance). At training time, usually only one style latent vector is used per generated image, but sometimes two ("mixing regularization") in order to encourage each style block to independently perform its stylization without expecting help from other style blocks (since they might receive an entirely different style latent vector). After training, multiple style latent vectors can be fed into each style block. Those fed to the lower layers control the large-scale styles, and those fed to the higher layers control the fine-detail styles. Style-mixing between two images x,x′{\displaystyle x,x'} can be performed as well. First, run a gradient descent to find z,z′{\displaystyle z,z'} such that G(z)≈x,G(z′)≈x′{\displaystyle G(z)\approx x,G(z')\approx x'}. This is called "projecting an image back to style latent space".
Then, z{\displaystyle z} can be fed to the lower style blocks, and z′{\displaystyle z'} to the higher style blocks, to generate a composite image that has the large-scale style of x{\displaystyle x}, and the fine-detail style of x′{\displaystyle x'}. Multiple images can also be composed this way. StyleGAN2 improves upon StyleGAN in two ways. First, it applies the style latent vector to transform the convolution layer's weights instead, thus solving the "blob" problem.[19] The "blob" problem, roughly speaking, arises because using the style latent vector to normalize the generated image destroys useful information. Consequently, the generator learned to create a "distraction" in the form of a large blob, which absorbs most of the effect of normalization (somewhat similar to using flares to distract a heat-seeking missile). Second, it uses residual connections, which help it avoid the phenomenon where certain features are stuck at intervals of pixels. For example, the seam between two teeth may be stuck at pixels divisible by 32, because the generator learned to generate teeth during stage N−5, and consequently could only generate primitive teeth at that stage, before scaling up 5 times (thus intervals of 32). This was updated by StyleGAN2-ADA ("ADA" stands for "adaptive"),[20] which uses invertible data augmentation. It also tunes the amount of data augmentation applied by starting at zero and gradually increasing it until an "overfitting heuristic" reaches a target level, hence the name "adaptive". StyleGAN3[21] improves upon StyleGAN2 by solving the "texture sticking" problem, which can be seen in the official videos.[22] The authors analyzed the problem using the Nyquist–Shannon sampling theorem, and argued that the layers in the generator learned to exploit the high-frequency signal in the pixels they operate upon.
To solve this, they proposed imposing strict lowpass filters between each of the generator's layers, so that the generator is forced to operate on the pixels in a way faithful to the continuous signals they represent, rather than operating on them as merely discrete signals. They further imposed rotational and translational invariance by using more signal filters. The resulting StyleGAN-3 is able to generate images that rotate and translate smoothly, and without texture sticking.
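The "adaptive instance normalization" step described above — normalizing a feature map, then rescaling and shifting it with style-derived statistics — can be sketched in pure Python. This is a minimal one-channel illustration, not the official implementation: real StyleGAN code operates on GPU tensors, and the scale and bias come from a learned affine map of the style latent vector; here they are passed in directly as hypothetical parameters.

```python
import math

def adain(feature_map, style_scale, style_bias, eps=1e-5):
    """Adaptive instance normalization for one channel of a feature map.

    Normalizes the channel to zero mean / unit variance, then applies a
    style-derived scale and bias (in StyleGAN-1 these would come from an
    affine transform of the style latent vector).
    """
    n = len(feature_map)
    mean = sum(feature_map) / n
    var = sum((v - mean) ** 2 for v in feature_map) / n
    std = math.sqrt(var + eps)
    return [style_scale * (v - mean) / std + style_bias for v in feature_map]

# Toy channel: after AdaIN, its statistics match the requested style.
channel = [1.0, 2.0, 3.0, 4.0]
styled = adain(channel, style_scale=2.0, style_bias=5.0)
```

After this call, the channel's mean is the style bias (5.0) and its standard deviation is approximately the style scale (2.0): the original content's statistics have been replaced by the style's, which is the sense in which the operation transfers style.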
https://en.wikipedia.org/wiki/StyleGAN
The uncanny valley (Japanese: 不気味の谷, Hepburn: bukimi no tani) effect is a hypothesized psychological and aesthetic relation between an object's degree of resemblance to a human being and the emotional response to the object. The uncanny valley hypothesis predicts that an entity appearing almost human will risk eliciting eerie feelings in viewers. Examples of the phenomenon exist among robotics, 3D computer animations and lifelike dolls. The increasing prevalence of digital technologies (e.g., virtual reality, augmented reality, and photorealistic computer animation) has propagated discussions and citations of the "valley"; such conversation has enhanced the construct's verisimilitude. As related to robotics engineering, robotics professor Masahiro Mori first introduced the concept in 1970 in his work titled Bukimi No Tani (不気味の谷), phrasing it as bukimi no tani genshō (不気味の谷現象, lit. 'uncanny valley phenomenon').[1] Bukimi no tani was translated literally as uncanny valley in the 1978 book Robots: Fact, Fiction, and Prediction written by Jasia Reichardt.[2] Over time, this translation created an unintended association of the concept with Ernst Jentsch's psychoanalytic concept of the uncanny, established in his 1906 essay On the Psychology of the Uncanny (German: Zur Psychologie des Unheimlichen),[3][4] which was then critiqued and extended in Sigmund Freud's 1919 essay The Uncanny (German: Das Unheimliche).[5] Mori's original hypothesis states that as the appearance of a robot is made more human, some observers' emotional response to the robot becomes increasingly positive and empathetic, until it becomes almost human, at which point the response quickly becomes strong revulsion.
However, as the robot's appearance continues to become less distinguishable from that of a human being, the emotional response becomes positive once again and approaches human-to-human empathy levels.[7] When plotted on a graph, the reactions are indicated by a steep decrease followed by a steep increase (hence the "valley" part of the name) in the areas where anthropomorphism is closest to reality. This interval of repulsive response aroused by a robot with appearance and motion between a "somewhat human" and "fully human" entity is the uncanny valley effect. The name represents the idea that an almost human-looking robot seems overly "strange" to some human beings, produces a feeling of uncanniness, and thus fails to evoke the empathic response required for productive human–robot interaction.[7] A number of theories have been proposed to explain the cognitive mechanism causing the phenomenon: A series of studies experimentally investigated whether uncanny valley effects exist for static images of robot faces. Mathur MB & Reichling DB[18] used two complementary sets of stimuli spanning the range from very mechanical to very human-like: first, a sample of 80 objectively chosen robot face images from Internet searches, and second, a morphometrically and graphically controlled 6-face series. They asked subjects to explicitly rate the likability of each face. To measure trust toward each face, subjects completed an investment game to measure indirectly how much money they were willing to "wager" on a robot's trustworthiness. Both stimulus sets showed a robust uncanny valley effect on explicitly rated likability and a more context-dependent uncanny valley on implicitly rated trust. Their exploratory analysis of one proposed mechanism for the uncanny valley, perceptual confusion at a category boundary, found that category confusion occurs in the uncanny valley but does not mediate the effect on social and emotional responses.
One study conducted in 2009 examined the evolutionary mechanism behind the aversion associated with the uncanny valley. A group of five monkeys were shown three images: two different 3D monkey faces (realistic, unrealistic), and a real photo of a monkey's face. The monkeys' eye-gaze was used as a proxy for preference or aversion. Since the realistic 3D monkey face was looked at less than either the real photo or the unrealistic 3D monkey face, this was interpreted as an indication that the monkey participants found the realistic 3D face aversive, or otherwise preferred the other two images. As one would expect with the uncanny valley, more realism can result in less positive reactions, and this study demonstrated that neither human-specific cognitive processes nor human culture explain the uncanny valley. In other words, this aversive reaction to realism can be said to be evolutionary in origin.[31] As of 2011[update], researchers at the University of California, San Diego and the California Institute for Telecommunications and Information Technology were measuring human brain activations related to the uncanny valley.[32][33] In one study using fMRI, a group of cognitive scientists and roboticists found the biggest differences in brain responses for uncanny robots in the parietal cortex, on both sides of the brain, specifically in the areas that connect the part of the brain's visual cortex that processes bodily movements with the section of the motor cortex thought to contain mirror neurons. The researchers say they saw, in essence, evidence of mismatch or perceptual conflict.[13] The brain "lit up" when the human-like appearance of the android and its robotic motion "didn't compute". Ayşe Pınar Saygın, an assistant professor from UCSD, stated that "The brain doesn't seem selectively tuned to either biological appearance or biological motion per se.
What it seems to be doing is looking for its expectations to be met – for appearance and motion to be congruent."[15][34][35] Viewer perception of facial expression and speech and the uncanny valley in realistic, human-like characters intended for video games and movies is being investigated by Tinwell et al., 2011.[36] Consideration is also given by Tinwell et al. (2010) as to how the uncanny may be exaggerated for antipathetic characters in survival horror games.[37] Building on the body of work already performed for android science, this research intends to build a conceptual mapping of the uncanny valley using 3D characters generated in a real-time gaming engine. The goal is to analyze how cross-modal factors of facial expression and speech can exaggerate the uncanny. Tinwell et al., 2011[38] have also introduced the notion of an 'unscalable' uncanny wall, which suggests that a viewer's discernment for detecting imperfections in realism will keep pace with new technologies in simulating realism. A summary of Angela Tinwell's research on the uncanny valley, the psychological reasons behind it, and how designers may overcome the uncanny in human-like virtual characters is provided in her book, The Uncanny Valley in Games and Animation, published by CRC Press. A number of design principles have been proposed for avoiding the uncanny valley: A number of criticisms have been raised concerning whether the uncanny valley exists as a unified phenomenon amenable to scientific scrutiny: If the uncanny valley effect is the result of general cognitive processes, there should be evidence in evolutionary history and cultural artifacts.[21] An effect similar to the uncanny valley was noted by Charles Darwin in 1839: The expression of this [Trigonocephalus] snake's face was hideous and fierce; the pupil consisted of a vertical slit in a mottled and coppery iris; the jaws were broad at the base, and the nose terminated in a triangular projection.
I do not think I ever saw anything more ugly, excepting, perhaps, some of the vampire bats. I imagine this repulsive aspect originates from the features being placed in positions, with respect to each other, somewhat proportional to the human face; and thus we obtain a scale of hideousness. A similar "uncanny valley" effect could, according to the ethical-futurist writer Jamais Cascio, show up when humans begin modifying themselves with transhuman enhancements (cf. body modification), which aim to improve the abilities of the human body beyond what would normally be possible, be it eyesight, muscle strength, or cognition.[53] So long as these enhancements remain within a perceived norm of human behavior, a negative reaction is unlikely, but once individuals supplant normal human variety, revulsion can be expected. However, according to this theory, once such technologies gain further distance from human norms, "transhuman" individuals would cease to be judged on human levels and instead be regarded as separate entities altogether (this point is what has been dubbed "posthuman"), and it is here that acceptance would rise once again out of the uncanny valley.[53] Another example comes from "pageant retouching" photos, especially of children, which some find disturbingly doll-like.[54] A number of movies that use computer-generated imagery to show characters have been described by reviewers as giving a feeling of revulsion or "creepiness" as a result of the characters looking too realistic. Examples include the following: An increasingly common practice is to feature virtual actors in movies: CGI likenesses of real actors used because the original actor either looks too old for the part or is deceased. Sometimes a virtual actor is created with involvement from the original actor (who may contribute motion capture, audio, etc.), while at other times the actor has no involvement.
Reviewers have often criticized the use of virtual actors for its uncanny valley effect, saying it adds an eerie feeling to the movie. Examples of virtual actors that have received such criticism include replicas of Arnold Schwarzenegger in Terminator Salvation (2009)[92][93] and Terminator Genisys (2015),[94] Jeff Bridges in Tron: Legacy (2010),[95][96][97] Peter Cushing and Carrie Fisher in Rogue One (2016),[98][99] and Will Smith in Gemini Man (2019).[100]
https://en.wikipedia.org/wiki/Uncanny_valley
A virtual actor, also known as a virtual human, virtual persona, or digital clone, is the creation or re-creation of a human being in image and voice using computer-generated imagery and sound, often indistinguishable from the real actor. The idea of a virtual actor was first portrayed in the 1981 film Looker, wherein models had their bodies scanned digitally to create 3D computer-generated images of the models, which were then animated for use in TV commercials. Two 1992 books used this concept: Fools by Pat Cadigan, and Et Tu, Babe by Mark Leyner. In general, virtual humans employed in movies are known as synthespians, virtual actors, vactors, cyberstars, or "silicentric" actors. There are several legal ramifications for the digital cloning of human actors, relating to copyright and personality rights. People who have already been digitally cloned as simulations include Bill Clinton, Marilyn Monroe, Fred Astaire, Ed Sullivan, Elvis Presley, Bruce Lee, Audrey Hepburn, Anna Marie Goddard, and George Burns.[1][2] By 2002, Arnold Schwarzenegger, Jim Carrey, Kate Mulgrew, Michelle Pfeiffer, Denzel Washington, Gillian Anderson, and David Duchovny had all had their heads laser scanned to create digital computer models thereof.[1] Early computer-generated animated faces include the 1985 film Tony de Peltrie and the music video for Mick Jagger's song "Hard Woman" (from She's the Boss). The first actual human beings to be digitally duplicated were Marilyn Monroe and Humphrey Bogart in a March 1987 film "Rendez-vous in Montreal" created by Nadia Magnenat Thalmann and Daniel Thalmann for the 100th anniversary of the Engineering Institute of Canada. The film was created by six people over a year, and had Monroe and Bogart meeting in a café in Montreal, Quebec, Canada.
The characters were rendered in three dimensions, and were capable of speaking, showing emotion, and shaking hands.[3] In 1987, the Kleiser-Walczak Construction Company (now Synthespian Studios), founded by Jeff Kleiser and Diana Walczak, coined the term "synthespian" and began its Synthespian ("synthetic thespian") Project, with the aim of creating "life-like figures based on the digital animation of clay models".[2][4] In 1988, Tin Toy was the first entirely computer-generated movie to win an Academy Award (Best Animated Short Film). In the same year, Mike the Talking Head, an animated head whose facial expression and head posture were controlled in real time by a puppeteer using a custom-built controller, was developed by Silicon Graphics, and performed live at SIGGRAPH. In 1989, The Abyss, directed by James Cameron, included a computer-generated face placed onto a watery pseudopod.[3][5] In 1991, Terminator 2: Judgment Day, also directed by Cameron, who was confident in the abilities of computer-generated effects from his experience with The Abyss, included a mixture of synthetic actors with live animation, including computer models of Robert Patrick's face. The Abyss contained just one scene with photo-realistic computer graphics. Terminator 2: Judgment Day contained over forty shots throughout the film.[3][5][6] In 1997, Industrial Light & Magic worked on creating a virtual actor that was a composite of the bodily parts of several real actors.[2] By the 21st century, virtual actors had become a reality. The face of Brandon Lee, who had died partway through the shooting of The Crow in 1994, had been digitally superimposed over the top of a body-double in order to complete those parts of the movie that had yet to be filmed.
By 2001, three-dimensional computer-generated realistic humans had been used in Final Fantasy: The Spirits Within, and by 2004, a synthetic Laurence Olivier co-starred in Sky Captain and the World of Tomorrow.[7][8] Since the mid-2010s, the Star Wars franchise has become particularly notable for its prominent usage of virtual actors, driven by a desire in recent entries to reuse characters that first appeared in the original trilogy during the late 1970s and early 1980s. The 2016 Star Wars Anthology film Rogue One: A Star Wars Story is a direct prequel to the 1977 film Star Wars: A New Hope, with the ending scene of Rogue One leading almost immediately into the opening scene of A New Hope. As such, Rogue One called for Industrial Light & Magic to make digital recreations of certain characters so they would look the same as they did in A New Hope, specifically the roles of Peter Cushing as Grand Moff Tarkin (played and voiced by Guy Henry) and Carrie Fisher as Princess Leia (played by Ingvild Deila and voiced by an archive recording of Fisher). Cushing had died in 1994, while Fisher was not available to play Leia during production and died a few days after the film's release.[9] Similarly, the 2020 second season of The Mandalorian briefly featured a digital recreation of Mark Hamill's character Luke Skywalker (played by an uncredited body double and voiced by an audio deepfake recreation of Hamill's voice[citation needed]) as portrayed in the 1983 film Return of the Jedi. Canonically, The Mandalorian's storyline takes place roughly five years after the events of Return of the Jedi. Critics such as Stuart Klawans in the New York Times expressed worry about the loss of "the very thing that art was supposedly preserving: our point of contact with the irreplaceable, finite person". Even more problematic are the issues of copyright and personality rights. Actors have little legal control over a digital clone of themselves.
In the United States, for instance, they must resort to database protection laws in order to exercise what control they have (the proposed Database and Collections of Information Misappropriation Act would strengthen such laws). An actor does not own the copyright on their digital clones, unless the clones were created by them. Robert Patrick, for example, would not have any legal control over the liquid metal digital clone of himself that was created for Terminator 2: Judgment Day.[7][10] The use of digital clones in the movie industry, to replicate the acting performances of a cloned person, represents a controversial aspect of these implications, as it may cause real actors to land fewer roles, and put them at a disadvantage in contract negotiations, since a clone could always be used by the producers at potentially lower costs. It is also a career difficulty, since a clone could be used in roles that a real actor would not accept for various reasons. Both Tom Waits and Bette Midler have won actions for damages against people who employed their images in advertisements that they had refused to take part in themselves.[11] In the US, the use of a digital clone in advertisements is required to be accurate and truthful (section 43(a) of the Lanham Act, which makes deliberate confusion unlawful). The use of a celebrity's image would be an implied endorsement. The United States District Court for the Southern District of New York held that an advertisement employing a Woody Allen impersonator would violate the Act unless it contained a disclaimer stating that Allen did not endorse the product.[11] Other concerns include posthumous use of digital clones. Even before Brandon Lee was digitally reanimated, the California Senate drew up the Astaire Bill, in response to lobbying from Fred Astaire's widow and the Screen Actors Guild, who were seeking to restrict the use of digital clones of Astaire.
Movie studios opposed the legislation, and as of 2002 it had yet to be finalized and enacted. Several companies, including Virtual Celebrity Productions, have purchased the rights to create and use digital clones of various dead celebrities, such asMarlene Dietrich[12]andVincent Price.[2]
https://en.wikipedia.org/wiki/Virtual_actor
Inprobability theoryandstatistics,diffusion processesare a class of continuous-timeMarkov processeswithalmost surelycontinuoussample paths. A diffusion process isstochasticin nature and is hence used to model many real-life stochastic systems.Brownian motion,reflected Brownian motionandOrnstein–Uhlenbeck processesare examples of diffusion processes. Diffusion processes are used heavily instatistical physics,statistical analysis,information theory,data science,neural networks,financeandmarketing. A sample path of a diffusion process models the trajectory of a particle embedded in a flowing fluid and subjected to random displacements due to collisions with other particles, which is calledBrownian motion. The position of the particle is then random; itsprobability density functionas afunction of space and timeis governed by aconvection–diffusion equation. Adiffusion processis aMarkov processwithcontinuous sample pathsfor which theKolmogorov forward equationis theFokker–Planck equation.[1] A diffusion process is defined by the following properties. Letaij(x,t){\displaystyle a^{ij}(x,t)}be uniformly continuous coefficients andbi(x,t){\displaystyle b^{i}(x,t)}be bounded, Borel measurable drift terms. There is a unique family of probability measuresPa;bξ,τ{\displaystyle \mathbb {P} _{a;b}^{\xi ,\tau }}(forτ≥0{\displaystyle \tau \geq 0},ξ∈Rd{\displaystyle \xi \in \mathbb {R} ^{d}}) on the canonical spaceΩ=C([0,∞),Rd){\displaystyle \Omega =C([0,\infty ),\mathbb {R} ^{d})}, with its Borelσ{\displaystyle \sigma }-algebra, such that: 1. (Initial Condition) The process starts atξ{\displaystyle \xi }at timeτ{\displaystyle \tau }:Pa;bξ,τ[ψ∈Ω:ψ(t)=ξfor0≤t≤τ]=1.{\displaystyle \mathbb {P} _{a;b}^{\xi ,\tau }[\psi \in \Omega :\psi (t)=\xi {\text{ for }}0\leq t\leq \tau ]=1.} 2.
(Local Martingale Property) For everyf∈C2,1(Rd×[τ,∞)){\displaystyle f\in C^{2,1}(\mathbb {R} ^{d}\times [\tau ,\infty ))}, the processMt[f]=f(ψ(t),t)−f(ψ(τ),τ)−∫τt(La;b+∂∂s)f(ψ(s),s)ds{\displaystyle M_{t}^{[f]}=f(\psi (t),t)-f(\psi (\tau ),\tau )-\int _{\tau }^{t}{\bigl (}L_{a;b}+{\tfrac {\partial }{\partial s}}{\bigr )}f(\psi (s),s)\,ds}is a local martingale underPa;bξ,τ{\displaystyle \mathbb {P} _{a;b}^{\xi ,\tau }}fort≥τ{\displaystyle t\geq \tau }, withMt[f]=0{\displaystyle M_{t}^{[f]}=0}fort≤τ{\displaystyle t\leq \tau }. This familyPa;bξ,τ{\displaystyle \mathbb {P} _{a;b}^{\xi ,\tau }}is called theLa;b{\displaystyle {\mathcal {L}}_{a;b}}-diffusion. It is clear that if we have anLa;b{\displaystyle {\mathcal {L}}_{a;b}}-diffusion, i.e.(Xt)t≥0{\displaystyle (X_{t})_{t\geq 0}}on(Ω,F,Ft,Pa;bξ,τ){\displaystyle (\Omega ,{\mathcal {F}},{\mathcal {F}}_{t},\mathbb {P} _{a;b}^{\xi ,\tau })}, thenXt{\displaystyle X_{t}}satisfies the SDEdXti=12∑k=1dσki(Xt)dBtk+bi(Xt)dt{\displaystyle dX_{t}^{i}={\frac {1}{2}}\,\sum _{k=1}^{d}\sigma _{k}^{i}(X_{t})\,dB_{t}^{k}+b^{i}(X_{t})\,dt}. In contrast, one can construct this diffusion from that SDE ifaij(x,t)=∑kσik(x,t)σjk(x,t){\displaystyle a^{ij}(x,t)=\sum _{k}\sigma _{i}^{k}(x,t)\,\sigma _{j}^{k}(x,t)}andσij(x,t){\displaystyle \sigma ^{ij}(x,t)},bi(x,t){\displaystyle b^{i}(x,t)}are Lipschitz continuous. To see this, letXt{\displaystyle X_{t}}solve the SDE starting atXτ=ξ{\displaystyle X_{\tau }=\xi }. 
Forf∈C2,1(Rd×[τ,∞)){\displaystyle f\in C^{2,1}(\mathbb {R} ^{d}\times [\tau ,\infty ))}, apply Itô's formula:df(Xt,t)=(∂f∂t+∑i=1dbi∂f∂xi+12∑i,j=1daij∂2f∂xi∂xj)dt+∑i,k=1d∂f∂xiσkidBtk.{\displaystyle df(X_{t},t)={\bigl (}{\frac {\partial f}{\partial t}}+\sum _{i=1}^{d}b^{i}{\frac {\partial f}{\partial x_{i}}}+{\frac {1}{2}}\sum _{i,j=1}^{d}a^{ij}\,{\frac {\partial ^{2}f}{\partial x_{i}\partial x_{j}}}{\bigr )}\,dt+\sum _{i,k=1}^{d}{\frac {\partial f}{\partial x_{i}}}\,\sigma _{k}^{i}\,dB_{t}^{k}.}Rearranging givesf(Xt,t)−f(Xτ,τ)−∫τt(∂f∂s+La;bf)ds=∫τt∑i,k=1d∂f∂xiσkidBsk,{\displaystyle f(X_{t},t)-f(X_{\tau },\tau )-\int _{\tau }^{t}{\bigl (}{\frac {\partial f}{\partial s}}+L_{a;b}f{\bigr )}\,ds=\int _{\tau }^{t}\sum _{i,k=1}^{d}{\frac {\partial f}{\partial x_{i}}}\,\sigma _{k}^{i}\,dB_{s}^{k},}whose right‐hand side is a local martingale, matching the local‐martingale property in the diffusion definition. The law ofXt{\displaystyle X_{t}}definesPa;bξ,τ{\displaystyle \mathbb {P} _{a;b}^{\xi ,\tau }}onΩ=C([0,∞),Rd){\displaystyle \Omega =C([0,\infty ),\mathbb {R} ^{d})}with the correct initial condition and local martingale property. Uniqueness follows from the Lipschitz continuity ofσ,b{\displaystyle \sigma \!,\!b}. In fact,La;b+∂∂s{\displaystyle L_{a;b}+{\tfrac {\partial }{\partial s}}}coincides with the infinitesimal generatorA{\displaystyle {\mathcal {A}}}of this process. IfXt{\displaystyle X_{t}}solves the SDE, then forf(x,t)∈C2(Rd×R+){\displaystyle f(\mathbf {x} ,t)\in C^{2}(\mathbb {R} ^{d}\times \mathbb {R} ^{+})}, the generatorA{\displaystyle {\mathcal {A}}}isAf(x,t)=∑i=1dbi(x,t)∂f∂xi+12∑i,j=1daij(x,t)∂2f∂xi∂xj+∂f∂t.{\displaystyle {\mathcal {A}}f(\mathbf {x} ,t)=\sum _{i=1}^{d}b_{i}(\mathbf {x} ,t)\,{\frac {\partial f}{\partial x_{i}}}+{\frac {1}{2}}\sum _{i,j=1}^{d}a_{ij}(\mathbf {x} ,t)\,{\frac {\partial ^{2}f}{\partial x_{i}\partial x_{j}}}+{\frac {\partial f}{\partial t}}.}
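The SDE representation above lends itself to simulation. The sketch below (not from the article; written in the generic form dX_t = b(X_t) dt + σ(X_t) dB_t, with any constant factors absorbed into the coefficients) applies the standard Euler–Maruyama scheme to an Ornstein–Uhlenbeck process, one of the diffusion examples mentioned at the start; all function and variable names are ours.

```python
import math
import random

def euler_maruyama(b, sigma, x0, t0, t1, n, rng):
    # Simulate dX_t = b(X_t) dt + sigma(X_t) dB_t on [t0, t1] with n steps.
    dt = (t1 - t0) / n
    x = x0
    path = [x]
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x = x + b(x) * dt + sigma(x) * dB
        path.append(x)
    return path

# Ornstein-Uhlenbeck process: drift b(x) = -theta * x, constant diffusion sig.
theta, sig = 1.0, 0.5
rng = random.Random(42)
path = euler_maruyama(lambda x: -theta * x, lambda x: sig, 2.0, 0.0, 10.0, 10_000, rng)
```

Over many simulated paths, the endpoint mean decays toward 0 and the endpoint variance approaches the stationary value σ²/(2θ), as the convection–diffusion picture suggests.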
https://en.wikipedia.org/wiki/Diffusion_process
Variational Bayesian methodsare a family of techniques for approximating intractableintegralsarising inBayesian inferenceandmachine learning. They are typically used in complexstatistical modelsconsisting of observed variables (usually termed "data") as well as unknownparametersandlatent variables, with various sorts of relationships among the three types ofrandom variables, as might be described by agraphical model. As typical in Bayesian inference, the parameters and latent variables are grouped together as "unobserved variables". Variational Bayesian methods are primarily used for two purposes: In the former purpose (that of approximating a posterior probability), variational Bayes is an alternative toMonte Carlo samplingmethods—particularly,Markov chain Monte Carlomethods such asGibbs sampling—for taking a fully Bayesian approach tostatistical inferenceover complexdistributionsthat are difficult to evaluate directly orsample. In particular, whereas Monte Carlo techniques provide a numerical approximation to the exact posterior using a set of samples, variational Bayes provides a locally-optimal, exact analytical solution to an approximation of the posterior. Variational Bayes can be seen as an extension of theexpectation–maximization(EM) algorithm frommaximum likelihood(ML) ormaximum a posteriori(MAP) estimation of the single most probable value of each parameter to fully Bayesian estimation which computes (an approximation to) the entireposterior distributionof the parameters and latent variables. As in EM, it finds a set of optimal parameter values, and it has the same alternating structure as does EM, based on a set of interlocked (mutually dependent) equations that cannot be solved analytically. For many applications, variational Bayes produces solutions of comparable accuracy to Gibbs sampling at greater speed. 
However, deriving the set of equations used to update the parameters iteratively often requires a large amount of work compared with deriving the comparable Gibbs sampling equations. This is the case even for many models that are conceptually quite simple, as is demonstrated below in the case of a basic non-hierarchical model with only two parameters and no latent variables. Invariationalinference, the posterior distribution over a set of unobserved variablesZ={Z1…Zn}{\displaystyle \mathbf {Z} =\{Z_{1}\dots Z_{n}\}}given some dataX{\displaystyle \mathbf {X} }is approximated by a so-calledvariational distribution,Q(Z):{\displaystyle Q(\mathbf {Z} ):} The distributionQ(Z){\displaystyle Q(\mathbf {Z} )}is restricted to belong to a family of distributions of simpler form thanP(Z∣X){\displaystyle P(\mathbf {Z} \mid \mathbf {X} )}(e.g. a family of Gaussian distributions), selected with the intention of makingQ(Z){\displaystyle Q(\mathbf {Z} )}similar to the true posterior,P(Z∣X){\displaystyle P(\mathbf {Z} \mid \mathbf {X} )}. The similarity (or dissimilarity) is measured in terms of a dissimilarity functiond(Q;P){\displaystyle d(Q;P)}and hence inference is performed by selecting the distributionQ(Z){\displaystyle Q(\mathbf {Z} )}that minimizesd(Q;P){\displaystyle d(Q;P)}. The most common type of variational Bayes uses theKullback–Leibler divergence(KL-divergence) ofQfromPas the choice of dissimilarity function. This choice makes this minimization tractable. The KL-divergence is defined as Note thatQandPare reversed from what one might expect. This use of reversed KL-divergence is conceptually similar to theexpectation–maximization algorithm. (Using the KL-divergence in the other way produces theexpectation propagationalgorithm.) 
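The asymmetry that makes the direction of the KL-divergence matter can be seen on a two-point distribution. The values below are purely illustrative:

```python
import math

# Two hypothetical distributions over a binary variable:
P = {0: 0.8, 1: 0.2}   # stands in for the true posterior
Q = {0: 0.5, 1: 0.5}   # stands in for the variational approximation

def kl(A, B):
    # D_KL(A || B) = sum_z A(z) * log(A(z) / B(z))
    return sum(A[z] * math.log(A[z] / B[z]) for z in A)

# Variational Bayes minimises kl(Q, P), while expectation propagation
# effectively works with kl(P, Q). The two directions are not equal:
forward, reverse = kl(Q, P), kl(P, Q)
```

Because the two directions disagree, minimising one rather than the other generally produces a different approximating distribution.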
Variational techniques are typically used to form an approximation for: The marginalization overZ{\displaystyle \mathbf {Z} }to calculateP(X){\displaystyle P(\mathbf {X} )}in the denominator is typically intractable, because, for example, the search space ofZ{\displaystyle \mathbf {Z} }is combinatorially large. Therefore, we seek an approximation, usingQ(Z)≈P(Z∣X){\displaystyle Q(\mathbf {Z} )\approx P(\mathbf {Z} \mid \mathbf {X} )}. Given thatP(Z∣X)=P(X,Z)P(X){\displaystyle P(\mathbf {Z} \mid \mathbf {X} )={\frac {P(\mathbf {X} ,\mathbf {Z} )}{P(\mathbf {X} )}}}, the KL-divergence above can also be written as BecauseP(X){\displaystyle P(\mathbf {X} )}is a constant with respect toZ{\displaystyle \mathbf {Z} }and∑ZQ(Z)=1{\displaystyle \sum _{\mathbf {Z} }Q(\mathbf {Z} )=1}becauseQ(Z){\displaystyle Q(\mathbf {Z} )}is a distribution, we have which, according to the definition ofexpected value(for a discreterandom variable), can be written as follows which can be rearranged to become As thelog-evidencelog⁡P(X){\displaystyle \log P(\mathbf {X} )}is fixed with respect toQ{\displaystyle Q}, maximizing the final termL(Q){\displaystyle {\mathcal {L}}(Q)}minimizes the KL divergence ofQ{\displaystyle Q}fromP{\displaystyle P}. By appropriate choice ofQ{\displaystyle Q},L(Q){\displaystyle {\mathcal {L}}(Q)}becomes tractable to compute and to maximize. Hence we have both an analytical approximationQ{\displaystyle Q}for the posteriorP(Z∣X){\displaystyle P(\mathbf {Z} \mid \mathbf {X} )}, and a lower boundL(Q){\displaystyle {\mathcal {L}}(Q)}for the log-evidencelog⁡P(X){\displaystyle \log P(\mathbf {X} )}(since the KL-divergence is non-negative). The lower boundL(Q){\displaystyle {\mathcal {L}}(Q)}is known as the (negative)variational free energyin analogy withthermodynamic free energybecause it can also be expressed as a negative energyEQ⁡[log⁡P(Z,X)]{\displaystyle \operatorname {E} _{Q}[\log P(\mathbf {Z} ,\mathbf {X} )]}plus theentropyofQ{\displaystyle Q}. 
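The decomposition just derived (log-evidence equals the bound L(Q) plus the non-negative KL term) can be checked numerically on a toy model with a single binary latent variable; the numbers below are hypothetical:

```python
import math

# Hypothetical joint distribution P(X = x0, Z = z) over a binary latent Z:
joint = {0: 0.3, 1: 0.1}
evidence = sum(joint.values())                            # P(X = x0) = 0.4
posterior = {z: p / evidence for z, p in joint.items()}   # P(Z | X = x0)

Q = {0: 0.5, 1: 0.5}   # a deliberately crude variational distribution

# ELBO: L(Q) = E_Q[log P(X, Z)] - E_Q[log Q(Z)]
elbo = sum(Q[z] * (math.log(joint[z]) - math.log(Q[z])) for z in Q)
# KL-divergence of Q from the true posterior:
kl = sum(Q[z] * (math.log(Q[z]) - math.log(posterior[z])) for z in Q)
```

Here `elbo + kl` recovers `log(evidence)` exactly, and `elbo` alone lies below it, which is the lower-bound property used throughout the section.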
The termL(Q){\displaystyle {\mathcal {L}}(Q)}is also known asEvidence Lower Bound, abbreviated asELBO, to emphasize that it is a lower (worst-case) bound on the log-evidence of the data. By the generalizedPythagorean theoremofBregman divergence, of which KL-divergence is a special case, it can be shown that:[1][2] whereC{\displaystyle {\mathcal {C}}}is a convex set and the equality holds if: In this case, the global minimizerQ∗(Z)=q∗(Z1∣Z2)q∗(Z2)=q∗(Z2∣Z1)q∗(Z1),{\displaystyle Q^{*}(\mathbf {Z} )=q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})q^{*}(\mathbf {Z} _{2})=q^{*}(\mathbf {Z} _{2}\mid \mathbf {Z} _{1})q^{*}(\mathbf {Z} _{1}),}withZ={Z1,Z2},{\displaystyle \mathbf {Z} =\{\mathbf {Z_{1}} ,\mathbf {Z_{2}} \},}can be found as follows:[1] in which the normalizing constant is: The termζ(X){\displaystyle \zeta (\mathbf {X} )}is often called theevidencelower bound (ELBO) in practice, sinceP(X)≥ζ(X)=exp⁡(L(Q∗)){\displaystyle P(\mathbf {X} )\geq \zeta (\mathbf {X} )=\exp({\mathcal {L}}(Q^{*}))},[1]as shown above. By interchanging the roles ofZ1{\displaystyle \mathbf {Z} _{1}}andZ2,{\displaystyle \mathbf {Z} _{2},}we can iteratively compute the approximatedq∗(Z1){\displaystyle q^{*}(\mathbf {Z} _{1})}andq∗(Z2){\displaystyle q^{*}(\mathbf {Z} _{2})}of the true model's marginalsP(Z1∣X){\displaystyle P(\mathbf {Z} _{1}\mid \mathbf {X} )}andP(Z2∣X),{\displaystyle P(\mathbf {Z} _{2}\mid \mathbf {X} ),}respectively. Although this iterative scheme is guaranteed to converge monotonically,[1]the convergedQ∗{\displaystyle Q^{*}}is only a local minimizer ofDKL(Q∥P){\displaystyle D_{\mathrm {KL} }(Q\parallel P)}. 
If the constrained spaceC{\displaystyle {\mathcal {C}}}is confined within independent space, i.e.q∗(Z1∣Z2)=q∗(Z1),{\displaystyle q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})=q^{*}(\mathbf {Z_{1}} ),}the above iterative scheme will become the so-called mean field approximationQ∗(Z)=q∗(Z1)q∗(Z2),{\displaystyle Q^{*}(\mathbf {Z} )=q^{*}(\mathbf {Z} _{1})q^{*}(\mathbf {Z} _{2}),}as shown below. The variational distributionQ(Z){\displaystyle Q(\mathbf {Z} )}is usually assumed to factorize over somepartitionof the latent variables, i.e. for some partition of the latent variablesZ{\displaystyle \mathbf {Z} }intoZ1…ZM{\displaystyle \mathbf {Z} _{1}\dots \mathbf {Z} _{M}}, It can be shown using thecalculus of variations(hence the name "variational Bayes") that the "best" distributionqj∗{\displaystyle q_{j}^{*}}for each of the factorsqj{\displaystyle q_{j}}(in terms of the distribution minimizing the KL divergence, as described above) satisfies:[3] whereEq−j∗⁡[ln⁡p(Z,X)]{\displaystyle \operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]}is theexpectationof the logarithm of thejoint probabilityof the data and latent variables, taken with respect toq∗{\displaystyle q^{*}}over all variables not in the partition: refer to Lemma 4.1 of[4]for a derivation of the distributionqj∗(Zj∣X){\displaystyle q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )}. In practice, we usually work in terms of logarithms, i.e.: The constant in the above expression is related to thenormalizing constant(the denominator in the expression above forqj∗{\displaystyle q_{j}^{*}}) and is usually reinstated by inspection, as the rest of the expression can usually be recognized as being a known type of distribution (e.g.Gaussian,gamma, etc.). 
Using the properties of expectations, the expressionEq−j∗⁡[ln⁡p(Z,X)]{\displaystyle \operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]}can usually be simplified into a function of the fixedhyperparametersof theprior distributionsover the latent variables and of expectations (and sometimes highermomentssuch as thevariance) of latent variables not in the current partition (i.e. latent variables not included inZj{\displaystyle \mathbf {Z} _{j}}). This createscircular dependenciesbetween the parameters of the distributions over variables in one partition and the expectations of variables in the other partitions. This naturally suggests aniterativealgorithm, much like EM (theexpectation–maximization algorithm), in which the expectations (and possibly higher moments) of the latent variables are initialized in some fashion (perhaps randomly), and then the parameters of each distribution are computed in turn using the current values of the expectations, after which the expectation of the newly computed distribution is set appropriately according to the computed parameters. An algorithm of this sort is guaranteed toconverge.[5] In other words, for each of the partitions of variables, by simplifying the expression for the distribution over the partition's variables and examining the distribution's functional dependency on the variables in question, the family of the distribution can usually be determined (which in turn determines the value of the constant). The formula for the distribution's parameters will be expressed in terms of the prior distributions' hyperparameters (which are known constants), but also in terms of expectations of functions of variables in other partitions. Usually these expectations can be simplified into functions of expectations of the variables themselves (i.e. themeans); sometimes expectations of squared variables (which can be related to thevarianceof the variables), or expectations of higher powers (i.e. 
highermoments) also appear. In most cases, the other variables' distributions will be from known families, and the formulas for the relevant expectations can be looked up. However, those formulas depend on those distributions' parameters, which depend in turn on the expectations about other variables. The result is that the formulas for the parameters of each variable's distributions can be expressed as a series of equations with mutual,nonlineardependencies among the variables. Usually, it is not possible to solve this system of equations directly. However, as described above, the dependencies suggest a simple iterative algorithm, which in most cases is guaranteed to converge. An example will make this process clearer. The following theorem is referred to as a duality formula for variational inference.[4]It explains some important properties of the variational distributions used in variational Bayes methods. TheoremConsider twoprobability spaces(Θ,F,P){\displaystyle (\Theta ,{\mathcal {F}},P)}and(Θ,F,Q){\displaystyle (\Theta ,{\mathcal {F}},Q)}withQ≪P{\displaystyle Q\ll P}. Assume that there is a common dominatingprobability measureλ{\displaystyle \lambda }such thatP≪λ{\displaystyle P\ll \lambda }andQ≪λ{\displaystyle Q\ll \lambda }. Leth{\displaystyle h}denote any real-valuedrandom variableon(Θ,F,P){\displaystyle (\Theta ,{\mathcal {F}},P)}that satisfiesh∈L1(P){\displaystyle h\in L_{1}(P)}. Then the following equality holds Further, the supremum on the right-hand side is attainedif and only ifit holds almost surely with respect to probability measureQ{\displaystyle Q}, wherep(θ)=dP/dλ{\displaystyle p(\theta )=dP/d\lambda }andq(θ)=dQ/dλ{\displaystyle q(\theta )=dQ/d\lambda }denote the Radon–Nikodym derivatives of the probability measuresP{\displaystyle P}andQ{\displaystyle Q}with respect toλ{\displaystyle \lambda }, respectively. 
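The duality formula referenced above is the Gibbs variational (Donsker–Varadhan) identity, log E_P[e^h] = sup_Q { E_Q[h] − D_KL(Q‖P) }, with the supremum attained by the exponentially tilted measure Q*(z) ∝ P(z)e^{h(z)}. A numerical check on a two-point space, with hypothetical values:

```python
import math

# Hypothetical base measure P and bounded test function h on {0, 1}:
P = {0: 0.6, 1: 0.4}
h = {0: 1.0, 1: -2.0}

# Left-hand side: log E_P[exp(h)]
lhs = math.log(sum(P[z] * math.exp(h[z]) for z in P))

# The supremum of E_Q[h] - KL(Q || P) is attained at the tilted measure
# Qstar(z) = P(z) * exp(h(z)) / E_P[exp(h)]:
Z = sum(P[z] * math.exp(h[z]) for z in P)
Qstar = {z: P[z] * math.exp(h[z]) / Z for z in P}
rhs = sum(Qstar[z] * h[z] for z in P) - sum(
    Qstar[z] * math.log(Qstar[z] / P[z]) for z in P)
```

Any other choice of Q yields a value of E_Q[h] − KL(Q‖P) strictly below `lhs`, which is the "attained if and only if" clause of the theorem.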
Consider a simple non-hierarchical Bayesian model consisting of a set ofi.i.d.observations from aGaussian distribution, with unknownmeanandvariance.[6]In the following, we work through this model in great detail to illustrate the workings of the variational Bayes method. For mathematical convenience, in the following example we work in terms of theprecision— i.e. the reciprocal of the variance (or in a multivariate Gaussian, the inverse of thecovariance matrix) — rather than the variance itself. (From a theoretical standpoint, precision and variance are equivalent since there is aone-to-one correspondencebetween the two.) We placeconjugate priordistributions on the unknown meanμ{\displaystyle \mu }and precisionτ{\displaystyle \tau }, i.e. the mean also follows a Gaussian distribution while the precision follows agamma distribution. In other words: Thehyperparametersμ0,λ0,a0{\displaystyle \mu _{0},\lambda _{0},a_{0}}andb0{\displaystyle b_{0}}in the prior distributions are fixed, given values. They can be set to small positive numbers to give broad prior distributions indicating ignorance about the prior distributions ofμ{\displaystyle \mu }andτ{\displaystyle \tau }. We are givenN{\displaystyle N}data pointsX={x1,…,xN}{\displaystyle \mathbf {X} =\{x_{1},\ldots ,x_{N}\}}and our goal is to infer theposterior distributionq(μ,τ)=p(μ,τ∣x1,…,xN){\displaystyle q(\mu ,\tau )=p(\mu ,\tau \mid x_{1},\ldots ,x_{N})}of the parametersμ{\displaystyle \mu }andτ.{\displaystyle \tau .} Thejoint probabilityof all variables can be rewritten as where the individual factors are where Assume thatq(μ,τ)=q(μ)q(τ){\displaystyle q(\mu ,\tau )=q(\mu )q(\tau )}, i.e. that the posterior distribution factorizes into independent factors forμ{\displaystyle \mu }andτ{\displaystyle \tau }. This type of assumption underlies the variational Bayesian method. 
The true posterior distribution does not in fact factor this way (in fact, in this simple case, it is known to be aGaussian-gamma distribution), and hence the result we obtain will be an approximation. Then In the above derivation,C{\displaystyle C},C2{\displaystyle C_{2}}andC3{\displaystyle C_{3}}refer to values that are constant with respect toμ{\displaystyle \mu }. Note that the termEτ⁡[ln⁡p(τ)]{\displaystyle \operatorname {E} _{\tau }[\ln p(\tau )]}is not a function ofμ{\displaystyle \mu }and will have the same value regardless of the value ofμ{\displaystyle \mu }. Hence in line 3 we can absorb it into theconstant termat the end. We do the same thing in line 7. The last line is simply a quadratic polynomial inμ{\displaystyle \mu }. Since this is the logarithm ofqμ∗(μ){\displaystyle q_{\mu }^{*}(\mu )}, we can see thatqμ∗(μ){\displaystyle q_{\mu }^{*}(\mu )}itself is aGaussian distribution. With a certain amount of tedious math (expanding the squares inside of the braces, separating out and grouping the terms involvingμ{\displaystyle \mu }andμ2{\displaystyle \mu ^{2}}andcompleting the squareoverμ{\displaystyle \mu }), we can derive the parameters of the Gaussian distribution: Note that all of the above steps can be shortened by using the formula for thesum of two quadratics. In other words: The derivation ofqτ∗(τ){\displaystyle q_{\tau }^{*}(\tau )}is similar to above, although we omit some of the details for the sake of brevity. Exponentiating both sides, we can see thatqτ∗(τ){\displaystyle q_{\tau }^{*}(\tau )}is agamma distribution. Specifically: Let us recap the conclusions from the previous sections: and In each case, the parameters for the distribution over one of the variables depend on expectations taken with respect to the other variable. 
We can expand the expectations, using the standard formulas for the expectations of moments of the Gaussian and gamma distributions: Applying these formulas to the above equations is trivial in most cases, but the equation forbN{\displaystyle b_{N}}takes more work: We can then write the parameter equations as follows, without any expectations: Note that there are circular dependencies among the formulas forλN{\displaystyle \lambda _{N}}andbN{\displaystyle b_{N}}. This naturally suggests anEM-like algorithm: We then have values for the hyperparameters of the approximating distributions of the posterior parameters, which we can use to compute any properties we want of the posterior — e.g. its mean and variance, a 95% highest-density region (the smallest interval that includes 95% of the total probability), etc. It can be shown that this algorithm is guaranteed to converge to a local maximum. Note also that the posterior distributions have the same form as the corresponding prior distributions. We didnotassume this; the only assumption we made was that the distributions factorize, and the form of the distributions followed naturally. It turns out (see below) that the fact that the posterior distributions have the same form as the prior distributions is not a coincidence, but a general result whenever the prior distributions are members of theexponential family, which is the case for most of the standard distributions. The above example shows the method by which the variational-Bayesian approximation to aposterior probabilitydensity in a givenBayesian networkis derived: Due to all of the mathematical manipulations involved, it is easy to lose track of the big picture. The important things are: Variational Bayes (VB) is often compared withexpectation–maximization(EM). The actual numerical procedure is quite similar, in that both are alternating iterative procedures that successively converge on optimum parameter values. 
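The EM-like coordinate updates for this Gaussian example can be sketched in code. The sketch follows the standard mean-field updates for the conjugate Gaussian-gamma setup, assuming the usual coupling in which the prior precision of μ is λ₀τ; the function name, defaults, and broad-prior values are ours and purely illustrative:

```python
import random
import statistics

def vb_gaussian(x, mu0=0.0, lambda0=1e-3, a0=1e-3, b0=1e-3, iters=100):
    """Mean-field VB for a Gaussian with unknown mean and precision.

    Returns the parameters of q(mu) = N(mu_N, 1/lam_N) and
    q(tau) = Gamma(a_N, b_N) (shape/rate parameterisation).
    """
    N = len(x)
    sum_x = sum(x)
    sum_x2 = sum(v * v for v in x)
    # Two of the updates depend only on the data and the prior:
    mu_N = (lambda0 * mu0 + sum_x) / (lambda0 + N)
    a_N = a0 + (N + 1) / 2
    E_tau = a0 / b0  # arbitrary initialisation of E[tau]
    for _ in range(iters):
        lam_N = (lambda0 + N) * E_tau      # precision of q(mu)
        E_mu = mu_N                        # first moment of q(mu)
        E_mu2 = 1.0 / lam_N + mu_N ** 2    # second moment of q(mu)
        # Update q(tau) from the current moments of q(mu):
        b_N = b0 + 0.5 * ((lambda0 + N) * E_mu2
                          - 2.0 * (lambda0 * mu0 + sum_x) * E_mu
                          + lambda0 * mu0 ** 2 + sum_x2)
        E_tau = a_N / b_N
    return mu_N, lam_N, a_N, b_N

# Illustrative run on synthetic data:
random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(500)]
mu_N, lam_N, a_N, b_N = vb_gaussian(data)
```

The circular dependency between λ_N and b_N is resolved by the loop, mirroring the EM-like iteration described above; with broad priors, E[μ] converges to the sample mean and E[τ] = a_N/b_N to the reciprocal of the sample variance.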
The initial steps to derive the respective procedures are also vaguely similar, both starting out with formulas for probability densities and both involving significant amounts of mathematical manipulations. However, there are a number of differences. Most important iswhatis being computed. Imagine a BayesianGaussian mixture modeldescribed as follows:[3] Note: The interpretation of the above variables is as follows: The joint probability of all variables can be rewritten as where the individual factors are where Assume thatq(Z,π,μ,Λ)=q(Z)q(π,μ,Λ){\displaystyle q(\mathbf {Z} ,\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )=q(\mathbf {Z} )q(\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )}. Then[3] where we have defined Exponentiating both sides of the formula forln⁡q∗(Z){\displaystyle \ln q^{*}(\mathbf {Z} )}yields Requiring that this be normalized ends up requiring that theρnk{\displaystyle \rho _{nk}}sum to 1 over all values ofk{\displaystyle k}, yielding where In other words,q∗(Z){\displaystyle q^{*}(\mathbf {Z} )}is a product of single-observationmultinomial distributions, and factors over each individualzn{\displaystyle \mathbf {z} _{n}}, which is distributed as a single-observation multinomial distribution with parametersrnk{\displaystyle r_{nk}}fork=1…K{\displaystyle k=1\dots K}. Furthermore, we note that which is a standard result for categorical distributions. Now, considering the factorq(π,μ,Λ){\displaystyle q(\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )}, note that it automatically factors intoq(π)∏k=1Kq(μk,Λk){\displaystyle q(\mathbf {\pi } )\prod _{k=1}^{K}q(\mathbf {\mu } _{k},\mathbf {\Lambda } _{k})}due to the structure of the graphical model defining our Gaussian mixture model, which is specified above. 
Then, Taking the exponential of both sides, we recognizeq∗(π){\displaystyle q^{*}(\mathbf {\pi } )}as aDirichlet distribution where where Finally Grouping and reading off terms involvingμk{\displaystyle \mathbf {\mu } _{k}}andΛk{\displaystyle \mathbf {\Lambda } _{k}}, the result is aGaussian-Wishart distributiongiven by given the definitions Finally, notice that these functions require the values ofrnk{\displaystyle r_{nk}}, which make use ofρnk{\displaystyle \rho _{nk}}, which is defined in turn based onE⁡[ln⁡πk]{\displaystyle \operatorname {E} [\ln \pi _{k}]},E⁡[ln⁡|Λk|]{\displaystyle \operatorname {E} [\ln |\mathbf {\Lambda } _{k}|]}, andEμk,Λk⁡[(xn−μk)TΛk(xn−μk)]{\displaystyle \operatorname {E} _{\mathbf {\mu } _{k},\mathbf {\Lambda } _{k}}[(\mathbf {x} _{n}-\mathbf {\mu } _{k})^{\rm {T}}\mathbf {\Lambda } _{k}(\mathbf {x} _{n}-\mathbf {\mu } _{k})]}. Now that we have determined the distributions over which these expectations are taken, we can derive formulas for them: These results lead to These can be converted from proportional to absolute values by normalizing overk{\displaystyle k}so that the corresponding values sum to 1. Note that: This suggests an iterative procedure that alternates between two steps: Note that these steps correspond closely with the standard EM algorithm to derive amaximum likelihoodormaximum a posteriori(MAP) solution for the parameters of aGaussian mixture model. 
The responsibilitiesrnk{\displaystyle r_{nk}}in the E step correspond closely to theposterior probabilitiesof the latent variables given the data, i.e.p(Z∣X){\displaystyle p(\mathbf {Z} \mid \mathbf {X} )}; the computation of the statisticsNk{\displaystyle N_{k}},x¯k{\displaystyle {\bar {\mathbf {x} }}_{k}}, andSk{\displaystyle \mathbf {S} _{k}}corresponds closely to the computation of corresponding "soft-count" statistics over the data; and the use of those statistics to compute new values of the parameters corresponds closely to the use of soft counts to compute new parameter values in normal EM over a Gaussian mixture model. Note that in the previous example, once the distribution over unobserved variables was assumed to factorize into distributions over the "parameters" and distributions over the "latent data", the derived "best" distribution for each variable was in the same family as the corresponding prior distribution over the variable. This is a general result that holds true for all prior distributions derived from theexponential family.
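The "soft-count" statistics N_k, x̄_k and S_k described above can be sketched in code (one-dimensional data, K = 2 components, and hypothetical unnormalised responsibilities ρ_nk; all names are ours):

```python
def soft_statistics(rho, x):
    # rho[n][k]: unnormalised responsibilities rho_nk; x[n]: 1-D data points.
    r = [[p / sum(row) for p in row] for row in rho]  # r_nk sums to 1 over k
    K, N = len(rho[0]), len(x)
    N_k = [sum(r[n][k] for n in range(N)) for k in range(K)]  # soft counts
    xbar_k = [sum(r[n][k] * x[n] for n in range(N)) / N_k[k] for k in range(K)]
    S_k = [sum(r[n][k] * (x[n] - xbar_k[k]) ** 2 for n in range(N)) / N_k[k]
           for k in range(K)]
    return r, N_k, xbar_k, S_k

# Hypothetical responsibilities for 3 points and K = 2 components:
rho = [[2.0, 8.0], [6.0, 4.0], [9.0, 1.0]]
x = [1.0, 2.0, 3.0]
r, N_k, xbar_k, S_k = soft_statistics(rho, x)
```

As in ordinary EM for a Gaussian mixture, the soft counts N_k sum to the number of data points, and x̄_k and S_k are responsibility-weighted analogues of the per-component mean and variance.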
https://en.wikipedia.org/wiki/Variational_inference
15.aiis a free non-commercialweb applicationthat usesartificial intelligenceto generatetext-to-speechvoices of fictional characters frompopular media. Created by ananonymousartificial intelligence researcher known as15, who began developing the technology as afreshmanduring their undergraduate research at theMassachusetts Institute of Technology, the application allowed users to make characters fromvideo games,television shows, andmoviesspeak custom text with emotional inflections faster than real-time.[a]The platform was notable for its ability to generate convincing voice output using minimal training data—the name "15.ai" referenced the creator's claim that a voice could be cloned with just 15 seconds of audio, in contrast to contemporary deep learning speech models which typically required tens of hours of audio data. It was an early example of an application ofgenerative artificial intelligenceduring the initial stages of theAI boom. Launched in March 2020, 15.ai gained widespread attention in early 2021 when content utilizing it wentviralon social media platforms likeYouTubeandTwitter, and quickly became popular among Internet fandoms, such as theMy Little Pony: Friendship Is Magic,Team Fortress 2, andSpongeBob SquarePantsfandoms. The service distinguished itself through its support for emotional context in speech generation throughemojis, precise pronunciation control throughphonetic transcriptions, and multi-speaker capabilities that allowed a single model to generate diverse character voices. 15.ai is credited as the first mainstream platform to popularize AI voice cloning (audio deepfakes) inmemesandcontent creation.[1] Voice actorsand industry professionals debated 15.ai's merits for fan creativity versus its potential impact on the profession.
While many critics praised the application's accessibility and emotional control, they also noted technical limitations in areas likeprosodyoptions and non-English language support. 15.ai prompted discussions about ethical implications, including concerns aboutreduction of employment opportunitiesfor voice actors,voice-related fraud, andmisuse in explicit content. In January 2022, Voiceverse generated controversy when it was discovered that the company had generated audio using 15.ai without attribution and sold it as anon-fungible token(NFT) without permission.[2]News publications universally characterized this incident as Voiceverse having "stolen" voice lines from 15.ai.[3]The service was ultimately taken offline in September 2022 due to legal issues surroundingartificial intelligence and copyright. Its shutdown was followed by the emergence of various commercial alternatives in subsequent years, with their founders acknowledging 15.ai's pioneering influence in the field ofdeep learning speech synthesis. On May 18, 2025, 15 launched15.dev, asequelto the original service that launched after nearly three years of inactivity. The field of artificialspeech synthesisunderwent a significant transformation with the introduction ofdeep learningapproaches. In 2016,DeepMind's publication of the seminal paperWaveNet: A Generative Model for Raw Audiomarked a pivotal shift towardneural network-based speech synthesis, demonstrating unprecedented audio quality through causalconvolutional neural networks. 
Previously,concatenative synthesis—which worked by stitching together pre-recorded segments of human speech—was the predominant method for generating artificial speech, but it often produced robotic-sounding results at the boundaries of sentences.[4]Two years later, this was followed byGoogle AI's Tacotron 2 in 2018, which demonstrated that neural networks could produce highly natural speech synthesis but required substantial training data—typically tens of hours of audio—to achieve acceptable quality. When trained on smaller datasets, such as 2 hours of speech, the output quality degraded while still being able to maintain intelligible speech, and with just 24 minutes of training data, Tacotron 2 failed to produce intelligible speech.[5]The same year saw the emergence of HiFi-GAN, agenerative adversarial network(GAN)-based vocoder that improved the efficiency of waveform generation while producing high-fidelity speech,[6]followed by Glow-TTS, which introduced aflow-basedapproach that allowed for both fast inference and voice style transfer capabilities.[7]Chinese tech companies also made significant contributions to the field, withBaiduandByteDancedeveloping proprietary text-to-speech frameworks that further advanced the technology, though specific technical details of their implementations remained largely undisclosed.[8] [...] The website has multiple purposes. It serves as a proof of concept of a platform that allows anyone to create content, even if they can't hire someone to voice their projects. It also demonstrates the progress of my research in a far more engaging manner – by being able to use the actual model, you can discover things about it that even I wasn't aware of (such as getting characters to make gasping noises or moans by placing commas in between certain phonemes). It also doesn't let me get away with picking and choosing the best results and showing off only the ones that work [...] 
Being able to interact with the model with no filter allows the user to judge exactly how good the current work is at face value.

15.ai was conceived in 2016 as a research project in deep learning speech synthesis by a developer known as "15" (at the age of 18[11]) during their freshman year at the Massachusetts Institute of Technology (MIT) as part of its Undergraduate Research Opportunities Program (UROP).[12] The developer was inspired by DeepMind's WaveNet paper, with development continuing through their studies as Google AI released Tacotron 2 the following year. By 2019, the developer had demonstrated at MIT their ability to replicate WaveNet and Tacotron 2's results using 75% less training data than previously required.[8] The name 15 is a reference to the creator's claim that a voice can be cloned with as little as 15 seconds of data.[13]

The developer had originally planned to pursue a doctorate based on their undergraduate research, but opted to work in the tech industry instead after their startup was accepted into the Y Combinator accelerator in 2019. After their departure in early 2020, the developer returned to their voice synthesis research, implementing it as a web application. According to a 2024 post on X from the developer, instead of using conventional voice datasets like LJSpeech that contained simple, monotone recordings, they sought out more challenging voice samples that could demonstrate the model's ability to handle complex speech patterns and emotional undertones.[tweet 1] The Pony Preservation Project—a fan initiative originating from /mlp/,[8] 4chan's My Little Pony board, that had compiled voice clips from My Little Pony: Friendship Is Magic—played a crucial role in the implementation. The project's contributors had manually trimmed, denoised, transcribed, and emotion-tagged every line from the show.
This dataset provided ideal training material for 15.ai's deep learning model.[8]

15.ai was released in March 2020 with a limited selection of characters, including those from My Little Pony: Friendship Is Magic and Team Fortress 2.[14] The system was designed to function efficiently with limited training data—requiring only minutes of clean audio per character, in contrast to the 40+ hours typically needed by traditional deep learning models.[15] To overcome data constraints, the developer employed specific data augmentation techniques to improve generalization, including the deliberate introduction of spelling variations, punctuation patterns, and pronunciation distortions during training.[15]

Upon its launch, 15.ai was offered as a free[16] and non-commercial[17] service that did not require user registration or user accounts to operate,[18] and required the user to accept the terms of use before proceeding.[19] Users were permitted to create any content with the synthesized voices under two specific conditions: they had to properly credit 15.ai by including the website URL in any posts, videos, or projects using the generated audio;[20] and they were prohibited from mixing 15.ai outputs with other text-to-speech outputs in the same work,[21] to prevent misrepresentation of the technology's capabilities.[22]

More voices were added to the website in the following months.[23] A significant technical advancement came in late 2020 with the implementation of a multi-speaker embedding in the deep neural network, enabling simultaneous training of multiple voices rather than requiring individual models for each character voice.[8] This not only allowed rapid expansion from eight to over fifty character voices,[11] but also let the model recognize common emotional patterns across characters, even when certain emotions were missing from some characters' training data.[24]

By May 2020, the site had served over 4.2 million audio files to users.[25] In early 2021, the application gained popularity after
skits, memes, and fan content created using 15.ai went viral on Twitter, TikTok, Reddit, Twitch, Facebook, and YouTube.[26] At its peak, the platform incurred operational costs of US$12,000[13] per month from the AWS infrastructure needed to handle millions of daily voice generations; despite receiving offers from companies to acquire 15.ai and its underlying technology, the website remained independent and was funded out of the developer's personal earnings from their previous startup[8]—the developer was then aged 23.

On January 14, 2022, a controversy ensued after it was discovered that Voiceverse NFT had taken credit for voice lines generated from 15.ai without permission[3] and sold them as NFTs (non-fungible tokens).[2] This came shortly after 15.ai's developer had explicitly stated in December 2021 that they had no interest in incorporating NFTs into their work.[27] Log files showed that Voiceverse had generated audio of characters from My Little Pony: Friendship Is Magic using 15.ai, and pitched them up to make them sound unrecognizable from the original voices in order to market their own platform[28]—in violation of 15.ai's terms of service, which explicitly prohibited commercial use and required proper attribution.[29]

Voiceverse initially claimed their platform would allow NFT owners to possess commercial rights to AI-generated voices for content creation, in-game chats, and video calls.[30] When confronted with evidence of their misappropriation, Voiceverse claimed that someone in their marketing team had used the voice without properly crediting 15.ai,[31] and explained in their Discord server that their marketing team had been in such a rush to create a partnership demo that they used 15.ai without waiting for their own voice technology to be ready.[32] The controversial tweet was deleted thereafter.[33] In response to their apology, 15 tweeted "Go fuck yourself,"[34] which went viral, amassing hundreds of thousands of retweets and likes on Twitter in support of the developer.[8] 15 later expressed deeper
frustration, writing: "the entire field of vocal synthesis is now being misrepresented by charlatans who are only in it for the money."[21]

I'm partnering with @VoiceverseNFT to explore ways where together we might bring new tools to new creators to make new things, and allow everyone a chance to own & invest in the IP's they create. We all have a story to tell. You can hate. Or you can create. What'll it be?

Following continued backlash and the plagiarism revelation, voice actor Troy Baker (who had partnered with Voiceverse) faced criticism for supporting an NFT project[36] and for the confrontational tone of his announcement.[37] Baker had described Voiceverse's service as allowing people to "create customized audiobooks, YouTube videos, e-learning lectures, or even podcasts with your favorite voice all without the hassle of additional legal work,"[19] which critics noted raised concerns about potentially replacing professional voice actors with AI.[38] Baker subsequently acknowledged that his original announcement tweet ending with "You can hate. Or you can create. What'll it be?"
may have been "antagonistic,"[39] and on January 31, announced he would discontinue his partnership with Voiceverse.[40]

The event raised concerns about NFT projects, which critics observed were frequently associated with intellectual property theft and questionable business practices.[41] The incident was later documented in the AI Incident Database (AIID), cataloging it as an example of "an AI-synthetic audio sold as an NFT on Voiceverse's platform [that] was acknowledged by the company for having been created by 15.ai, a free web app specializing in text-to-speech and AI-voice generation, and reused without proper attribution."[42] The controversy was also featured in writer and crypto skeptic Molly White's Web3 Is Going Just Great project, which documented how Baker's partnership announcement and its antagonistic tone exacerbated negative reactions to the NFT initiative.[43] White noted the vague nature of Voiceverse's offering, described only as "provid[ing] you an ownership to a unique voice in the Metaverse,"[44] and observed how the revelation of stolen work from 15.ai further damaged Voiceverse's credibility.[43] Russian educational platform Skillbox listed the incident as an example of fraud in NFTs.[45] Voice actor and YouTuber Yong Yea criticized voice NFTs for their potential impact on the voice acting industry,[40] and stated in a follow-up YouTube video: "This isn't just one of those things [Voiceverse] can go 'Whoopsies!' on. [They] plagiarized somebody else's work and used that as a means to falsely market the quality of [their] own products, by using somebody else's higher quality voice AI to promote [...]
[their] own company for [their] own benefit."[video 1]

In a 2024 class action lawsuit filed against LOVO, Inc., court documents alleged that the founders of LOVO also created Voiceverse, with plaintiffs claiming that Voiceverse had "already been found to have stolen technology from [15.ai]".[46]

In September 2022, 15.ai was taken offline[47] due to legal issues surrounding artificial intelligence and copyright.[8] In a post on Twitter, 15 suggested a potential future version that would better address copyright concerns from the outset.[8]

On May 18, 2025, 15 launched 15.dev as the official sequel to 15.ai.[48][49] Fandom news site Equestria Daily reported that "almost every voiced pony in the show seems available from varying levels of quality" and noted that the website included "a dropdown for various emotions you want to generate."[50]

The platform was non-commercial,[17] and operated without requiring user registration or accounts.[18] Users generated speech by inputting text and selecting a character voice, with optional parameters for emotional contextualizers and phonetic transcriptions.
Each request produced three audio variations with distinct emotional deliveries, sorted by confidence score.[51] Characters available included multiple characters from Team Fortress 2 and My Little Pony: Friendship Is Magic, including Twilight Sparkle; GLaDOS, Wheatley, and the Sentry Turret from the Portal series; SpongeBob SquarePants; Kyu Sugardust from HuniePop; Rise Kujikawa from Persona 4; Daria Morgendorffer and Jane Lane from Daria; Carl Brutananadilewski from Aqua Teen Hunger Force; Steven Universe from Steven Universe; Sans from Undertale; Madeline and multiple characters from Celeste; the Tenth Doctor from Doctor Who; the Narrator from The Stanley Parable; and HAL 9000 from 2001: A Space Odyssey.[52] Of the over fifty[11] voices available, thirty were of characters from My Little Pony: Friendship Is Magic.[23] Certain "silent" characters like Chell and Gordon Freeman could be selected as a joke, and would emit silent audio files when any text was submitted.[53] Characters from Undertale and Celeste did not produce spoken words but instead generated their games' distinctive beeps when text was entered.[54]

15.ai generated audio at a 44.1 kHz sampling rate—higher than the 16 kHz standard used by most deep learning text-to-speech systems of that period. This higher fidelity created more detailed audio spectrograms and greater audio resolution, though it also made any synthesis imperfections more noticeable.
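As a rough back-of-the-envelope illustration (not from the source) of what the two sampling rates imply, the Nyquist limit and per-clip sample counts can be computed directly:

```python
# By the Nyquist theorem, audio sampled at rate r can only represent
# frequencies up to r / 2, so the sampling rate bounds the fidelity.

def nyquist_hz(sample_rate_hz: int) -> float:
    """Highest representable frequency for a given sampling rate."""
    return sample_rate_hz / 2

def samples_for(seconds: float, sample_rate_hz: int) -> int:
    """Number of samples a model must generate for a clip of this length."""
    return int(seconds * sample_rate_hz)

hi_fi, lo_fi = 44_100, 16_000

nyquist_hz(hi_fi)        # 22050.0 Hz -- covers the full audible band
nyquist_hz(lo_fi)        # 8000.0 Hz  -- telephone-like bandwidth
samples_for(10, hi_fi)   # 441000 samples for a 10-second clip
samples_for(10, lo_fi)   # 160000 samples for the same clip
```

The higher rate means the model must generate nearly three times as many samples per second, which is also why synthesis artifacts become easier to hear.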
Users reported using Audacity to downsample any generated audio in order to mask apparent robotic artifacts, though this came at the cost of lower audio quality.[15] The system processed speech faster than real-time using customized deep neural networks combined with specialized audio synthesis algorithms.[56] While the underlying technology could produce 10 seconds of audio in less than 10 seconds of processing time (hence, faster than real-time), the actual user experience often involved longer waits as the servers managed thousands of simultaneous requests, sometimes taking more than a minute to deliver results.[57]

The deep learning model's nondeterministic properties produced variations in speech output, creating different intonations with each generation, similar to how voice actors produce different takes.[58] 15.ai introduced the concept of emotional contextualizers, which allowed users to specify the emotional tone of generated speech through guiding phrases.[59][8] The emotional contextualizer functionality utilized DeepMoji, a sentiment analysis neural network developed at the MIT Media Lab. Introduced in 2017, DeepMoji processed emoji embeddings from 1.2 billion Twitter posts (from 2013 to 2017) to analyze emotional content.[60] If an input into 15.ai contained additional context (specified by a vertical bar), the additional context following the bar would be used as the emotional contextualizer.[61] For example, if the input was Today is a great day!|I'm very sad., the selected character would speak the sentence "Today is a great day!"
in the emotion one would expect from someone saying the sentence "I'm very sad."[62] Certain characters, such as Twilight Sparkle from My Little Pony: Friendship Is Magic, offered preset emotional modes, with specific options to output text in different emotional states such as "happy".[63]

The application used pronunciation data from the Oxford Dictionaries API, Wiktionary, and the CMU Pronouncing Dictionary, the last of which is based on ARPABET, a set of English phonetic transcriptions originally developed by the Advanced Research Projects Agency in the 1970s. For modern and Internet-specific terminology, the system incorporated pronunciation data from user-generated content websites, including Reddit, Urban Dictionary, 4chan, and Google.[20] Inputting ARPABET transcriptions was also supported, allowing users to correct mispronunciations or to specify the desired pronunciation between heteronyms—words that have the same spelling but different pronunciations. Users could invoke ARPABET transcriptions by enclosing the phoneme string in curly braces within the input box (for example, {AA1 R P AH0 B EH2 T} to specify the pronunciation of the word "ARPABET" (/ˈɑːrpəˌbɛt/ AR-pə-beht)).[24] The interface displayed parsed words with color-coding to indicate pronunciation certainty: green for words found in the existing pronunciation lookup table, blue for manually entered ARPABET pronunciations, and red for words whose pronunciation had to be algorithmically predicted.[20]

Later versions of 15.ai introduced multi-speaker capabilities.
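The input conventions described above (vertical-bar emotional contextualizers and curly-brace ARPABET strings) can be sketched as a small parser. This is an illustrative reconstruction, not the site's actual code; the function names are invented:

```python
import re

def parse_input(text: str):
    """Split an input line into (spoken text, emotional contextualizer).

    If a vertical bar is present, the text after it guides the emotion
    but is not spoken; otherwise the spoken text is its own context.
    """
    spoken, _, context = text.partition("|")
    spoken, context = spoken.strip(), context.strip()
    return spoken, context or spoken

def tokenize(spoken: str):
    """Yield (token, kind) pairs, treating curly-brace spans as manually
    specified ARPABET phoneme strings and everything else as words."""
    for match in re.finditer(r"\{([^}]*)\}|(\S+)", spoken):
        if match.group(1) is not None:
            yield match.group(1), "arpabet"  # manually entered pronunciation
        else:
            yield match.group(2), "word"     # looked up or predicted

spoken, context = parse_input("Today is a great day!|I'm very sad.")
# spoken  -> "Today is a great day!"
# context -> "I'm very sad."  (fed to the sentiment model)

tokens = list(tokenize("The word {AA1 R P AH0 B EH2 T} is phonetic."))
# tokens[2] -> ("AA1 R P AH0 B EH2 T", "arpabet")
```

A real implementation would additionally look each "word" token up in the pronunciation table to decide between the dictionary, manual, and predicted cases the interface color-coded.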
Rather than training separate models for each voice, 15.ai used a unified model that learned multiple voices simultaneously through speaker embeddings–learned numerical representations that captured each character's unique vocal characteristics.[8] Along with the emotional context conferred by DeepMoji, this neural network architecture enabled the model to learn shared patterns across different characters' emotional expressions and speaking styles, even when individual characters lacked examples of certain emotional contexts in their training data.[24]

The platform limited text input to 200 characters per generation, though users could create multiple clips for longer speech sequences.[64] The interface included technical metrics and graphs, which served to highlight the research aspect of the website.[11] The underlying algorithm used by 15.ai was dubbed DeepThroat.[65] As of version v23 of 15.ai, the interface displayed comprehensive model analysis information, including word parsing results and emotional analysis data.
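The multi-speaker design described above can be illustrated with a toy sketch. This is not the actual 15.ai architecture: the embedding values and names below are invented, and real systems learn these vectors end to end rather than fixing them by hand:

```python
# A single shared model conditions on a learned per-speaker embedding
# instead of training one model per voice. Because the emotion pathway
# is shared across all speakers, an emotion seen only in one speaker's
# training data can still condition any other speaker's voice.

SPEAKER_EMBEDDINGS = {
    # Each voice is a small vector capturing vocal identity (toy values).
    "twilight": [0.9, 0.1, 0.3],
    "glados":   [0.2, 0.8, 0.5],
}

def model_input(text_features, speaker, emotion_vector):
    """Build the conditioning vector for one shared network: text
    features concatenated with the speaker embedding and an emotion
    vector (e.g. derived from a sentiment model like DeepMoji)."""
    return text_features + SPEAKER_EMBEDDINGS[speaker] + emotion_vector

x = model_input([0.4, 0.7], "glados", [0.0, 1.0])
# One input vector -> one shared network, regardless of which voice.
```

Swapping the speaker entry changes only three of the conditioning values, which is what lets the rest of the network share emotional and prosodic patterns across characters.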
The flow-based and generative adversarial network (GAN) hybrid vocoder and denoiser, introduced in an earlier version, was streamlined to remove manual parameter inputs.[11]

Critics described 15.ai as easy to use and generally able to convincingly replicate character voices, with occasional mixed results.[66] Natalie Clayton of PC Gamer wrote that SpongeBob SquarePants' voice was replicated well, but noted challenges in mimicking the Narrator from The Stanley Parable: "the algorithm simply can't capture Kevan Brighting's whimsically droll intonation."[67] Similarly, Russian gaming website Rampaga reflected that GLaDOS performed exceptionally well since "her voice was originally created to simulate human speech by artificial intelligence," while the Narrator from The Stanley Parable was less convincing due to insufficient training data.[68] Zack Zwiezen of Kotaku reported that "[his] girlfriend was convinced it was a new voice line from GLaDOS' voice actor".[69] Calvin Rugona of gaming news publication Gamezo commented that the tool's simplicity contributed significantly to its widespread adoption, as it allowed anyone online to easily create and save voice clips.[54] Taiwanese newspaper United Daily News also highlighted 15.ai's ability to recreate GLaDOS's mechanical voice, alongside its diverse range of character voice options.[70] Yahoo!
News Taiwan reported that "GLaDOS in Portal can pronounce lines nearly perfectly", but also criticized that "there are still many imperfections, such as word limit and tone control, which are still a little weird in some words."[71] Chris Button of AI newsletter Byteside called the ability to clone a voice with only 15 seconds of data "freaky," but also found the tech behind it impressive.[72] Robin Lamorlette of French online magazine Clubic described the technology as "devilishly fun" and noted how Twitter and YouTube were filled with creative content from users experimenting with the tool.[73] The platform's voice generation capabilities were regularly featured on Equestria Daily, a fandom news site dedicated to the show My Little Pony: Friendship Is Magic and its other generations, with documented updates, fan creations, and additions of new character voices.[23] In a post introducing new character additions to 15.ai, Equestria Daily's founder Shaun Scotellaro—also known by his online moniker "Sethisto"—wrote that "some of [the voices] aren't great due to the lack of samples to draw from, but many are really impressive still anyway."[23] Chinese My Little Pony fan site EquestriaCN also documented 15.ai's development, highlighting its various updates, though it criticized some of the bugs and the long queue wait times of the application.[74]

Multiple other critics also found the word count limit, prosody options, and English-only nature of the application not entirely satisfactory.[75] Peter Paltridge of anime and superhero news outlet Anime Superhero News opined that "voice synthesis has evolved to the point where the more expensive efforts are nearly indistinguishable from actual human speech," but also noted that "In some ways, SAM is still more advanced than this. It was possible to affect SAM's inflections by using special characters, as well as change his pitch at will.
With 15.ai, you're at the mercy of whatever random inflections you get."[76] Conversely, Lauren Morton of Rock, Paper, Shotgun praised the depth of pronunciation control—"if you're willing to get into the nitty gritty of it".[77] Similarly, Eugenio Moto of Spanish news website Qore.com wrote that "the most experienced of users can change parameters like the stress or the tone."[78] Takayuki Furushima of Den Fami Nico Gamer highlighted the "smooth pronunciations", and Yuki Kurosawa of AUTOMATON noted its "rich emotional expression" as a major feature; both Japanese authors noted the lack of Japanese-language support.[79][80] Renan do Prado of Brazilian gaming news outlet Arkade and José Villalobos of Spanish gaming outlet LaPS4 pointed out that while users could create amusing results in Portuguese and Spanish respectively, the generation performed best in English.[81] Chinese gaming news outlet GamerSky called the app "interesting", but also criticized the word count limit of the text and the lack of intonations.[82] Frank Park of South Korean video game outlet Zuntata wrote that "the surprising thing about 15.ai is that [for some characters], there's only about 30 seconds of data, but it achieves pronunciation accuracy close to 100%".[83] Machine learning professor Yongqiang Li remarked in his blog that the application was still free despite having 5,000 people generating voices concurrently at the time of writing.[84] Marco Cocomello of South African gaming and pop culture website GLITCHED remarked that despite the 200-character limitation, the results "blew [him] away" when testing the app with GLaDOS's voice.[85] Álvaro Ibáñez of Spanish technology publication Microsiervos wrote that he found the rhythm of the AI-generated voices noteworthy, observing that the system appeared to adapt its delivery based on the content's intended meaning.[86]

Technical publications and outlets focusing on artificial intelligence provided more in-depth analysis of 15.ai's capabilities and limitations
compared to other text-to-speech technologies of the time.[87] Rionaldi Chandraseta of AI newsletter Towards Data Science observed that voice models trained on larger datasets created more convincing output with better phrasing and natural pauses, particularly for extended text.[59] Bai Feng of Chinese tech and AI media outlet XinZhiYuan on QQ News highlighted the technical achievement of 15.ai's high-quality output (44.1 kHz sampling rate) despite using minimal training data, remarking that this was of significantly higher quality than typical deep learning text-to-speech implementations, which used 16 kHz sampling rates. The outlet also acknowledged that while some pronunciation errors occurred due to the limited training data, this was understandable given that traditional deep learning models typically required 40 or more hours of training data.[88] Similarly, Parth Mahendra of AI newsletter AI Daily observed that while the system "does a good job at accurately replicating most basic words," it struggled with more complex terms, noting that characters would "absolutely butcher the pronunciation" of certain words.[25] Ji Yunyo of Chinese tech news website NetEase News called the technology behind 15.ai "remarkably efficient," requiring only minimal data to accurately clone numerous voices while maintaining emotional nuance and natural intonation.
However, he also pointed out limitations, noting that the emotional expression was relatively "neutral" and that "extreme" emotions couldn't be properly synthesized, making it less suitable for not-safe-for-work applications.[89] Ji also mentioned that while many deepfake videos required creators to extract and edit material from hours of original content for very short results, 15.ai could achieve similar or better effects with only a few dozen minutes of training data per character, though server performance issues often meant synthesis could take over a minute to complete.[90]

Some voice actors whose characters appeared on 15.ai have publicly shared their thoughts about the platform. In a 2021 interview on the video game voice acting podcast The VŌC, John Patrick Lowrie—who voices the Sniper in Team Fortress 2—explained that he had discovered 15.ai when a prospective intern showed him a skit she had created using AI-generated voices of the Sniper and the Spy from Team Fortress 2. Lowrie commented: "The technology still has a long way to go before you really believe that these are just human beings, but I was impressed by how much [15.ai] could do. You certainly don't get the delivery that you get from an actual person who's analyzed the scene, [...] but I do think that as a fan source—for people wanting to put together mods and stuff like that—that it could be fun for fans to use the voice of characters they like." He drew an analogy to synthesized music, adding: "If you want the sound of a choir, and you want the sound of an orchestra, and you have the money, you hire a choir and an orchestra. And if you don't have the money, you have something that sounds pretty nice; but it's not the same as a choir and an orchestra."[video 2]

In a 2021 live broadcast on his Twitch channel, Nathan Vetterlein—the voice actor of the Scout from Team Fortress 2—listened to an AI recreation of his character's voice.
He described the impression as "interesting" and noted that "there's some stuff in there."[video 3]

Other voice actors had mixed reactions to 15.ai's capabilities. While some industry professionals acknowledged the technical innovation, others raised concerns about the technology's implications for their profession.[91] When voice actor Troy Baker announced his partnership with Voiceverse NFT, which had misappropriated 15.ai's technology, it sparked widespread controversy within the voice acting industry.[92] Critics raised concerns about automated voice acting's potential reduction of employment opportunities for voice actors,[93] the risk of voice impersonation, and potential misuse in explicit content.[58] Ruby Innes of Kotaku Australia noted, "this practice could potentially put voice actors out of work considering you could just use their AI voice rather than getting them to voice act for a project and paying them."[19] In her coverage of the Voiceverse controversy, Edie WK of Checkpoint Gaming raised the concern that "this kind of technology has the potential to push voice actors out of work if it becomes easier and cheaper to use AI voices instead of working with the actor directly."[94]

While 15.ai limited its scope to fictional characters and did not reproduce voices of real people or celebrities,[20] computer scientist Andrew Ng noted that similar technology could be used to do so, including for nefarious purposes. In his 2020 assessment of 15.ai, he wrote: "Voice cloning could be enormously productive. In Hollywood, it could revolutionize the use of virtual actors. In cartoons and audiobooks, it could enable voice actors to participate in many more productions. In online education, kids might pay more attention to lessons delivered by the voices of favorite personalities. And how many YouTube how-to video producers would love to have a synthetic Morgan Freeman narrate their scripts?"
While discussing potential risks, he added: "...but synthesizing a human actor's voice without consent is arguably unethical and possibly illegal. And this technology will be catnip for deepfakers, who could scrape recordings from social networks to impersonate private individuals."[95]

15.ai was an early pioneer of audio deepfakes, leading to the emergence of AI speech synthesis-based memes during the initial stages of the AI boom in 2020. 15.ai is credited as the first mainstream platform to popularize AI voice cloning in Internet memes and content creation,[1] particularly through its ability to generate convincing character voices in real-time without requiring extensive technical expertise.[96] The platform's impact was especially notable in fan communities, including the My Little Pony: Friendship Is Magic, Portal, Team Fortress 2, and SpongeBob SquarePants fandoms, where it enabled the creation of viral content that garnered millions of views across social media platforms like Twitter and YouTube.[97] Team Fortress 2 content creators also used the platform to produce both short-form memes and complex narrative animations using Source Filmmaker.
Fan creations included skits and new fan animations[20] (such as the popular Team Fortress 2 Source Filmmaker video Spy's Confession[11]), crossover content—such as Game Informer writer Liana Ruppert's demonstration combining Portal and Mass Effect dialogue in her coverage of the platform[98]—recreations of viral videos (including the infamous Big Bill Hell's Cars car dealership parody[99]), adaptations of fanfiction using AI-generated character voices (such as The Tax Breaks, a fully voiced 17-minute fan-made episode of Friendship Is Magic),[100] music videos and new musical compositions—such as the Team Fortress 2 remix Pootis Hardbass[11]—and content where characters recited sea shanties.[101] Some fan creations gained mainstream attention, such as a viral edit replacing Donald Trump's cameo in Home Alone 2: Lost in New York with the Heavy Weapons Guy's AI-generated voice, which was featured on a daytime CNN segment in January 2021.[102] Some users integrated 15.ai's voice synthesis with VoiceAttack, a voice command software, to create personal assistants.[103]

Its influence has been noted in the years after it became defunct,[104] with several commercial alternatives emerging to fill the void, such as ElevenLabs[b] and Speechify.[106] Contemporary generative voice AI companies have acknowledged 15.ai's pioneering role. Y Combinator startup PlayHT called the debut of 15.ai "a breakthrough in the field of text-to-speech (TTS) and speech synthesis".[107] Cliff Weitzman, the founder and CEO of Speechify, credited 15.ai for "making AI voice cloning popular for content creation by being the first [...]
to feature popular existing characters from fandoms".[108] Mati Staniszewski, co-founder and CEO of ElevenLabs, wrote that 15.ai was transformative in the field of AI text-to-speech.[109]

At brony conventions, 15.ai has been discussed in presentations on the intersection of the My Little Pony fandom and artificial intelligence.[110]

Prior to its shutdown, 15.ai established several technical precedents that influenced subsequent developments in AI voice synthesis. Its integration of DeepMoji for emotional analysis demonstrated the viability of incorporating sentiment-aware speech generation,[111] while its support for ARPABET phonetic transcriptions set a standard for precise pronunciation control in public-facing voice synthesis tools.[8] The platform's unified multi-speaker model, which enabled simultaneous training of diverse character voices, proved particularly influential. This approach allowed the system to recognize emotional patterns across different voices even when certain emotions were absent from individual character training sets; for example, if one character had examples of joyful speech but no angry examples, while another had angry but no joyful samples, the system could learn to generate both emotions for both characters by understanding the common patterns of how emotions affect speech.[24]

15.ai also made a key contribution in reducing the training data requirements for speech synthesis.
Earlier systems like Google AI's Tacotron and Microsoft Research's FastSpeech required tens of hours of audio to produce acceptable results and failed to generate intelligible speech with less than 24 minutes of training data.[5][112] In contrast, 15.ai demonstrated the ability to generate speech with substantially less training data—specifically, the name "15.ai" refers to the creator's claim that a voice could be cloned with just 15 seconds of data.[113] This approach to data efficiency influenced subsequent developments in AI voice synthesis technology, and the 15-second benchmark became a reference point for later voice synthesis systems. The original claim that only 15 seconds of data is required to clone a human's voice was corroborated by OpenAI in 2024.[114]
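To give a rough sense of the scale of the data-efficiency claim above, the 15-second figure can be compared with the 40-hour requirement cited earlier for traditional models:

```python
# Rough arithmetic comparing the claimed 15 seconds of audio with the
# 40 hours reportedly needed by traditional deep learning TTS models.
claimed_seconds = 15
traditional_hours = 40
traditional_seconds = traditional_hours * 3600  # 144000 seconds

ratio = claimed_seconds / traditional_seconds
# ratio is about 0.0001, i.e. roughly 1/9600 of the traditional data
```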
https://en.wikipedia.org/wiki/15.ai
Artificial imagination is a narrow subcomponent of artificial general intelligence which generates, simulates, and facilitates[1] real or possible fiction models to create predictions, inventions,[2] or conscious experiences. The term artificial imagination is also used to describe a property of machines or programs. Some of the traits that researchers hope to simulate include creativity, vision, digital art, humor, and satire.[3]

Practitioners in the field are researching various aspects of artificial imagination, such as artificial (visual) imagination,[4] artificial (aural) imagination,[5] modeling/filtering content based on human emotions, and interactive search. Some articles on the topic speculate on how artificial imagination may evolve to create an artificial world "people may be comfortable enough to escape from the real world".[6]

Some researchers, such as G. Schleis and M. Rizki, have focused on using artificial neural networks to simulate artificial imagination.[7]

Another important project is being led by Hiroharu Kato and Tatsuya Harada at the University of Tokyo in Japan. They have developed a computer capable of translating a description of an object into an image, which could be the easiest way to define what imagination is. Their idea is based on the concept of an image as a series of pixels divided into short sequences that correspond to specific parts of an image. The scientists call these sequences "visual words", which can be interpreted by the machine using statistical distributions to read and create an image of an object the machine has not encountered.
The topic of artificial imagination has garnered interest from scholars outside the computer science domain, such as noted communications scholar Ernest Bormann, who developed Symbolic Convergence Theory and worked on a project to develop artificial imagination in computer systems.[8] An interdisciplinary research seminar organized by the artist Grégory Chatonsky on artificial imagination and postdigital art has taken place since 2017 at the Ecole Normale Supérieure in Paris.[9] The typical application of artificial imagination is interactive search. Interactive searching has been developed since the mid-1990s, accompanied by the development of the World Wide Web and the optimization of search engines. Based on the first query and feedback from a user, the databases to be searched are reorganized to improve the search results. Artificial imagination allows the system to synthesize images and to develop a new image, regardless of whether it is in the database or exists in the real world. For example, the computer shows results that are based on the answer to the initial query. The user selects several relevant images, and the technology then analyzes these selections and reorganizes the images' ranks to fit the query. In this process, artificial imagination is used to synthesize the selected images and to improve the search result with additional relevant synthesized images. This technique is based on several algorithms, including the Rocchio algorithm and the evolutionary algorithm. The Rocchio algorithm,[10] which moves a query point toward relevant examples and away from irrelevant examples, is simple and works well in a small system where the databases are arranged in certain ranks. Evolutionary synthesis is composed of two steps: a standard algorithm and an enhancement of the standard algorithm.[11][12] Through feedback from the user, additional images are synthesized so as to suit what the user is looking for.
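The Rocchio update just described can be sketched in a few lines: the query vector moves toward the centroid of the user-selected (relevant) examples and away from the centroid of the rejected (irrelevant) ones. The weights and toy feature vectors below are illustrative assumptions:

```python
# Rocchio relevance feedback: move the query vector toward relevant
# examples and away from irrelevant ones. Weights are illustrative.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def rocchio(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    rel_c = centroid(relevant)
    irr_c = centroid(irrelevant)
    return [alpha * q + beta * r - gamma * s
            for q, r, s in zip(query, rel_c, irr_c)]

# Toy 2-dimensional feature vectors for images the user rated.
query = [1.0, 0.0]
relevant = [[2.0, 2.0], [4.0, 2.0]]   # images the user selected
irrelevant = [[0.0, -2.0]]            # images the user rejected

print(rocchio(query, relevant, irrelevant))  # → [3.25, 1.8]
```

Iterating this update with each round of feedback is what reorders the result ranking; an evolutionary variant would instead mutate and recombine candidate images between rounds.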
Artificial imagination has a more general definition and wide applications.[13] The traditional fields of artificial imagination include visual imagination and aural imagination. More generally, all the actions to form ideas, images and concepts can be linked to imagination. Thus, artificial imagination means more than only generating graphs.[14] For example, moral imagination is an important research subfield of artificial imagination, although classification of artificial imagination is difficult. Morals are an important part of human beings' logic, while artificial morals are important in artificial imagination and artificial intelligence. A common criticism of artificial intelligence is whether human beings should take responsibility for machines' mistakes or decisions and how to develop well-behaved machines.[15][16] As nobody can give a clear description of the best moral rules, it is impossible to create machines with commonly accepted moral rules.[17] However, recent research on artificial morals circumvents the definition of morals. Instead, machine learning methods are applied to train machines to imitate human morals.[18][19] As data about moral decisions from thousands of different people are considered, the trained moral model can reflect widely accepted rules.[19] Memory is another major field of artificial imagination. Researchers such as Aude Oliva have performed extensive work on artificial memory, especially visual memory.[20] Compared to visual imagination, visual memory focuses more on how machines understand, analyse and store pictures in a human way. In addition, characteristics such as spatial features are also considered. As this field is based on the brain's biological structures, extensive research in neuroscience has also been performed, making it a large intersection between biology and computer science.
https://en.wikipedia.org/wiki/Artificial_imagination
Automated journalism, also known as algorithmic journalism or robot journalism,[1][2][3] is a term that describes modern technological processes that have entered the journalistic profession, such as news articles and videos generated by computer programs.[3][4][5] There are four main fields of application for automated journalism, namely automated content production, data mining, news dissemination and content optimization.[6] Through artificial intelligence (AI) software, stories are produced automatically by computers rather than human reporters. These programs interpret, organize, and present data in human-readable ways. Typically, the process involves an algorithm that scans large amounts of provided data, selects from an assortment of pre-programmed article structures, orders key points, and inserts details such as names, places, amounts, rankings, statistics, and other figures.[4] The output can also be customized to fit a certain voice, tone, or style.[2][3][4] Data science and AI companies such as Automated Insights, Narrative Science, United Robots and Monok develop and provide these algorithms to news outlets.[4][7][8][9] As of 2016, only a few media organizations had used automated journalism. Early adopters include news providers such as the Associated Press, Forbes, ProPublica, and the Los Angeles Times.[3] Early implementations were mainly used for stories based on statistics and numerical figures.
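The pipeline described above (scan structured data, select a pre-programmed article structure, insert figures) can be sketched with a toy template filler. The template text and data fields below are invented for illustration and do not reflect any vendor's actual system:

```python
# Toy template-based news generation: pick a structure based on the data
# and fill in named figures. Templates and fields are invented examples.

TEMPLATES = {
    "earnings_beat": "{company} reported earnings of ${eps} per share, "
                     "beating estimates of ${estimate}.",
    "earnings_miss": "{company} reported earnings of ${eps} per share, "
                     "missing estimates of ${estimate}.",
}

def write_story(data):
    # "Selects from an assortment of pre-programmed article structures..."
    key = "earnings_beat" if data["eps"] >= data["estimate"] else "earnings_miss"
    # "...and inserts details such as names, amounts, and statistics."
    return TEMPLATES[key].format(**data)

report = {"company": "Acme Corp", "eps": 1.25, "estimate": 1.10}
print(write_story(report))
# → Acme Corp reported earnings of $1.25 per share, beating estimates of $1.1.
```

Production systems layer on data validation, phrase variation, and tone controls, but the select-then-fill structure is the core of this class of software.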
Common topics include sports recaps, weather, financial reports, real estate analysis, and earnings reviews.[3] StatSheet, an online platform covering college basketball, runs entirely on an automated program.[4] The Associated Press began using automation to cover 10,000 minor league baseball games annually, using a program from Automated Insights and statistics from MLB Advanced Media.[10] Outside of sports, the Associated Press also uses automation to produce stories on corporate earnings.[4] In 2006, Thomson Reuters announced their switch to automation to generate financial news stories on its online news platform.[11] More famously, an algorithm called Quakebot published a story about a 2014 California earthquake on The Los Angeles Times website within three minutes after the shaking had stopped.[4][7] Automated journalism is sometimes seen as an opportunity to free journalists from routine reporting, providing them with more time for complex tasks. It also allows efficiency and cost-cutting, alleviating some of the financial burden that many news organizations face. However, automated journalism is also perceived as a threat to the authorship and quality of news and to the livelihoods of human journalists.[2][3] Robot reporters are built to produce large quantities of information at quicker speeds. The Associated Press announced that their use of automation has increased the volume of earnings reports from customers by more than ten times. With software from Automated Insights and data from other companies, they can produce 150-to-300-word articles in the same time it takes journalists to crunch numbers and prepare information.[4] By automating routine stories and tasks, journalists are promised more time for complex jobs such as investigative reporting and in-depth analysis of events.[2][3] Francesco Marconi[12] of the Associated Press stated that, through automation, the news agency freed up 20 percent[13] of reporters' time to focus on higher-impact projects.
Automated journalism is cheaper because more content can be produced within less time. It also lowers labour costs for news organizations. Reduced human input means fewer expenses on wages or salaries, paid leave, vacations, and employment insurance. Automation serves as a cost-cutting tool for news outlets that struggle with tight budgets but still wish to maintain the scope and quality of their coverage.[3][11] In an automated story, there is often confusion about who should be credited as the author. Several participants of a study on algorithmic authorship[3] attributed the credit to the programmer; others perceived the news organization as the author, emphasizing the collaborative nature of the work. There is also no way for the reader to verify whether an article was written by a robot or a human, which raises issues of transparency, although such issues also arise with respect to authorship attribution between human authors.[3][14] Concerns about the perceived credibility of automated news are similar to concerns about the perceived credibility of news in general. Critics doubt whether algorithms are "fair and accurate, free from subjectivity, error, or attempted influence."[15] Again, these issues of fairness, accuracy, subjectivity, error, and attempts at influence or propaganda have also been present in articles written by humans over thousands of years. A common criticism is that machines do not replace human capabilities such as creativity, humour, and critical thinking. However, as the technology evolves, the aim is to mimic human characteristics. When the UK's Guardian newspaper used an AI to write an entire article in September 2020, commentators pointed out that the AI still relied on human editorial content. Austin Tanney, the head of AI at Kainos, said: "The Guardian got three or four different articles and spliced them together. They also gave it the opening paragraph. It doesn’t belittle what it is.
It was written by AI, but there was human editorial on that."[3][14][16] The largest single study of readers' evaluations of news articles produced with and without the help of automation exposed 3,135 online news consumers to 24 articles. It found that the automated articles were significantly less comprehensible, in part because they were considered to contain too many numbers. However, the automated articles were evaluated equally on other criteria including tone, narrative flow, and narrative structure.[17] Beyond human evaluation, there are now numerous algorithmic methods to identify machine-written articles.[18] Although some machine-written articles may still contain errors that are obvious for a human to identify, they can at times score better with these automatic identifiers than human-written articles.[19] Among the concerns about automation is the loss of employment for journalists as publishers switch to using AI.[3][4][20] The use of automation has become a near necessity in newsrooms, in order to keep up with the ever-increasing demand for news stories, which in turn has affected the very nature of the journalistic profession.[6] In 2014, an annual census from the American Society of News Editors announced that the newspaper industry had lost 3,800 full-time professional editors.[21] Falling by more than 10% within a year, this was the biggest drop since the industry cut over 10,000 jobs in 2007 and 2008.[21][22] There has been a significant amount of recent scholarship on the relationship between platform companies, such as Google and Facebook, and the news industry, with researchers examining the impact of these platforms on the distribution and monetization of news content, as well as the implications for journalism and democracy.[23][24][25] Some scholars have extended this line of thinking to automated journalism and the use of AI in the news.
A 2022 paper by the Oxford University academic Felix Simon, for example, argues that the concentration of AI tools and infrastructure in the hands of a few major technology companies, such as Google, Microsoft, and Amazon Web Services, is a significant issue for the news industry, as it risks shifting more control to these companies and increasing the industry's dependence on them.[26] Simon argues that this could lead to vendor lock-in, where news organizations become structurally dependent on AI provided by these companies and are unable to switch to another vendor without incurring significant costs. The companies also possess artefactual and contractual control[27] over their AI infrastructure and services, which could expose news organizations to the risk of unforeseen changes to, or the discontinuation of, their AI solutions. Additionally, the author argues that reliance on these companies for AI can make it more difficult for news organizations to understand the decisions or predictions made by the systems, and can limit their ability to protect sources or proprietary business information. A 2017 Nieman Reports article by Nicola Bruno[28] discusses whether or not machines will replace journalists and addresses concerns around the concept of automated journalism practices. Ultimately, Bruno came to the conclusion that AI would assist journalists, not replace them. "No automated software or amateur reporter will ever replace a good journalist", she said. In 2020, however, Microsoft did just that, replacing 27 journalists with AI.
One staff member was quoted by The Guardian as saying: “I spend all my time reading about how automation and AI is going to take all our jobs, and here I am – AI has taken my job.” The journalist went on to say that replacing humans with software was risky, as existing staff were careful to stick to “very strict editorial guidelines” which ensured that users were not presented with violent or inappropriate content when opening their browser, for example.[29]
https://en.wikipedia.org/wiki/Automated_journalism
Computer music is the application of computing technology in music composition, to help human composers create new music or to have computers independently create music, such as with algorithmic composition programs. It includes the theory and application of new and existing computer software technologies and basic aspects of music, such as sound synthesis, digital signal processing, sound design, sonic diffusion, acoustics, electrical engineering, and psychoacoustics.[1] The field of computer music can trace its roots back to the origins of electronic music, and the first experiments and innovations with electronic instruments at the turn of the 20th century.[2] Much of the work on computer music has drawn on the relationship between music and mathematics, a relationship that has been noted since the Ancient Greeks described the "harmony of the spheres". Musical melodies were first generated by the computer originally named the CSIR Mark 1 (later renamed CSIRAC) in Australia in 1950. There have been newspaper reports from America and England, both early on and more recently, claiming that computers may have played music earlier, but thorough research has debunked these stories, as there is no evidence to support the newspaper reports (some of which were speculative). Research has shown that people speculated about computers playing music, possibly because computers would make noises,[3] but there is no evidence that they actually did so.[4][5] The world's first computer to play music was the CSIR Mark 1 (later named CSIRAC), which was designed and built by Trevor Pearcey and Maston Beard in the late 1940s. Mathematician Geoff Hill programmed the CSIR Mark 1 to play popular musical melodies from the very early 1950s. In 1950 the CSIR Mark 1 was used to play music, the first known use of a digital computer for that purpose. The music was never recorded, but it has been accurately reconstructed.[6][7] In 1951 it publicly played the "Colonel Bogey March",[8] of which only the reconstruction exists.
However, the CSIR Mark 1 played standard repertoire and was not used to extend musical thinking or composition practice, asMax Mathewsdid, which is current computer-music practice. The first music to be performed in England was a performance of theBritish National Anthemthat was programmed byChristopher Stracheyon theFerranti Mark 1, late in 1951. Later that year, short extracts of three pieces were recorded there by aBBCoutside broadcasting unit: the National Anthem, "Baa, Baa, Black Sheep", and "In the Mood"; this is recognized as the earliest recording of a computer to play music as the CSIRAC music was never recorded. This recording can be heard at the Manchester University site.[9]Researchers at theUniversity of Canterbury, Christchurch declicked and restored this recording in 2016 and the results may be heard onSoundCloud.[10][11][6] Two further major 1950s developments were the origins of digital sound synthesis by computer, and ofalgorithmic compositionprograms beyond rote playback. Amongst other pioneers, the musical chemistsLejaren Hillerand Leonard Isaacson worked on a series of algorithmic composition experiments from 1956 to 1959, manifested in the 1957 premiere of theIlliac Suitefor string quartet.[12]Max Mathews at Bell Laboratories developed the influentialMUSIC Iprogram and its descendants, further popularising computer music through a 1963 article inScience.[13]The first professional composer to work with digital synthesis wasJames Tenney, who created a series of digitally synthesized and/or algorithmically composed pieces at Bell Labs using Mathews' MUSIC III system, beginning withAnalog #1 (Noise Study)(1961).[14][15]After Tenney left Bell Labs in 1964, he was replaced by composerJean-Claude Risset, who conducted research on the synthesis of instrumental timbres and composedComputer Suite from Little Boy(1968). 
Early computer-music programs typically did not run in real time, although the first experiments on CSIRAC and the Ferranti Mark 1 did operate in real time. From the late 1950s, with increasingly sophisticated programming, programs would run for hours or days, on multimillion-dollar computers, to generate a few minutes of music.[16][17] One way around this was to use a 'hybrid system' of digital control of an analog synthesiser; early examples were Max Mathews' GROOVE system (1969) and MUSYS by Peter Zinovieff (1969). Until then, computers had seen only partial use, for musical research into the substance and form of sound (convincing examples are those of Hiller and Isaacson in Urbana, Illinois, US; Iannis Xenakis in Paris; and Pietro Grossi in Florence, Italy).[18] In May 1967 the first experiments in computer music in Italy were carried out by the S 2F M studio in Florence[19] in collaboration with General Electric Information Systems Italy.[20] An Olivetti-General Electric GE 115 (Olivetti S.p.A.) was used by Grossi as a performer: three programmes were prepared for these experiments. The programmes were written by Ferruccio Zulian[21] and used by Pietro Grossi for playing Bach, Paganini, and Webern works and for studying new sound structures.[22] John Chowning's work on FM synthesis from the 1960s to the 1970s allowed much more efficient digital synthesis,[23] eventually leading to the development of the affordable FM synthesis-based Yamaha DX7 digital synthesizer, released in 1983.[24] Interesting sounds must have a fluidity and changeability that allows them to remain fresh to the ear.
In computer music this subtle ingredient is bought at a high computational cost, both in terms of the number of items requiring detail in a score and in the amount of interpretive work the instruments must produce to realize this detail in sound.[25] In Japan, experiments in computer music date back to 1962, whenKeio Universityprofessor Sekine andToshibaengineer Hayashi experimented with theTOSBAC[jp]computer. This resulted in a piece entitledTOSBAC Suite, influenced by theIlliac Suite. Later Japanese computer music compositions include a piece by Kenjiro Ezaki presented duringOsaka Expo '70and "Panoramic Sonore" (1974) by music critic Akimichi Takeda. Ezaki also published an article called "Contemporary Music and Computers" in 1970. Since then, Japanese research in computer music has largely been carried out for commercial purposes inpopular music, though some of the more serious Japanese musicians used large computer systems such as theFairlightin the 1970s.[26] In the late 1970s these systems became commercialized, including systems like theRoland MC-8 Microcomposer, where amicroprocessor-based system controls ananalog synthesizer, released in 1978.[26]In addition to the Yamaha DX7, the advent of inexpensive digitalchipsandmicrocomputersopened the door to real-time generation of computer music.[24]In the 1980s, Japanese personal computers such as theNEC PC-88came installed with FM synthesissound chipsand featuredaudio programming languagessuch asMusic Macro Language(MML) andMIDIinterfaces, which were most often used to producevideo game music, orchiptunes.[26]By the early 1990s, the performance of microprocessor-based computers reached the point that real-time generation of computer music using more general programs and algorithms became possible.[27] Advances in computing power and software for manipulation of digital media have dramatically affected the way computer music is generated and performed. 
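Chowning-style FM synthesis, mentioned above, is compact to express: a carrier sine wave whose phase is modulated by a second sine wave, with a modulation index controlling timbral richness. A minimal sketch, where the carrier and modulator frequencies and the index are arbitrary illustrative values:

```python
import math

def fm_sample(t, fc=440.0, fm=110.0, index=2.0, amp=1.0):
    """One sample of Chowning-style FM: a carrier at fc whose phase is
    modulated by a sinusoid at fm, with modulation depth `index`."""
    return amp * math.sin(2 * math.pi * fc * t
                          + index * math.sin(2 * math.pi * fm * t))

# Generate one second of audio at an 8 kHz sample rate.
RATE = 8000
signal = [fm_sample(n / RATE) for n in range(RATE)]
print(min(signal), max(signal))  # bounded by the amplitude
```

The efficiency Chowning exploited is visible here: two sine evaluations per sample yield a spectrum with many sidebands, which additive synthesis would need one oscillator per partial to produce.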
Current-generation microcomputers are powerful enough to perform very sophisticated audio synthesis using a wide variety of algorithms and approaches. Computer music systems and approaches are now ubiquitous, and so firmly embedded in the process of creating music that we hardly give them a second thought: computer-based synthesizers, digital mixers, and effects units have become so commonplace that use of digital rather than analog technology to create and record music is the norm, rather than the exception.[28] There is considerable activity in the field of computer music as researchers continue to pursue new and interesting computer-based synthesis, composition, and performance approaches. Throughout the world there are many organizations and institutions dedicated to the area of computer and electronic music study and research, including CCRMA (Center for Computer Research in Music and Acoustics, Stanford, USA), ICMA (International Computer Music Association), C4DM (Centre for Digital Music), IRCAM, GRAME, SEAMUS (Society for Electro-Acoustic Music in the United States), CEC (Canadian Electroacoustic Community), and a great number of institutions of higher learning around the world. Later, composers such as Gottfried Michael Koenig and Iannis Xenakis had computers generate the sounds of the composition as well as the score. Koenig produced algorithmic composition programs which were a generalization of his own serial composition practice. This is not exactly similar to Xenakis' work, as he used mathematical abstractions and examined how far he could explore these musically. Koenig's software translated the calculation of mathematical equations into codes which represented musical notation. This could be converted into musical notation by hand and then performed by human players. His programs Project 1 and Project 2 are examples of this kind of software.
Later, he extended the same kind of principles into the realm of synthesis, enabling the computer to produce the sound directly. SSP is an example of a program which performs this kind of function. All of these programs were produced by Koenig at the Institute of Sonology in Utrecht in the 1970s.[29] In the 2000s, Andranik Tangian developed a computer algorithm to determine the time event structures for rhythmic canons and rhythmic fugues, which were then "manually" worked out into the harmonic compositions Eine kleine Mathmusik I and Eine kleine Mathmusik II, performed by computer;[30][31] scores and recordings are available.[32] Computers have also been used in an attempt to imitate the music of great composers of the past, such as Mozart. A present exponent of this technique is David Cope, whose computer programs analyse works of other composers to produce new works in a similar style. Cope's best-known program is Emily Howell.[33][34][35] Melomics, a research project from the University of Málaga (Spain), developed a computer composition cluster named Iamus, which composes complex, multi-instrument pieces for editing and performance. Since its inception, Iamus has composed a full album in 2012, also named Iamus, which New Scientist described as "the first major work composed by a computer and performed by a full orchestra".[36] The group has also developed an API for developers to utilize the technology, and makes its music available on its website. Computer-aided algorithmic composition (CAAC, pronounced "sea-ack") is the implementation and use of algorithmic composition techniques in software. This label is derived from the combination of two labels, each too vague for continued use. The label computer-aided composition lacks the specificity of using generative algorithms. Music produced with notation or sequencing software could easily be considered computer-aided composition. The label algorithmic composition is likewise too broad, particularly in that it does not specify the use of a computer.
The term computer-aided, rather than computer-assisted, is used in the same manner as computer-aided design.[37] Machine improvisation uses computer algorithms to create improvisation on existing music materials. This is usually done by sophisticated recombination of musical phrases extracted from existing music, either live or pre-recorded. In order to achieve credible improvisation in a particular style, machine improvisation uses machine learning and pattern matching algorithms to analyze existing musical examples. The resulting patterns are then used to create new variations "in the style" of the original music, developing a notion of stylistic re-injection. This is different from other improvisation methods with computers that use algorithmic composition to generate new music without performing analysis of existing music examples.[38] Style modeling implies building a computational representation of the musical surface that captures important stylistic features from data. Statistical approaches are used to capture the redundancies in terms of pattern dictionaries or repetitions, which are later recombined to generate new musical data. Style mixing can be realized by analysis of a database containing multiple musical examples in different styles. Machine improvisation builds upon a long musical tradition of statistical modeling that began with Hiller and Isaacson's Illiac Suite for String Quartet (1957) and Xenakis' uses of Markov chains and stochastic processes. Modern methods include the use of lossless data compression for incremental parsing, prediction suffix trees, string searching and more.[39] Style mixing is possible by blending models derived from several musical sources, with the first style mixing done by S.
Dubnov in a piece NTrope Suite using a Jensen-Shannon joint source model.[40] Later the use of the factor oracle algorithm (basically, a factor oracle is a finite state automaton constructed in linear time and space in an incremental fashion)[41] was adopted for music by Assayag and Dubnov[42] and became the basis for several systems that use stylistic re-injection.[43] The first implementation of statistical style modeling was the LZify method in OpenMusic,[44] followed by the Continuator system, which implemented interactive machine improvisation that interpreted LZ incremental parsing in terms of Markov models and used it for real-time style modeling;[45] it was developed by François Pachet at Sony CSL Paris in 2002.[46][47] A Matlab implementation of Factor Oracle machine improvisation can be found as part of the Computer Audition toolbox. There is also an NTCC implementation of Factor Oracle machine improvisation.[48] OMax is a software environment developed at IRCAM. OMax uses OpenMusic and Max. It is based on research on stylistic modeling carried out by Gerard Assayag and Shlomo Dubnov and on research on improvisation with the computer by G. Assayag, M. Chemillier and G. Bloch (a.k.a. the OMax Brothers) in the IRCAM Music Representations group.[49] One of the problems in modeling audio signals with the factor oracle is the symbolization of features from continuous values to a discrete alphabet. This problem was solved in the Variable Markov Oracle (VMO), available as a Python implementation,[50] which uses an information rate criterion for finding the optimal or most informative representation.[51] The use of artificial intelligence to generate new melodies,[52] cover pre-existing music,[53] and clone artists' voices is a recent phenomenon that has been reported to disrupt the music industry.[54] Live coding[55] (sometimes known as 'interactive programming', 'on-the-fly programming',[56] or 'just in time programming') is the name given to the process of writing software in real time as part of a performance.
Recently it has been explored as a more rigorous alternative to laptop musicians who, live coders often feel, lack the charisma and pizzazz of musicians performing live.[57]
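The statistical style modeling described in this section, from the Illiac Suite's Markov chains onward, can be illustrated with a first-order Markov model: count note-to-note transitions in a source melody, then random-walk the transition table to generate new material "in the style" of the original. The source melody and the seeding below are invented for illustration; real systems such as the Continuator use richer variable-order models:

```python
import random
from collections import defaultdict

def train(notes):
    """Count first-order transitions between successive notes."""
    model = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        model[a].append(b)
    return model

def improvise(model, start, length, seed=0):
    """Random-walk the transition table to generate new material."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(model[out[-1]]))
    return out

source = ["C", "E", "G", "E", "C", "G", "C", "E"]  # invented source melody
model = train(source)
print(improvise(model, "C", 8))
```

Because choices are drawn from observed continuations, every generated transition also occurs somewhere in the source, which is the sense in which the output is "in the style" of the original.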
https://en.wikipedia.org/wiki/Computer_music
DALL-E,DALL-E 2, andDALL-E 3(stylisedDALL·E) aretext-to-image modelsdeveloped byOpenAIusingdeep learningmethodologies to generatedigital imagesfromnatural languagedescriptions known asprompts. The first version of DALL-E was announced in January 2021. In the following year, its successor DALL-E 2 was released. DALL-E 3 was released natively intoChatGPTfor ChatGPT Plus and ChatGPT Enterprise customers in October 2023,[1]with availability via OpenAI's API[2]and "Labs" platform provided in early November.[3]Microsoft implemented the model in Bing's Image Creator tool and plans to implement it into their Designer app.[4]With Bing's Image Creator tool,Microsoft Copilotruns on DALL-E 3.[5]In March 2025, DALL-E-3 was replaced in ChatGPT byGPT-4o's native image-generation capabilities.[6] DALL-E was revealed by OpenAI in a blog post on 5 January 2021, and uses a version ofGPT-3[7]modified to generate images. On 6 April 2022, OpenAI announced DALL-E 2, a successor designed to generate more realistic images at higher resolutions that "can combine concepts, attributes, and styles".[8]On 20 July 2022, DALL-E 2 entered into a beta phase with invitations sent to 1 million waitlisted individuals;[9]users could generate a certain number of images for free every month and may purchase more.[10]Access had previously been restricted to pre-selected users for a research preview due to concerns aboutethicsand safety.[11][12]On 28 September 2022, DALL-E 2 was opened to everyone and the waitlist requirement was removed.[13]In September 2023, OpenAI announced their latest image model, DALL-E 3, capable of understanding "significantly more nuance and detail" than previous iterations.[14]In early November 2022, OpenAI released DALL-E 2 as anAPI, allowing developers to integrate the model into their own applications.Microsoftunveiled their implementation of DALL-E 2 in their Designer app and Image Creator tool included inBingandMicrosoft Edge.[15]The API operates on a cost-per-image basis, 
with prices varying depending on image resolution. Volume discounts are available to companies working with OpenAI's enterprise team.[16] The software's name is a portmanteau of the names of the animated robot Pixar character WALL-E and the Spanish surrealist artist Salvador Dalí.[17][7] In February 2024, OpenAI began adding watermarks to DALL-E generated images, containing metadata in the C2PA (Coalition for Content Provenance and Authenticity) standard promoted by the Content Authenticity Initiative.[18] The first generative pre-trained transformer (GPT) model was initially developed by OpenAI in 2018,[19] using a Transformer architecture. The first iteration, GPT-1,[20] was scaled up to produce GPT-2 in 2019;[21] in 2020, it was scaled up again to produce GPT-3, with 175 billion parameters.[22][7][23] DALL-E has three components: a discrete VAE, an autoregressive decoder-only Transformer (12 billion parameters) similar to GPT-3, and a CLIP pair of image encoder and text encoder.[24] The discrete VAE can convert an image to a sequence of tokens, and conversely, convert a sequence of tokens back to an image. This is necessary as the Transformer does not directly process image data.[24] The input to the Transformer model is a tokenised image caption followed by tokenised image patches. The image caption is in English, tokenised by byte pair encoding (vocabulary size 16384), and can be up to 256 tokens long. Each image is a 256×256 RGB image, divided into a 32×32 grid of patches of 8×8 pixels each. Each patch is then converted by the discrete variational autoencoder to a token (vocabulary size 8192).[24] DALL-E was developed and announced to the public in conjunction with CLIP (Contrastive Language-Image Pre-training).[25] CLIP is a separate model based on contrastive learning that was trained on 400 million pairs of images with text captions scraped from the Internet.
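The Transformer's token budget described above can be checked with simple arithmetic. This sketch assumes the figures reported for the original DALL-E (a caption of up to 256 BPE tokens plus a 32×32 grid of discrete-VAE image tokens), so each image token corresponds to an 8×8-pixel region of the 256×256 input:

```python
# Token budget of the DALL-E Transformer input, from the figures above.
IMAGE_SIZE = 256            # pixels per side
GRID = 32                   # image tokens per side after the discrete VAE
PATCH = IMAGE_SIZE // GRID  # pixels per side covered by one token

caption_tokens = 256        # maximum BPE tokens for the caption
image_tokens = GRID * GRID  # tokens produced by the discrete VAE
total = caption_tokens + image_tokens

print(PATCH, image_tokens, total)  # → 8 1024 1280
```

This is why the discrete VAE matters: 1024 image tokens is a workable sequence length for an autoregressive Transformer, whereas 256×256×3 raw pixel values would not be.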
Its role is to "understand and rank" DALL-E's output by predicting which caption from a list of 32,768 captions randomly selected from the dataset (of which one was the correct answer) is most appropriate for an image.[26] A trained CLIP pair is used to filter a larger initial list of images generated by DALL-E to select the image that is closest to the text prompt.[24] DALL-E 2 uses 3.5 billion parameters, a smaller number than its predecessor.[24]Instead of an autoregressive Transformer, DALL-E 2 uses adiffusion modelconditioned on CLIP image embeddings, which, during inference, are generated from CLIP text embeddings by a prior model.[24]This is the same architecture as that ofStable Diffusion, released a few months later. While a technical report was written for DALL-E 3, it does not include training or implementation details of the model, instead focusing on the improved prompt following capabilities developed for DALL-E 3.[27] DALL-E can generate imagery in multiple styles, includingphotorealisticimagery,paintings, andemoji.[7]It can "manipulate and rearrange" objects in its images,[7]and can correctly place design elements in novel compositions without explicit instruction. 
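The CLIP-based filtering described above, in which candidate images are scored against the prompt and the closest match is kept, can be sketched as a cosine-similarity rerank. This is a toy illustration: the embedding vectors below are made up, whereas real CLIP embeddings come from the trained image and text encoders.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rerank(text_embedding, image_embeddings):
    """Return candidate-image indices, best match first,
    scored by cosine similarity to the text embedding."""
    scores = [cosine(text_embedding, img) for img in image_embeddings]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# Made-up embeddings: candidate 1 points the same way as the prompt.
prompt = [1.0, 0.0, 1.0]
candidates = [[0.0, 1.0, 0.0],   # unrelated image
              [2.0, 0.0, 2.0],   # same direction as the prompt
              [1.0, 1.0, 0.0]]   # partial match
print(rerank(prompt, candidates))  # [1, 2, 0]
```

In the actual system the generator produces many candidates and only the top-scoring ones are shown to the user.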
Thom Dunn writing forBoingBoingremarked that "For example, when asked to draw a daikon radish blowing its nose, sipping a latte, or riding a unicycle, DALL-E often draws the handkerchief, hands, and feet in plausible locations."[28]DALL-E showed the ability to "fill in the blanks" to infer appropriate details without specific prompts, such as adding Christmas imagery to prompts commonly associated with the celebration,[29]and appropriately placed shadows to images that did not mention them.[30]Furthermore, DALL-E exhibits a broad understanding of visual and design trends.[citation needed] DALL-E can produce images for a wide variety of arbitrary descriptions from various viewpoints[31]with only rare failures.[17]Mark Riedl, an associate professor at theGeorgia TechSchool of Interactive Computing, found that DALL-E could blend concepts (described as a key element of humancreativity).[32][33] Its visual reasoning ability is sufficient to solveRaven's Matrices(visual tests often administered to humans to measure intelligence).[34][35] DALL-E 3 follows complex prompts with more accuracy and detail than its predecessors, and is able to generate more coherent and accurate text.[36][14]DALL-E 3 is integrated into ChatGPT Plus.[14] Given an existing image, DALL-E 2 and DALL-E 3 can produce "variations" of the image as individual outputs based on the original, as well as edit the image to modify or expand upon it. The "inpainting" and "outpainting" abilities of these models use context from an image to fill in missing areas using amediumconsistent with the original, following a given prompt. For example, this can be used to insert a new subject into an image, or expand an image beyond its original borders.[37]According to OpenAI, "Outpainting takes into account the image’s existing visual elements — including shadows, reflections, and textures — to maintain the context of the original image."[38] DALL-E 2's language understanding has limits. 
It is sometimes unable to distinguish "A yellow book and a red vase" from "A red book and a yellow vase" or "A panda making latte art" from "Latte art of a panda".[39] It generates images of an astronaut riding a horse when presented with the prompt "a horse riding an astronaut".[40] It also fails to generate the correct images in a variety of circumstances. Requesting more than three objects, negation, numbers, and connected sentences may result in mistakes, and object features may appear on the wrong object.[31] Additional limitations include handling text (even with legible lettering, the output is almost invariably dream-like gibberish) and a limited capacity to address scientific information, such as astronomy or medical imagery.[41] DALL-E 2's reliance on public datasets influences its results and leads to algorithmic bias in some cases, such as generating higher numbers of men than women for requests that do not mention gender.[41] DALL-E 2's training data was filtered to remove violent and sexual imagery, but this was found to increase bias in some cases, such as reducing the frequency with which women were generated.[42] OpenAI hypothesise that this may be because women were more likely to be sexualised in the training data, which caused the filter to influence results.[42] In September 2022, OpenAI confirmed to The Verge that DALL-E invisibly inserts phrases into user prompts to address bias in results; for instance, "black man" and "Asian woman" are inserted into prompts that do not specify gender or race.[43] OpenAI claims to address concerns about potential "racy content" (nudity or other sexual content) in DALL-E 3 through input/output filters, blocklists, ChatGPT refusals, and model-level interventions.[44] However, DALL-E 3 continues to disproportionately represent people as White, female, and youthful. Users are able to somewhat remedy this through more specific prompts for image generation.
A concern about DALL-E 2 and similar image generation models is that they could be used to propagatedeepfakesand other forms of misinformation.[45][46]As an attempt to mitigate this, the software rejects prompts involving public figures and uploads containing human faces.[47]Prompts containing potentially objectionable content are blocked, and uploaded images are analysed to detect offensive material.[48]A disadvantage of prompt-based filtering is that it is easy to bypass using alternative phrases that result in a similar output. For example, the word "blood" is filtered, but "ketchup" and "red liquid" are not.[49][48] Another concern about DALL-E 2 and similar models is that they could causetechnological unemploymentfor artists, photographers, and graphic designers due to their accuracy and popularity.[50][51]DALL-E 3 is designed to block users from generating art in the style of currently-living artists.[14]While OpenAI states that images produced using these models do not require permission to reprint, sell, or merchandise,[52]legal concerns have been raised regarding who owns those images.[53][54] In 2023 Microsoft pitched theUnited States Department of Defenseto use DALL-E models to trainbattlefield management systems.[55]In January 2024OpenAI removed its blanket banon military and warfare use from its usage policies.[56] Most coverage of DALL-E focuses on a small subset of "surreal"[25]or "quirky"[32]outputs. 
DALL-E's output for "an illustration of a baby daikon radish in a tutu walking a dog" was mentioned in pieces fromInput,[57]NBC,[58]Nature,[59]and other publications.[7][60][61]Its output for "an armchair in the shape of an avocado" was also widely covered.[25][33] ExtremeTechstated "you can ask DALL-E for a picture of a phone or vacuum cleaner from a specified period of time, and it understands how those objects have changed".[29]Engadgetalso noted its unusual capacity for "understanding how telephones and other objects change over time".[30] According toMIT Technology Review, one of OpenAI's objectives was to "give language models a better grasp of the everyday concepts that humans use to make sense of things".[25] Wall Street investors have had a positive reception of DALL-E 2, with some firms thinking it could represent a turning point for a future multi-trillion dollar industry. By mid-2019, OpenAI had already received over $1 billion in funding fromMicrosoftand Khosla Ventures,[62][63][64]and in January 2023, following the launch of DALL-E 2 and ChatGPT, received an additional $10 billion in funding from Microsoft.[65] Japan'sanimecommunity has had a negative reaction to DALL-E 2 and similar models.[66][67][68]Two arguments are typically presented by artists against the software. The first is that AI art is not art because it is not created by a human with intent. "The juxtaposition of AI-generated images with their own work is degrading and undermines the time and skill that goes into their art. AI-driven image generation tools have been heavily criticized by artists because they are trained on human-made art scraped from the web."[9]The second is the trouble withcopyright lawand data text-to-image models are trained on. OpenAI has not released information about what dataset(s) were used to train DALL-E 2, inciting concern from some that the work of artists has been used for training without permission. 
Copyright laws surrounding these topics are inconclusive at the moment.[10] After integrating DALL-E 3 into Bing Chat and ChatGPT, Microsoft and OpenAI faced criticism for excessive content filtering, with critics saying DALL-E had been "lobotomized."[69]The flagging of images generated by prompts such as "man breaks server rack with sledgehammer" was cited as evidence. Over the first days of its launch, filtering was reportedly increased to the point where images generated by some of Bing's own suggested prompts were being blocked.[69][70]TechRadarargued that leaning too heavily on the side of caution could limit DALL-E's value as a creative tool.[70] Since OpenAI has not releasedsource codefor any of the three models, there have been several attempts to createopen-sourcemodels offering similar capabilities.[71][72]Released in 2022 onHugging Face's Spaces platform, Craiyon (formerly DALL-E Mini until a name change was requested by OpenAI in June 2022) is an AI model based on the original DALL-E that was trained on unfiltered data from the Internet. It attracted substantial media attention in mid-2022, after its release due to its capacity for producing humorous imagery.[73][74][75]Another example of an open source text-to-image model isStable Diffusionby Stability AI.[76]
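As a quick sanity check on the sequence layout given in the architecture section above (a 256-token caption budget plus a 32×32 grid of dVAE image tokens over a 256×256 image), the arithmetic can be spelled out:

```python
# Sequence layout of DALL-E's Transformer input, using the figures
# from the architecture section above.
CAPTION_MAX_TOKENS = 256   # BPE caption tokens (vocabulary size 16384)
IMAGE_SIZE = 256           # input images are 256x256 RGB
GRID = 32                  # the dVAE emits a 32x32 token grid (vocabulary size 8192)

patch_pixels = IMAGE_SIZE // GRID   # pixels of the image covered per token
image_tokens = GRID * GRID          # tokens produced by the dVAE per image
max_sequence = CAPTION_MAX_TOKENS + image_tokens

print(patch_pixels, image_tokens, max_sequence)  # 8 1024 1280
```

So each dVAE token stands for an 8×8 pixel patch, and the Transformer models a sequence of at most 1280 tokens per caption-image pair.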
https://en.wikipedia.org/wiki/DALL-E
Generative Pre-trained Transformer 3(GPT-3) is alarge language modelreleased byOpenAIin 2020. Like its predecessor,GPT-2, it is a decoder-only[2]transformer modelof deep neural network, which supersedes recurrence and convolution-based architectures with a technique known as "attention".[3]This attention mechanism allows the model to focus selectively on segments of input text it predicts to be most relevant.[4]GPT-3 has 175 billionparameters, each with 16-bit precision, requiring 350GB of storage since each parameter occupies 2 bytes. It has acontext windowsize of 2048tokens, and has demonstrated strong "zero-shot" and "few-shot" learning abilities on many tasks.[2] On September 22, 2020,Microsoftannounced that it had licensed GPT-3 exclusively. Others can still receive output from its public API, but only Microsoft has access to the underlying model.[5] According toThe Economist, improved algorithms, more powerful computers, and a recent increase in the amount of digitized material have fueled a revolution inmachine learning. New techniques in the 2010s resulted in "rapid improvements in tasks", including manipulating language.[6] Software models are trained to learn by using thousands or millions of examples in a "structure... loosely based on the neural architecture of the brain".[6]One architecture used innatural language processing(NLP) is aneural networkbased on adeep learningmodel that was introduced in 2017—thetransformerarchitecture.[7]There are a number of NLP systems capable of processing, mining, organizing, connecting and contrasting textual input, as well as correctly answering questions.[8] On June 11, 2018, OpenAI researchers and engineers published a paper introducing the firstgenerative pre-trained transformer(GPT)—a type ofgenerativelarge language modelthat is pre-trained with an enormous and diversetext corpusindatasets, followed by discriminativefine-tuningto focus on a specific task. 
GPT models are transformer-based deep-learning neural network architectures. Previously, the best-performing neural NLP models commonly employed supervised learning from large amounts of manually-labeled data, which made it prohibitively expensive and time-consuming to train extremely large language models.[2] The first GPT model was known as "GPT-1," and it was followed by "GPT-2" in February 2019. Created as a direct scale-up of its predecessor, GPT-2 had both its parameter count and dataset size increased by a factor of 10. It had 1.5 billion parameters, and was trained on a dataset of 8 million web pages.[9] In February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG), which they claimed was the "largest language model ever published at 17 billion parameters."[10] It performed better than any other language model at a variety of tasks, including summarizing texts and answering questions.
On May 28, 2020, anarXivpreprint by a group of 31 engineers and researchers at OpenAI described the achievement and development of GPT-3, a third-generation "state-of-the-art language model".[1][12]The team increased the capacity of GPT-3 by over two orders of magnitude from that of its predecessor, GPT-2,[13]making GPT-3 the largest non-sparse language model to date.[1]:14[14]Because GPT-3 is structurally similar to its predecessors,[1]its greater accuracy is attributed to its increased capacity and greater number of parameters.[15]GPT-3's capacity is ten times larger than that of Microsoft's Turing NLG, the next largest NLP model known at the time.[12] Lambdalabs estimated a hypothetical cost of around $4.6 million US dollars and 355 years to train GPT-3 on a singleGPUin 2020,[16]with lower actual training time by using more GPUs in parallel. Sixty percent of the weighted pre-training dataset for GPT-3 comes from a filtered version ofCommon Crawlconsisting of 410 billionbyte-pair-encodedtokens. Fuzzy deduplication usedApache Spark'sMinHashLSH.[1]: 9Other sources are 19 billion tokens from WebText2 representing 22% of the weighted total, 12 billion tokens from Books1 representing 8%, 55 billion tokens from Books2 representing 8%, and 3 billion tokens from Wikipedia representing 3%.[1]: 9GPT-3 was trained on hundreds of billions of words and is also capable of coding inCSS,JSX, andPython, among others.[citation needed] Since GPT-3's training data was all-encompassing, it does not require further training for distinct language tasks.[citation needed]The training data contains occasional toxic language and GPT-3 occasionally generates toxic language as a result of mimicking its training data. A study from theUniversity of Washingtonfound that GPT-3 produced toxic language at a toxicity level comparable to the similar natural language processing models ofGPT-2and CTRL. OpenAI has implemented several strategies to limit the amount of toxic language generated by GPT-3. 
As a result, GPT-3 produced less toxic language compared to its predecessor model, GPT-1, although it produced toxic generations more often, and at a higher level of toxicity, than CTRL Wiki, a language model trained entirely on Wikipedia data.[17] On June 11, 2020, OpenAI announced that users could request access to its user-friendly GPT-3 API (a "machine learning toolset") to help OpenAI "explore the strengths and limits" of this new technology.[18][19] The invitation described how this API had a general-purpose "text in, text out" interface that can complete almost "any English language task", instead of the usual single use-case.[18] According to one user, who had access to a private early release of the OpenAI GPT-3 API, GPT-3 was "eerily good" at writing "amazingly coherent text" with only a few simple prompts.[20] In an initial experiment, 80 US subjects were asked to judge whether short ~200-word articles were written by humans or GPT-3. The participants judged correctly 52% of the time, doing only slightly better than random guessing.[1] On November 18, 2021, OpenAI announced that enough safeguards had been implemented that access to its API would be unrestricted.[21] OpenAI provided developers with a content moderation tool that helps them abide by OpenAI's content policy.[22] On January 27, 2022, OpenAI announced that its newest GPT-3 language models (collectively referred to as InstructGPT) were now the default language model used on their API.
According to OpenAI, InstructGPT produced content that was better aligned to user intentions by following instructions better, generating fewer made-up facts, and producing somewhat less toxic content.[23] Because GPT-3 can "generate news articles which human evaluators have difficulty distinguishing from articles written by humans,"[12] GPT-3 has the "potential to advance both the beneficial and harmful applications of language models."[1]: 34 In their May 28, 2020 paper, the researchers described in detail the potential "harmful effects of GPT-3"[12] which include "misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting".[1] The authors draw attention to these dangers to call for research on risk mitigation.[1]: 34 GPT-3 is capable of performing zero-shot and few-shot learning (including one-shot).[1] In June 2022, Almira Osmanovic Thunström wrote that GPT-3 was the primary author on an article on itself, that they had submitted it for publication,[24] and that it had been pre-published while waiting for completion of its review.[25] There are many models in the GPT-3 family, some serving different purposes than others. In the initial research paper, OpenAI mentioned 8 different sizes of the main GPT-3 model (Table 2.1). Half of the models are accessible through the API, namely GPT-3-medium, GPT-3-xl, GPT-3-6.7B and GPT-3-175b, which are referred to as ada, babbage, curie and davinci respectively. While the size of the API models was not originally disclosed by OpenAI, EleutherAI announced the mapping between model sizes and API names in May 2021.[26] These model sizes were later confirmed by OpenAI,[27] but the sizes of subsequent models have not been disclosed. Later models in the family include babbage-002, davinci-002, code-davinci-002, gpt-3.5-turbo-instruct, and gpt-3.5-turbo-16k. Generative Pre-trained Transformer 3.5 (GPT-3.5) is a subclass of GPT-3 models created by OpenAI in 2022.
On March 15, 2022, OpenAI made available new versions of GPT-3 andCodexin its API with edit and insert capabilities under the names "text-davinci-002" and "code-davinci-002".[28]These models were described as more capable than previous versions and were trained on data up to June 2021.[29]On November 28, 2022, OpenAI introduced text-davinci-003.[30]On November 30, 2022, OpenAI began referring to these models as belonging to the "GPT-3.5" series,[29]and releasedChatGPT, which wasfine-tunedfrom a model in the GPT-3.5 series.[31]OpenAI does not include GPT-3.5 in GPT-3.[32] There are three models:[33] On April 10, 2023,OpenAIintroduced a new variant of its GPT-3.5 series model, known as GPT-3.5 with Browsing (ALPHA).[34]This updated model was described to build upon the capabilities of its predecessors "text-davinci-002" and "code-davinci-002".[35]The GPT-3.5 with Browsing (ALPHA) model incorporated the ability to access and browse online information. This has led to more accurate and up-to-date responses to user queries.[34] The GPT-3.5 with Browsing (ALPHA) model has been trained on data up to September 2021, giving it more information compared to previous GPT-3.5 models, which were trained on data up until June 2021. The model attempted to provide developers and users with an advanced natural language processing tool that can effectively retrieve and synthesize online information.[34] To enable browsing capabilities, OpenAI implemented a newAPIthat allows the GPT-3.5 with Browsing (ALPHA) model to access selected online resources during operation.[36]This feature allows users to ask questions or request information with the expectation that the model will deliver updated, accurate, and relevant answers based on the latest online sources available to it. On April 27, 2023, OpenAI made the GPT-3.5 with Browsing (ALPHA) model publicly available to GPT Plus users. 
This allowed more people access to its new features.[36] InstructGPT is a fine-tuned version of GPT-3.5 trained on a dataset of human-written instructions.[37] GPT-3's builder, OpenAI, was initially founded as a non-profit in 2015.[62] In 2019, OpenAI broke from its usual open-source standards by not publicly releasing GPT-3's predecessor model, citing concerns that the model could facilitate the propagation of fake news. OpenAI eventually released a version of GPT-2 that was 8% of the original model's size.[63] In the same year, OpenAI restructured to be a for-profit company.[64] In 2020, Microsoft announced the company had exclusive licensing of GPT-3 for Microsoft's products and services following a multi-billion dollar investment in OpenAI. The agreement permits OpenAI to offer a public-facing API such that users can send text to GPT-3 to receive the model's output, but only Microsoft will have access to GPT-3's source code.[5] Large language models, such as GPT-3, have come under criticism from a few of Google's AI ethics researchers for the environmental impact of training and storing the models, detailed in a paper co-authored by Timnit Gebru and Emily M.
Bender in 2021.[65] The growing[when?] use of automated writing technologies based on GPT-3 and other language generators has raised concerns regarding academic integrity[66] and raised the stakes of how universities and schools will gauge what constitutes academic misconduct such as plagiarism.[67] OpenAI's GPT series was built with data from the Common Crawl dataset,[68] a conglomerate of copyrighted articles, internet posts, web pages, and books scraped from 60 million domains over a period of 12 years. TechCrunch reports this training data includes copyrighted material from the BBC, The New York Times, Reddit, the full text of online books, and more.[69] In its response to a 2019 Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation from the United States Patent and Trademark Office (USPTO), OpenAI argued that "Under current law, training AI systems [such as its GPT models] constitutes fair use," but that "given the lack of case law on point, OpenAI and other AI developers like us face substantial legal uncertainty and compliance costs."[70]
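As a back-of-the-envelope illustration of how the weighted pre-training mix described in the training-data section translates into repetition of each corpus, the weights and token counts can be converted into effective epochs. The ~300 billion total elapsed training tokens is an assumption taken from the GPT-3 paper, not from this article, so the figures are approximate:

```python
# Corpus sizes (billions of BPE tokens) and sampling weights from the
# training-data section. TOTAL_TOKENS is an assumed figure (~300B).
TOTAL_TOKENS = 300  # billions of tokens seen during training (assumption)

datasets = {                 # name: (size in billions of tokens, weight)
    "Common Crawl": (410, 0.60),
    "WebText2":     (19,  0.22),
    "Books1":       (12,  0.08),
    "Books2":       (55,  0.08),
    "Wikipedia":    (3,   0.03),
}

# Effective epochs: how many times each corpus is seen on average
# = weight * total training tokens / corpus size.
epochs = {name: w * TOTAL_TOKENS / size for name, (size, w) in datasets.items()}
for name, e in sorted(epochs.items(), key=lambda kv: -kv[1]):
    print(f"{name:13s} ~{e:.2f} epochs")
```

The point of the weighting is visible in the result: small, high-quality corpora such as Wikipedia are repeated several times, while Common Crawl is seen less than once in full.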
https://en.wikipedia.org/wiki/GPT-3
Human image synthesis is technology that can be applied to make believable and even photorealistic renditions[1][2] of human likenesses, moving or still. It has effectively existed since the early 2000s. Many films using computer-generated imagery have featured synthetic images of human-like characters digitally composited onto the real or other simulated film material. Towards the end of the 2010s, deep learning artificial intelligence was applied to synthesize images and video that look like humans, without need for human assistance once the training phase has been completed, whereas the old-school 7D route required massive amounts of human work. In 1999, Paul Debevec et al. of USC did the first known reflectance capture over the human face with their extremely simple light stage. They presented their method and results in SIGGRAPH 2000.[5] The scientific breakthrough required finding the subsurface light component (the simulation models are glowing from within slightly), which can be found using the knowledge that light reflected from the oil-to-air layer retains its polarization while the subsurface light loses its polarization. So, equipped only with a movable light source, a movable video camera, two polarizers and a computer program doing extremely simple math, the last piece required to reach photorealism was acquired.[5] For a believable result, both light reflected from skin (BRDF) and light scattered within the skin (a special case of BTDF), which together make up the BSDF, must be captured and simulated. The whole process of making digital look-alikes, i.e. characters so lifelike and realistic that they can be passed off as pictures of humans, is a very complex task, as it requires photorealistically modeling, animating, cross-mapping, and rendering the soft body dynamics of the human appearance. Synthesis with an actor and suitable algorithms is applied using powerful computers.
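The polarization observation described above (specular reflection off the oil-to-air layer keeps its polarization, while subsurface light loses it) yields a very simple per-pixel separation. The sketch below is illustrative, with made-up pixel values; treating depolarized light as passing either polarizer at half intensity is the standard idealisation:

```python
def separate(parallel, cross):
    """Split two polarized captures of the same pixel into specular and
    diffuse (subsurface) components.

    cross    : capture through a polarizer crossed with the light source;
               only depolarized subsurface light passes, at half intensity.
    parallel : capture through a parallel polarizer; specular reflection
               plus the other half of the subsurface light.
    """
    diffuse = 2.0 * cross            # recover both halves of the subsurface light
    specular = parallel - cross      # what only the parallel capture saw
    return specular, diffuse

# Made-up pixel intensities for illustration.
spec, diff = separate(parallel=0.75, cross=0.25)
print(spec, diff)  # 0.5 0.5
```

Repeating this per pixel under many light directions is what allows the reflected (BRDF) and subsurface (BTDF) parts of the skin's BSDF to be modeled separately.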
The actor's part in the synthesis is to take care of mimicking human expressions in still picture synthesizing and also human movement in motion picture synthesizing. Algorithms are needed to simulate laws of physics and physiology and to map the models and their appearance, movements and interaction accordingly. Often both physics/physiology-based (i.e. skeletal animation) and image-based modeling and rendering are employed in the synthesis part. Hybrid models employing both approaches have shown the best results in realism and ease of use. Morph target animation reduces the workload by giving higher-level control, where different facial expressions are defined as deformations of the model, which allows facial expressions to be tuned intuitively. Morph target animation can then morph the model between different defined facial expressions or body poses without much need for human intervention. Displacement mapping plays an important part in getting a realistic result with fine detail of skin such as pores and wrinkles as small as 100 µm. In the late 2010s, machine learning, and more precisely generative adversarial networks (GAN), were used by NVIDIA to produce random yet photorealistic human-like portraits. The system, named StyleGAN, was trained on a database of 70,000 images from the image depository website Flickr.
The source code was made public onGitHubin 2019.[32]Outputs of the generator network from random input were made publicly available on a number of websites.[33][34] Similarly, since 2018,deepfaketechnology has allowed GANs to swap faces between actors; combined with the ability to fake voices, GANs can thus generate fake videos that seem convincing.[35] Main applications fall within the domains ofstock photography,synthetic datasets,virtual cinematography, computer andvideo gamesandcovertdisinformationattacks.[36][34]Some facial-recognition AI use images generated by other AI assynthetic datafor training.[37] Furthermore, some research suggests that it can havetherapeutic effectsas "psychologistsandcounselorshave also begun usingavatarsto deliver therapy to clients who havephobias, a history oftrauma, addictions,Asperger’s syndromeorsocial anxiety."[38]The strong memory imprint and brain activation effects caused by watching a digital look-alike avatar of yourself is dubbed theDoppelgängereffect.[38]The doppelgänger effect can heal when covert disinformation attack is exposed as such to the targets of the attack. 
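Returning to the animation pipeline described earlier: morph-target animation reduces to adding weighted per-vertex offsets from a neutral mesh. A minimal sketch with hypothetical one-dimensional vertex data (real meshes use 3-D vertex positions, but the arithmetic is identical per coordinate):

```python
def blend(base, targets, weights):
    """Morph-target (blendshape) evaluation: start from the neutral mesh
    and add each target's offset from the neutral, scaled by its weight."""
    out = list(base)
    for target, w in zip(targets, weights):
        for i, (b, t) in enumerate(zip(base, target)):
            out[i] += w * (t - b)
    return out

# Hypothetical vertex positions for a neutral face and two expressions.
neutral = [0.0, 0.0, 0.0]
smile   = [0.0, 1.0, 0.0]    # raises the middle vertex
frown   = [0.0, -2.0, 0.0]   # lowers it twice as far

# A half smile and a quarter frown cancel exactly in this toy data.
print(blend(neutral, [smile, frown], [0.5, 0.25]))  # [0.0, 0.0, 0.0]
```

This is why morph targets give intuitive higher-level control: an animator adjusts a handful of weights rather than thousands of vertex positions.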
Speech synthesis has been verging on being completely indistinguishable from a recording of a real human's voice since the 2016 introduction of the voice editing and generation software Adobe Voco, a prototype slated to be a part of the Adobe Creative Suite, and DeepMind WaveNet, a prototype from Google.[39] The ability to steal and manipulate other people's voices raises obvious ethical concerns.[40] At the 2018 Conference on Neural Information Processing Systems (NeurIPS), researchers from Google presented the work 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis', which transfers learning from speaker verification to achieve text-to-speech synthesis that can be made to sound almost like anybody from a speech sample of only 5 seconds.[41] Sourcing images for AI training raises a question of privacy, as the people who are used for training didn't consent.[42] Digital sound-alike technology found its way into the hands of criminals: in 2019, Symantec researchers knew of 3 cases where the technology had been used for crime.[43][44] This, coupled with the fact that (as of 2016) techniques which allow near real-time counterfeiting of facial expressions in existing 2D video have been believably demonstrated, increases the stress on the disinformation situation.[15]
https://en.wikipedia.org/wiki/Human_image_synthesis
"AI slop", often simply "slop", is a derogatory term for low-quality media, including writing and images, made usinggenerative artificial intelligencetechnology, characterized by an inherent lack of effort, logic, or purpose.[1][4][5]Coined in the 2020s, the term has a pejorative connotation akin to "spam".[4] It has been variously defined as "digital clutter", "filler content produced by AI tools that prioritize speed and quantity over substance and quality",[6]and "shoddy or unwanted AI content insocial media, art, books and, increasingly, in search results".[7] Jonathan Gilmore, a philosophy professor at theCity University of New York, describes the "incredibly banal, realistic style" of AI slop as being "very easy to process".[8] As earlylarge language models(LLMs) andimage diffusion modelsaccelerated the creation of high-volume but low-quality written content and images, discussion commenced among journalists and on social platforms for the appropriate term for the influx of material. Terms proposed included "AI garbage", "AI pollution", and "AI-generated dross".[5]Early uses of the term "slop" as a descriptor for low-grade AI material apparently came in reaction to the release of AI image generators in 2022. 
Its early use has been noted among4chan,Hacker News, andYouTubecommentators as a form of in-groupslang.[7] The British computer programmerSimon Willisonis credited with being an early champion of the term "slop" in the mainstream,[1][7]having used it on his personal blog in May 2024.[9]However, he has said it was in use long before he began pushing for the term.[7] The term gained increased popularity in the second quarter of 2024 in part because ofGoogle's use of itsGeminiAI model to generate responses to search queries,[7]and was widely criticized in media headlines during the fourth quarter of 2024.[1][4] Research found that training LLMs on slop causesmodel collapse: a consistent decrease in the lexical, syntactic, and semantic diversity of the model outputs through successive iterations, notably remarkable for tasks demanding high levels of creativity.[10]AI slop is similarly produced when the same content is continuously refined, paraphrased, or reprocessed through LLMs, with each output becoming the input for the next iteration. Research has shown that this process causes information to gradually distort as it passes through a chain of LLMs, a phenomenon reminiscent of a classic communication exercise known as thetelephone game.[11] AI image and video slop proliferated on social media in part because it was revenue-generating for its creators onFacebookandTikTok, with the issue affecting Facebook most notably. 
This incentivizes individuals fromdeveloping countriesto create images that appeal to audiences in the United States which attract higher advertising rates.[12][13][14] The journalist Jason Koebler speculated that the bizarre nature of some of the content may be due to the creators using Hindi, Urdu, and Vietnamese prompts (languages which are underrepresented in the model'straining data), or using erraticspeech-to-textmethods to translate their intentions into English.[12] Speaking toNew Yorkmagazine, a Kenyan creator of slop images described givingChatGPTprompts such as "WRITE ME 10 PROMPT picture OF JESUS WHICH WILLING BRING HIGH ENGAGEMENT ON FACEBOOK [sic]", and then feeding those created prompts into atext-to-imageAI service such asMidjourney.[4] In August 2024,The Atlanticnoted that AI slop was becoming associated with the political right in the United States, who were using it forshitpostingandengagement farmingon social media, with the technology offering "cheap, fast, on-demand fodder for content".[15] AI slop is frequently used in political campaigns in an attempt at gaining attention throughcontent farming.[16]In August 2024, the American politicianDonald Trumpposted a series of AI-generated images on his social media platform,Truth Social, that portrayed fans of the singerTaylor Swiftin "Swifties for Trump" T-shirts, as well as a photo of the singer herself appearing to endorseTrump's 2024 presidential campaign. 
The images originated from the conservative Twitter account @amuse, which posted numerous AI slop images in the run-up to the 2024 United States elections; these were shared by other high-profile figures within the American Republican Party, such as Elon Musk, who has publicly endorsed the use of generative AI, furthering this association.[17] In the aftermath of Hurricane Helene in the United States, members of the Republican Party circulated an AI-generated image of a young girl holding a puppy in a flood, and used it as evidence of the failure of President Joe Biden to respond to the disaster.[18][3] Some, like Amy Kremer, shared the image on social media even while acknowledging that it was not genuine.[19][20]

In November 2024, Coca-Cola used artificial intelligence to create three commercials as part of its annual holiday campaign. These videos were immediately met with negative reception from both casual viewers and artists;[21] the animator Alex Hirsch, the creator of Gravity Falls, criticized the company's decision not to employ human artists to create the commercials.[22] In response to the negative feedback, the company defended its decision to use generative artificial intelligence, stating that "Coca-Cola will always remain dedicated to creating the highest level of work at the intersection of human creativity and technology".[23]

In March 2025, Paramount Pictures was criticized for using AI scripting and narration in an Instagram video promoting the film Novocaine.[24] The ad uses a robotic AI voice in a style similar to low-quality AI spam videos produced by content farms. A24 received similar backlash for releasing a series of AI-generated posters for the 2024 film Civil War.
One poster appears to depict a group of soldiers in a tank-like raft preparing to fire on a large swan, an image which does not resemble the events of the film.[25][26] In the same month, Activision posted various advertisements and posters for fake video games such as "Guitar Hero Mobile", "Crash Bandicoot: Brawl", and "Call of Duty: Zombie Defender", all made using generative AI, on platforms such as Facebook and Instagram; many labelled these posts as AI slop.[27] Activision later stated that the posts were intended to act as a survey of interest in possible titles.[28] The Italian brainrot AI trend was widely adopted by advertisers seeking to appeal to younger audiences.[29]

Fantastical promotional graphics for the 2024 Willy's Chocolate Experience event, characterized as "AI-generated slop",[30] misled audiences into attending an event that was held in a lightly decorated warehouse. Tickets were marketed through Facebook advertisements showing AI-generated imagery, with no genuine photographs of the venue.[31]

In October 2024, thousands of people were reported to have assembled for a non-existent Halloween parade in Dublin as a result of a listing on an aggregation listings website, MySpiritHalloween.com, which used AI-generated content.[32][33] The listing went viral on TikTok and Instagram.[34] While a similar parade had been held in Galway, and Dublin had hosted parades in prior years, there was no parade in Dublin in 2024.[33] One analyst characterized the website, which appeared to use AI-generated staff pictures, as likely using artificial intelligence "to create content quickly and cheaply where opportunities are found".[35] The site's owner said that "We asked ChatGPT to write the article for us, but it wasn't ChatGPT by itself." In the past the site had removed non-existent events when contacted by their venues, but in the case of the Dublin parade the site owner said that "no one reported that this one wasn't going to happen".
MySpiritHalloween.com updated its page to say that the parade had been "canceled" when it became aware of the issue.[36]

Online booksellers and library vendors now carry many titles that are written by AI and are not curated into collections by librarians. The digital media provider Hoopla, which supplies libraries with ebooks and downloadable content, offers generative AI books with fictional authors and dubious quality, which cost libraries money when checked out by unsuspecting patrons.[37]

The 2024 video game Call of Duty: Black Ops 6 includes assets generated by artificial intelligence. Since the game's initial release, many players accused Treyarch and Raven Software of using AI to create in-game assets, including loading screens, emblems, and calling cards. A particular example was a loading screen for the zombies game mode that depicted "Necroclaus", a zombified Santa Claus with six fingers on one hand, an image which also had other irregularities.[38] The previous entry in the Call of Duty franchise was also accused of selling AI-generated cosmetics.[39] In February 2025, Activision disclosed Black Ops 6's usage of generative artificial intelligence to comply with Valve's policies on AI-generated or assisted products on Steam. Activision states on the game's product page on Steam that "Our team uses generative AI tools to help develop some in game assets."[40]

Foamstars, a multiplayer third-person shooter released by Square Enix in 2024, features in-game music with cover art that was generated using Midjourney.
Square Enix confirmed the use of AI, but defended the decision, saying that the company wanted to "experiment" with artificial intelligence technologies and claiming that the generated assets make up "about 0.01% or even less" of game content.[41][42][43] Previously, on January 1, 2024, Square Enix president Takashi Kiryu stated in a new year letter that the company would be "aggressive in applying AI and other cutting-edge technologies to both [their] content development and [their] publishing functions".[44][45]

In 2024, Rovio Entertainment released a demo of a mobile game called Angry Birds: Block Quest on Android. The game featured AI-generated images for loading screens and backgrounds.[46] It was heavily criticized by players, who called it shovelware and disapproved of Rovio's use of AI images.[47][48] It was eventually discontinued and removed from the Play Store.

Some films have received backlash for including AI-generated content. The film Late Night with the Devil was notable for its use of AI, which some criticized as being AI slop.[49][50] Several low-quality AI-generated images were used as interstitial title cards, with one image featuring a skeleton with inaccurate bone structure and poorly generated fingers that appear disconnected from its hands.[51]

Some streaming services such as Amazon Prime Video have used AI to generate posters and thumbnail images in a manner that can be described as slop. A low-quality AI poster was used for the 1922 film Nosferatu, depicting Count Orlok in a way that does not resemble his look in the film.[52] A thumbnail image for 12 Angry Men on Amazon Freevee used AI to depict 19 men with smudged faces, none of whom appeared to bear any similarity to the characters in the film.[53][54] Additionally, some viewers have noticed that many plot descriptions appear to be generated by AI, which some people have characterized as slop. One synopsis briefly listed on the site for the film Dog Day Afternoon read: "A man takes hostages at a bank in Brooklyn.
Unfortunately I do not have enough information to summarize further within the provided guidelines."[55] In one case, Deutsche Telekom removed a series from its media offering after viewers complained about the poor quality of its monotonous German voice dubbing (translated from the original Polish), which was found to have been produced with AI.[56]
https://en.wikipedia.org/wiki/Slop_(artificial_intelligence)
A text-to-image model is a machine learning model which takes an input natural language description and produces an image matching that description. Text-to-image models began to be developed in the mid-2010s during the beginnings of the AI boom, as a result of advances in deep neural networks. In 2022, the output of state-of-the-art text-to-image models—such as OpenAI's DALL-E 2, Google Brain's Imagen, Stability AI's Stable Diffusion, and Midjourney—began to be considered to approach the quality of real photographs and human-drawn art.

Text-to-image models are generally latent diffusion models, which combine a language model, which transforms the input text into a latent representation, and a generative image model, which produces an image conditioned on that representation. The most effective models have generally been trained on massive amounts of image and text data scraped from the web.[1]

Before the rise of deep learning, attempts to build text-to-image models were limited to collages assembled by arranging existing component images, such as from a database of clip art.[2][3] The inverse task, image captioning, was more tractable, and a number of image-captioning deep learning models came prior to the first text-to-image models.[4]

The first modern text-to-image model, alignDRAW, was introduced in 2015 by researchers from the University of Toronto. alignDRAW extended the previously introduced DRAW architecture (which used a recurrent variational autoencoder with an attention mechanism) to be conditioned on text sequences.[4] Images generated by alignDRAW were small in resolution (32×32 pixels, attained from resizing) and were considered to be "low in diversity". The model was able to generalize to objects not represented in the training data (such as a red school bus) and appropriately handled novel prompts such as "a stop sign is flying in blue skies", suggesting that it was not merely "memorizing" data from the training set.[4][6]

In 2016, Reed, Akata, Yan et al.
became the first to use generative adversarial networks for the text-to-image task.[6][7] With models trained on narrow, domain-specific datasets, they were able to generate "visually plausible" images of birds and flowers from text captions like "an all black bird with a distinct thick, rounded bill". A model trained on the more diverse COCO (Common Objects in Context) dataset produced images which were "from a distance... encouraging", but which lacked coherence in their details.[6] Later systems include VQGAN-CLIP,[8] XMC-GAN, and GauGAN2.[9]

One of the first text-to-image models to capture widespread public attention was OpenAI's DALL-E, a transformer system announced in January 2021.[10] A successor capable of generating more complex and realistic images, DALL-E 2, was unveiled in April 2022,[11] followed by Stable Diffusion, which was publicly released in August 2022.[12] In August 2022, text-to-image personalization was introduced, which allows the model to be taught a new concept using a small set of images of a new object that was not included in the training set of the text-to-image foundation model. This is achieved by textual inversion, namely, finding a new text term that corresponds to these images.

Following other text-to-image models, language-model-powered text-to-video platforms such as Runway, Make-A-Video,[13] Imagen Video,[14] Midjourney,[15] and Phenaki[16] can generate video from text and/or text/image prompts.[17]

Text-to-image models have been built using a variety of architectures. The text encoding step may be performed with a recurrent neural network such as a long short-term memory (LSTM) network, though transformer models have since become a more popular option. For the image generation step, conditional generative adversarial networks (GANs) have been commonly used, with diffusion models also becoming a popular option in recent years.
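The staged design described above can be sketched structurally. The following is an illustrative toy with random weights, not the API of any real system: `encode_text`, `generate_base`, and `upscale` are hypothetical stand-ins for a text encoder, a conditional generator, and a super-resolution stage, respectively.

```python
import numpy as np

def encode_text(tokens, vocab_size=1000, dim=64):
    # Toy stand-in for an LSTM/transformer text encoder:
    # the mean of fixed random token embeddings.
    table = np.random.default_rng(42).normal(size=(vocab_size, dim))
    return table[tokens].mean(axis=0)

def generate_base(embedding, size=8):
    # Toy stand-in for a conditional image generator (GAN or diffusion):
    # produces a low-resolution "image" conditioned on the text embedding.
    w = np.random.default_rng(7).normal(size=(embedding.shape[0], size * size * 3))
    return np.tanh(embedding @ w).reshape(size, size, 3)

def upscale(img, factor=4):
    # Toy stand-in for an auxiliary super-resolution model:
    # nearest-neighbour upsampling.
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

emb = encode_text([5, 17, 256])   # text -> latent representation
low = generate_base(emb)          # latent -> 8x8x3 base image
high = upscale(low)               # 8x8x3 -> 32x32x3 upscaled image
```

The point of the sketch is only the data flow: text is mapped to a latent vector, the generator is conditioned on that vector, and one or more upscaling stages refine the low-resolution output.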
Rather than directly training a model to output a high-resolution image conditioned on a text embedding, a popular technique is to train a model to generate low-resolution images, and use one or more auxiliary deep learning models to upscale them, filling in finer details.

Text-to-image models are trained on large datasets of (text, image) pairs, often scraped from the web. With their 2022 Imagen model, Google Brain reported positive results from using a large language model trained separately on a text-only corpus (with its weights subsequently frozen), a departure from the theretofore standard approach.[18]

Training a text-to-image model requires a dataset of images paired with text captions. One dataset commonly used for this purpose is the COCO dataset. Released by Microsoft in 2014, COCO consists of around 123,000 images depicting a diversity of objects, with five captions per image generated by human annotators. Originally, the main focus of COCO was on the recognition of objects and scenes in images. Oxford-102 Flowers and CUB-200 Birds are smaller datasets of around 10,000 images each, restricted to flowers and birds, respectively. It is considered less difficult to train a high-quality text-to-image model with these datasets because of their narrow range of subject matter.[7]

One of the largest open datasets for training text-to-image models is LAION-5B, containing more than 5 billion image-text pairs. This dataset was created using web scraping and automatic filtering based on similarity to high-quality artwork and professional photographs. Because of this, however, it also contains controversial content, which has led to discussions about the ethics of its use. Some modern AI platforms not only generate images from text but also create synthetic datasets to improve model training and fine-tuning.
These datasets help avoid copyright issues and expand the diversity of training data.[19]

Evaluating and comparing the quality of text-to-image models is a problem involving assessing multiple desirable properties. A desideratum specific to text-to-image models is that generated images semantically align with the text captions used to generate them. A number of schemes have been devised for assessing these qualities, some automated and others based on human judgement.[7]

A common algorithmic metric for assessing image quality and diversity is the Inception Score (IS), which is based on the distribution of labels predicted by a pretrained Inception v3 image classification model when applied to a sample of images generated by the text-to-image model. The score is increased when the image classification model predicts a single label with high probability, a scheme intended to favour "distinct" generated images. Another popular metric is the related Fréchet inception distance, which compares the distribution of generated images and real training images according to features extracted by one of the final layers of a pretrained image classification model.[7]
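To make the Inception Score concrete: IS is the exponential of the mean KL divergence between each image's predicted label distribution p(y|x) and the marginal p(y). A minimal pure-Python sketch, using made-up per-image distributions in place of real Inception v3 outputs:

```python
import math

def inception_score(pyx):
    """pyx: list of per-image predicted label distributions p(y|x).
    IS = exp( mean_x KL( p(y|x) || p(y) ) ), where p(y) is the marginal
    obtained by averaging p(y|x) over all images."""
    n = len(pyx)
    k = len(pyx[0])
    py = [sum(row[j] for row in pyx) / n for j in range(k)]  # marginal p(y)
    kls = [sum(p * math.log(p / q) for p, q in zip(row, py) if p > 0)
           for row in pyx]
    return math.exp(sum(kls) / n)

# Confident AND diverse predictions -> maximal score (the number of classes)
sharp = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# Uniform (uninformative) predictions -> minimal score of 1
flat = [[1/3, 1/3, 1/3]] * 3
```

This illustrates why the score rewards images that are each classified confidently (sharp p(y|x)) while collectively covering many classes (broad p(y)).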
https://en.wikipedia.org/wiki/Text-to-image_model
A text-to-video model is a machine learning model that uses a natural language description as input to produce a video relevant to the input text.[1] Advancements during the 2020s in the generation of high-quality, text-conditioned videos have largely been driven by the development of video diffusion models.[2]

There are different models, including open source models. CogVideo, the earliest text-to-video model "of 9.4 billion parameters", accepts Chinese-language input;[3] a demo version of its open-source code was first presented on GitHub in 2022.[4] That year, Meta Platforms released a partial text-to-video model called "Make-A-Video",[5][6][7] and Google's Brain (later Google DeepMind) introduced Imagen Video, a text-to-video model with a 3D U-Net.[8][9][10][11][12]

In March 2023, a research paper titled "VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation" was published, presenting a novel approach to video generation.[13] The VideoFusion model decomposes the diffusion process into two components, base noise and residual noise, which are shared across frames to ensure temporal coherence. By utilizing a pre-trained image diffusion model as a base generator, the model efficiently generated high-quality and coherent videos.
Fine-tuning the pre-trained model on video data addressed the domain gap between image and video data, enhancing the model's ability to produce realistic and consistent video sequences.[14] In the same month, Adobe introduced Firefly AI as part of its features.[15]

In January 2024, Google announced development of a text-to-video model named Lumiere, which is anticipated to integrate advanced video editing capabilities.[16] Matthias Niessner and Lourdes Agapito at AI company Synthesia work on developing 3D neural rendering techniques that can synthesise realistic video by using 2D and 3D neural representations of shape, appearance, and motion for controllable video synthesis of avatars.[17] In June 2024, Luma Labs launched its Dream Machine video tool.[18][19] That same month,[20] Kuaishou extended its Kling AI text-to-video model to international users. In July 2024, TikTok owner ByteDance released Jimeng AI in China, through its subsidiary, Faceu Technology.[21] By September 2024, the Chinese AI company MiniMax debuted its video-01 model, joining other established AI model companies like Zhipu AI, Baichuan, and Moonshot AI, which contribute to China's involvement in AI technology.[22]

Alternative approaches to text-to-video models include[23] Google's Phenaki, Hour One, Colossyan,[3] Runway's Gen-3 Alpha,[24][25] and OpenAI's Sora.[26][27] Several additional text-to-video models, such as Plug-and-Play, Text2LIVE, and TuneAVideo, have emerged.[28] Google is also preparing to launch a video generation tool named Veo for YouTube Shorts in 2025.[29] FLUX.1 developer Black Forest Labs has announced its text-to-video model SOTA.[30]

There are several architectures that have been used to create text-to-video models.
Similar to text-to-image models, these models can be trained using recurrent neural networks (RNNs) such as long short-term memory (LSTM) networks, which have been used for pixel transformation models and stochastic video generation models, which aid in consistency and realism respectively.[31] Transformer models are an alternative. Generative adversarial networks (GANs), variational autoencoders (VAEs) — which can aid in the prediction of human motion[32] — and diffusion models have also been used to develop the image generation aspects of the model.[33]

Text-video datasets used to train models include, but are not limited to, WebVid-10M, HDVILA-100M, CCV, ActivityNet, and Panda-70M.[34][35] These datasets contain millions of original videos of interest, generated videos, captioned videos, and textual information that help train models for accuracy. Text-prompt datasets used to train models include, but are not limited to, PromptSource, DiffusionDB, and VidProM.[34][35] These datasets provide the range of text inputs needed to teach models how to interpret a variety of textual prompts.

The video generation process involves synchronizing the text inputs with video frames, ensuring alignment and consistency throughout the sequence.[35] This predictive process is subject to a decline in quality as the length of the video increases, due to resource limitations.[35]

Despite the rapid evolution of text-to-video models in their performance, a primary limitation is that they are very computationally heavy, which limits their capacity to provide high-quality and lengthy outputs.[36][37] Additionally, these models require a large amount of specific training data to be able to generate high-quality and coherent outputs, which brings about the issue of accessibility.[37][36]

Moreover, models may misinterpret textual prompts, resulting in video outputs that deviate from the intended meaning.
This can occur due to limitations in capturing semantic context embedded in text, which affects the model's ability to align generated video with the user's intended message.[37][35] Various models, including Make-A-Video, Imagen Video, Phenaki, CogVideo, GODIVA, and NUWA, are currently being tested and refined to enhance their alignment capabilities and overall performance in text-to-video generation.[37] Another issue with the outputs is that text or fine details in AI-generated videos often appear garbled, a problem that stable diffusion models also struggle with. Examples include distorted hands and unreadable text.

The deployment of text-to-video models raises ethical considerations related to content generation. These models have the potential to create inappropriate or unauthorized content, including explicit material, graphic violence, misinformation, and likenesses of real individuals without consent.[38] Ensuring that AI-generated content complies with established standards for safe and ethical usage is essential, as content generated by these models may not always be easily identified as harmful or misleading. The ability of AI to recognize and filter out NSFW or copyrighted content remains an ongoing challenge, with implications for both creators and audiences.[38]

Text-to-video models offer a broad range of applications that may benefit various fields, from educational and promotional to creative industries. These models can streamline content creation for training videos, movie previews, gaming assets, and visualizations, making it easier to generate content.[39]
https://en.wikipedia.org/wiki/Text-to-video_model
WaveNet is a deep neural network for generating raw audio. It was created by researchers at London-based AI firm DeepMind. The technique, outlined in a paper in September 2016,[1] is able to generate relatively realistic-sounding human-like voices by directly modelling waveforms using a neural network method trained with recordings of real speech. Tests with US English and Mandarin reportedly showed that the system outperforms Google's best existing text-to-speech (TTS) systems, although as of 2016 its text-to-speech synthesis was still less convincing than actual human speech.[2] WaveNet's ability to generate raw waveforms means that it can model any kind of audio, including music.[3]

Generating speech from text is an increasingly common task thanks to the popularity of software such as Apple's Siri, Microsoft's Cortana, Amazon Alexa and the Google Assistant.[4]

Most such systems use a variation of a technique that involves concatenating sound fragments to form recognisable sounds and words.[5] The most common of these is called concatenative TTS.[6] It consists of a large library of speech fragments, recorded from a single speaker, that are concatenated to produce complete words and sounds. The result sounds unnatural, with an odd cadence and tone.[7] The reliance on a recorded library also makes it difficult to modify or change the voice.[8]

Another technique, known as parametric TTS,[9] uses mathematical models to recreate sounds that are then assembled into words and sentences. The information required to generate the sounds is stored in the parameters of the model. The characteristics of the output speech are controlled via the inputs to the model, while the speech is typically created using a voice synthesiser known as a vocoder. This can also result in unnatural-sounding audio.

WaveNet is a type of feedforward neural network known as a deep convolutional neural network (CNN). In WaveNet, the CNN takes a raw signal as an input and synthesises an output one sample at a time.
It does so by sampling from a softmax (i.e. categorical) distribution of a signal value that is encoded using the μ-law companding transformation and quantized to 256 possible values.[11]

According to the original September 2016 DeepMind research paper WaveNet: A Generative Model for Raw Audio,[12] the network was fed real waveforms of speech in English and Mandarin. As these pass through the network, it learns a set of rules to describe how the audio waveform evolves over time. The trained network can then be used to create new speech-like waveforms at 16,000 samples per second. These waveforms include realistic breaths and lip smacks – but do not conform to any language.[13]

WaveNet is able to accurately model different voices, with the accent and tone of the input correlating with the output. For example, if it is trained with German, it produces German speech.[14] This capability also means that if the WaveNet is fed other inputs – such as music – its output will be musical. At the time of its release, DeepMind showed that WaveNet could produce waveforms that sound like classical music.[15]

According to the June 2018 paper Disentangled Sequential Autoencoder,[16] DeepMind has successfully used WaveNet for audio and voice "content swapping": the network can swap the voice on an audio recording for another, pre-existing voice while maintaining the text and other features from the original recording. "We also experiment on audio sequence data. Our disentangled representation allows us to convert speaker identities into each other while conditioning on the content of the speech." (p. 5) "For audio, this allows us to convert a male speaker into a female speaker and vice versa[...]." (p. 1) According to the paper, a minimum of some tens of hours (c.
50 hours) of pre-existing speech recordings of both source and target voice are required to be fed into WaveNet for the program to learn their individual features before it is able to perform the conversion from one voice to another at a satisfying quality. The authors stress that "[a]n advantage of the model is that it separates dynamical from static features[...]" (p. 8); i.e., WaveNet is capable of distinguishing between, on the one hand, the spoken text and modes of delivery (modulation, speed, pitch, mood, etc.) to maintain during the conversion from one voice to another, and, on the other, the basic features of both source and target voices that it is required to swap.

The January 2019 follow-up paper Unsupervised speech representation learning using WaveNet autoencoders[17] details a method to successfully enhance the proper automatic recognition and discrimination between dynamical and static features for "content swapping", notably including swapping voices on existing audio recordings, in order to make it more reliable. Another follow-up paper, Sample Efficient Adaptive Text-to-Speech,[18] dated September 2018 (latest revision January 2019), states that DeepMind has successfully reduced the minimum amount of real-life recordings required to sample an existing voice via WaveNet to "merely a few minutes of audio data" while maintaining high-quality results.

Its ability to clone voices has raised ethical concerns about WaveNet's ability to mimic the voices of living and dead persons.
According to a 2016 BBC article, companies working on similar voice-cloning technologies (such as Adobe Voco) intend to insert watermarking inaudible to humans to prevent counterfeiting. They maintain that voice cloning satisfying, for instance, entertainment-industry needs would be of far lower complexity and would use different methods than those required to fool forensic evidencing methods and electronic ID devices, so that natural voices and voices cloned for entertainment-industry purposes could still easily be told apart by technological analysis.[19]

At the time of its release, DeepMind said that WaveNet required too much computational processing power to be used in real-world applications.[20] In October 2017, Google announced a 1,000-fold performance improvement along with better voice quality. WaveNet was then used to generate Google Assistant voices for US English and Japanese across all Google platforms.[21] In November 2017, DeepMind researchers released a research paper detailing a proposed method, called "Probability Density Distillation", for "generating high-fidelity speech samples at more than 20 times faster than real-time".[22] At the annual I/O developer conference in May 2018, it was announced that new Google Assistant voices were available and made possible by WaveNet; WaveNet greatly reduced the number of audio recordings that were required to create a voice model by modeling the raw audio of the voice actor samples.[23]
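The μ-law companding and 256-level quantization mentioned earlier (the categorical output space that WaveNet samples from) can be sketched as follows. This is a generic 8-bit μ-law encode/decode, not DeepMind's code:

```python
import numpy as np

MU = 255  # 8-bit mu-law: 256 quantization levels

def mu_law_encode(x, mu=MU):
    # Compress a sample x in [-1, 1] non-linearly, then map to an
    # integer class in [0, 255]. Compression allocates more levels
    # to quiet (small-amplitude) samples, where the ear is sensitive.
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return ((compressed + 1) / 2 * mu + 0.5).astype(np.int64)

def mu_law_decode(q, mu=MU):
    # Invert: integer class in [0, 255] -> reconstructed sample in [-1, 1].
    y = 2 * (q.astype(np.float64) / mu) - 1
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

x = np.array([-1.0, -0.1, 0.0, 0.1, 1.0])
q = mu_law_encode(x)        # integer classes, e.g. silence maps near 128
x_hat = mu_law_decode(q)    # reconstruction with small quantization error
```

A softmax over these 256 classes is what the network predicts at each time step; the sampled class is then decoded back to a waveform amplitude.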
https://en.wikipedia.org/wiki/WaveNet
In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution[1]) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form:pi∝exp⁡(−εikT){\displaystyle p_{i}\propto \exp \left(-{\frac {\varepsilon _{i}}{kT}}\right)}where pi is the probability of the system being in state i, exp is the exponential function, εi is the energy of that state, and a constant kT of the distribution is the product of the Boltzmann constant k and thermodynamic temperature T. The symbol ∝{\textstyle \propto } denotes proportionality (see § The distribution for the proportionality constant).

The term system here has a wide meaning; it can range from a collection of a 'sufficient number' of atoms or a single atom[1] to a macroscopic system such as a natural gas storage tank. Therefore, the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied.

The ratio of probabilities of two states is known as the Boltzmann factor and characteristically only depends on the states' energy difference:pipj=exp⁡(εj−εikT){\displaystyle {\frac {p_{i}}{p_{j}}}=\exp \left({\frac {\varepsilon _{j}-\varepsilon _{i}}{kT}}\right)}

The Boltzmann distribution is named after Ludwig Boltzmann, who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium.[2] Boltzmann's statistical work is borne out in his paper "On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium".[3] The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902.[4]

The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution or Maxwell–Boltzmann statistics.
The Boltzmann distribution gives the probability that a system will be in a certain state as a function of that state's energy,[5] while the Maxwell–Boltzmann distributions give the probabilities of particle speeds or energies in ideal gases. The distribution of energies in a one-dimensional gas, however, does follow the Boltzmann distribution.

The Boltzmann distribution is a probability distribution that gives the probability of a certain state as a function of that state's energy and the temperature of the system to which the distribution is applied.[6] It is given as pi=1Qexp⁡(−εikT)=exp⁡(−εikT)∑j=1Mexp⁡(−εjkT){\displaystyle p_{i}={\frac {1}{Q}}\exp \left(-{\frac {\varepsilon _{i}}{kT}}\right)={\frac {\exp \left(-{\tfrac {\varepsilon _{i}}{kT}}\right)}{\displaystyle \sum _{j=1}^{M}\exp \left(-{\tfrac {\varepsilon _{j}}{kT}}\right)}}} where pi is the probability of state i, εi is the energy of state i, k is the Boltzmann constant, T is the absolute temperature of the system, M is the number of states accessible to the system, and Q is the normalization denominator, the (canonical) partition function.

Using Lagrange multipliers, one can prove that the Boltzmann distribution is the distribution that maximizes the entropy S(p1,p2,⋯,pM)=−∑i=1Mpilog2⁡pi{\displaystyle S(p_{1},p_{2},\cdots ,p_{M})=-\sum _{i=1}^{M}p_{i}\log _{2}p_{i}} subject to the normalization constraint that ∑pi=1{\textstyle \sum p_{i}=1} and the constraint that ∑piεi{\textstyle \sum {p_{i}{\varepsilon }_{i}}} equals a particular mean energy value, except for two special cases. (These special cases occur when the mean value is either the minimum or maximum of the energies εi. In these cases, the entropy-maximizing distribution is a limit of Boltzmann distributions where T approaches zero from above or below, respectively.)

The partition function can be calculated if we know the energies of the states accessible to the system of interest. For atoms the partition function values can be found in the NIST Atomic Spectra Database.[7]

The distribution shows that states with lower energy will always have a higher probability of being occupied than states with higher energy. It can also give us the quantitative relationship between the probabilities of the two states being occupied.
The ratio of probabilities for states i and j is given as pipj=exp⁡(εj−εikT){\displaystyle {\frac {p_{i}}{p_{j}}}=\exp \left({\frac {\varepsilon _{j}-\varepsilon _{i}}{kT}}\right)} where pi and pj are the probabilities of states i and j, and εi and εj are their respective energies. The corresponding ratio of populations of energy levels must also take their degeneracies into account.

The Boltzmann distribution is often used to describe the distribution of particles, such as atoms or molecules, over bound states accessible to them. If we have a system consisting of many particles, the probability of a particle being in state i is practically the probability that, if we pick a random particle from that system and check what state it is in, we will find it is in state i. This probability is equal to the number of particles in state i divided by the total number of particles in the system, that is, the fraction of particles that occupy state i:pi=NiN{\displaystyle p_{i}={\frac {N_{i}}{N}}}where Ni is the number of particles in state i and N is the total number of particles in the system. We may use the Boltzmann distribution to find this probability, which is, as we have seen, equal to the fraction of particles that are in state i. So the equation that gives the fraction of particles in state i as a function of the energy of that state is[5]NiN=exp⁡(−εikT)∑j=1Mexp⁡(−εjkT){\displaystyle {\frac {N_{i}}{N}}={\frac {\exp \left(-{\frac {\varepsilon _{i}}{kT}}\right)}{\displaystyle \sum _{j=1}^{M}\exp \left(-{\tfrac {\varepsilon _{j}}{kT}}\right)}}}

This equation is of great importance to spectroscopy. In spectroscopy we observe a spectral line of atoms or molecules undergoing transitions from one state to another.[5][8] In order for this to be possible, there must be some particles in the first state to undergo the transition. We may find whether this condition is fulfilled by finding the fraction of particles in the first state. If it is negligible, the transition is very likely not observed at the temperature for which the calculation was done.
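The population fractions discussed above are straightforward to compute numerically once the state energies are known; a minimal sketch (the energies and temperature are made-up illustrative values, expressed in units of kT):

```python
import math

def boltzmann_fractions(energies, kT):
    """Fraction of particles in each state: exp(-eps_i / kT) / Q."""
    weights = [math.exp(-e / kT) for e in energies]
    Q = sum(weights)  # the partition function
    return [w / Q for w in weights]

# Two-level system with an energy gap of 1 kT
p = boltzmann_fractions([0.0, 1.0], kT=1.0)

assert abs(sum(p) - 1.0) < 1e-12   # fractions sum to 1
assert p[0] > p[1]                 # lower-energy state is more populated
# Boltzmann-factor check: p0 / p1 = exp((e1 - e0) / kT) = e
assert abs(p[0] / p[1] - math.e) < 1e-9
```

For a spectroscopic estimate, one would plug in the actual level energies and temperature and check whether the lower-state fraction is large enough for the transition to be observable.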
In general, a larger fraction of molecules in the first state means a higher number of transitions to the second state.[9] This gives a stronger spectral line. However, there are other factors that influence the intensity of a spectral line, such as whether it is caused by an allowed or a forbidden transition.

The softmax function commonly used in machine learning is related to the Boltzmann distribution: softmax assigns probabilities proportional to exponentials of scores, exactly as the Boltzmann distribution assigns probabilities proportional to exp(−ε_i/kT).

A distribution of this exponential form, extended to further conserved quantities, is called the generalized Boltzmann distribution by some authors.[10] The Boltzmann distribution is a special case of the generalized Boltzmann distribution. The generalized Boltzmann distribution is used in statistical mechanics to describe the canonical ensemble, grand canonical ensemble and isothermal–isobaric ensemble. It is usually derived from the principle of maximum entropy, but there are other derivations.[10][11]

The Boltzmann distribution appears in statistical mechanics when considering closed systems of fixed composition that are in thermal equilibrium (equilibrium with respect to energy exchange). The most general case is the probability distribution for the canonical ensemble. Some special cases (derivable from the canonical ensemble) show the Boltzmann distribution in different aspects. Although these cases have strong similarities, it is helpful to distinguish them, as they generalize in different ways when the crucial assumptions are changed.

The Boltzmann distribution can be introduced to allocate permits in emissions trading.[13][14] The new allocation method using the Boltzmann distribution can describe the most probable, natural, and unbiased distribution of emissions permits among multiple countries.

The Boltzmann distribution has the same form as the multinomial logit model. As a discrete choice model, this is very well known in economics since Daniel McFadden made the connection to random utility maximization.[15]
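The softmax connection mentioned above can be made concrete: softmax over scores s is identical to a Boltzmann distribution with energies ε = −s and kT = 1. This is a standard identity sketched here for illustration, not a computation taken from the article:

```python
import math

def softmax(scores, temperature=1.0):
    # Subtracting the max improves numerical stability without changing the result.
    m = max(scores)
    exps = [math.exp((s - m) / temperature) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def boltzmann(energies, kT=1.0):
    exps = [math.exp(-e / kT) for e in energies]
    z = sum(exps)
    return [e / z for e in exps]

scores = [1.2, 0.3, -0.5]
# softmax(s) equals the Boltzmann distribution with energies -s and kT = 1.
for a, b in zip(softmax(scores), boltzmann([-s for s in scores])):
    assert abs(a - b) < 1e-12
```

Raising the temperature parameter flattens the softmax output, just as raising T flattens the Boltzmann distribution.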
https://en.wikipedia.org/wiki/Boltzmann_distribution
In statistical mechanics, the Gibbs algorithm, introduced by J. Willard Gibbs in 1902, is a criterion for choosing a probability distribution for the statistical ensemble of microstates of a thermodynamic system, by minimizing the average log probability subject to the probability distribution p_i satisfying a set of constraints (usually expectation values) corresponding to the known macroscopic quantities.[1] In 1948, Claude Shannon interpreted the negative of this quantity, which he called information entropy, as a measure of the uncertainty in a probability distribution.[1] In 1957, E. T. Jaynes realized that this quantity could be interpreted as missing information about anything, and generalized the Gibbs algorithm to non-equilibrium systems with the principle of maximum entropy and maximum entropy thermodynamics.[1]

Physicists call the result of applying the Gibbs algorithm the Gibbs distribution for the given constraints, most notably Gibbs's grand canonical ensemble for open systems when the average energy and the average number of particles are given. (See also partition function.) This general result of the Gibbs algorithm is then a maximum entropy probability distribution. Statisticians identify such distributions as belonging to exponential families.
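The constrained-optimization idea can be sketched numerically: given a set of state energies and a target mean energy (the known macroscopic quantity), one searches for the Lagrange multiplier β = 1/kT whose Gibbs distribution reproduces that mean. The energies and target below are toy values, and bisection is just one convenient way to solve the one-dimensional problem; this is a sketch of the idea, not Gibbs's original procedure:

```python
import math

def gibbs(energies, beta):
    """Gibbs distribution exp(-beta*e)/Z over the given energies."""
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [x / z for x in w]

def mean_energy(energies, beta):
    return sum(p * e for p, e in zip(gibbs(energies, beta), energies))

def solve_beta(energies, target, lo=0.0, hi=50.0, iters=200):
    # mean_energy is decreasing in beta, so plain bisection works:
    # the invariant is mean(lo) > target > mean(hi).
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_energy(energies, mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

energies = [0.0, 1.0, 2.0]
beta = solve_beta(energies, target=0.8)
assert abs(mean_energy(energies, beta) - 0.8) < 1e-9
```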
https://en.wikipedia.org/wiki/Gibbs_algorithm
Instatistics,Gibbs samplingor aGibbs sampleris aMarkov chain Monte Carlo(MCMC)algorithmfor sampling from a specifiedmultivariateprobability distributionwhen direct sampling from the joint distribution is difficult, but sampling from theconditional distributionis more practical. This sequence can be used to approximate the joint distribution (e.g., to generate a histogram of the distribution); to approximate themarginal distributionof one of the variables, or some subset of the variables (for example, the unknownparametersorlatent variables); or to compute anintegral(such as theexpected valueof one of the variables). Typically, some of the variables correspond to observations whose values are known, and hence do not need to be sampled. Gibbs sampling is commonly used as a means ofstatistical inference, especiallyBayesian inference. It is arandomized algorithm(i.e. an algorithm that makes use ofrandom numbers), and is an alternative todeterministic algorithmsfor statistical inference such as theexpectation–maximization algorithm(EM). As with other MCMC algorithms, Gibbs sampling generates aMarkov chainof samples, each of which iscorrelatedwith nearby samples. As a result, care must be taken if independent samples are desired. Samples from the beginning of the chain (theburn-in period) may not accurately represent the desired distribution and are usually discarded. Gibbs sampling is named after the physicistJosiah Willard Gibbs, in reference to an analogy between thesamplingalgorithm andstatistical physics. The algorithm was described by brothersStuartandDonald Gemanin 1984, some eight decades after the death of Gibbs,[1]and became popularized in the statistics community for calculating marginal probability distribution, especially the posterior distribution.[2] In its basic version, Gibbs sampling is a special case of theMetropolis–Hastings algorithm. 
However, in its extended versions (seebelow), it can be considered a general framework for sampling from a large set of variables by sampling each variable (or in some cases, each group of variables) in turn, and can incorporate theMetropolis–Hastings algorithm(or methods such asslice sampling) to implement one or more of the sampling steps. Gibbs sampling is applicable when the joint distribution is not known explicitly or is difficult to sample from directly, but theconditional distributionof each variable is known and is easy (or at least, easier) to sample from. The Gibbs sampling algorithm generates an instance from the distribution of each variable in turn, conditional on the current values of the other variables. It can be shown that the sequence of samples constitutes aMarkov chain, and the stationary distribution of that Markov chain is just the sought-after joint distribution.[3] Gibbs sampling is particularly well-adapted to sampling theposterior distributionof aBayesian network, since Bayesian networks are typically specified as a collection of conditional distributions. Gibbs sampling, in its basic incarnation, is a special case of theMetropolis–Hastings algorithm. The point of Gibbs sampling is that given amultivariate distributionit is simpler to sample from a conditional distribution than tomarginalizeby integrating over ajoint distribution. Suppose we want to obtaink{\displaystyle k}samples of an{\displaystyle n}-dimensional random vectorX=(X1,…,Xn){\displaystyle \mathbf {X} =(X_{1},\dots ,X_{n})}. 
We proceed iteratively: If such sampling is performed, these important facts hold: When performing the sampling: Furthermore, the conditional distribution of one variable given all others is proportional to the joint distribution, i.e., for all possible value(xi)1≤i≤n{\displaystyle (x_{i})_{1\leq i\leq n}}ofX{\displaystyle \mathbf {X} }: "Proportional to" in this case means that the denominator is not a function ofxj{\displaystyle x_{j}}and thus is the same for all values ofxj{\displaystyle x_{j}}; it forms part of thenormalization constantfor the distribution overxj{\displaystyle x_{j}}. In practice, to determine the nature of the conditional distribution of a factorxj{\displaystyle x_{j}}, it is easiest to factor the joint distribution according to the individual conditional distributions defined by thegraphical modelover the variables, ignore all factors that are not functions ofxj{\displaystyle x_{j}}(all of which, together with the denominator above, constitute the normalization constant), and then reinstate the normalization constant at the end, as necessary. In practice, this means doing one of three things: Gibbs sampling is commonly used forstatistical inference(e.g. determining the best value of a parameter, such as determining the number of people likely to shop at a particular store on a given day, the candidate a voter will most likely vote for, etc.). The idea is that observed data is incorporated into the sampling process by creating separate variables for each piece of observed data and fixing the variables in question to their observed values, rather than sampling from those variables. The distribution of the remaining variables is then effectively aposterior distributionconditioned on the observed data. The most likely value of a desired parameter (themode) could then simply be selected by choosing the sample value that occurs most commonly; this is essentially equivalent tomaximum a posterioriestimation of a parameter. 
(Since the parameters are usually continuous, it is often necessary to "bin" the sampled values into one of a finite number of ranges or "bins" in order to get a meaningful estimate of the mode.) More commonly, however, theexpected value(meanor average) of the sampled values is chosen; this is aBayes estimatorthat takes advantage of the additional data about the entire distribution that is available from Bayesian sampling, whereas a maximization algorithm such asexpectation maximization(EM) is capable of only returning a single point from the distribution. For example, for a unimodal distribution the mean (expected value) is usually similar to the mode (most common value), but if the distribution isskewedin one direction, the mean will be moved in that direction, which effectively accounts for the extra probability mass in that direction. (If a distribution is multimodal, the expected value may not return a meaningful point, and any of the modes is typically a better choice.) Although some of the variables typically correspond to parameters of interest, others are uninteresting ("nuisance") variables introduced into the model to properly express the relationships among variables. Although the sampled values represent thejoint distributionover all variables, the nuisance variables can simply be ignored when computing expected values or modes; this is equivalent tomarginalizingover the nuisance variables. When a value for multiple variables is desired, the expected value is simply computed over each variable separately. (When computing the mode, however, all variables must be considered together.) Supervised learning,unsupervised learningandsemi-supervised learning(aka learning with missing values) can all be handled by simply fixing the values of all variables whose values are known, and sampling from the remainder. 
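The scheme described above — hold everything else at its current value and draw each variable from its full conditional in turn — reduces, for a bivariate normal target with correlation ρ, to alternating two univariate normal draws, since X | Y = y ∼ N(ρy, 1 − ρ²) and symmetrically for Y | X. This is a standard textbook sketch, not an example from the article:

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=0):
    """Sample (X, Y) ~ N(0, [[1, rho], [rho, 1]]) by alternating the two conditionals."""
    rng = random.Random(seed)
    sd = math.sqrt(1 - rho ** 2)  # conditional standard deviation
    x, y = 0.0, 0.0
    samples = []
    for i in range(n_samples + burn_in):
        x = rng.gauss(rho * y, sd)   # draw X | Y = y
        y = rng.gauss(rho * x, sd)   # draw Y | X = x (using the fresh x)
        if i >= burn_in:             # discard the burn-in period
            samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
n = len(samples)
mx = sum(x for x, _ in samples) / n
my = sum(y for _, y in samples) / n
cov = sum((x - mx) * (y - my) for x, y in samples) / n  # ~ rho, since both variances ~ 1
assert abs(cov - 0.8) < 0.1
```

Note the samples are correlated along the chain, which is why many draws (and a discarded burn-in) are used before estimating the covariance.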
For observed data, there will be one variable for each observation—rather than, for example, one variable corresponding to thesample meanorsample varianceof a set of observations. In fact, there generally will be no variables at all corresponding to concepts such as "sample mean" or "sample variance". Instead, in such a case there will be variables representing the unknown true mean and true variance, and the determination of sample values for these variables results automatically from the operation of the Gibbs sampler. Generalized linear models(i.e. variations oflinear regression) can sometimes be handled by Gibbs sampling as well. For example,probit regressionfor determining the probability of a given binary (yes/no) choice, withnormally distributedpriors placed over the regression coefficients, can be implemented with Gibbs sampling because it is possible to add additional variables and take advantage ofconjugacy. However,logistic regressioncannot be handled this way. One possibility is to approximate thelogistic functionwith a mixture (typically 7–9) of normal distributions. More commonly, however,Metropolis–Hastingsis used instead of Gibbs sampling. Suppose that a sampleX{\displaystyle \left.X\right.}is taken from a distribution depending on a parameter vectorθ∈Θ{\displaystyle \theta \in \Theta \,\!}of lengthd{\displaystyle \left.d\right.}, with prior distributiong(θ1,…,θd){\displaystyle g(\theta _{1},\ldots ,\theta _{d})}. It may be thatd{\displaystyle \left.d\right.}is very large and that numerical integration to find the marginal densities of theθi{\displaystyle \left.\theta _{i}\right.}would be computationally expensive. Then an alternative method of calculating the marginal densities is to create a Markov chain on the spaceΘ{\displaystyle \left.\Theta \right.}by repeating these two steps: These steps define areversible Markov chainwith the desired invariant distributiong{\displaystyle \left.g\right.}. This can be proved as follows. 
Definex∼jy{\displaystyle x\sim _{j}y}ifxi=yi{\displaystyle \left.x_{i}=y_{i}\right.}for alli≠j{\displaystyle i\neq j}and letpxy{\displaystyle \left.p_{xy}\right.}denote the probability of a jump fromx∈Θ{\displaystyle x\in \Theta }toy∈Θ{\displaystyle y\in \Theta }. Then, the transition probabilities are So sincex∼jy{\displaystyle x\sim _{j}y}is anequivalence relation. Thus thedetailed balance equationsare satisfied, implying the chain is reversible and it has invariant distributiong{\displaystyle \left.g\right.}. In practice, the indexj{\displaystyle \left.j\right.}is not chosen at random, and the chain cycles through the indexes in order. In general this gives a non-stationary Markov process, but each individual step will still be reversible, and the overall process will still have the desired stationary distribution (as long as the chain can access all states under the fixed ordering). Lety{\displaystyle y}denote observations generated from the sampling distributionf(y|θ){\displaystyle f(y|\theta )}andπ(θ){\displaystyle \pi (\theta )}be a prior supported on the parameter spaceΘ{\displaystyle \Theta }. Then one of the central goals of theBayesian statisticsis to approximate the posterior density where the marginal likelihoodm(y)=∫Θf(y|θ)⋅π(θ)dθ{\displaystyle m(y)=\int _{\Theta }f(y|\theta )\cdot \pi (\theta )d\theta }is assumed to be finite for ally{\displaystyle y}. To explain the Gibbs sampler, we additionally assume that the parameter spaceΘ{\displaystyle \Theta }is decomposed as where×{\displaystyle \times }represents theCartesian product. Each component parameter spaceΘi{\displaystyle \Theta _{i}}can be a set of scalar components, subvectors, or matrices. Define a setΘ−i{\displaystyle \Theta _{-i}}that complements theΘi{\displaystyle \Theta _{i}}. 
An essential ingredient of the Gibbs sampler is the i-th full conditional posterior distribution, for each i = 1, ⋯, K.

The following algorithm details a generic Gibbs sampler:

Initialize: pick an arbitrary starting value θ^(1) = (θ_1^(1), θ_2^(1), ⋯, θ_K^(1))
Iterate a cycle (for s = 1, 2, ⋯):
  Step 1. draw θ_1^(s+1) ∼ π(θ_1 | θ_2^(s), θ_3^(s), ⋯, θ_K^(s), y)
  Step 2. draw θ_2^(s+1) ∼ π(θ_2 | θ_1^(s+1), θ_3^(s), ⋯, θ_K^(s), y)
  ⋮
  Step i. draw θ_i^(s+1) ∼ π(θ_i | θ_1^(s+1), ⋯, θ_{i−1}^(s+1), θ_{i+1}^(s), ⋯, θ_K^(s), y)
  ⋮
  Step K. draw θ_K^(s+1) ∼ π(θ_K | θ_1^(s+1), θ_2^(s+1), ⋯, θ_{K−1}^(s+1), y)

Note that the Gibbs sampler operates by an iterative Monte Carlo scheme within each cycle.
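The cycle above can be written generically as a function taking one full-conditional sampler per block. The demo conditionals below form a two-block sampler for the toy joint distribution θ ∼ Beta(a, b), x | θ ∼ Binomial(n, θ) (a standard illustration, not a model from the article), whose θ-marginal is Beta(a, b) with mean a/(a+b):

```python
import random

def gibbs_cycles(state, conditional_samplers, n_cycles, rng):
    """Run n_cycles of a generic Gibbs sampler.

    state: dict of current block values.
    conditional_samplers: dict mapping block name -> function(state, rng) that
    draws from that block's full conditional given the rest of the state.
    """
    trace = []
    for _ in range(n_cycles):
        for name, sampler in conditional_samplers.items():
            state[name] = sampler(state, rng)  # uses the freshest values of the other blocks
        trace.append(dict(state))
    return trace

a, b, n = 3, 7, 10
samplers = {
    # theta | x ~ Beta(a + x, b + n - x)
    "theta": lambda s, rng: rng.betavariate(a + s["x"], b + n - s["x"]),
    # x | theta ~ Binomial(n, theta), drawn as a sum of Bernoulli trials
    "x": lambda s, rng: sum(rng.random() < s["theta"] for _ in range(n)),
}
rng = random.Random(1)
trace = gibbs_cycles({"theta": 0.5, "x": 5}, samplers, n_cycles=6000, rng=rng)
thetas = [t["theta"] for t in trace[1000:]]  # drop burn-in cycles
assert abs(sum(thetas) / len(thetas) - a / (a + b)) < 0.05
```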
TheS{\displaystyle S}number of samples{θ(s)}s=1S{\displaystyle \{\theta ^{(s)}\}_{s=1}^{S}}drawn by the above algorithm formulates Markov Chains with the invariant distribution to be the target densityπ(θ|y){\displaystyle \pi (\theta |y)}. Now, for eachi=1,⋯,K{\displaystyle i=1,\cdots ,K}, define the following information theoretic quantities: I(θi;θ−i)=KL(π(θ|y)||π(θi|y)⋅π(θ−i|y))=∫Θπ(θ|y)log⁡(π(θ|y)π(θi|y)⋅π(θ−i|y))dθ,{\displaystyle I(\theta _{i};\theta _{-i})={\text{KL}}(\pi (\theta |y)||\pi (\theta _{i}|y)\cdot \pi (\theta _{-i}|y))=\int _{\Theta }\pi (\theta |y)\log {\bigg (}{\frac {\pi (\theta |y)}{\pi (\theta _{i}|y)\cdot \pi (\theta _{-i}|y)}}{\bigg )}d\theta ,} H(θ−i)=−∫Θ−iπ(θ−i|y)log⁡π(θ−i|y)dθ−i,{\displaystyle H(\theta _{-i})=-\int _{\Theta _{-i}}\pi (\theta _{-i}|y)\log \pi (\theta _{-i}|y)d\theta _{-i},} H(θ−i|θi)=−∫Θπ(θ|y)log⁡π(θ−i|θi,y)dθ,{\displaystyle H(\theta _{-i}|\theta _{i})=-\int _{\Theta }\pi (\theta |y)\log \pi (\theta _{-i}|\theta _{i},y)d\theta ,} namely, posterior mutual information, posterior differential entropy, and posterior conditional differential entropy, respectively. We can similarly define information theoretic quantitiesI(θ−i;θi){\displaystyle I(\theta _{-i};\theta _{i})},H(θi){\displaystyle H(\theta _{i})}, andH(θi|θ−i){\displaystyle H(\theta _{i}|\theta _{-i})}by interchanging thei{\displaystyle i}and−i{\displaystyle -i}in the defined quantities. Then, the followingK{\displaystyle K}equations hold.[4] I(θi;θ−i)=H(θ−i)−H(θ−i|θi)=H(θi)−H(θi|θ−i)=I(θ−i;θi),(i=1,⋯,K){\displaystyle I(\theta _{i};\theta _{-i})=H(\theta _{-i})-H(\theta _{-i}|\theta _{i})=H(\theta _{i})-H(\theta _{i}|\theta _{-i})=I(\theta _{-i};\theta _{i}),\quad (i=1,\cdots ,K)}. The mutual informationI(θi;θ−i){\displaystyle I(\theta _{i};\theta _{-i})}quantifies the reduction in uncertainty of random quantityθi{\displaystyle \theta _{i}}once we knowθ−i{\displaystyle \theta _{-i}}, a posteriori. 
It vanishes if and only if θ_i and θ_{−i} are marginally independent, a posteriori. The mutual information I(θ_i; θ_{−i}) can be interpreted as the quantity that is transmitted from the i-th step to the (i+1)-th step within a single cycle of the Gibbs sampler.

Numerous variations of the basic Gibbs sampler exist. The goal of these variations is to reduce the autocorrelation between samples sufficiently to overcome any added computational costs.

In hierarchical Bayesian models with categorical variables, such as latent Dirichlet allocation and various other models used in natural language processing, it is quite common to collapse out the Dirichlet distributions that are typically used as prior distributions over the categorical variables. The result of this collapsing introduces dependencies among all the categorical variables dependent on a given Dirichlet prior, and the joint distribution of these variables after collapsing is a Dirichlet-multinomial distribution. The conditional distribution of a given categorical variable in this distribution, conditioned on the others, assumes an extremely simple form that makes Gibbs sampling even easier than if the collapsing had not been done.

In general, any conjugate prior can be collapsed out, if its only children have distributions conjugate to it. The relevant math is discussed in the article on compound distributions. If there is only one child node, the result will often assume a known distribution. For example, collapsing an inverse-gamma-distributed variance out of a network with a single Gaussian child will yield a Student's t-distribution. (For that matter, collapsing both the mean and variance of a single Gaussian child will still yield a Student's t-distribution, provided both are conjugate, i.e. Gaussian mean, inverse-gamma variance.)
If there are multiple child nodes, they will all become dependent, as in theDirichlet-categoricalcase. The resultingjoint distributionwill have a closed form that resembles in some ways the compound distribution, although it will have a product of a number of factors, one for each child node, in it. In addition, and most importantly, the resultingconditional distributionof one of the child nodes given the others (and also given the parents of the collapsed node(s), butnotgiven the children of the child nodes) will have the same density as theposterior predictive distributionof all the remaining child nodes. Furthermore, the posterior predictive distribution has the same density as the basic compound distribution of a single node, although with different parameters. The general formula is given in the article oncompound distributions. For example, given a Bayes network with a set of conditionallyindependent identically distributedGaussian-distributednodes withconjugate priordistributions placed on the mean and variance, the conditional distribution of one node given the others after compounding out both the mean and variance will be aStudent's t-distribution. Similarly, the result of compounding out thegammaprior of a number ofPoisson-distributednodes causes the conditional distribution of one node given the others to assume anegative binomial distribution. In these cases where compounding produces a well-known distribution, efficient sampling procedures often exist, and using them will often (although not necessarily) be more efficient than not collapsing, and instead sampling both prior and child nodes separately. However, in the case where the compound distribution is not well-known, it may not be easy to sample from, since it generally will not belong to theexponential familyand typically will not belog-concave(which would make it easy to sample usingadaptive rejection sampling, since a closed form always exists). 
In the case where the child nodes of the collapsed nodes themselves have children, the conditional distribution of one of these child nodes given all other nodes in the graph will have to take into account the distribution of these second-level children. In particular, the resulting conditional distribution will be proportional to a product of the compound distribution as defined above, and the conditional distributions of all of the child nodes given their parents (but not given their own children). This follows from the fact that the full conditional distribution is proportional to the joint distribution. If the child nodes of the collapsed nodes arecontinuous, this distribution will generally not be of a known form, and may well be difficult to sample from despite the fact that a closed form can be written, for the same reasons as described above for non-well-known compound distributions. However, in the particular case that the child nodes arediscrete, sampling is feasible, regardless of whether the children of these child nodes are continuous or discrete. In fact, the principle involved here is described in fair detail in the article on theDirichlet-multinomial distribution. It is also possible to extend Gibbs sampling in various ways. For example, in the case of variables whose conditional distribution is not easy to sample from, a single iteration ofslice samplingor theMetropolis–Hastings algorithmcan be used to sample from the variables in question. It is also possible to incorporate variables that are notrandom variables, but whose value isdeterministicallycomputed from other variables.Generalized linear models, e.g.logistic regression(aka "maximum entropymodels"), can be incorporated in this fashion. (BUGS, for example, allows this type of mixing of models.) There are two ways that Gibbs sampling can fail. The first is when there are islands of high-probability states, with no paths between them. 
For example, consider a probability distribution over 2-bit vectors, where the vectors (0,0) and (1,1) each have probability⁠1/2⁠, but the other two vectors (0,1) and (1,0) have probability zero. Gibbs sampling will become trapped in one of the two high-probability vectors, and will never reach the other one. More generally, for any distribution over high-dimensional, real-valued vectors, if two particular elements of the vector are perfectly correlated (or perfectly anti-correlated), those two elements will become stuck, and Gibbs sampling will never be able to change them. The second problem can happen even when all states have nonzero probability and there is only a single island of high-probability states. For example, consider a probability distribution over 100-bit vectors, where the all-zeros vector occurs with probability⁠1/2⁠, and all other vectors are equally probable, and so have a probability of12(2100−1){\displaystyle {\frac {1}{2(2^{100}-1)}}}each. If you want to estimate the probability of the zero vector, it would be sufficient to take 100 or 1000 samples from the true distribution. That would very likely give an answer very close to⁠1/2⁠. But you would probably have to take more than2100{\displaystyle 2^{100}}samples from Gibbs sampling to get the same result. No computer could do this in a lifetime. This problem occurs no matter how long the burn-in period is. This is because in the true distribution, the zero vector occurs half the time, and those occurrences are randomly mixed in with the nonzero vectors. Even a small sample will see both zero and nonzero vectors. But Gibbs sampling will alternate between returning only the zero vector for long periods (about299{\displaystyle 2^{99}}in a row), then only nonzero vectors for long periods (about299{\displaystyle 2^{99}}in a row). 
Thus convergence to the true distribution is extremely slow, requiring much more than299{\displaystyle 2^{99}}steps; taking this many steps is not computationally feasible in a reasonable time period. The slow convergence here can be seen as a consequence of thecurse of dimensionality. A problem like this can be solved by block sampling the entire 100-bit vector at once. (This assumes that the 100-bit vector is part of a larger set of variables. If this vector is the only thing being sampled, then block sampling is equivalent to not doing Gibbs sampling at all, which by hypothesis would be difficult.)
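The first failure mode above can be reproduced in a few lines. For the 2-bit distribution placing probability 1/2 on each of (0,0) and (1,1), the full conditional of either bit given the other is deterministic (the only configurations with nonzero probability agree in both bits), so the chain can never leave its starting island:

```python
def conditional_draw(other_bit):
    # Under the target, P(bit = other_bit | other bit) = 1, because the only
    # configurations with nonzero probability are (0,0) and (1,1).
    return other_bit

def gibbs_sweeps(start, n_sweeps):
    """Run full sweeps of the two-bit Gibbs sampler and record every state visited."""
    b0, b1 = start
    visited = set()
    for _ in range(n_sweeps):
        b0 = conditional_draw(b1)  # resample bit 0 given bit 1
        b1 = conditional_draw(b0)  # resample bit 1 given bit 0
        visited.add((b0, b1))
    return visited

# Started at (1,1), the sampler never reaches the equally likely state (0,0),
# and vice versa: the two high-probability islands are mutually unreachable.
assert gibbs_sweeps((1, 1), 1000) == {(1, 1)}
assert gibbs_sweeps((0, 0), 1000) == {(0, 0)}
```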
https://en.wikipedia.org/wiki/Gibbs_sampling
Inprobability theory, aninteracting particle system(IPS) is astochastic process(X(t))t∈R+{\displaystyle (X(t))_{t\in \mathbb {R} ^{+}}}on some configuration spaceΩ=SG{\displaystyle \Omega =S^{G}}given by a site space, acountably-infinite-ordergraphG{\displaystyle G}and a local state space, acompactmetric spaceS{\displaystyle S}. More precisely IPS are continuous-timeMarkov jump processesdescribing the collective behavior of stochastically interacting components. IPS are the continuous-time analogue ofstochastic cellular automata. Among the main examples are thevoter model, thecontact process, theasymmetric simple exclusion process(ASEP), theGlauber dynamicsand in particular the stochasticIsing model. IPS are usually defined via theirMarkov generatorgiving rise to a uniqueMarkov processusing Markovsemigroupsand theHille-Yosida theorem. The generator again is given via so-called transition ratescΛ(η,ξ)>0{\displaystyle c_{\Lambda }(\eta ,\xi )>0}whereΛ⊂G{\displaystyle \Lambda \subset G}is a finite set of sites andη,ξ∈Ω{\displaystyle \eta ,\xi \in \Omega }withηi=ξi{\displaystyle \eta _{i}=\xi _{i}}for alli∉Λ{\displaystyle i\notin \Lambda }. The rates describe exponential waiting times of the process to jump from configurationη{\displaystyle \eta }into configurationξ{\displaystyle \xi }. More generally the transition rates are given in form of a finite measurecΛ(η,dξ){\displaystyle c_{\Lambda }(\eta ,d\xi )}onSΛ{\displaystyle S^{\Lambda }}. The generatorL{\displaystyle L}of an IPS has the following form. First, the domain ofL{\displaystyle L}is a subset of the space of "observables", that is, the set of real valuedcontinuous functionson the configuration spaceΩ{\displaystyle \Omega }. Then for any observablef{\displaystyle f}in the domain ofL{\displaystyle L}, one has Lf(η)=∑Λ∫ξ:ξΛc=ηΛccΛ(η,dξ)[f(ξ)−f(η)]{\displaystyle Lf(\eta )=\sum _{\Lambda }\int _{\xi :\xi _{\Lambda ^{c}}=\eta _{\Lambda ^{c}}}c_{\Lambda }(\eta ,d\xi )[f(\xi )-f(\eta )]}. 
For example, for the stochasticIsing modelwe haveG=Zd{\displaystyle G=\mathbb {Z} ^{d}},S={−1,+1}{\displaystyle S=\{-1,+1\}},cΛ=0{\displaystyle c_{\Lambda }=0}ifΛ≠{i}{\displaystyle \Lambda \neq \{i\}}for somei∈G{\displaystyle i\in G}and whereηi{\displaystyle \eta ^{i}}is the configuration equal toη{\displaystyle \eta }except it is flipped at sitei{\displaystyle i}.β{\displaystyle \beta }is a new parameter modeling the inverse temperature. Thevoter model(usually in continuous time, but there are discrete versions as well) is a process similar to thecontact process. In this processη(x){\displaystyle \eta (x)}is taken to represent a voter's attitude on a particular topic. Voters reconsider their opinions at times distributed according to independent exponential random variables (this gives a Poisson process locally – note that there are in general infinitely many voters so no global Poisson process can be used). At times of reconsideration, a voter chooses one neighbor uniformly from amongst all neighbors and takes that neighbor's opinion. One can generalize the process by allowing the picking of neighbors to be something other than uniform. In the discrete time voter model in one dimension,ξt(x):Z→{0,1}{\displaystyle \xi _{t}(x):\mathbb {Z} \to \{0,1\}}represents the state of particlex{\displaystyle x}at timet{\displaystyle t}. Informally each individual is arranged on a line and can "see" other individuals that are within a radius,r{\displaystyle r}. If more than a certain proportion,θ{\displaystyle \theta }of these people disagree then the individual changes her attitude, otherwise she keeps it the same.Durrettand Steif (1993) and Steif (1994) show that for large radii there is a critical valueθc{\displaystyle \theta _{c}}such that ifθ>θc{\displaystyle \theta >\theta _{c}}most individuals never change, and forθ∈(1/2,θc){\displaystyle \theta \in (1/2,\theta _{c})}in the limit most sites agree. 
(Both of these results assume the probability ofξ0(x)=1{\displaystyle \xi _{0}(x)=1}is one half.) This process has a natural generalization to more dimensions, some results for this are discussed inDurrettand Steif (1993). The continuous time process is similar in that it imagines each individual has a belief at a time and changes it based on the attitudes of its neighbors. The process is described informally byLiggett(1985, 226), "Periodically (i.e., at independent exponential times), an individual reassesses his view in a rather simple way: he chooses a 'friend' at random with certain probabilities and adopts his position." A model was constructed with this interpretation by Holley andLiggett(1975). This process is equivalent to a process first suggested by Clifford and Sudbury (1973) where animals are in conflict over territory and are equally matched. A site is selected to be invaded by a neighbor at a given time.
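The basic voter dynamics described above (an individual reconsiders, picks a neighbor at random, and adopts that neighbor's opinion) can be sketched in discrete time on a small ring of sites. On any finite graph this chain is absorbed at consensus; the ring size and step cap below are illustrative choices, not parameters from the article:

```python
import random

def voter_model_ring(n_sites, max_steps, seed=0):
    """Discrete-time voter model on a ring; returns final state and steps taken."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_sites)]
    for step in range(max_steps):
        if len(set(state)) == 1:
            return state, step                       # consensus: absorbing state
        i = rng.randrange(n_sites)                   # voter who reconsiders
        neighbor = (i + rng.choice([-1, 1])) % n_sites  # uniform neighbor on the ring
        state[i] = state[neighbor]                   # adopt the neighbor's opinion
    return state, max_steps

state, steps = voter_model_ring(n_sites=12, max_steps=200_000)
assert len(set(state)) == 1  # the finite ring has reached consensus
```

The continuous-time model replaces the uniform update order with independent exponential clocks at each site, but the adopted-opinion rule is the same.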
https://en.wikipedia.org/wiki/Interacting_particle_system
Ingame theory, a game is said to be apotential gameif the incentive of all players to change theirstrategycan be expressed using a single global function called thepotential function. The concept originated in a 1996 paper by Dov Monderer andLloyd Shapley.[1] The properties of several types of potential games have since been studied. Games can be eitherordinalorcardinalpotential games. In cardinal games, the difference in individualpayoffsfor each player from individually changing one's strategy, other things equal, has to have the same value as the difference in values for the potential function. In ordinal games, only the signs of the differences have to be the same. The potential function is a useful tool to analyze equilibrium properties of games, since the incentives of all players are mapped into one function, and the set of pureNash equilibriacan be found by locating the local optima of the potential function. Convergence and finite-time convergence of an iterated game towards a Nash equilibrium can also be understood by studying the potential function. Potential games can be studied asrepeated gameswith state so that every round played has a direct consequence on game's state in the next round.[2]This approach has applications in distributed control such as distributed resource allocation, where players without a central correlation mechanism can cooperate to achieve a globally optimal resource distribution. LetN{\displaystyle N}be the number of players,A{\displaystyle A}the set of action profiles over the action setsAi{\displaystyle A_{i}}of each player andui:A→R{\displaystyle u_{i}:A\to \mathbb {R} }be the payoff function for player1≤i≤N{\displaystyle 1\leq i\leq N}. 
Given a game G=(N,A=A1×…×AN,u:A→RN){\displaystyle G=(N,A=A_{1}\times \ldots \times A_{N},u:A\rightarrow \mathbb {R} ^{N})}, we say that G{\displaystyle G} is a potential game with an exact (weighted, ordinal, generalized ordinal, best response) potential function if Φ:A→R{\displaystyle \Phi :A\rightarrow \mathbb {R} } is an exact (weighted, ordinal, generalized ordinal, best response, respectively) potential function for G{\displaystyle G}. Here, Φ{\displaystyle \Phi } is called the potential function of the game. Note that while there are N{\displaystyle N} utility functions, one for each player, there is only one potential function. Thus, through the lens of potential functions, the players become interchangeable (in the sense of one of the definitions above). Because of this symmetry of the game, decentralized algorithms based on the shared potential function often lead to convergence (in some sense) to a Nash equilibrium. In a 2-player, 2-action game with externalities, individual players' payoffs are given by the function ui(ai, aj) = bi ai + w ai aj, where ai is player i's action, aj is the opponent's action, and w is a positive externality from choosing the same action. The action choices are +1 and −1, as seen in the payoff matrix in Figure 1. This game has a potential function P(a1, a2) = b1 a1 + b2 a2 + w a1 a2. If player 1 moves from −1 to +1, the payoff difference is Δu1 = u1(+1, a2) – u1(–1, a2) = 2 b1 + 2 w a2. The change in potential is ΔP = P(+1, a2) – P(–1, a2) = (b1 + b2 a2 + w a2) – (–b1 + b2 a2 – w a2) = 2 b1 + 2 w a2 = Δu1. The solution for player 2 is equivalent. Using numerical values b1 = 2, b2 = −1, w = 3, this example transforms into a simple battle of the sexes, as shown in Figure 2. The game has two pure Nash equilibria, (+1, +1) and (−1, −1). These are also the local maxima of the potential function (Figure 3). The only stochastically stable equilibrium is (+1, +1), the global maximum of the potential function.
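The exact-potential identity Δui = ΔP can be verified mechanically for this 2×2 example. A small sketch using the numerical values b1 = 2, b2 = −1, w = 3 from the text:

```python
from itertools import product

# Game from the text: u_i(a_i, a_j) = b_i a_i + w a_i a_j, with b1 = 2, b2 = -1, w = 3.
b = {1: 2.0, 2: -1.0}
w = 3.0
ACTIONS = (-1, 1)

def u(i, a1, a2):
    ai, aj = (a1, a2) if i == 1 else (a2, a1)
    return b[i] * ai + w * ai * aj

def P(a1, a2):
    """Potential function P(a1, a2) = b1 a1 + b2 a2 + w a1 a2."""
    return b[1] * a1 + b[2] * a2 + w * a1 * a2

# Exact-potential property: every unilateral deviation changes u_i and P
# by exactly the same amount.
for a1, a2 in product(ACTIONS, repeat=2):
    for d in ACTIONS:
        assert (u(1, d, a2) - u(1, a1, a2)) == (P(d, a2) - P(a1, a2))
        assert (u(2, a1, d) - u(2, a1, a2)) == (P(a1, d) - P(a1, a2))

def is_nash(a1, a2):
    """Pure Nash: no player gains by a unilateral deviation."""
    return (all(u(1, a1, a2) >= u(1, d, a2) for d in ACTIONS)
            and all(u(2, a1, a2) >= u(2, a1, d) for d in ACTIONS))

nash = [a for a in product(ACTIONS, repeat=2) if is_nash(*a)]
```

Enumerating the four profiles recovers the two pure equilibria (−1, −1) and (+1, +1), and (+1, +1) is the global maximum of P, matching the stochastic-stability claim.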
A 2-player, 2-action game cannot be a potential game unless Exact potential games are equivalent to congestion games: Rosenthal[3] proved that every congestion game has an exact potential; Monderer and Shapley[1] proved the opposite direction: every game with an exact potential function is a congestion game. An improvement path (also called Nash dynamics) is a sequence of strategy-vectors, in which each vector is attained from the previous vector by a single player switching his strategy to a strategy that strictly increases his utility. If a game has a generalized-ordinal-potential function Φ{\displaystyle \Phi }, then Φ{\displaystyle \Phi } is strictly increasing in every improvement path, so every improvement path is acyclic. If, in addition, the game has finitely many strategies, then every improvement path must be finite. This property is called the finite improvement property (FIP). We have just proved that every finite generalized-ordinal-potential game has the FIP. The opposite is also true: every finite game that has the FIP has a generalized-ordinal-potential function.[4] The terminal state in every finite improvement path is a Nash equilibrium, so the FIP implies the existence of a pure-strategy Nash equilibrium. Moreover, it implies that a Nash equilibrium can be computed by a distributed process, in which each agent only has to improve his own strategy. A best-response path is a special case of an improvement path, in which each vector is attained from the previous vector by a single player switching his strategy to a best-response strategy. The property that every best-response path is finite is called the finite best-response property (FBRP). FBRP is weaker than FIP, and it still implies the existence of a pure-strategy Nash equilibrium. It also implies that a Nash equilibrium can be computed by a distributed process, but the computational burden on the agents is higher than with FIP, since they have to compute a best response.
An even weaker property is weak acyclicity (WA).[5] It means that, for any initial strategy-vector, there exists a finite best-response path starting at that vector. Weak acyclicity is not sufficient for the existence of a potential function (since some improvement paths may be cyclic), but it is sufficient for the existence of a pure-strategy Nash equilibrium. It implies that a Nash equilibrium can be computed almost surely by a stochastic distributed process, in which at each point a player is chosen at random, and this player chooses a best strategy at random.[4]
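The finite improvement property can be watched in action on a small congestion game. A minimal sketch, assuming two parallel links with linear delays and four players (an illustrative instance; here players minimise cost, so Rosenthal's exact potential strictly decreases along every improvement path):

```python
def loads(profile):
    """Load on each of the two links (each player chooses link 0 or 1)."""
    return [profile.count(0), profile.count(1)]

def cost(profile, i):
    """Player i's delay equals the load on the link it uses (linear congestion)."""
    return loads(profile)[profile[i]]

def potential(profile):
    """Rosenthal's exact potential: sum over links of 1 + 2 + ... + load."""
    return sum(l * (l + 1) // 2 for l in loads(profile))

def improvement_path(profile):
    """Let players deviate while some strict improvement exists; returns the
    terminal profile and the potential values along the path.  Since players
    minimise cost here, the potential strictly *decreases* at every step."""
    profile = list(profile)
    history = [potential(profile)]
    changed = True
    while changed:
        changed = False
        for i in range(len(profile)):
            alt = list(profile)
            alt[i] = 1 - alt[i]
            if cost(alt, i) < cost(profile, i):
                profile, changed = alt, True
                history.append(potential(profile))
    return profile, history
```

Starting from all four players on link 0, the path terminates at a balanced profile, which is a pure Nash equilibrium, exactly as the FIP argument predicts.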
https://en.wikipedia.org/wiki/Potential_game#Bounded_Rational_Models
Stochastic cellular automata, also called probabilistic cellular automata (PCA), random cellular automata, or locally interacting Markov chains,[1][2] are an important extension of cellular automata. Cellular automata form a discrete-time dynamical system of interacting entities whose states are discrete. The state of the collection of entities is updated at each discrete time step according to some simple homogeneous rule, and all entities' states are updated in parallel, i.e. synchronously. Stochastic cellular automata are CA whose updating rule is a stochastic one, meaning that the new entities' states are chosen according to some probability distributions. A stochastic CA is a discrete-time random dynamical system. Despite the simplicity of the updating rules, complex behaviour such as self-organization may emerge from the spatial interaction between the entities. As a mathematical object, a stochastic CA may be considered in the framework of stochastic processes as an interacting particle system in discrete time. See [3] for a more detailed introduction. As discrete-time Markov processes, PCA are defined on a product space E=∏k∈GSk{\displaystyle E=\prod _{k\in G}S_{k}} (Cartesian product), where G{\displaystyle G} is a finite or infinite graph, like Z{\displaystyle \mathbb {Z} }, and where Sk{\displaystyle S_{k}} is a finite space, like for instance Sk={−1,+1}{\displaystyle S_{k}=\{-1,+1\}} or Sk={0,1}{\displaystyle S_{k}=\{0,1\}}. The transition probability has a product form P(dσ|η)=⊗k∈Gpk(dσk|η){\displaystyle P(d\sigma |\eta )=\otimes _{k\in G}p_{k}(d\sigma _{k}|\eta )}, where η∈E{\displaystyle \eta \in E} and pk(dσk|η){\displaystyle p_{k}(d\sigma _{k}|\eta )} is a probability distribution on Sk{\displaystyle S_{k}}. In general some locality is required: pk(dσk|η)=pk(dσk|ηVk){\displaystyle p_{k}(d\sigma _{k}|\eta )=p_{k}(d\sigma _{k}|\eta _{V_{k}})} where ηVk=(ηj)j∈Vk{\displaystyle \eta _{V_{k}}=(\eta _{j})_{j\in V_{k}}} with Vk{\displaystyle {V_{k}}} a finite neighbourhood of k.
See [4] for a more detailed introduction from the probability-theoretic point of view. There is a version of the majority cellular automaton with probabilistic updating rules; see Toom's rule. PCA may be used to simulate the Ising model of ferromagnetism in statistical mechanics.[5] Some categories of models have been studied from a statistical-mechanics point of view. There is a strong connection[6] between probabilistic cellular automata and the cellular Potts model, in particular when the latter is implemented in parallel. The Galves–Löcherbach model is an example of a generalized PCA with a non-Markovian aspect.
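The product-form transition kernel can be made concrete with a small sketch: a noisy majority rule on a ring, where each cell independently adopts its neighbourhood majority with probability 1 − ε (an illustrative rule chosen for simplicity, not Toom's rule itself):

```python
import random

def pca_step(state, eps, rng):
    """One synchronous update on a ring: each cell independently takes the
    majority of {left, self, right} with probability 1 - eps, and the
    opposite value with probability eps (product-form transition kernel)."""
    n = len(state)
    new = []
    for k in range(n):
        votes = state[k - 1] + state[k] + state[(k + 1) % n]
        majority = 1 if votes >= 2 else 0
        new.append(majority if rng.random() >= eps else 1 - majority)
    return new
```

With ε = 0 the rule degenerates to the deterministic majority cellular automaton; with ε > 0 every configuration has positive probability, which is what makes the process a discrete-time Markov chain on the product space.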
https://en.wikipedia.org/wiki/Stochastic_cellular_automata
The Pearson distribution is a family of continuous probability distributions. It was first published by Karl Pearson in 1895 and subsequently extended by him in 1901 and 1916 in a series of articles on biostatistics. The Pearson system was originally devised in an effort to model visibly skewed observations. It was well known at the time how to adjust a theoretical model to fit the first two cumulants or moments of observed data: any probability distribution can be extended straightforwardly to form a location-scale family. Except in pathological cases, a location-scale family can be made to fit the observed mean (first cumulant) and variance (second cumulant) arbitrarily well. However, it was not known how to construct probability distributions in which the skewness (standardized third cumulant) and kurtosis (standardized fourth cumulant) could be adjusted equally freely. This need became apparent when trying to fit known theoretical models to observed data that exhibited skewness. Pearson's examples include survival data, which are usually asymmetric. In his original paper, Pearson (1895, p. 360) identified four types of distributions (numbered I through IV) in addition to the normal distribution (which was originally known as type V). The classification depended on whether the distributions were supported on a bounded interval, on a half-line, or on the whole real line, and on whether they were potentially skewed or necessarily symmetric. A second paper (Pearson 1901) fixed two omissions: it redefined the type V distribution (originally just the normal distribution, but now the inverse-gamma distribution) and introduced the type VI distribution. Together the first two papers cover the five main types of the Pearson system (I, III, IV, V, and VI). In a third paper, Pearson (1916) introduced further special cases and subtypes (VII through XII). Rhind (1909, pp.
430–432) devised a simple way of visualizing the parameter space of the Pearson system, which was subsequently adopted by Pearson (1916, plate 1 and pp. 430ff., 448ff.). The Pearson types are characterized by two quantities, commonly referred to as β1 and β2. The first is the square of the skewness: β1 = γ1² where γ1 is the skewness, or third standardized moment. The second is the traditional kurtosis, or fourth standardized moment: β2 = γ2 + 3. (Modern treatments define kurtosis γ2 in terms of cumulants instead of moments, so that for a normal distribution we have γ2 = 0 and β2 = 3. Here we follow the historical precedent and use β2.) The diagram shows which Pearson type a given concrete distribution (identified by a point (β1, β2)) belongs to. Many of the skewed or non-mesokurtic distributions familiar to statisticians today were still unknown in the early 1890s. What is now known as the beta distribution had been used by Thomas Bayes as a posterior distribution of the parameter of a Bernoulli distribution in his 1763 work on inverse probability. The beta distribution gained prominence due to its membership in Pearson's system and was known until the 1940s as the Pearson type I distribution.[1] (Pearson's type II distribution is a special case of type I, but is usually no longer singled out.) The gamma distribution originated from Pearson's work (Pearson 1893, p. 331; Pearson 1895, pp. 357, 360, 373–376) and was known as the Pearson type III distribution, before acquiring its modern name in the 1930s and 1940s.[2] Pearson's 1895 paper introduced the type IV distribution, which contains Student's t-distribution as a special case, predating William Sealy Gosset's use by several years. His 1901 paper introduced the inverse-gamma distribution (type V) and the beta prime distribution (type VI). A Pearson density p is defined to be any valid solution to the differential equation (cf. Pearson 1895, p.
381) with: According to Ord,[3] Pearson devised the underlying form of Equation (1) on the basis of, firstly, the formula for the derivative of the logarithm of the density function of the normal distribution (which gives a linear function) and, secondly, a recurrence relation for values in the probability mass function of the hypergeometric distribution (which yields the linear-divided-by-quadratic structure). In Equation (1), the parameter a determines a stationary point, and hence under some conditions a mode of the distribution, since follows directly from the differential equation. Since we are confronted with a first-order linear differential equation with variable coefficients, its solution is straightforward: The integral in this solution simplifies considerably when certain special cases of the integrand are considered. Pearson (1895, p. 367) distinguished two main cases, determined by the sign of the discriminant (and hence the number of real roots) of the quadratic function If the discriminant of the quadratic function (2) is negative (b12−4b2b0<0{\displaystyle b_{1}^{2}-4b_{2}b_{0}<0}), it has no real roots. Then define Observe that α is a well-defined real number and α ≠ 0, because by assumption 4b2b0−b12>0{\displaystyle 4b_{2}b_{0}-b_{1}^{2}>0} and therefore b2 ≠ 0. Applying these substitutions, the quadratic function (2) is transformed into The absence of real roots is obvious from this formulation, because α² is necessarily positive. We now express the solution to the differential equation (1) as a function of y: Pearson (1895, p. 362) called this the "trigonometrical case", because the integral involves the inverse trigonometric arctan function. Then Finally, let Applying these substitutions, we obtain the parametric function: This unnormalized density has support on the entire real line. It depends on a scale parameter α > 0 and shape parameters m > 1/2 and ν. One parameter was lost when we chose to find the solution to the differential equation (1) as a function of y rather than x.
We therefore reintroduce a fourth parameter, namely the location parameter λ. We have thus derived the density of the Pearson type IV distribution: The normalizing constant involves the complex Gamma function (Γ) and the Beta function (B). Notice that the location parameter λ here is not the same as the original location parameter introduced in the general formulation, but is related via The shape parameter ν of the Pearson type IV distribution controls its skewness. If we fix its value at zero, we obtain a symmetric three-parameter family. This special case is known as the Pearson type VII distribution (cf. Pearson 1916, p. 450). Its density is where B is the Beta function. An alternative parameterization (and slight specialization) of the type VII distribution is obtained by letting which requires m > 3/2. This entails a minor loss of generality but ensures that the variance of the distribution exists and is equal to σ². Now the parameter m only controls the kurtosis of the distribution. If m approaches infinity as λ and σ are held constant, the normal distribution arises as a special case: This is the density of a normal distribution with mean λ and standard deviation σ. It is convenient to require that m > 5/2 and to let This is another specialization, and it guarantees that the first four moments of the distribution exist. More specifically, the Pearson type VII distribution parameterized in terms of (λ, σ, γ2) has a mean of λ, standard deviation of σ, skewness of zero, and positive excess kurtosis of γ2. The Pearson type VII distribution is equivalent to the non-standardized Student's t-distribution with parameters ν > 0, μ, σ² by applying the following substitutions to its original parameterization: Observe that the constraint m > 1/2 is satisfied. The resulting density is which is easily recognized as the density of a Student's t-distribution. This implies that the Pearson type VII distribution subsumes the standard Student's t-distribution and also the standard Cauchy distribution.
In particular, the standard Student's t-distribution arises as a subcase, when μ = 0 and σ² = 1, equivalent to the following substitutions: The density of this restricted one-parameter family is a standard Student's t: If the quadratic function (2) has a non-negative discriminant (b12−4b2b0≥0{\displaystyle b_{1}^{2}-4b_{2}b_{0}\geq 0}), it has real roots a1 and a2 (not necessarily distinct): In the presence of real roots the quadratic function (2) can be written as and the solution to the differential equation is therefore Pearson (1895, p. 362) called this the "logarithmic case", because the integral involves only the logarithm function and not the arctan function as in the previous case. Using the substitution we obtain the following solution to the differential equation (1): Since this density is only known up to a hidden constant of proportionality, that constant can be changed and the density written as follows: The Pearson type I distribution (a generalization of the beta distribution) arises when the roots of the quadratic equation (2) are of opposite sign, that is, a1<0<a2{\displaystyle a_{1}<0<a_{2}}. Then the solution p is supported on the interval (a1,a2){\displaystyle (a_{1},a_{2})}. Apply the substitution where 0<y<1{\displaystyle 0<y<1}, which yields a solution in terms of y that is supported on the interval (0, 1): One may define: Regrouping constants and parameters, this simplifies to: Thus x−λ−a1a2−a1{\displaystyle {\frac {x-\lambda -a_{1}}{a_{2}-a_{1}}}} follows a B(m1+1,m2+1){\displaystyle \mathrm {B} (m_{1}+1,m_{2}+1)} with λ=μ1−(a2−a1)m1+1m1+m2+2−a1{\displaystyle \lambda =\mu _{1}-(a_{2}-a_{1}){\frac {m_{1}+1}{m_{1}+m_{2}+2}}-a_{1}}. It turns out that m1, m2 > −1 is necessary and sufficient for p to be a proper probability density function. The Pearson type II distribution is a special case of the Pearson type I family restricted to symmetric distributions. For the Pearson type II curve,[4] where The ordinate, y, is the frequency of ∑d2{\displaystyle \sum d^{2}}.
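The type VII / Student's t correspondence can be checked numerically. A small sketch, assuming the type VII density in the (λ, α, m) parameterization with normalizing constant 1/(α B(m − 1/2, 1/2)), and the substitution λ = 0, α = √ν, m = (ν + 1)/2 for a standard t with ν degrees of freedom:

```python
import math

def pearson7_pdf(x, lam, alpha, m):
    """Pearson type VII density: (1 + ((x - lam)/alpha)^2)^(-m),
    normalised by alpha * B(m - 1/2, 1/2)."""
    beta = math.gamma(m - 0.5) * math.gamma(0.5) / math.gamma(m)
    return (1.0 + ((x - lam) / alpha) ** 2) ** (-m) / (alpha * beta)

def student_t_pdf(x, nu):
    """Standard Student's t density with nu degrees of freedom."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return c * (1.0 + x * x / nu) ** (-(nu + 1) / 2)
```

The two densities agree to machine precision for all x, confirming that the type VII family subsumes the standard Student's t (and, with ν = 1, the standard Cauchy).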
The Pearson type II distribution is used in computing the table of significant correlation coefficients for Spearman's rank correlation coefficient when the number of items in a series is less than 100 (or 30, depending on some sources). After that, the distribution mimics a standard Student's t-distribution. For the table of values, certain values are used as the constants in the previous equation: The moments of x used are Defining b0+b1(x−λ){\displaystyle b_{0}+b_{1}(x-\lambda )} is Gamma⁡(m+1,b12){\displaystyle \operatorname {Gamma} (m+1,b_{1}^{2})}. The Pearson type III distribution is a gamma distribution or chi-squared distribution. Defining new parameters: x−λ{\displaystyle x-\lambda } follows an InverseGamma⁡(1b2−1,a−C1b2){\displaystyle \operatorname {InverseGamma} ({\frac {1}{b_{2}}}-1,{\frac {a-C_{1}}{b_{2}}})}. The Pearson type V distribution is an inverse-gamma distribution. Defining x−λ−a2a2−a1{\displaystyle {\frac {x-\lambda -a_{2}}{a_{2}-a_{1}}}} follows a β′(m2+1,−m2−m1−1){\displaystyle \beta ^{\prime }(m_{2}+1,-m_{2}-m_{1}-1)}. The Pearson type VI distribution is a beta prime distribution or F-distribution. The Pearson family subsumes the following distributions, among others: Alternatives to the Pearson system of distributions for the purpose of fitting distributions to data are the quantile-parameterized distributions (QPDs) and the metalog distributions. QPDs and metalogs can provide greater shape and bounds flexibility than the Pearson system. Instead of fitting moments, QPDs are typically fit to empirical CDF or other data with linear least squares. Examples of modern alternatives to the Pearson skewness-vs-kurtosis diagram are: (i) https://github.com/SchildCode/PearsonPlot and (ii) the "Cullen and Frey graph" in the statistical application R. These models are used in financial markets, given their ability to be parametrized in a way that has intuitive meaning for market traders.
A number of models in current use capture the stochastic nature of the volatility of rates, stocks, etc., and this family of distributions may prove to be one of the more important. In the United States, the log-Pearson III is the default distribution for flood frequency analysis.[5] Recently, alternatives to the Pearson distributions have been developed that are more flexible and easier to fit to data; see the metalog distributions.
https://en.wikipedia.org/wiki/Pearson_distribution
In mathematics, a Sheffer sequence or poweroid is a polynomial sequence, i.e., a sequence (pn(x) : n = 0, 1, 2, 3, ...) of polynomials in which the index of each polynomial equals its degree, satisfying conditions related to the umbral calculus in combinatorics. They are named for Isador M. Sheffer. Fix a polynomial sequence (pn). Define a linear operator Q on polynomials in x by Qpn(x)=npn−1(x).{\displaystyle Qp_{n}(x)=np_{n-1}(x)\,.} This determines Q on all polynomials. The polynomial sequence pn is a Sheffer sequence if the linear operator Q just defined is shift-equivariant; such a Q is then a delta operator. Here, we define a linear operator Q on polynomials to be shift-equivariant if, whenever f(x) = g(x + a) = Tag(x) is a "shift" of g(x), then (Qf)(x) = (Qg)(x + a); i.e., Q commutes with every shift operator: TaQ = QTa. The set of all Sheffer sequences is a group under the operation of umbral composition of polynomial sequences, defined as follows. Suppose (pn(x) : n = 0, 1, 2, 3, ... ) and (qn(x) : n = 0, 1, 2, 3, ... ) are polynomial sequences, given by pn(x)=∑k=0nan,kxkandqn(x)=∑k=0nbn,kxk.{\displaystyle p_{n}(x)=\sum _{k=0}^{n}a_{n,k}x^{k}\ {\mbox{and}}\ q_{n}(x)=\sum _{k=0}^{n}b_{n,k}x^{k}.} Then the umbral composition p∘q{\displaystyle p\circ q} is the polynomial sequence whose nth term is (pn∘q)(x)=∑k=0nan,kqk(x)=∑0≤ℓ≤k≤nan,kbk,ℓxℓ{\displaystyle (p_{n}\circ q)(x)=\sum _{k=0}^{n}a_{n,k}q_{k}(x)=\sum _{0\leq \ell \leq k\leq n}a_{n,k}b_{k,\ell }x^{\ell }} (the subscript n appears in pn, since this is the nth term of that sequence, but not in q, since this refers to the sequence as a whole rather than one of its terms).
The identity element of this group is the standard monomial basis en(x)=xn=∑k=0nδn,kxk.{\displaystyle e_{n}(x)=x^{n}=\sum _{k=0}^{n}\delta _{n,k}x^{k}.} Two important subgroups are the group of Appell sequences, which are those sequences for which the operator Q is mere differentiation, and the group of sequences of binomial type, which are those that satisfy the identity pn(x+y)=∑k=0n(nk)pk(x)pn−k(y).{\displaystyle p_{n}(x+y)=\sum _{k=0}^{n}{n \choose k}p_{k}(x)p_{n-k}(y).} A Sheffer sequence (pn(x) : n = 0, 1, 2, ... ) is of binomial type if and only if both p0(x)=1{\displaystyle p_{0}(x)=1\,} and pn(0)=0forn≥1.{\displaystyle p_{n}(0)=0{\mbox{ for }}n\geq 1.\,} The group of Appell sequences is abelian; the group of sequences of binomial type is not. The group of Appell sequences is a normal subgroup; the group of sequences of binomial type is not. The group of Sheffer sequences is a semidirect product of the group of Appell sequences and the group of sequences of binomial type. It follows that each coset of the group of Appell sequences contains exactly one sequence of binomial type. Two Sheffer sequences are in the same such coset if and only if the operator Q described above – called the "delta operator" of that sequence – is the same linear operator in both cases. (Generally, a delta operator is a shift-equivariant linear operator on polynomials that reduces degree by one. The term is due to F. Hildebrandt.) If sn(x) is a Sheffer sequence and pn(x) is the one sequence of binomial type that shares the same delta operator, then sn(x+y)=∑k=0n(nk)pk(x)sn−k(y).{\displaystyle s_{n}(x+y)=\sum _{k=0}^{n}{n \choose k}p_{k}(x)s_{n-k}(y).} Sometimes the term Sheffer sequence is defined to mean a sequence that bears this relation to some sequence of binomial type.
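The umbral composition rule and its identity element can be sketched with coefficient triangles, where row n lists the coefficients of p_n. The triangle p_n(x) = (x + 1)^n used below is only an illustrative example:

```python
import math

def umbral_compose(a, b):
    """Umbral composition of coefficient triangles: a[n][k] is the coefficient
    of x^k in p_n(x), and (p_n o q)(x) = sum_k a[n][k] * q_k(x)."""
    out = []
    for n in range(len(a)):
        coeffs = [0] * (n + 1)
        for k in range(n + 1):
            for l in range(k + 1):
                coeffs[l] += a[n][k] * b[k][l]
        out.append(coeffs)
    return out

# identity element: the monomial basis e_n(x) = x^n
identity = [[1 if k == n else 0 for k in range(n + 1)] for n in range(5)]
# illustrative triangle: p_n(x) = (x + 1)^n, i.e. binomial coefficients
pascal = [[math.comb(n, k) for k in range(n + 1)] for n in range(5)]
```

Composing any triangle with the monomial basis, on either side, returns the triangle unchanged, which is exactly the statement that e_n(x) = x^n is the identity of the group.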
In particular, if (sn(x)) is an Appell sequence, then sn(x+y)=∑k=0n(nk)xksn−k(y).{\displaystyle s_{n}(x+y)=\sum _{k=0}^{n}{n \choose k}x^{k}s_{n-k}(y).} The sequence of Hermite polynomials, the sequence of Bernoulli polynomials, and the monomials (xn : n = 0, 1, 2, ... ) are examples of Appell sequences. A Sheffer sequence pn is characterised by its exponential generating function ∑n=0∞pn(x)n!tn=A(t)exp⁡(xB(t)){\displaystyle \sum _{n=0}^{\infty }{\frac {p_{n}(x)}{n!}}t^{n}=A(t)\exp(xB(t))\,} where A and B are (formal) power series in t. Sheffer sequences are thus examples of generalized Appell polynomials and hence have an associated recurrence relation. Examples of polynomial sequences which are Sheffer sequences include:
https://en.wikipedia.org/wiki/Sheffer_sequence
In mathematics, an orthogonal polynomial sequence is a family of polynomials such that any two different polynomials in the sequence are orthogonal to each other under some inner product. The most widely used orthogonal polynomials are the classical orthogonal polynomials, consisting of the Hermite polynomials, the Laguerre polynomials and the Jacobi polynomials. The Gegenbauer polynomials form the most important class of Jacobi polynomials; they include the Chebyshev polynomials and the Legendre polynomials as special cases. These are frequently given by Rodrigues' formula. The field of orthogonal polynomials developed in the late 19th century from a study of continued fractions by P. L. Chebyshev and was pursued by A. A. Markov and T. J. Stieltjes. They appear in a wide variety of fields: numerical analysis (quadrature rules), probability theory, representation theory (of Lie groups, quantum groups, and related objects), enumerative combinatorics, algebraic combinatorics, mathematical physics (the theory of random matrices, integrable systems, etc.), and number theory. Some of the mathematicians who have worked on orthogonal polynomials include Gábor Szegő, Sergei Bernstein, Naum Akhiezer, Arthur Erdélyi, Yakov Geronimus, Wolfgang Hahn, Theodore Seio Chihara, Mourad Ismail, Waleed Al-Salam, Richard Askey, and Rehuel Lobatto. Given any non-decreasing function α on the real numbers, we can define the Lebesgue–Stieltjes integral ∫f(x)dα(x){\displaystyle \int f(x)\,d\alpha (x)} of a function f. If this integral is finite for all polynomials f, we can define an inner product on pairs of polynomials f and g by ⟨f,g⟩=∫f(x)g(x)dα(x).{\displaystyle \langle f,g\rangle =\int f(x)g(x)\,d\alpha (x).} This operation is a positive semidefinite inner product on the vector space of all polynomials, and is positive definite if the function α has an infinite number of points of growth. It induces a notion of orthogonality in the usual way, namely that two polynomials are orthogonal if their inner product is zero.
Then the sequence (Pn)∞n=0 of orthogonal polynomials is defined by the relations deg⁡Pn=n,⟨Pm,Pn⟩=0form≠n.{\displaystyle \deg P_{n}=n~,\quad \langle P_{m},\,P_{n}\rangle =0\quad {\text{for}}\quad m\neq n~.} In other words, the sequence is obtained from the sequence of monomials 1, x, x2, … by the Gram–Schmidt process with respect to this inner product. Usually the sequence is required to be orthonormal, namely, ⟨Pn,Pn⟩=1,{\displaystyle \langle P_{n},P_{n}\rangle =1,} however, other normalisations are sometimes used. Sometimes we have dα(x)=W(x)dx{\displaystyle d\alpha (x)=W(x)\,dx} where W:[x1,x2]→R{\displaystyle W:[x_{1},x_{2}]\to \mathbb {R} } is a non-negative function with support on some interval [x1, x2] in the real line (where x1 = −∞ and x2 = ∞ are allowed). Such a W is called a weight function.[1] Then the inner product is given by ⟨f,g⟩=∫x1x2f(x)g(x)W(x)dx.{\displaystyle \langle f,g\rangle =\int _{x_{1}}^{x_{2}}f(x)g(x)W(x)\,dx.} However, there are many examples of orthogonal polynomials where the measure dα(x) has points with non-zero measure where the function α is discontinuous, so it cannot be given by a weight function W as above. The most commonly used orthogonal polynomials are orthogonal for a measure with support in a real interval. This includes: Discrete orthogonal polynomials are orthogonal with respect to some discrete measure. Sometimes the measure has finite support, in which case the family of orthogonal polynomials is finite, rather than an infinite sequence. The Racah polynomials are examples of discrete orthogonal polynomials, and include as special cases the Hahn polynomials and dual Hahn polynomials, which in turn include as special cases the Meixner polynomials, Krawtchouk polynomials, and Charlier polynomials. Meixner classified all the orthogonal Sheffer sequences: there are only Hermite, Laguerre, Charlier, Meixner, and Meixner–Pollaczek. In some sense Krawtchouk should be on this list too, but they form a finite sequence.
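The Gram–Schmidt construction can be sketched directly with exact rational arithmetic, taking the Legendre case dα(x) = dx on [−1, 1] as an illustrative weight (the normalization here is monic rather than orthonormal):

```python
from fractions import Fraction

def inner(p, q):
    """<p, q> = integral_{-1}^{1} p(x) q(x) dx for coefficient lists
    (p[i] is the coefficient of x^i); odd total powers integrate to zero."""
    s = Fraction(0)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            if (i + j) % 2 == 0:
                s += Fraction(a) * Fraction(c) * Fraction(2, i + j + 1)
    return s

def gram_schmidt(n_max):
    """Orthogonalise the monomials 1, x, x^2, ... on [-1, 1]; the result is
    the monic Legendre-type sequence for this inner product."""
    size = n_max + 1
    polys = []
    for n in range(size):
        p = [Fraction(0)] * size
        p[n] = Fraction(1)                      # start from x^n
        for q in polys:
            c = inner(p, q) / inner(q, q)
            p = [pi - c * qi for pi, qi in zip(p, q)]
        polys.append(p)
    return polys
```

The first few outputs are 1, x, x² − 1/3, x³ − (3/5)x, i.e. the monic multiples of the Legendre polynomials, and any two of them have inner product zero.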
These six families correspond to the NEF-QVFs and are martingale polynomials for certain Lévy processes. Sieved orthogonal polynomials, such as the sieved ultraspherical polynomials, sieved Jacobi polynomials, and sieved Pollaczek polynomials, have modified recurrence relations. One can also consider orthogonal polynomials for some curve in the complex plane. The most important case (other than real intervals) is when the curve is the unit circle, giving orthogonal polynomials on the unit circle, such as the Rogers–Szegő polynomials. There are some families of orthogonal polynomials that are orthogonal on plane regions such as triangles or disks. They can sometimes be written in terms of Jacobi polynomials. For example, Zernike polynomials are orthogonal on the unit disk. The orthogonality between different orders of Hermite polynomials is exploited in the generalized frequency division multiplexing (GFDM) structure, where more than one symbol can be carried in each grid point of the time-frequency lattice.[2] Orthogonal polynomials of one variable defined by a non-negative measure on the real line have the following properties. The orthogonal polynomials Pn can be expressed in terms of the moments as follows: where the constants cn are arbitrary (they depend on the normalization of Pn). This comes directly from applying the Gram–Schmidt process to the monomials, requiring each polynomial to be orthogonal to the previous ones. For example, orthogonality with P0{\displaystyle P_{0}} prescribes that P1{\displaystyle P_{1}} must have the form P1(x)=c1(x−⟨P0,x⟩P0⟨P0,P0⟩)=c1(x−m1),{\displaystyle P_{1}(x)=c_{1}\left(x-{\frac {\langle P_{0},x\rangle P_{0}}{\langle P_{0},P_{0}\rangle }}\right)=c_{1}(x-m_{1}),} which can be seen to be consistent with the previously given expression with the determinant. The polynomials Pn satisfy a recurrence relation of the form where An is not 0. The converse is also true; see Favard's theorem.
If the measure dα is supported on an interval [a, b], all the zeros of Pn lie in [a, b]. Moreover, the zeros have the following interlacing property: if m < n, there is a zero of Pn between any two zeros of Pm. Electrostatic interpretations of the zeros can be given. From the 1980s, with the work of X. G. Viennot, J. Labelle, Y.-N. Yeh, D. Foata, and others, combinatorial interpretations were found for all the classical orthogonal polynomials.[3] The Macdonald polynomials are orthogonal polynomials in several variables, depending on the choice of an affine root system. They include many other families of multivariable orthogonal polynomials as special cases, including the Jack polynomials, the Hall–Littlewood polynomials, the Heckman–Opdam polynomials, and the Koornwinder polynomials. The Askey–Wilson polynomials are the special case of Macdonald polynomials for a certain non-reduced root system of rank 1. Multiple orthogonal polynomials are polynomials in one variable that are orthogonal with respect to a finite family of measures. Sobolev orthogonal polynomials are orthogonal with respect to a Sobolev inner product, i.e. an inner product with derivatives. Including derivatives has major consequences for the polynomials; in general, they no longer share some of the nice features of the classical orthogonal polynomials. Matrix orthogonal polynomials have either matrix coefficients or a matrix indeterminate; there are two popular cases: either the coefficients {ai}{\displaystyle \{a_{i}\}} are matrices or x{\displaystyle x} is. Quantum polynomials or q-polynomials are the q-analogs of orthogonal polynomials.
https://en.wikipedia.org/wiki/Orthogonal_polynomials
A compound Poisson process is a continuous-time stochastic process with jumps. The jumps arrive randomly according to a Poisson process, and the size of the jumps is also random, with a specified probability distribution. To be precise, a compound Poisson process, parameterised by a rate λ>0{\displaystyle \lambda >0} and jump size distribution G, is a process {Y(t):t≥0}{\displaystyle \{\,Y(t):t\geq 0\,\}} given by where {N(t):t≥0}{\displaystyle \{\,N(t):t\geq 0\,\}} is the counting variable of a Poisson process with rate λ{\displaystyle \lambda }, and {Di:i≥1}{\displaystyle \{\,D_{i}:i\geq 1\,\}} are independent and identically distributed random variables, with distribution function G, which are also independent of {N(t):t≥0}.{\displaystyle \{\,N(t):t\geq 0\,\}.\,} When the Di{\displaystyle D_{i}} are non-negative integer-valued random variables, the compound Poisson process is known as a stuttering Poisson process. The expected value of a compound Poisson process can be calculated using a result known as Wald's equation as: Making similar use of the law of total variance, the variance can be calculated as: Lastly, using the law of total probability, the moment generating function can be given as follows: Let N, Y, and D be as above. Let μ be the probability measure according to which D is distributed, i.e. Let δ0 be the trivial probability distribution putting all of the mass at zero. Then the probability distribution of Y(t) is the measure where the exponential exp(ν) of a finite measure ν on Borel subsets of the real line is defined by and is a convolution of measures, and the series converges weakly.
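Wald's equation and the law of total variance give E[Y(t)] = λt E[D] and Var Y(t) = λt E[D²], and both can be checked by simulation. A small Monte Carlo sketch with an illustrative jump distribution (the choice D ~ Normal(1, 0.5) is an assumption made here for the example):

```python
import random

def compound_poisson_sample(t, lam, jump, rng):
    """One draw of Y(t) = sum_{i=1}^{N(t)} D_i: N(t) is sampled by accumulating
    exponential inter-arrival times with rate lam; each D_i = jump(rng)."""
    n, s = 0, rng.expovariate(lam)
    while s <= t:
        n += 1
        s += rng.expovariate(lam)
    return sum(jump(rng) for _ in range(n))

rng = random.Random(42)
lam, t = 2.0, 5.0
# illustrative jump distribution: D ~ Normal(1, 0.5), so E[D] = 1, E[D^2] = 1.25
draws = [compound_poisson_sample(t, lam, lambda r: r.gauss(1.0, 0.5), rng)
         for _ in range(20000)]
mean = sum(draws) / len(draws)
var = sum((y - mean) ** 2 for y in draws) / len(draws)
```

With these parameters the formulas predict E[Y(5)] = 2·5·1 = 10 and Var Y(5) = 2·5·1.25 = 12.5, which the sample statistics reproduce to within Monte Carlo error.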
https://en.wikipedia.org/wiki/Compound_Poisson_process
In probability theory and statistics, the geometric Poisson distribution (also called the Pólya–Aeppli distribution) is used for describing objects that come in clusters, where the number of clusters follows a Poisson distribution and the number of objects within a cluster follows a geometric distribution.[1] It is a particular case of the compound Poisson distribution.[2] The probability mass function of a random variable N distributed according to the geometric Poisson distribution PG(λ, θ) is given by

P(N = n) = Σ_{k=1}^{n} e^{−λ} (λ^k / k!) θ^k (1 − θ)^{n−k} C(n − 1, k − 1)  for n = 1, 2, …,
P(N = 0) = e^{−λ},

where λ is the parameter of the underlying Poisson distribution and θ is the parameter of the geometric distribution.[2] The distribution was described by George Pólya in 1930. Pólya credited his student Alfred Aeppli's 1924 dissertation as the original source. It was called the geometric Poisson distribution by Sherbrooke in 1968, who gave probability tables with a precision of four decimal places.[3] The geometric Poisson distribution has been used to describe systems modelled by a Markov model, such as biological processes[2] or traffic accidents.[4]
https://en.wikipedia.org/wiki/Geometric_Poisson_distribution
A phase-type distribution is a probability distribution constructed by a convolution or mixture of exponential distributions.[1] It results from a system of one or more inter-related Poisson processes occurring in sequence, or phases. The sequence in which each of the phases occurs may itself be a stochastic process. The distribution can be represented by a random variable describing the time until absorption of a Markov process with one absorbing state. Each of the states of the Markov process represents one of the phases. It has a discrete-time equivalent – the discrete phase-type distribution. The set of phase-type distributions is dense in the field of all positive-valued distributions, that is, it can be used to approximate any positive-valued distribution. Consider a continuous-time Markov process with m + 1 states, where m ≥ 1, such that the states 1, …, m are transient states and state 0 is an absorbing state. Further, let the process have an initial probability of starting in any of the m + 1 phases given by the probability vector (α₀, α), where α₀ is a scalar and α is a 1 × m vector. The continuous phase-type distribution is the distribution of time from the above process's starting until absorption in the absorbing state. This process can be written in the form of a transition rate matrix

Q = [ 0  0 ; S⁰  S ],

where S is an m × m matrix and S⁰ = −S1. Here 1 represents an m × 1 column vector with every element being 1. The distribution of time X until the process reaches the absorbing state is said to be phase-type distributed and is denoted PH(α, S). The distribution function of X is given by

F(x) = 1 − α exp(Sx) 1,

and the density function

f(x) = α exp(Sx) S⁰,

for all x > 0, where exp(·) is the matrix exponential. It is usually assumed that the probability of the process starting in the absorbing state is zero (i.e. α₀ = 0). The moments of the distribution function are given by

E[Xⁿ] = (−1)ⁿ n! α S⁻ⁿ 1.

The Laplace transform of the phase-type distribution is given by

E[e^{−sX}] = α₀ + α (sI − S)⁻¹ S⁰,

where I is the identity matrix.
The following probability distributions are all considered special cases of a continuous phase-type distribution: As the phase-type distribution is dense in the field of all positive-valued distributions, we can represent any positive-valued distribution. However, the phase-type is a light-tailed or platykurtic distribution, so the representation of a heavy-tailed or leptokurtic distribution by a phase type is an approximation, even if the precision of the approximation can be as good as we want. In all the following examples it is assumed that there is no probability mass at zero, that is, α₀ = 0. The simplest non-trivial example of a phase-type distribution is the exponential distribution of parameter λ. The parameters of the phase-type distribution are S = −λ and α = 1. The mixture of exponentials, or hyperexponential distribution, with λ₁, λ₂, …, λₙ > 0 can be represented as a phase-type distribution with

α = (α₁, α₂, …, αₙ), with Σ_{i=1}^{n} αᵢ = 1, and S = diag(−λ₁, −λ₂, …, −λₙ).

This mixture of densities of exponentially distributed random variables can be characterized through

f(x) = Σ_{i=1}^{n} αᵢ λᵢ e^{−λᵢx},

or its cumulative distribution function

F(x) = Σ_{i=1}^{n} αᵢ (1 − e^{−λᵢx}),

with Xᵢ ~ Exp(λᵢ). The Erlang distribution has two parameters, the shape, an integer k > 0, and the rate λ > 0. This is sometimes denoted E(k, λ). The Erlang distribution can be written in the form of a phase-type distribution by making S a k × k matrix with diagonal elements −λ and super-diagonal elements λ, with the probability of starting in state 1 equal to 1. For example, for E(5, λ),

α = (1, 0, 0, 0, 0)

and

S = [ −λ λ 0 0 0 ; 0 −λ λ 0 0 ; 0 0 −λ λ 0 ; 0 0 0 −λ λ ; 0 0 0 0 −λ ].

For a given number of phases, the Erlang distribution is the phase-type distribution with smallest coefficient of variation.[2] The hypoexponential distribution is a generalisation of the Erlang distribution, having different rates for each transition (the non-homogeneous case).
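The Erlang representation above can be verified numerically: building the bidiagonal sub-generator S and evaluating F(x) = 1 − α·exp(Sx)·1 should reproduce the closed-form Erlang CDF. The sketch below uses a small hand-rolled matrix exponential (Taylor series with scaling and squaring, adequate for these small, well-scaled matrices) rather than assuming any particular library:

```python
import math

def expm(A, terms=60):
    """Matrix exponential by Taylor series with scaling and squaring."""
    n = len(A)
    norm = max(sum(abs(v) for v in row) for row in A)
    s = max(1, int(norm).bit_length())          # scale so ||A/2^s|| < 1
    B = [[v / 2**s for v in row] for row in A]
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):                   # accumulate sum B^k / k!
        term = [[sum(term[i][l] * B[l][j] for l in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    for _ in range(s):                          # square back up
        result = [[sum(result[i][l] * result[l][j] for l in range(n))
                   for j in range(n)] for i in range(n)]
    return result

def phase_type_cdf(alpha, S, x):
    """F(x) = 1 - alpha . exp(S x) . 1 for a PH(alpha, S) distribution."""
    E = expm([[s * x for s in row] for row in S])
    n = len(S)
    return 1.0 - sum(alpha[i] * E[i][j] for i in range(n) for j in range(n))

# Erlang(k=5, rate lam) as a phase type: bidiagonal S, start in state 1
k, lam, x = 5, 2.0, 1.3
S = [[-lam if i == j else lam if j == i + 1 else 0.0 for j in range(k)]
     for i in range(k)]
alpha = [1.0] + [0.0] * (k - 1)
ph = phase_type_cdf(alpha, S, x)
# Closed-form Erlang CDF: 1 - e^{-lam x} * sum_{n<k} (lam x)^n / n!
erlang = 1.0 - math.exp(-lam * x) * sum((lam * x)**n / math.factorial(n)
                                        for n in range(k))
```

The two values should agree to near machine precision.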
The mixture of two Erlang distributions with parameters E(3, β₁), E(3, β₂) and (α₁, α₂) (such that α₁ + α₂ = 1 and, for each i, αᵢ ≥ 0) can be represented as a phase-type distribution with

α = (α₁, 0, 0, α₂, 0, 0)

and

S = [ −β₁ β₁ 0 0 0 0 ; 0 −β₁ β₁ 0 0 0 ; 0 0 −β₁ 0 0 0 ; 0 0 0 −β₂ β₂ 0 ; 0 0 0 0 −β₂ β₂ ; 0 0 0 0 0 −β₂ ].

The Coxian distribution is a generalisation of the Erlang distribution. Instead of only being able to enter the absorbing state from state k, it can be reached from any phase. The phase-type representation is given by

α = (1, 0, …, 0)

and

S = [ −λ₁ p₁λ₁ 0 … 0 ; 0 −λ₂ p₂λ₂ … 0 ; ⋮ ; 0 0 … −λ_{k−1} p_{k−1}λ_{k−1} ; 0 0 … 0 −λ_k ],

where 0 < p₁, …, p_{k−1} ≤ 1. In the case where all pᵢ = 1 we have the Erlang distribution. The Coxian distribution is extremely important, as any acyclic phase-type distribution has an equivalent Coxian representation. The generalised Coxian distribution relaxes the condition that requires starting in the first phase. Similarly to the exponential distribution, the class of PH distributions is closed under minima of independent random variables. BuTools includes methods for generating samples from phase-type distributed random variables.[3] Any distribution can be arbitrarily well approximated by a phase-type distribution.[4][5] In practice, however, approximations can be poor when the size of the approximating process is fixed. Approximating a deterministic distribution of time 1 with 10 phases, each of average length 0.1, will have variance 0.1 (because the Erlang distribution has the smallest variance[2]). Methods to fit a phase-type distribution to data can be classified as maximum likelihood methods or moment matching methods.[8] Fitting a phase-type distribution to heavy-tailed distributions has been shown to be practical in some situations.[9]
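As a quick check of the statement that the Coxian distribution reduces to the Erlang distribution when all pᵢ = 1, one can construct the two sub-generators and compare them directly (a minimal sketch; `coxian_generator` is a hypothetical helper name):

```python
def coxian_generator(lams, ps):
    """Sub-generator S of a Coxian distribution: from phase i the process
    moves on to phase i+1 with probability p_i, otherwise it absorbs."""
    k = len(lams)
    assert len(ps) == k - 1
    S = [[0.0] * k for _ in range(k)]
    for i in range(k):
        S[i][i] = -lams[i]
        if i < k - 1:
            S[i][i + 1] = ps[i] * lams[i]
    return S

# With every p_i = 1 the Coxian collapses to the Erlang representation
lam = 3.0
S_cox = coxian_generator([lam] * 4, [1.0] * 3)
S_erl = [[-lam if i == j else lam if j == i + 1 else 0.0 for j in range(4)]
         for i in range(4)]
```

The two matrices are identical, confirming the special case.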
https://en.wikipedia.org/wiki/Phase-type_distribution#Coxian_distribution
In queueing theory, the Engset formula is used to determine the blocking probability of an M/M/c/c/N queue (in Kendall's notation). The formula is named after its developer, T. O. Engset. Consider a fleet of c vehicles and N operators. Operators enter the system randomly to request the use of a vehicle. If no vehicles are available, a requesting operator is "blocked" (i.e., the operator leaves without a vehicle). The owner of the fleet would like to pick c small so as to minimize costs, but large enough to ensure that the blocking probability is tolerable. Let λ be the request rate of each idle operator, μ the service rate of a vehicle, and α = λ/μ. Then the probability of blocking is given by[1]

P = C(N − 1, c) α^c / Σ_{i=0}^{c} C(N − 1, i) α^i.

By rearranging terms, the above formula can be rewritten in terms of the Gaussian hypergeometric function ₂F₁.[2] There are several recursions[3] that can be used to compute P in a numerically stable manner. Alternatively, any numerical package that supports the hypergeometric function can be used; examples include Python with SciPy and MATLAB with the Symbolic Math Toolbox. In practice, it is often the case that the source arrival rate λ is unknown (or hard to estimate) while α > 0, the offered traffic per source, is known. In this case, one can substitute the relationship between the source arrival rate and the blocking probability into the Engset formula to arrive at a fixed-point equation P = f(P). While this removes the unknown λ from the formula, it introduces an additional point of complexity: we can no longer compute the blocking probability directly, and must use an iterative method instead. While a fixed-point iteration is tempting, it has been shown that such an iteration is sometimes divergent when applied to f.[2] Alternatively, it is possible to use bisection or Newton's method, for which an open-source implementation is available.
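A minimal direct evaluation of the blocking probability, assuming the standard Engset form P = C(N−1, c)·α^c / Σᵢ C(N−1, i)·α^i with α = λ/μ the per-idle-source offered load (the function name and the exact-arithmetic choice are illustrative):

```python
from math import comb
from fractions import Fraction

def engset(c, N, alpha):
    """Blocking probability of an M/M/c/c/N system (Engset formula).

    alpha is the per-idle-source offered load lambda/mu; exact rational
    arithmetic sidesteps overflow and cancellation for modest c and N.
    """
    a = Fraction(alpha).limit_denominator()
    num = comb(N - 1, c) * a**c
    den = sum(comb(N - 1, i) * a**i for i in range(c + 1))
    return num / den   # exact Fraction

# Two operators sharing one vehicle with alpha = 1: a request finds the
# vehicle busy exactly half the time.
p = engset(1, 2, 1)
```

For large c and N the recursions mentioned above are preferable, but exact fractions make small cases easy to verify by hand.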
https://en.wikipedia.org/wiki/Engset_calculation
The erlang (symbol E[1]) is a dimensionless unit that is used in telephony as a measure of offered load or carried load on service-providing elements such as telephone circuits or telephone switching equipment. A single cord circuit has the capacity to be used for 60 minutes in one hour. Full utilization of that capacity, 60 minutes of traffic, constitutes 1 erlang.[2] Carried traffic in erlangs is the average number of concurrent calls measured over a given period (often one hour), while offered traffic is the traffic that would be carried if all call-attempts succeeded. How much offered traffic is carried in practice will depend on what happens to unanswered calls when all servers are busy. The CCITT named the international unit of telephone traffic the erlang in 1946 in honor of Agner Krarup Erlang.[3][4] In Erlang's analysis of efficient telephone line usage, he derived the formulae for two important cases, Erlang-B and Erlang-C, which became foundational results in teletraffic engineering and queueing theory. His results, which are still used today, relate quality of service to the number of available servers. Both formulae take offered load as one of their main inputs (in erlangs), which is often expressed as call arrival rate times average call length. A distinguishing assumption behind the Erlang B formula is that there is no queue, so that if all service elements are already in use then a newly arriving call will be blocked and subsequently lost. The formula gives the probability of this occurring. In contrast, the Erlang C formula provides for the possibility of an unlimited queue and it gives the probability that a new call will need to wait in the queue due to all servers being in use. Erlang's formulae apply quite widely, but they may fail when congestion is especially high, causing unsuccessful traffic to repeatedly retry. One way of accounting for retries when no queue is available is the Extended Erlang B method.
When used to represent carried traffic, a value (which can be a non-integer such as 43.5) followed by "erlangs" represents the average number of concurrent calls carried by the circuits (or other service-providing elements), where that average is calculated over some reasonable period of time. The period over which the average is calculated is often one hour, but shorter periods (e.g., 15 minutes) may be used where it is known that there are short spurts of demand and a traffic measurement is desired that does not mask these spurts. One erlang of carried traffic refers to a single resource being in continuous use, or two channels each being in use fifty percent of the time, and so on. For example, if an office has two telephone operators who are both busy all the time, that would represent two erlangs (2 E) of traffic; or a radio channel that is occupied continuously during the period of interest (e.g. one hour) is said to have a load of 1 erlang. When used to describe offered traffic, a value followed by "erlangs" represents the average number of concurrent calls that would have been carried if there were an unlimited number of circuits (that is, if the call-attempts that were made when all circuits were in use had not been rejected). The relationship between offered traffic and carried traffic depends on the design of the system and user behavior. Three common models are (a) callers whose call-attempts are rejected go away and never come back, (b) callers whose call-attempts are rejected try again within a fairly short space of time, and (c) the system allows users to wait in queue until a circuit becomes available. A third measurement of traffic is instantaneous traffic, expressed as a certain number of erlangs, meaning the exact number of calls taking place at a point in time. In this case, the number is a non-negative integer. Traffic-level-recording devices, such as moving-pen recorders, plot instantaneous traffic.
The concepts and mathematics introduced by Agner Krarup Erlang have broad applicability beyond telephony. They apply wherever users arrive more or less at random to receive exclusive service from any one of a group of service-providing elements without prior reservation, for example, where the service-providing elements are ticket-sales windows, toilets on an airplane, or motel rooms. (Erlang's models do not apply where the service-providing elements are shared between several concurrent users or different amounts of service are consumed by different users, for instance, on circuits carrying data traffic.) The goal of Erlang's traffic theory is to determine exactly how many service-providing elements should be provided in order to satisfy users, without wasteful over-provisioning. To do this, a target is set for the grade of service (GoS) or quality of service (QoS). For example, in a system where there is no queuing, the GoS may be that no more than 1 call in 100 is blocked (i.e., rejected) due to all circuits being in use (a GoS of 0.01), which becomes the target probability of call blocking, Pb, when using the Erlang B formula. There are several resulting formulae, including Erlang B, Erlang C and the related Engset formula, based on different models of user behavior and system operation. These may each be derived by means of a special case of continuous-time Markov processes known as a birth–death process. The more recent Extended Erlang B method provides a further traffic solution that draws on Erlang's results. Offered traffic (in erlangs) is related to the call arrival rate, λ, and the average call-holding time (the average time of a phone call), h, by

E = λh,

provided that h and λ are expressed using the same units of time (seconds and calls per second, or minutes and calls per minute).
The practical measurement of traffic is typically based on continuous observations over several days or weeks, during which the instantaneous traffic is recorded at regular, short intervals (such as every few seconds). These measurements are then used to calculate a single result, most commonly the busy-hour traffic (in erlangs). This is the average number of concurrent calls during a given one-hour period of the day, where that period is selected to give the highest result. (This result is called the time-consistent busy-hour traffic.) An alternative is to calculate a busy-hour traffic value separately for each day (which may correspond to slightly different times each day) and take the average of these values. This generally gives a slightly higher value than the time-consistent busy-hour value. Where the existing busy-hour carried traffic, Ec, is measured on an already overloaded system, with a significant level of blocking, it is necessary to take account of the blocked calls in estimating the busy-hour offered traffic Eo (which is the traffic value to be used in the Erlang formulae). The offered traffic can be estimated by Eo = Ec/(1 − Pb). For this purpose, where the system includes a means of counting blocked calls and successful calls, Pb can be estimated directly from the proportion of calls that are blocked. Failing that, Pb can be estimated by using Ec in place of Eo in the Erlang formula, and the resulting estimate of Pb can then be used in Eo = Ec/(1 − Pb) to provide a first estimate of Eo. Another method of estimating Eo in an overloaded system is to measure the busy-hour call arrival rate, λ (counting successful calls and blocked calls), and the average call-holding time (for successful calls), h, and then estimate Eo using the formula E = λh. For a situation where the traffic to be handled is completely new traffic, the only choice is to try to model expected user behavior.
For example, one could estimate the active user population, N, the expected level of use, U (number of calls/transactions per user per day), the busy-hour concentration factor, C (proportion of daily activity that will fall in the busy hour), and the average holding time/service time, h (expressed in minutes). A projection of busy-hour offered traffic would then be Eo = NUCh/60 erlangs. (The division by 60 translates the busy-hour call/transaction arrival rate into a per-minute value, to match the units in which h is expressed.) The Erlang B formula (or Erlang-B with a hyphen), also known as the Erlang loss formula, is a formula for the blocking probability that describes the probability of call losses for a group of identical parallel resources (telephone lines, circuits, traffic channels, or equivalent), sometimes referred to as an M/M/c/c queue.[5] It is, for example, used to dimension a telephone network's links. The formula was derived by Agner Krarup Erlang and is not limited to telephone networks, since it describes a probability in a queuing system (albeit a special case with a number of servers but no queueing space for incoming calls to wait for a free server). Hence, the formula is also used in certain inventory systems with lost sales. The formula applies under the condition that an unsuccessful call, because the line is busy, is not queued or retried, but instead really vanishes forever. It is assumed that call attempts arrive following a Poisson process, so call arrival instants are independent. Further, it is assumed that the message lengths (holding times) are exponentially distributed (Markovian system), although the formula turns out to apply under general holding time distributions. The Erlang B formula assumes an infinite population of sources (such as telephone subscribers), which jointly offer traffic to N servers (such as telephone lines). The rate expressing the frequency at which new calls arrive, λ, (birth rate, traffic intensity, etc.)
is constant, and does not depend on the number of active sources. The total number of sources is assumed to be infinite. The Erlang B formula calculates the blocking probability of a buffer-less loss system, where a request that is not served immediately is aborted, so that no requests become queued. Blocking occurs when a new request arrives at a time when all available servers are currently busy. The formula also assumes that blocked traffic is cleared and does not return. The formula provides the GoS (grade of service), which is the probability Pb that a new call arriving at the resource group is rejected because all resources (servers, lines, circuits) are busy: B(E, m), where E is the total offered traffic in erlang, offered to m identical parallel resources (servers, communication channels, traffic lanes):

B(E, m) = (E^m / m!) / Σ_{i=0}^{m} (E^i / i!).

The erlang is a dimensionless load unit calculated as the mean arrival rate, λ, multiplied by the mean call holding time, h. The unit has to be dimensionless for Little's law to be dimensionally sane. The formula may be expressed recursively[6] as follows, in a form that is used to simplify the calculation of tables of the Erlang B formula:

B(E, 0) = 1,
B(E, m) = E·B(E, m − 1) / (m + E·B(E, m − 1)).

Typically, instead of B(E, m), the inverse 1/B(E, m) is calculated in numerical computation in order to ensure numerical stability:

1/B(E, 0) = 1,
1/B(E, m) = 1 + (m/E)·(1/B(E, m − 1)).

The recursive form is derivable from the non-recursive form by repeated substitution.[7] The Erlang B formula is decreasing and convex in m.[8] It requires that call arrivals can be modeled by a Poisson process, which is not always a good match, but it is valid for any statistical distribution of call holding times with a finite mean. It applies to traffic transmission systems that do not buffer traffic. More modern examples, compared to POTS, where Erlang B is still applicable are optical burst switching (OBS) and several current approaches to optical packet switching (OPS).
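The numerically stable inverse recursion for the Erlang B formula can be sketched as follows (function name illustrative):

```python
def erlang_b(E, m):
    """Blocking probability B(E, m) for offered load E and m servers,
    via the stable inverse recursion 1/B(E,m) = 1 + (m/E) * 1/B(E,m-1)."""
    inv_b = 1.0                      # 1/B(E, 0) = 1
    for k in range(1, m + 1):
        inv_b = 1.0 + (k / E) * inv_b
    return 1.0 / inv_b

# One server offered one erlang: half of the arriving calls are blocked.
b = erlang_b(1.0, 1)
```

Working with 1/B keeps every intermediate value greater than or equal to 1, avoiding the underflow that the direct factorial form suffers for large m.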
Erlang B was developed as a trunk sizing tool for telephone networks with holding times in the minutes range, but being a mathematical equation it applies on any time scale. Extended Erlang B differs from the classic Erlang-B assumptions by allowing for a proportion of blocked callers to try again, causing an increase in offered traffic from the initial baseline level. It is an iterative calculation rather than a formula and adds an extra parameter, the recall factor Rf, which defines the recall attempts.[9] The steps in the process are as follows.[10] It starts at iteration k = 0 with a known initial baseline level of traffic E0, which is successively adjusted to calculate a sequence of new offered traffic values Ek+1, each of which accounts for the recalls arising from the previously calculated offered traffic Ek. Once a satisfactory value of E has been found, the blocking probability Pb and the recall factor can be used to calculate the probability that all of a caller's attempts are lost, not just their first call but also any subsequent retries. The Erlang C formula expresses the probability that an arriving customer will need to queue, as opposed to being served immediately:[11]

P_w = ( (E^m / m!)·(m / (m − E)) ) / ( Σ_{i=0}^{m−1} E^i / i! + (E^m / m!)·(m / (m − E)) ),  for E < m.

Just as the Erlang B formula, Erlang C assumes an infinite population of sources, which jointly offer traffic of E erlangs to m servers. However, if all the servers are busy when a request arrives from a source, the request is queued. An unlimited number of requests may be held in the queue in this way simultaneously. This formula calculates the probability of queuing offered traffic, assuming that blocked calls stay in the system until they can be handled.
This formula is used to determine the number of agents or customer service representatives needed to staff a call centre, for a specified desired probability of queuing. However, the Erlang C formula assumes that callers never hang up while in queue, which makes the formula predict that more agents should be used than are really needed to maintain a desired service level. It is assumed that the call arrivals can be modeled by a Poisson process and that call holding times are described by an exponential distribution; therefore, the Erlang C formula follows from the assumptions of the M/M/c queue model. When Erlang developed the Erlang-B and Erlang-C traffic equations, they were developed on a set of assumptions. These assumptions are accurate under most conditions; however, in the event of extremely high traffic congestion, Erlang's equations fail to accurately predict the correct number of circuits required because of re-entrant traffic. This is termed a high-loss system, where congestion breeds further congestion at peak times. In such cases, it is first necessary for many additional circuits to be made available so that the high loss can be alleviated. Once this action has been taken, congestion will return to reasonable levels and Erlang's equations can then be used to determine exactly how many circuits are really required.[12] An example of an instance which would cause such a high-loss system to develop would be if a TV-based advertisement were to announce a particular telephone number to call at a specific time. In this case, a large number of people would simultaneously phone the number provided. If the service provider had not catered for this sudden peak demand, extreme traffic congestion will develop and Erlang's equations cannot be used.[12]
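Erlang C can be computed from Erlang B through the standard identity C(E, m) = m·B / (m − E·(1 − B)), valid for E < m; a minimal sketch (function names illustrative):

```python
def erlang_b(E, m):
    """Erlang B blocking probability via the stable inverse recursion."""
    inv_b = 1.0
    for k in range(1, m + 1):
        inv_b = 1.0 + (k / E) * inv_b
    return 1.0 / inv_b

def erlang_c(E, m):
    """Probability that an arrival must queue (Erlang C), for E < m,
    using the identity C = m*B / (m - E*(1 - B))."""
    assert E < m, "the queue is unstable unless E < m"
    b = erlang_b(E, m)
    return m * b / (m - E * (1.0 - b))

# Two servers offered one erlang: an arrival waits one third of the time.
pw = erlang_c(1.0, 2)
```

A call centre planner would increase m until erlang_c(E, m) falls below the target probability of queuing.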
https://en.wikipedia.org/wiki/Erlang_B
For detection systems that record discrete events, such as particle and nuclear detectors, the dead time is the time after each event during which the system is not able to record another event.[1] An everyday-life example of this is what happens when someone takes a photo using a flash – another picture cannot be taken immediately afterward because the flash needs a few seconds to recharge. In addition to lowering the detection efficiency, dead times can have other effects, such as creating possible exploits in quantum cryptography.[2] The total dead time of a detection system is usually due to the contributions of the intrinsic dead time of the detector (for example the ion drift time in a gaseous ionization detector), of the analog front end (for example the shaping time of a spectroscopy amplifier) and of the data acquisition (the conversion time of the analog-to-digital converters and the readout and storage times). The intrinsic dead time of a detector is often due to its physical characteristics; for example a spark chamber is "dead" until the potential between the plates recovers above a high enough value. In other cases the detector, after a first event, is still "live" and does produce a signal for the successive event, but the signal is such that the detector readout is unable to discriminate and separate them, resulting in an event loss or in a so-called "pile-up" event where, for example, a (possibly partial) sum of the deposited energies from the two events is recorded instead. In some cases this can be minimised by an appropriate design, but often only at the expense of other properties like energy resolution. The analog electronics can also introduce dead time; in particular a shaping spectroscopy amplifier needs to integrate a fast-rise, slow-fall signal over the longest possible time (usually 0.5–10 microseconds) to attain the best possible resolution, such that the user needs to choose a compromise between event rate and resolution.
Trigger logic is another possible source of dead time; beyond the proper time of the signal processing, spurious triggers caused by noise need to be taken into account. Finally, digitisation, readout and storage of the event, especially in detection systems with a large number of channels like those used in modern high-energy physics experiments, also contribute to the total dead time. To alleviate the issue, medium and large experiments use sophisticated pipelining and multi-level trigger logic to reduce the readout rates.[3] From the total time a detection system is running, the dead time must be subtracted to obtain the live time. A detector, or detection system, can be characterized by a paralyzable or non-paralyzable behaviour.[1] In a non-paralyzable detector, an event happening during the dead time is simply lost, so that with an increasing event rate the detector will reach a saturation rate equal to the inverse of the dead time. In a paralyzable detector, an event happening during the dead time will not just be missed, but will restart the dead time, so that with increasing rate the detector will reach a saturation point where it will be incapable of recording any event at all. A semi-paralyzable detector exhibits an intermediate behaviour, in which an event arriving during dead time does extend it, but not by the full amount, resulting in a detection rate that decreases when the event rate approaches saturation.[4] It will be assumed that the events are occurring randomly with an average frequency of f, that is, they constitute a Poisson process. The probability that an event will occur in an infinitesimal time interval dt is then f dt.
It follows that the probability P(t) that an event will occur at time t to t + dt with no events occurring between t = 0 and time t is given by the exponential distribution (Lucke 1974, Meeks 2008):

{\displaystyle P(t)\,dt=f\,e^{-ft}\,dt.}

The expected time between events is then {\displaystyle \langle t\rangle =1/f.}

For the non-paralyzable case, with a dead time of τ{\displaystyle \tau }, the probability of measuring an event between t=0{\displaystyle t=0} and t=τ{\displaystyle t=\tau } is zero. Otherwise the probabilities of measurement are the same as the event probabilities. The probability of measuring an event at time t with no intervening measurements is then given by an exponential distribution shifted by τ{\displaystyle \tau }:

{\displaystyle P(t)\,dt=f\,e^{-f(t-\tau )}\,dt\quad {\text{for }}t>\tau .}

The expected time between measurements is then {\displaystyle \langle t\rangle =\tau +1/f.} In other words, if Nm{\displaystyle N_{m}} counts are recorded during a particular time interval T{\displaystyle T} and the dead time is known, the actual number of events (N) may be estimated by[5]

{\displaystyle N={\frac {N_{m}}{1-N_{m}\tau /T}}.}

If the dead time is not known, a statistical analysis can yield the correct count. For example (Meeks 2008), if ti{\displaystyle t_{i}} are a set of intervals between measurements, then the ti{\displaystyle t_{i}} will have a shifted exponential distribution, but if a fixed value D is subtracted from each interval, with negative values discarded, the distribution will be exponential as long as D is greater than the dead time τ{\displaystyle \tau }. For an exponential distribution, the following relationship holds, where n is any integer. If this relationship is estimated for many measured intervals with various values of D subtracted (and for various values of n), it should be found that for values of D above a certain threshold the relationship will be nearly true, and the count rate derived from these modified intervals will be equal to the true count rate.

With a modern microprocessor-based ratemeter, one technique for measuring field strength with detectors (e.g., Geiger–Müller tubes) that have a recovery time is Time-To-Count. In this technique, the detector is armed at the same time a counter is started.
When a strike occurs, the counter is stopped. If this happens many times in a certain time period (e.g., two seconds), then the mean time between strikes can be determined, and thus the count rate. Live time, dead time, and total time are thus measured, not estimated. This technique is used quite widely in radiation monitoring systems used in nuclear power generating stations.
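The non-paralyzable correction described above, together with the contrasting saturation behaviour of the two detector models, can be sketched in a few lines of Python. The function and variable names are illustrative; the relations m = f/(1 + fτ) (non-paralyzable) and m = f·e^(−fτ) (paralyzable) are the standard textbook forms, of which only the first appears (in inverted form) in the text above.

```python
import math

def true_count_nonparalyzable(measured_counts, total_time, dead_time):
    """Non-paralyzable dead-time correction: N = N_m / (1 - N_m * tau / T)."""
    return measured_counts / (1.0 - measured_counts * dead_time / total_time)

def measured_rate(true_rate, dead_time, paralyzable=False):
    """Measured event rate m as a function of the true rate f.

    Non-paralyzable: m = f / (1 + f*tau), saturating at 1/tau.
    Paralyzable:     m = f * exp(-f*tau), rising and then falling back to zero.
    """
    if paralyzable:
        return true_rate * math.exp(-true_rate * dead_time)
    return true_rate / (1.0 + true_rate * dead_time)

# 9000 counts in 10 s with tau = 100 microseconds: true count ~ 9890
n_true = true_count_nonparalyzable(9000, 10.0, 100e-6)
```

Note that for the paralyzable model the same measured rate corresponds to two possible true rates, which is why the correction formula is only written for the non-paralyzable case.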
https://en.wikipedia.org/wiki/Dead_time
In applied statistics, the Marshall–Olkin exponential distribution is any member of a certain family of continuous multivariate probability distributions with positive-valued components. It was introduced by Albert W. Marshall and Ingram Olkin.[1] One of its main uses is in reliability theory, where the Marshall–Olkin copula models the dependence between random variables subjected to external shocks.[2][3][4]

Let {EB:∅≠B⊂{1,2,…,b}}{\displaystyle \{E_{B}:\varnothing \neq B\subset \{1,2,\ldots ,b\}\}} be a set of independent, exponentially distributed random variables, where EB{\displaystyle E_{B}} has mean 1/λB{\displaystyle 1/\lambda _{B}}. Let

{\displaystyle T_{i}=\min\{E_{B}:i\in B\},\qquad i=1,\ldots ,b.}

The joint distribution of T=(T1,…,Tb){\displaystyle T=(T_{1},\ldots ,T_{b})} is called the Marshall–Olkin exponential distribution with parameters {λB,B⊂{1,2,…,b}}.{\displaystyle \{\lambda _{B},B\subset \{1,2,\ldots ,b\}\}.}

Suppose b = 3. Then there are seven nonempty subsets of {1, ..., b} = {1, 2, 3}; hence seven different exponential random variables: Then we have:
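The shock construction above translates directly into a sampler. The following sketch (illustrative names, pure Python) draws T = (T_1, ..., T_b) by generating one exponential shock per nonempty subset and taking componentwise minima; with all seven shock rates equal to 1 in the b = 3 example, each marginal T_i is exponential with rate 4 (the four subsets containing i).

```python
import random
from itertools import combinations

def sample_marshall_olkin(rates, rng):
    """Draw one T = (T_1, ..., T_b) from the Marshall-Olkin exponential distribution.

    `rates` maps each nonempty subset B of {1, ..., b} (as a frozenset) to its
    shock rate lambda_B.  Each shock E_B ~ Exponential(lambda_B) independently,
    and T_i = min{ E_B : i in B }.
    """
    shocks = {B: rng.expovariate(lam) for B, lam in rates.items() if lam > 0}
    b = max(max(B) for B in rates)
    return tuple(min(e for B, e in shocks.items() if i in B)
                 for i in range(1, b + 1))

# b = 3: seven nonempty subsets of {1, 2, 3}, here all with unit shock rate.
rates = {frozenset(B): 1.0
         for r in (1, 2, 3)
         for B in combinations((1, 2, 3), r)}

rng = random.Random(42)
samples = [sample_marshall_olkin(rates, rng) for _ in range(20000)]
# Marginally T_1 ~ Exponential(4), so the empirical mean should be near 0.25.
mean_t1 = sum(t[0] for t in samples) / len(samples)
```

The dependence between components comes entirely from shared shocks: any subset B with |B| ≥ 2 can make several T_i equal with positive probability, which is the singular part characteristic of this distribution.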
https://en.wikipedia.org/wiki/Marshall%E2%80%93Olkin_exponential_distribution
In probability theory, a beta negative binomial distribution is the probability distribution of a discrete random variable X{\displaystyle X} equal to the number of failures needed to get r{\displaystyle r} successes in a sequence of independent Bernoulli trials. The probability p{\displaystyle p} of success on each trial stays constant within any given experiment but varies across different experiments following a beta distribution. Thus the distribution is a compound probability distribution.

This distribution has also been called the inverse Markov–Pólya distribution and the generalized Waring distribution,[1] or simply abbreviated as the BNB distribution. A shifted form of the distribution has been called the beta-Pascal distribution.[1]

If the parameters of the beta distribution are α{\displaystyle \alpha } and β{\displaystyle \beta }, and if

{\displaystyle X\mid p\sim \mathrm {NB} (r,p),}

where

{\displaystyle p\sim {\textrm {B}}(\alpha ,\beta ),}

then the marginal distribution of X{\displaystyle X} (i.e. the posterior predictive distribution) is a beta negative binomial distribution:

{\displaystyle X\sim \mathrm {BNB} (r,\alpha ,\beta ).}

In the above, NB(r,p){\displaystyle \mathrm {NB} (r,p)} is the negative binomial distribution and B(α,β){\displaystyle {\textrm {B}}(\alpha ,\beta )} is the beta distribution. Denoting fX|p(k|q),fp(q|α,β){\displaystyle f_{X|p}(k|q),f_{p}(q|\alpha ,\beta )} the densities of the negative binomial and beta distributions respectively, we obtain the PMF f(k|α,β,r){\displaystyle f(k|\alpha ,\beta ,r)} of the BNB distribution by marginalization:

Noting that the integral evaluates to:

we can arrive at the following formulas by relatively simple manipulations.
If r{\displaystyle r} is an integer, then the PMF can be written in terms of the beta function: More generally, the PMF can be written as

or

Using the properties of the beta function, the PMF with integer r{\displaystyle r} can be rewritten as:

More generally, the PMF can be written as

The PMF is often also presented in terms of the Pochhammer symbol for integer r{\displaystyle r}.

The k-th factorial moment of a beta negative binomial random variable X is defined for k<α{\displaystyle k<\alpha } and in this case is equal to

The beta negative binomial is non-identifiable, which can be seen easily by simply swapping r{\displaystyle r} and β{\displaystyle \beta } in the above density or characteristic function and noting that it is unchanged. Thus estimation demands that a constraint be placed on r{\displaystyle r}, β{\displaystyle \beta }, or both.

The beta negative binomial distribution contains the beta geometric distribution as a special case when either r=1{\displaystyle r=1} or β=1{\displaystyle \beta =1}. It can therefore approximate the geometric distribution arbitrarily well. It also approximates the negative binomial distribution arbitrarily well for large α{\displaystyle \alpha }. It can therefore approximate the Poisson distribution arbitrarily well for large α{\displaystyle \alpha }, β{\displaystyle \beta } and r{\displaystyle r}.

By Stirling's approximation to the beta function, it can be easily shown that for large k{\displaystyle k}

which implies that the beta negative binomial distribution is heavy tailed and that moments of order α{\displaystyle \alpha } or greater do not exist.

The beta geometric distribution is an important special case of the beta negative binomial distribution occurring for r=1{\displaystyle r=1}. In this case the pmf simplifies to

This distribution is used in some Buy Till you Die (BTYD) models. Further, when β=1{\displaystyle \beta =1} the beta geometric reduces to the Yule–Simon distribution.
However, it is more common to define the Yule–Simon distribution in terms of a shifted version of the beta geometric. In particular, if X∼BG(α,1){\displaystyle X\sim BG(\alpha ,1)} then X+1∼YS(α){\displaystyle X+1\sim YS(\alpha )}.

In the case when the three parameters r,α{\displaystyle r,\alpha } and β{\displaystyle \beta } are positive integers, the beta negative binomial can also be motivated by an urn model – or more specifically a basic Pólya urn model. Consider an urn initially containing α{\displaystyle \alpha } red balls (the stopping color) and β{\displaystyle \beta } blue balls. At each step of the model, a ball is drawn at random from the urn and replaced, along with one additional ball of the same color. The process is repeated over and over, until r{\displaystyle r} red balls are drawn. The random variable X{\displaystyle X} of observed draws of blue balls is distributed according to a BNB(r,α,β){\displaystyle \mathrm {BNB} (r,\alpha ,\beta )}. Note that at the end of the experiment, the urn always contains the fixed number r+α{\displaystyle r+\alpha } of red balls while containing the random number X+β{\displaystyle X+\beta } of blue balls.

By the non-identifiability property, X{\displaystyle X} can be equivalently generated with the urn initially containing α{\displaystyle \alpha } red balls (the stopping color) and r{\displaystyle r} blue balls, stopping when β{\displaystyle \beta } red balls are observed.
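Both the closed-form PMF and the urn scheme above can be sketched and cross-checked in pure Python. The PMF below is the standard closed form obtained from the beta–negative binomial marginalization, f(k) = Γ(r+k)/(k!·Γ(r)) · B(α+r, β+k)/B(α, β); names are illustrative, and the mean check uses the known value r·β/(α−1), valid for α > 1.

```python
import random
from math import exp, lgamma

def bnb_pmf(k, r, alpha, beta):
    """PMF of BNB(r, alpha, beta): Gamma(r+k)/(k! Gamma(r)) * B(alpha+r, beta+k)/B(alpha, beta)."""
    def log_b(a, b):
        return lgamma(a) + lgamma(b) - lgamma(a + b)
    return exp(lgamma(r + k) - lgamma(k + 1) - lgamma(r)
               + log_b(alpha + r, beta + k) - log_b(alpha, beta))

def sample_bnb_urn(r, alpha, beta, rng):
    """Polya-urn draw from BNB(r, alpha, beta) for positive-integer parameters.

    Start with alpha red balls (the stopping color) and beta blue balls; draw
    with replacement, adding one extra ball of the drawn color each time, and
    stop at the r-th red draw.  The number of blue draws is BNB(r, alpha, beta).
    """
    red, blue, reds_drawn, blues_drawn = alpha, beta, 0, 0
    while reds_drawn < r:
        if rng.random() * (red + blue) < red:
            red += 1
            reds_drawn += 1
        else:
            blue += 1
            blues_drawn += 1
    return blues_drawn

r, alpha, beta = 2, 5, 2
total = sum(bnb_pmf(k, r, alpha, beta) for k in range(4000))     # should be close to 1
mean = sum(k * bnb_pmf(k, r, alpha, beta) for k in range(4000))  # r*beta/(alpha-1) = 1 here
rng = random.Random(7)
empirical = sum(sample_bnb_urn(r, alpha, beta, rng) for _ in range(20000)) / 20000
```

Working through `lgamma` keeps the PMF numerically stable for large k, where the direct ratio of gamma functions would overflow.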
https://en.wikipedia.org/wiki/Beta_negative_binomial_distribution
In probability and statistics the extended negative binomial distribution is a discrete probability distribution extending the negative binomial distribution. It is a truncated version of the negative binomial distribution[1] for which estimation methods have been studied.[2]

In the context of actuarial science, the distribution appeared in its general form in a paper by K. Hess, A. Liewald and K.D. Schmidt[3] when they characterized all distributions for which the extended Panjer recursion works. For the case m = 1, the distribution was already discussed by Willmot[4] and put into a parametrized family with the logarithmic distribution and the negative binomial distribution by H.U. Gerber.[5]

For a natural number m ≥ 1 and real parameters p, r with 0 < p ≤ 1 and –m < r < –m + 1, the probability mass function of the ExtNegBin(m, r, p) distribution is given by

and

where

is the (generalized) binomial coefficient and Γ denotes the gamma function.

Using that f( . ; m, r, ps) for s ∈ (0, 1] is also a probability mass function, it follows that the probability generating function is given by

For the important case m = 1, hence r ∈ (–1, 0), this simplifies to
https://en.wikipedia.org/wiki/Extended_negative_binomial_distribution
Inprobability theoryandstatistics, thenegative multinomial distributionis a generalization of thenegative binomial distribution(NB(x0,p)) to more than two outcomes.[1] As with the univariate negative binomial distribution, if the parameterx0{\displaystyle x_{0}}is a positive integer, the negative multinomial distribution has anurn modelinterpretation. Suppose we have an experiment that generatesm+1≥2 possible outcomes, {X0,...,Xm}, each occurring with non-negative probabilities {p0,...,pm} respectively. If sampling proceeded untilnobservations were made, then {X0,...,Xm} would have beenmultinomially distributed. However, if the experiment is stopped onceX0reaches the predetermined valuex0(assumingx0is a positive integer), then the distribution of them-tuple {X1,...,Xm} isnegative multinomial. These variables are not multinomially distributed because their sumX1+...+Xmis not fixed, being a draw from anegative binomial distribution. Ifm-dimensionalxis partitioned as followsX=[X(1)X(2)]with sizes[n×1(m−n)×1]{\displaystyle \mathbf {X} ={\begin{bmatrix}\mathbf {X} ^{(1)}\\\mathbf {X} ^{(2)}\end{bmatrix}}{\text{ with sizes }}{\begin{bmatrix}n\times 1\\(m-n)\times 1\end{bmatrix}}}and accordinglyp{\displaystyle {\boldsymbol {p}}}p=[p(1)p(2)]with sizes[n×1(m−n)×1]{\displaystyle {\boldsymbol {p}}={\begin{bmatrix}{\boldsymbol {p}}^{(1)}\\{\boldsymbol {p}}^{(2)}\end{bmatrix}}{\text{ with sizes }}{\begin{bmatrix}n\times 1\\(m-n)\times 1\end{bmatrix}}}and letq=1−∑ipi(2)=p0+∑ipi(1){\displaystyle q=1-\sum _{i}p_{i}^{(2)}=p_{0}+\sum _{i}p_{i}^{(1)}} The marginal distribution ofX(1){\displaystyle {\boldsymbol {X}}^{(1)}}isNM(x0,p0/q,p(1)/q){\displaystyle \mathrm {NM} (x_{0},p_{0}/q,{\boldsymbol {p}}^{(1)}/q)}. That is the marginal distribution is also negative multinomial with thep(2){\displaystyle {\boldsymbol {p}}^{(2)}}removed and the remainingp's properly scaled so as to add to one. The univariate marginalm=1{\displaystyle m=1}is said to have a negative binomial distribution. 
The conditional distribution of X(1){\displaystyle \mathbf {X} ^{(1)}} given X(2)=x(2){\displaystyle \mathbf {X} ^{(2)}=\mathbf {x} ^{(2)}} is NM(x0+∑xi(2),p(1)){\textstyle \mathrm {NM} (x_{0}+\sum {x_{i}^{(2)}},\mathbf {p} ^{(1)})}. That is,

{\displaystyle \Pr(\mathbf {x} ^{(1)}\mid \mathbf {x} ^{(2)},x_{0},\mathbf {p} )=\Gamma \!\left(\sum _{i=0}^{m}{x_{i}}\right){\frac {(1-\sum _{i=1}^{n}{p_{i}^{(1)}})^{x_{0}+\sum _{i=1}^{m-n}x_{i}^{(2)}}}{\Gamma (x_{0}+\sum _{i=1}^{m-n}x_{i}^{(2)})}}\prod _{i=1}^{n}{\frac {(p_{i}^{(1)})^{x_{i}}}{(x_{i}^{(1)})!}}.}

If X1∼NM(r1,p){\displaystyle \mathbf {X} _{1}\sim \mathrm {NM} (r_{1},\mathbf {p} )} and X2∼NM(r2,p){\displaystyle \mathbf {X} _{2}\sim \mathrm {NM} (r_{2},\mathbf {p} )} are independent, then X1+X2∼NM(r1+r2,p){\displaystyle \mathbf {X} _{1}+\mathbf {X} _{2}\sim \mathrm {NM} (r_{1}+r_{2},\mathbf {p} )}. Similarly and conversely, it is easy to see from the characteristic function that the negative multinomial is infinitely divisible.

If X=(X1,…,Xm)∼NM⁡(x0,(p1,…,pm)){\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{m})\sim \operatorname {NM} (x_{0},(p_{1},\ldots ,p_{m}))} then, if the random variables with subscripts i and j are dropped from the vector and replaced by their sum,

{\displaystyle \mathbf {X} '=(X_{1},\ldots ,X_{i}+X_{j},\ldots ,X_{m})\sim \operatorname {NM} (x_{0},(p_{1},\ldots ,p_{i}+p_{j},\ldots ,p_{m})).}

This aggregation property may be used to derive the marginal distribution of Xi{\displaystyle X_{i}} mentioned above.
The entries of thecorrelation matrixareρ(Xi,Xi)=1.{\displaystyle \rho (X_{i},X_{i})=1.}ρ(Xi,Xj)=cov⁡(Xi,Xj)var⁡(Xi)var⁡(Xj)=pipj(p0+pi)(p0+pj).{\displaystyle \rho (X_{i},X_{j})={\frac {\operatorname {cov} (X_{i},X_{j})}{\sqrt {\operatorname {var} (X_{i})\operatorname {var} (X_{j})}}}={\sqrt {\frac {p_{i}p_{j}}{(p_{0}+p_{i})(p_{0}+p_{j})}}}.} If we let the mean vector of the negative multinomial beμ=x0p0p{\displaystyle {\boldsymbol {\mu }}={\frac {x_{0}}{p_{0}}}\mathbf {p} }andcovariance matrixΣ=x0p02pp′+x0p0diag⁡(p),{\displaystyle {\boldsymbol {\Sigma }}={\tfrac {x_{0}}{p_{0}^{2}}}\,\mathbf {p} \mathbf {p} '+{\tfrac {x_{0}}{p_{0}}}\,\operatorname {diag} (\mathbf {p} ),}then it is easy to show through properties ofdeterminantsthat|Σ|=1p0∏i=1mμi{\textstyle |{\boldsymbol {\Sigma }}|={\frac {1}{p_{0}}}\prod _{i=1}^{m}{\mu _{i}}}. From this, it can be shown thatx0=∑μi∏μi|Σ|−∏μi{\displaystyle x_{0}={\frac {\sum {\mu _{i}}\prod {\mu _{i}}}{|{\boldsymbol {\Sigma }}|-\prod {\mu _{i}}}}}andp=|Σ|−∏μi|Σ|∑μiμ.{\displaystyle \mathbf {p} ={\frac {|{\boldsymbol {\Sigma }}|-\prod {\mu _{i}}}{|{\boldsymbol {\Sigma }}|\sum {\mu _{i}}}}{\boldsymbol {\mu }}.} Substituting sample moments yields themethod of momentsestimatesx^0=(∑i=1mxi¯)∏i=1mxi¯|S|−∏i=1mxi¯{\displaystyle {\hat {x}}_{0}={\frac {(\sum _{i=1}^{m}{{\bar {x_{i}}})}\prod _{i=1}^{m}{\bar {x_{i}}}}{|\mathbf {S} |-\prod _{i=1}^{m}{\bar {x_{i}}}}}}andp^=(|S|−∏i=1mx¯i|S|∑i=1mx¯i)x¯{\displaystyle {\hat {\mathbf {p} }}=\left({\frac {|{\boldsymbol {S}}|-\prod _{i=1}^{m}{{\bar {x}}_{i}}}{|{\boldsymbol {S}}|\sum _{i=1}^{m}{{\bar {x}}_{i}}}}\right){\boldsymbol {\bar {x}}}} Waller LA and Zelterman D. (1997). Log-linear modeling with the negative multi- nomial distribution. Biometrics 53: 971–82. Johnson, Norman L.; Kotz, Samuel; Balakrishnan, N. (1997). "Chapter 36: Negative Multinomial and Other Multinomial-Related Distributions".Discrete Multivariate Distributions. Wiley.ISBN978-0-471-12844-1.
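The stopped-experiment definition of the negative multinomial gives a direct, if naive, sampler: draw outcomes until outcome 0 has occurred x0 times and record the other counts. The sketch below (illustrative names) checks the empirical means against the mean vector μ = (x0/p0)·p stated above.

```python
import random

def sample_negative_multinomial(x0, p, rng):
    """Sequential-trials draw from NM(x0, p) with p = (p0, p1, ..., pm).

    Outcomes 0..m are drawn independently with probabilities p until outcome 0
    has occurred x0 times; the counts of outcomes 1..m are returned.
    """
    counts = [0] * (len(p) - 1)
    zeros = 0
    while zeros < x0:
        u, acc, j = rng.random(), 0.0, 0
        for j, pj in enumerate(p):
            acc += pj
            if u < acc:
                break
        if j == 0:
            zeros += 1
        else:
            counts[j - 1] += 1
    return counts

x0, p = 3, (0.5, 0.3, 0.2)
rng = random.Random(3)
n = 20000
totals = [0, 0]
for _ in range(n):
    c = sample_negative_multinomial(x0, p, rng)
    totals[0] += c[0]
    totals[1] += c[1]
# E[X_i] = x0 * p_i / p0, i.e. 1.8 and 1.2 for these parameters
means = [t / n for t in totals]
```

The expected number of trials per experiment is x0/p0, so this sampler is only practical when p0 is not too small; inverse-CDF or gamma–Poisson mixture methods scale better.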
https://en.wikipedia.org/wiki/Negative_multinomial_distribution
Instatistics,Poisson regressionis ageneralized linear modelform ofregression analysisused to modelcount dataandcontingency tables.[1]Poisson regression assumes the response variableYhas aPoisson distribution, and assumes thelogarithmof itsexpected valuecan be modeled by a linear combination of unknownparameters. A Poisson regression model is sometimes known as alog-linear model, especially when used to model contingency tables. Negative binomial regressionis a popular generalization of Poisson regression because it loosens the highly restrictive assumption that the variance is equal to the mean made by the Poisson model. The traditional negative binomial regression model is based on the Poisson-gamma mixture distribution. This model is popular because it models the Poisson heterogeneity with a gamma distribution. Poisson regression models aregeneralized linear modelswith the logarithm as the (canonical)link function, and thePoisson distributionfunction as the assumed probability distribution of the response. Ifx∈Rn{\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}is a vector ofindependent variables, then the model takes the form whereα∈R{\displaystyle \alpha \in \mathbb {R} }andβ∈Rn{\displaystyle \mathbf {\beta } \in \mathbb {R} ^{n}}. Sometimes this is written more compactly as wherex{\displaystyle \mathbf {x} }is now an (n+ 1)-dimensional vector consisting ofnindependent variables concatenated to the number one. Hereθ{\displaystyle \theta }is simplyβ{\displaystyle \beta }concatenated toα{\displaystyle \alpha }. Thus, when given a Poisson regression modelθ{\displaystyle \theta }and an input vectorx{\displaystyle \mathbf {x} }, the predicted mean of the associated Poisson distribution is given by IfYi{\displaystyle Y_{i}}areindependentobservations with corresponding valuesxi{\displaystyle \mathbf {x} _{i}}of the predictor variables, thenθ{\displaystyle \theta }can be estimated bymaximum likelihood. 
The maximum-likelihood estimates lack aclosed-form expressionand must be found by numerical methods. The probability surface for maximum-likelihood Poisson regression is always concave, making Newton–Raphson or other gradient-based methods appropriate estimation techniques. Suppose we have a model with a single predictor, that is,n=1{\displaystyle n=1}: Suppose we compute the predicted values at point(Y2,x2){\displaystyle (Y_{2},x_{2})}and(Y1,x1){\displaystyle (Y_{1},x_{1})}: By subtracting the first from the second: Suppose now thatx2=x1+1{\displaystyle x_{2}=x_{1}+1}. We obtain: So the coefficient of the model is to be interpreted as the increase in the logarithm of the count of the outcome variable when the independent variable increases by 1. By applying the rules of logarithms: That is, when the independent variable increases by 1, the outcome variable is multiplied by the exponentiated coefficient. The exponentiated coefficient is also called theincidence ratio. Often, the object of interest is the average partial effect or average marginal effect∂E(Y|x)∂x{\displaystyle {\frac {\partial E(Y|x)}{\partial x}}}, which is interpreted as the change in the outcomeY{\displaystyle Y}for a one unit change in the independent variablex{\displaystyle x}. The average partial effect in the Poisson model for a continuousx{\displaystyle x}can be shown to be:[2] This can be estimated using the coefficient estimates from the Poisson modelθ^=(α^,β^){\displaystyle {\hat {\theta }}=({\hat {\alpha }},{\hat {\beta }})}with the observed values ofx{\displaystyle \mathbb {x} }. 
Given a set of parametersθand an input vectorx, the mean of the predictedPoisson distribution, as stated above, is given by and thus, the Poisson distribution'sprobability mass functionis given by Now suppose we are given a data set consisting ofmvectorsxi∈Rn+1,i=1,…,m{\displaystyle x_{i}\in \mathbb {R} ^{n+1},\,i=1,\ldots ,m}, along with a set ofmvaluesy1,…,ym∈N{\displaystyle y_{1},\ldots ,y_{m}\in \mathbb {N} }. Then, for a given set of parametersθ, the probability of attaining this particular set of data is given by By the method ofmaximum likelihood, we wish to find the set of parametersθthat makes this probability as large as possible. To do this, the equation is first rewritten as alikelihood functionin terms ofθ: Note that the expression on theright hand sidehas not actually changed. A formula in this form is typically difficult to work with; instead, one uses thelog-likelihood: Notice that the parametersθonly appear in the first two terms of each term in the summation. Therefore, given that we are only interested in finding the best value forθwe may drop theyi! and simply write To find a maximum, we need to solve an equation∂ℓ(θ∣X,Y)∂θ=0{\displaystyle {\frac {\partial \ell (\theta \mid X,Y)}{\partial \theta }}=0}which has no closed-form solution. However, the negative log-likelihood,−ℓ(θ∣X,Y){\displaystyle -\ell (\theta \mid X,Y)}, is a convex function, and so standardconvex optimizationtechniques such asgradient descentcan be applied to find the optimal value ofθ. Poisson regression may be appropriate when the dependent variable is a count, for instance ofeventssuch as the arrival of a telephone call at a call centre.[3]The events must be independent in the sense that the arrival of one call will not make another more or less likely, but the probability per unit time of events is understood to be related to covariates such as time of day. 
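The optimization just described can be sketched end to end for a single predictor. This is an illustrative sketch, not any library's implementation: plain gradient ascent on the concave log-likelihood stands in for the Newton-type methods more commonly used, and a simple Knuth-style Poisson sampler generates the test data.

```python
import math
import random

def fit_poisson_regression(xs, ys, lr=0.05, iters=3000):
    """Maximize sum_i [y_i*(a + b*x_i) - exp(a + b*x_i)] by gradient ascent.

    This is the Poisson log-likelihood with the constant log(y_i!) terms
    dropped; its gradient is sum_i (y_i - mu_i)*(1, x_i) with
    mu_i = exp(a + b*x_i).  No closed-form maximizer exists.
    """
    a = b = 0.0
    n = len(xs)
    for _ in range(iters):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            resid = y - math.exp(a + b * x)
            ga += resid
            gb += resid * x
        a += lr * ga / n
        b += lr * gb / n
    return a, b

def poisson_draw(lam, rng):
    """Knuth's simple Poisson sampler (adequate for small lam)."""
    threshold, k, prod = math.exp(-lam), 0, rng.random()
    while prod > threshold:
        prod *= rng.random()
        k += 1
    return k

# Recover known coefficients from simulated data with log E[Y] = 0.5 + 0.3*x.
rng = random.Random(0)
xs = [rng.uniform(0.0, 2.0) for _ in range(400)]
ys = [poisson_draw(math.exp(0.5 + 0.3 * x), rng) for x in xs]
a_hat, b_hat = fit_poisson_regression(xs, ys)
```

Because the negative log-likelihood is convex, any reasonable step size converges to the unique maximizer; the recovered coefficients differ from (0.5, 0.3) only by sampling noise.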
Poisson regression may also be appropriate for rate data, where the rate is a count of events divided by some measure of that unit's exposure (a particular unit of observation).[4] For example, biologists may count the number of tree species in a forest: events would be tree observations, exposure would be unit area, and rate would be the number of species per unit area. Demographers may model death rates in geographic areas as the count of deaths divided by person-years. More generally, event rates can be calculated as events per unit time, which allows the observation window to vary for each unit. In these examples, exposure is respectively unit area, person-years and unit time.

In Poisson regression this is handled as an offset. If the rate is count/exposure, multiplying both sides of the equation by exposure moves it to the right side of the equation. When both sides of the equation are then logged, the final model contains log(exposure) as a term that is added to the regression coefficients. This logged variable, log(exposure), is called the offset variable and enters on the right-hand side of the equation with a parameter estimate (for log(exposure)) constrained to 1:

{\displaystyle \log(\operatorname {E} (Y\mid x)/{\text{exposure}})=\theta 'x,}

which implies

{\displaystyle \log(\operatorname {E} (Y\mid x))=\log({\text{exposure}})+\theta 'x.}

An offset in the case of a GLM in R can be achieved using the offset() function, for example glm(y ~ x + offset(log(exposure)), family = poisson).

A characteristic of the Poisson distribution is that its mean is equal to its variance. In certain circumstances, it will be found that the observed variance is greater than the mean; this is known as overdispersion and indicates that the model is not appropriate. A common reason is the omission of relevant explanatory variables, or dependent observations.
Under some circumstances, the problem of overdispersion can be solved by using quasi-likelihood estimation or a negative binomial distribution instead.[5][6] Ver Hoef and Boveng described the difference between quasi-Poisson (also called overdispersion with quasi-likelihood) and negative binomial (equivalent to gamma-Poisson) as follows: if E(Y) = μ, the quasi-Poisson model assumes var(Y) = θμ while the gamma-Poisson assumes var(Y) = μ(1 + κμ), where θ is the quasi-Poisson overdispersion parameter and κ is the shape parameter of the negative binomial distribution. For both models, parameters are estimated using iteratively reweighted least squares. For quasi-Poisson, the weights are μ/θ. For negative binomial, the weights are μ/(1 + κμ). With large μ and substantial extra-Poisson variation, the negative binomial weights are capped at 1/κ. Ver Hoef and Boveng discussed an example where they selected between the two by plotting mean squared residuals vs. the mean.[7]

Another common problem with Poisson regression is excess zeros: if there are two processes at work, one determining whether there are zero events or any events, and a Poisson process determining how many events there are, there will be more zeros than a Poisson regression would predict. An example would be the distribution of cigarettes smoked in an hour by members of a group where some individuals are non-smokers. Other generalized linear models such as the negative binomial model or zero-inflated models may function better in these cases. Conversely, underdispersion may pose an issue for parameter estimation.[8]

Poisson regression creates proportional hazards models, one class of survival analysis: see proportional hazards models for descriptions of Cox models.
When estimating the parameters for Poisson regression, one typically tries to find values for θ that maximize the likelihood of an expression of the form

where m is the number of examples in the data set and p(yi;eθ′xi){\displaystyle p(y_{i};e^{\theta 'x_{i}})} is the probability mass function of the Poisson distribution with the mean set to eθ′xi{\displaystyle e^{\theta 'x_{i}}}. Regularization can be added to this optimization problem by instead maximizing[9]

for some positive constant λ{\displaystyle \lambda }. This technique, similar to ridge regression, can reduce overfitting.
https://en.wikipedia.org/wiki/Negative_binomial_regression
In statistics, the class of vector generalized linear models (VGLMs) was proposed to enlarge the scope of models catered for by generalized linear models (GLMs). In particular, VGLMs allow for response variables outside the classical exponential family and for more than one parameter. Each parameter (not necessarily a mean) can be transformed by a link function. The VGLM framework is also large enough to naturally accommodate multiple responses; these are several independent responses each coming from a particular statistical distribution with possibly different parameter values.

Vector generalized linear models are described in detail in Yee (2015).[1] The central algorithm adopted is the iteratively reweighted least squares method, for maximum likelihood estimation of usually all the model parameters. In particular, Fisher scoring is used, which, for most models, relies on the first and expected second derivatives of the log-likelihood function.

GLMs essentially cover one-parameter models from the classical exponential family, and include three of the most important statistical regression models: the linear model, Poisson regression for counts, and logistic regression for binary responses. However, the exponential family is far too limiting for regular data analysis. For example, for counts, zero-inflation, zero-truncation and overdispersion are regularly encountered, and the makeshift adaptations made to the binomial and Poisson models in the form of quasi-binomial and quasi-Poisson can be argued to be ad hoc and unsatisfactory. But the VGLM framework readily handles models such as zero-inflated Poisson regression, zero-altered Poisson (hurdle) regression, positive-Poisson regression, and negative binomial regression. As another example, for the linear model, the variance of a normal distribution is relegated to a scale parameter and is often treated as a nuisance parameter (if it is considered as a parameter at all).
But the VGLM framework allows the variance to be modelled using covariates. As a whole, one can loosely think of VGLMs as GLMs that handle many models outside the classical exponential family and are not restricted to estimating a single mean. During estimation, rather than usingweighted least squaresduring IRLS, one usesgeneralized least squaresto handle the correlation between theMlinear predictors. We suppose that the response or outcome or thedependent variable(s),y=(y1,…,yQ1)T{\displaystyle {\boldsymbol {y}}=(y_{1},\ldots ,y_{Q_{1}})^{T}}, are assumed to be generated from a particulardistribution. Most distributions are univariate, so thatQ1=1{\displaystyle Q_{1}=1}, and an example ofQ1=2{\displaystyle Q_{1}=2}is the bivariate normal distribution. Sometimes we write our data as(xi,wi,yi){\displaystyle ({\boldsymbol {x}}_{i},w_{i},{\boldsymbol {y}}_{i})}fori=1,…,n{\displaystyle i=1,\ldots ,n}. Each of thenobservations are considered to be independent. Thenyi=(yi1,…,yiQ1)T{\displaystyle {\boldsymbol {y}}_{i}=(y_{i1},\ldots ,y_{iQ_{1}})^{T}}. Thewi{\displaystyle w_{i}}are known positive prior weights, and oftenwi=1{\displaystyle w_{i}=1}. The explanatory or independent variables are writtenx=(x1,…,xp)T{\displaystyle {\boldsymbol {x}}=(x_{1},\ldots ,x_{p})^{T}}, or wheniis needed, asxi=(xi1,…,xip)T{\displaystyle {\boldsymbol {x}}_{i}=(x_{i1},\ldots ,x_{ip})^{T}}. Usually there is anintercept, in which casex1=1{\displaystyle x_{1}=1}orxi1=1{\displaystyle x_{i1}=1}. Actually, the VGLM framework allows forSresponses, each of dimensionQ1{\displaystyle Q_{1}}. In the aboveS= 1. Hence the dimension ofyi{\displaystyle {\boldsymbol {y}}_{i}}is more generallyQ=S×Q1{\displaystyle Q=S\times Q_{1}}. One handlesSresponses by code such asvglm(cbind(y1, y2, y3) ~ x2 + x3, ..., data = mydata)forS= 3. To simplify things, most of this article hasS= 1. 
The VGLM usually consists of four elements: Eachlinear predictoris a quantity which incorporates information about the independent variables into the model. The symbolηj{\displaystyle \eta _{j}}(Greek"eta") denotes a linear predictor and a subscriptjis used to denote thejth one. It relates thejth parameter to the explanatory variables, andηj{\displaystyle \eta _{j}}is expressed as linear combinations (thus, "linear") of unknown parametersβj,{\displaystyle {\boldsymbol {\beta }}_{j},}i.e., of regression coefficientsβ(j)k{\displaystyle \beta _{(j)k}}. Thejth parameter,θj{\displaystyle \theta _{j}}, of the distribution depends on the independent variables,x,{\displaystyle {\boldsymbol {x}},}through Letη=(η1,…,ηM)T{\displaystyle {\boldsymbol {\eta }}=(\eta _{1},\ldots ,\eta _{M})^{T}}be the vector of all the linear predictors. (For convenience we always letη{\displaystyle {\boldsymbol {\eta }}}be of dimensionM). Thusallthe covariates comprisingx{\displaystyle {\boldsymbol {x}}}potentially affectallthe parameters through the linear predictorsηj{\displaystyle \eta _{j}}. Later, we will allow the linear predictors to be generalized to additive predictors, which is the sum of smooth functions of eachxk{\displaystyle x_{k}}and each function is estimated from the data. Each link function provides the relationship between a linear predictor and a parameter of the distribution. There are many commonly used link functions, and their choice can be somewhat arbitrary. It makes sense to try to match thedomainof the link function to therangeof the distribution's parameter value. Notice above that thegj{\displaystyle g_{j}}allows a different link function for each parameter. They have similar properties as withgeneralized linear models, for example, common link functions include thelogitlink for parameters in(0,1){\displaystyle (0,1)}, and theloglink for positive parameters. TheVGAMpackage has functionidentitylink()for parameters that can assume both positive and negative values. 
More generally, the VGLM framework allows for any linear constraints between the regression coefficientsβ(j)k{\displaystyle \beta _{(j)k}}of each linear predictors. For example, we may want to set some to be equal to 0, or constraint some of them to be equal. We have where theHk{\displaystyle {\boldsymbol {H}}_{k}}are theconstraint matrices. Each constraint matrix is known and prespecified, and hasMrows, and between 1 andMcolumns. The elements of constraint matrices are finite-valued, and often they are just 0 or 1. For example, the value 0 effectively omits that element while a 1 includes it. It is common for some models to have aparallelismassumption, which means thatHk=1M{\displaystyle {\boldsymbol {H}}_{k}={\boldsymbol {1}}_{M}}fork=2,…,p{\displaystyle k=2,\ldots ,p}, and for some models, fork=1{\displaystyle k=1}too. The special case whenHk=IM{\displaystyle {\boldsymbol {H}}_{k}={\boldsymbol {I}}_{M}}for allk=1,…,p{\displaystyle k=1,\ldots ,p}is known astrivial constraints; all the regression coefficients are estimated and are unrelated. Andθj{\displaystyle \theta _{j}}is known as anintercept-onlyparameter if thejth row of all theHk={\displaystyle {\boldsymbol {H}}_{k}=}are equal to0T{\displaystyle {\boldsymbol {0}}^{T}}fork=2,…,p{\displaystyle k=2,\ldots ,p}, i.e.,ηj=β(j)1∗{\displaystyle \eta _{j}=\beta _{(j)1}^{*}}equals an intercept only. Intercept-only parameters are thus modelled as simply as possible, as a scalar. The unknown parameters,β∗=(β(1)∗T,…,β(p)∗T)T{\displaystyle {\boldsymbol {\beta }}^{*}=({\boldsymbol {\beta }}_{(1)}^{*T},\ldots ,{\boldsymbol {\beta }}_{(p)}^{*T})^{T}}, are typically estimated by the method ofmaximum likelihood. All the regression coefficients may be put into a matrix as follows: With even more generally, one can allow the value of a variablexk{\displaystyle x_{k}}to have a different value for eachηj{\displaystyle \eta _{j}}. 
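The constraint-matrix mechanism described above can be made concrete with a small pure-Python sketch (names are illustrative, not from the VGAM package). For a model with M = 3 linear predictors and one covariate besides the intercept, a parallelism (proportional-odds-style) assumption uses separate intercepts, H_1 = I_3, but a single shared slope, H_2 = 1_3.

```python
def linear_predictors(x, H, beta_star):
    """eta(x) = sum_k H_k * beta*_k * x_k, built from constraint matrices.

    H[k] is an M x r_k constraint matrix and beta_star[k] holds the r_k
    reduced coefficients for covariate k (x[0] = 1 for the intercept).
    """
    M = len(H[0])
    eta = [0.0] * M
    for Hk, bk, xk in zip(H, beta_star, x):
        for j in range(M):
            eta[j] += sum(Hk[j][c] * bk[c] for c in range(len(bk))) * xk
    return eta

# M = 3 linear predictors, parallelism on the slope:
H = [
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],  # H_1 = I_3: three free intercepts
    [[1], [1], [1]],                     # H_2 = 1_3: one slope shared by all eta_j
]
beta_star = [[-1.0, 0.0, 1.0], [0.5]]

eta = linear_predictors([1.0, 2.0], H, beta_star)  # x_1 = 1 (intercept), x_2 = 2
# -> [0.0, 1.0, 2.0]: intercepts differ, the shared slope adds 0.5*2 to each
```

Only four coefficients are estimated here instead of the six under trivial constraints (H_k = I_3 for all k), which is exactly the dimension reduction the constraint matrices encode.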
For example, if each linear predictor is for a different time point, then one might have a time-varying covariate. For example, in discrete choice models, one has conditional logit models, nested logit models, generalized logit models, and the like, to distinguish between certain variants and fit a multinomial logit model to, e.g., transport choices. A variable such as cost differs depending on the choice: for example, a taxi is more expensive than a bus, which is more expensive than walking. The xij facility of VGAM allows one to generalize η_j(x_i) to η_j(x_ij). The most general formula is

η(x_i) = o_i + Σ_{k=1}^{p} diag(x_ik1, …, x_ikM) H_k β*_(k).

Here o_i is an optional offset, which translates to an n × M matrix in practice. The VGAM package has an xij argument that allows the successive elements of the diagonal matrix to be inputted.

Yee (2015)[1] describes an R package implementation called VGAM.[2] Currently this software fits approximately 150 models/distributions. The central modelling functions are vglm() and vgam(). The family argument is assigned a VGAM family function, e.g., family = negbinomial for negative binomial regression, family = poissonff for Poisson regression, and family = propodds for the proportional odds model (cumulative logit model) for ordinal categorical regression.

We are maximizing a log-likelihood

ℓ(β*) = Σ_{i=1}^{n} w_i ℓ_i,

where the w_i are positive and known prior weights. The maximum likelihood estimates can be found using an iteratively reweighted least squares (IRLS) algorithm with Fisher's scoring method, with updates of the form

β^{(a+1)} = β^{(a)} + I(β^{(a)})^{−1} u(β^{(a)}),

where u(β^{(a)}) is the score vector and I(β^{(a)}) is the Fisher information matrix at iteration a. The latter is also called the expected information matrix, or EIM.

For the computation, the (small) model matrix constructed from the RHS of the formula in vglm() and the constraint matrices are combined to form a big model matrix.
The IRLS is applied to this big X. This matrix is known as the VLM matrix, since the vector linear model is the underlying least squares problem being solved. A VLM is a weighted multivariate regression where the variance-covariance matrix for each row of the response matrix is not necessarily the same, and is known. (In classical multivariate regression, all the errors have the same variance-covariance matrix, which is unknown.) In particular, the VLM minimizes the weighted sum of squares

Σ_{i=1}^{n} (z_i − X_i β)^T W_i (z_i − X_i β).

This quantity is minimized at each IRLS iteration. The working responses (also known as pseudo-responses or adjusted dependent vectors) are

z_i = η_i + W_i^{−1} ∂ℓ_i/∂η_i,

where the W_i are known as working weights or working weight matrices. They are symmetric and positive-definite. Using the EIM helps ensure that they are all positive-definite (and not just their sum) over much of the parameter space. In contrast, using Newton–Raphson would mean the observed information matrices would be used, and these tend to be positive-definite on a smaller subset of the parameter space. Computationally, the Cholesky decomposition is used to invert the working weight matrices and to convert the overall generalized least squares problem into an ordinary least squares problem.

Of course, all generalized linear models are special cases of VGLMs. But VGLMs often estimate all parameters by full maximum likelihood estimation rather than using the method of moments for the scale parameter.

If the response variable is an ordinal measurement with M + 1 levels, then one may fit a model of the form

g_j(P(Y ≤ j)) = η_j for j = 1, …, M.

Different links g lead to proportional odds models or ordered probit models; e.g., the VGAM family function cumulative(link = probit) assigns a probit link to the cumulative probabilities, so this model is also called the cumulative probit model. In general these are called cumulative link models.
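The Fisher-scoring IRLS iteration above can be illustrated on ordinary Poisson regression, the M = 1 special case in which the working weight matrices and adjusted dependent vectors reduce to scalars. A minimal NumPy sketch on simulated data (not the VGAM implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate Poisson-regression data: log mu = X beta.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 1.0])
y = rng.poisson(np.exp(X @ beta_true))

# Fisher scoring via IRLS. For the log link the working weights are
# W = diag(mu) and the working response is z = eta + (y - mu) / mu.
beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta
    mu = np.exp(eta)
    z = eta + (y - mu) / mu                       # adjusted dependent vector
    XtW = X.T * mu                                # X^T W
    beta_new = np.linalg.solve(XtW @ X, XtW @ z)  # weighted least squares
    if np.max(np.abs(beta_new - beta)) < 1e-10:
        beta = beta_new
        break
    beta = beta_new
```

Each pass solves one weighted least squares problem, which is exactly the role the VLM matrix plays in the general M > 1 case.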
For categorical and multinomial distributions, the fitted values are an (M + 1)-vector of probabilities, with the property that all probabilities add up to 1. Each probability indicates the likelihood of occurrence of one of the M + 1 possible values.

If the response variable is a nominal measurement, or the data do not satisfy the assumptions of an ordered model, then one may fit a model of the form

η_j = log[P(Y = j) / P(Y = M + 1)] for j = 1, …, M.

The above link is sometimes called the multilogit link, and the model is called the multinomial logit model. It is common to choose the first or the last level of the response as the reference or baseline group; the above uses the last level. The VGAM family function multinomial() fits the above model, and it has an argument called refLevel that can be assigned the level used as the reference group.

Classical GLM theory performs Poisson regression for count data. The link is typically the logarithm, which is known as the canonical link. The variance function is proportional to the mean:

Var(Y | x) = τ μ(x),

where the dispersion parameter τ is typically fixed at exactly one. When it is not, the resulting quasi-likelihood model is often described as Poisson with overdispersion, or quasi-Poisson; τ is then commonly estimated by the method of moments, and as such, confidence intervals for τ are difficult to obtain. In contrast, VGLMs offer a much richer set of models to handle overdispersion with respect to the Poisson, e.g., the negative binomial distribution and several variants thereof. Another count regression model is the generalized Poisson distribution. Other possible models are the zeta distribution and the Zipf distribution.

RR-VGLMs are VGLMs where a subset of the B matrix is of a lower rank. Without loss of generality, suppose that x = (x_1^T, x_2^T)^T is a partition of the covariate vector.
Then the part of the B matrix corresponding to x_2 is of the form A C^T, where A and C are thin matrices (i.e., with R columns each), e.g., vectors if the rank R = 1. RR-VGLMs potentially offer several advantages when applied to certain models and data sets.

Firstly, if M and p are large then the number of regression coefficients estimated by a VGLM (M × p) is large. RR-VGLMs can then reduce the number of estimated regression coefficients enormously if R is low, e.g., R = 1 or R = 2. An example of a model where this is particularly useful is the RR-multinomial logit model, also known as the stereotype model.

Secondly, ν = C^T x_2 = (ν_1, …, ν_R)^T is an R-vector of latent variables, and often these can be usefully interpreted. If R = 1 then we can write ν = c^T x_2, so that the latent variable comprises loadings on the explanatory variables. Thus RR-VGLMs take optimal linear combinations of x_2 and then fit a VGLM to the explanatory variables (x_1, ν).

Thirdly, a biplot can be produced if R = 2, and this allows the model to be visualized.

It can be shown that RR-VGLMs are simply VGLMs where the constraint matrices for the variables in x_2 are unknown and to be estimated; it then transpires that H_k = A for such variables. RR-VGLMs can be estimated by an alternating algorithm which fixes A and estimates C, then fixes C and estimates A, and so on.
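The reduced-rank idea can be sketched with plain linear algebra: a rank-R block A C^T has R(M + p₂) free values instead of M·p₂, and the alternating scheme can be mimicked by alternating least squares. This is a least-squares stand-in for the likelihood-based algorithm, applied to a synthetic coefficient block:

```python
import numpy as np

rng = np.random.default_rng(1)

M, p2, R = 4, 6, 1
# A rank-R coefficient block B2 = A C^T (here R = 1, so A and C are vectors).
A_true = rng.normal(size=(M, R))
C_true = rng.normal(size=(p2, R))
B2 = A_true @ C_true.T

# Rank reduction shrinks the parameter count from M*p2 to R*(M + p2).
n_full, n_rr = M * p2, R * (M + p2)

# Alternating least squares sketch: fix A and solve for C, then fix C
# and solve for A.
A = rng.normal(size=(M, R))
for _ in range(50):
    C = np.linalg.lstsq(A, B2, rcond=None)[0].T    # solve A C^T ~ B2 for C
    A = np.linalg.lstsq(C, B2.T, rcond=None)[0].T  # solve C A^T ~ B2^T for A
fit = A @ C.T
```

Because the synthetic block is exactly rank R, the alternation recovers it; in the real algorithm each half-step maximizes the likelihood rather than minimizing a sum of squares.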
In practice, some uniqueness constraints are needed for A and/or C. In VGAM, the rrvglm() function uses corner constraints by default, which means that the top R rows of A are set to I_R. RR-VGLMs were proposed in 2003.[3]

A special case of RR-VGLMs occurs when R = 1 and M = 2. This is dimension reduction from two parameters to one. Then it can be shown that

η_2 = t_1 + a_21 η_1,

where the elements t_1 and a_21 are estimated. This formula provides a coupling of η_1 and η_2, and it induces a relationship between two parameters of a model that can be useful, e.g., for modelling a mean–variance relationship. There is often some choice of link functions, which offers a little flexibility when coupling the two parameters, e.g., a logit, probit, cauchit or cloglog link for parameters in the unit interval.

The above formula is particularly useful for the negative binomial distribution, so that the RR-NB has variance function

Var(Y | x) = μ + δ_1 μ^{δ_2}.

This has been called the NB-P variant by some authors. The δ_1 and δ_2 are estimated, and it is also possible to obtain approximate confidence intervals for them. Incidentally, several other useful NB variants can also be fitted, with the help of selecting the right combination of constraint matrices, for example NB-1, NB-2 (the negbinomial() default), and NB-H; see Yee (2014)[4] and Table 11.3 of Yee (2015).[1]

The subclass of row–column interaction models (RCIMs) has also been proposed; these are a special type of RR-VGLM. RCIMs apply only to a matrix response Y, and there are no explicit explanatory variables x. Instead, indicator variables for each row and column are explicitly set up, and an order-R interaction of the form A C^T is allowed.
Special cases of this type of model include the Goodman RC association model and the quasi-variances methodology as implemented by the qvcalc R package. RCIMs can be defined as an RR-VGLM applied to Y, with a row effect, a column effect, and an order-R interaction term for each cell. For the Goodman RC association model, we have η_1ij = log μ_ij, so that if R = 0 then it is a Poisson regression fitted to a matrix of counts with row effects and column effects; this is similar in idea to a no-interaction two-way ANOVA model. Another example of an RCIM is when g_1 is the identity link and the parameter is the median of an asymmetric Laplace distribution; a no-interaction RCIM is then similar to a technique called median polish. In VGAM, the rcim() and grc() functions fit the above models. Yee and Hadi (2014)[5] also show that RCIMs can be used to fit unconstrained quadratic ordination models to species data; this is an example of indirect gradient analysis in ordination (a topic in statistical ecology).

Vector generalized additive models (VGAMs) are a major extension to VGLMs in which the linear predictor η_j is not restricted to be linear in the covariates x_k but is instead a sum of smoothing functions applied to the x_k:

η(x) = H_1 β*_(1) + Σ_{k=2}^{p} H_k f*_(k)(x_k),

where f*_(k)(x_k) = (f*_(1)k(x_k), f*_(2)k(x_k), …)^T. These are M additive predictors. Each smooth function f*_(j)k is estimated from the data. Thus VGLMs are model-driven while VGAMs are data-driven. Currently, only smoothing splines are implemented in the VGAM package. For M > 1 they are actually vector splines, which estimate the component functions in f*_(j)k(x_k) simultaneously. Of course, one could use regression splines with VGLMs.
The motivation behind VGAMs is similar to that of Hastie and Tibshirani (1990)[6] and Wood (2017).[7] VGAMs were proposed in 1996.[8] Currently, work is being done to estimate VGAMs using the P-splines of Eilers and Marx (1996).[9] This allows for several advantages over using smoothing splines and vector backfitting, such as making automatic smoothing parameter selection easier.

Quadratic reduced-rank VGLMs (QRR-VGLMs) add a quadratic in the latent variable to the RR-VGLM class. The result is that a bell-shaped curve can be fitted to each response as a function of the latent variable. For R = 2, one has bell-shaped surfaces as a function of the two latent variables, somewhat similar to a bivariate normal distribution. Particular applications of QRR-VGLMs can be found in ecology, in a field of multivariate analysis called ordination.

As a specific rank-1 example of a QRR-VGLM, consider Poisson data with S species. The model for species s is

log μ_s(ν) = β_(s)1 + β_(s)2 ν + β_(s)3 ν² = α_s − (ν − u_s)² / (2 t_s²)

for s = 1, …, S. The right-most parameterization, which uses the symbols α_s, u_s, t_s, has particular ecological meaning, because they relate to the species' abundance, optimum and tolerance respectively. For example, the tolerance is a measure of niche width, and a large value means that the species can live in a wide range of environments. In the above equation, one needs β_(s)3 < 0 in order to obtain a bell-shaped curve.

QRR-VGLMs fit Gaussian ordination models by maximum likelihood estimation, and they are an example of direct gradient analysis. The cqo() function in the VGAM package currently calls optim() to search for the optimal C, and given that, it is easy to calculate the site scores and fit a suitable generalized linear model to that.
The function is named after the acronym CQO, which stands for constrained quadratic ordination: "constrained" refers to direct gradient analysis (there are environmental variables, and a linear combination of these is taken as the latent variable), and "quadratic" refers to the quadratic form in the latent variables ν on the η scale. Unfortunately, QRR-VGLMs are sensitive to outliers in both the response and explanatory variables, are computationally expensive, and may give a local rather than a global solution. QRR-VGLMs were proposed in 2004.[10]
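The bell-shaped response underlying a rank-1 QRR-VGLM can be sketched directly, assuming the usual Gaussian-ordination parameterization in which the log mean is quadratic in the latent variable (illustrative parameter values, not fitted ones):

```python
import numpy as np

# Bell-shaped species response on the log scale (illustrative values):
# log lambda(nu) = alpha - (nu - u)^2 / (2 t^2),
# where alpha, u, t play the roles of abundance, optimum and tolerance.
alpha, u, t = 2.0, 0.5, 1.5           # hypothetical species parameters

nu = np.linspace(-5.0, 6.0, 221)      # latent variable (gradient) values
lam = np.exp(alpha - (nu - u) ** 2 / (2.0 * t ** 2))

# Expanding the quadratic gives eta = b1 + b2 * nu + b3 * nu^2 with
# b3 = -1 / (2 t^2) < 0, the condition for a bell shape.
b3 = -1.0 / (2.0 * t ** 2)
peak = nu[np.argmax(lam)]             # the curve is maximised at the optimum u
```

A larger tolerance t flattens the curve (wider niche), while the quadratic coefficient b3 stays negative whenever a bell shape is present.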
https://en.wikipedia.org/wiki/Vector_generalized_linear_model
In statistics, burstiness is the intermittent increase and decrease in the activity or frequency of an event.[1][2] One measure of burstiness is the Fano factor, the ratio between the variance and the mean of counts. Burstiness is observable in natural phenomena, such as natural disasters, or in other phenomena, such as network/data/email network traffic[3][4] or vehicular traffic.[5] Burstiness is, in part, due to changes in the probability distribution of inter-event times.[6] Distributions of bursty processes or events are characterised by heavy, or fat, tails.[1]

Burstiness of the inter-contact time between nodes in a time-varying network can decidedly slow spreading processes over the network. This is of great interest for studying the spread of information and disease.[7]

One relatively simple measure of burstiness is the burstiness score. The burstiness score of a subset t of a time period T relative to an event e measures how often e appears in t compared to its occurrences in T. It is defined in terms of E_t, the total number of occurrences of event e in the subset t, and E, the total number of occurrences of e in T. The burstiness score can be used to determine whether t is a "bursty period" relative to e. A positive score says that e occurs more often during the subset t than over the total time T, making t a bursty period; a negative score implies otherwise.[8]
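A quick numerical illustration of these measures. The Fano factor is the variance-to-mean ratio defined above; the burstiness score below uses an assumed log rate-ratio form, chosen only to reproduce the stated sign behaviour, since the article's exact formula is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Fano factor: variance-to-mean ratio of event counts per window.
counts = rng.poisson(4.0, size=10_000)                  # non-bursty baseline
fano_poisson = counts.var() / counts.mean()             # ~1 for Poisson counts

bursty = rng.poisson(rng.gamma(2.0, 2.0, size=10_000))  # overdispersed counts
fano_bursty = bursty.var() / bursty.mean()              # > 1: bursty

# Burstiness score of a sub-period t within T (assumed log-rate-ratio
# form: positive when the event is over-represented in t, negative when
# it is under-represented, matching the description above).
def burstiness_score(E_t, len_t, E, len_T):
    return np.log((E_t / len_t) / (E / len_T))

over  = burstiness_score(E_t=30, len_t=10, E=100, len_T=100)  # bursty period
under = burstiness_score(E_t=5,  len_t=10, E=100, len_T=100)  # quiet period
```

Overdispersed (mixed Poisson) counts push the Fano factor well above 1, the signature of burstiness.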
https://en.wikipedia.org/wiki/Burstiness
The clustering illusion is the tendency to erroneously consider the inevitable "streaks" or "clusters" arising in small samples from random distributions to be non-random. The illusion is caused by a human tendency to underpredict the amount of variability likely to appear in a small sample of random or pseudorandom data.[1]

Thomas Gilovich, an early author on the subject, argued that the effect occurs for different types of random dispersions. Some might perceive patterns in stock market price fluctuations over time, or clusters in two-dimensional data such as the locations of impact of World War II V-1 flying bombs on maps of London.[1][2] Although Londoners developed specific theories about the pattern of impacts within London, a statistical analysis by R. D. Clarke originally published in 1946 showed that the impacts of V-2 rockets on London were a close fit to a random distribution.[3][4][5][6][7]

Using this cognitive bias in causal reasoning may result in the Texas sharpshooter fallacy, in which differences in data are ignored and similarities are overemphasized. More general forms of erroneous pattern recognition are pareidolia and apophenia. Related biases are the illusion of control, which the clustering illusion could contribute to, and insensitivity to sample size, in which people don't expect greater variation in smaller samples. A different cognitive bias involving misunderstanding of chance streams is the gambler's fallacy. Daniel Kahneman and Amos Tversky explained this kind of misprediction as being caused by the representativeness heuristic[2] (which they also first proposed).
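The underprediction of streaks is easy to demonstrate: in 20 fair coin flips, the longest run of identical outcomes is typically around four or more, longer than most people expect. A quick simulation:

```python
import numpy as np

rng = np.random.default_rng(3)

def longest_run(flips):
    """Length of the longest run of identical outcomes."""
    best = run = 1
    for a, b in zip(flips, flips[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

# Longest run of identical outcomes in 20 fair coin flips, averaged
# over many trials: streaks of this length are the norm in small random
# samples, not evidence of a non-random pattern.
runs = [longest_run(rng.integers(0, 2, size=20)) for _ in range(5_000)]
mean_longest = float(np.mean(runs))
```

Seeing such streaks and reading them as "hot hands" or hidden causes is exactly the clustering illusion.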
https://en.wikipedia.org/wiki/Clustering_illusion
In probability theory and statistics, the Boolean–Poisson model, or simply Boolean model, for a random subset of the plane (or higher dimensions, analogously) is one of the simplest and most tractable models in stochastic geometry. Take a Poisson point process of rate λ in the plane and make each point the centre of a random set; the resulting union of overlapping sets is a realization of the Boolean model B. More precisely, the parameters are λ and a probability distribution on compact sets: for each point ξ of the Poisson point process we pick a set C_ξ from the distribution, and then define B as the union ∪_ξ (ξ + C_ξ) of translated sets. To illustrate tractability with one simple formula, the mean density of B equals 1 − exp(−λA), where Γ denotes the area of C_ξ and A = E(Γ). The classical theory of stochastic geometry develops many further formulae.[1][2]

As related topics, the case of constant-sized discs is the basic model of continuum percolation,[3] and low-density Boolean models serve as first-order approximations in the study of extremes in many models.[4]
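The mean-density formula can be checked by Monte Carlo in the simplest case of fixed-radius discs, where Γ = πr² is constant, so A = πr² (a simulation sketch, with an edge buffer so discs centred just outside the window are included):

```python
import math
import numpy as np

rng = np.random.default_rng(4)

lam, r, L = 0.5, 0.5, 50.0        # intensity, disc radius, window side
A = math.pi * r ** 2              # Gamma is constant, so A = E[Gamma]
density_theory = 1.0 - math.exp(-lam * A)

# Poisson germs on a buffered window so discs centred outside the
# window can still cover points near its edge.
Lb = L + 2.0 * r
n = rng.poisson(lam * Lb * Lb)
germs = rng.uniform(-r, L + r, size=(n, 2))

# Estimate the mean density as the covered fraction of random test points.
pts = rng.uniform(0.0, L, size=(10_000, 2))
covered = np.zeros(len(pts), dtype=bool)
for g in germs:                   # mark points within distance r of a germ
    covered |= ((pts - g) ** 2).sum(axis=1) <= r * r
density_est = float(covered.mean())
```

The covered fraction estimates the probability that a fixed point is covered, which is exactly the mean density 1 − exp(−λA).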
https://en.wikipedia.org/wiki/Boolean_model_(probability_theory)
In probability theory, a Cox process, also known as a doubly stochastic Poisson process, is a point process which generalizes a Poisson process by letting the intensity that varies across the underlying mathematical space (often space or time) itself be a stochastic process. The process is named after the statistician David Cox, who first published the model in 1955.[1]

Cox processes are used to generate simulations of spike trains (the sequence of action potentials generated by a neuron),[2] and also in financial mathematics, where they produce a "useful framework for modeling prices of financial instruments in which credit risk is a significant factor."[3]

Let ξ be a random measure. A random measure η is called a Cox process directed by ξ if L(η | ξ = μ) is a Poisson process with intensity measure μ. Here, L(η | ξ = μ) is the conditional distribution of η given {ξ = μ}.

If η is a Cox process directed by ξ, then η has the Laplace transform

E[exp(−∫ f dη)] = E[exp(−∫ (1 − e^{−f}) dξ)]

for any positive, measurable function f.
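Both the two-step construction and the Laplace transform can be checked numerically in the simplest case: a mixed Poisson process on [0, 1] directed by ξ = Θ · Lebesgue with Θ gamma-distributed, using a constant test function f ≡ c (illustrative parameter values):

```python
import numpy as np

rng = np.random.default_rng(5)

# Cox process on [0, 1] directed by xi = Theta * Lebesgue, Theta ~ Gamma.
shape, scale = 2.0, 1.5
c = 0.7                                   # test function f(x) = c on [0, 1]
m = 200_000

theta = rng.gamma(shape, scale, size=m)
counts = rng.poisson(theta)               # eta([0, 1]) given Theta = theta

# Mean measure: E eta(B) = E xi(B), here E[Theta] * |B|.
mean_lhs = counts.mean()
mean_rhs = shape * scale                  # E[Theta]

# Laplace transform: E exp(-eta(f)) = E exp(-xi(1 - e^{-f})).
lap_lhs = np.exp(-c * counts).mean()
lap_rhs = np.exp(-theta * (1.0 - np.exp(-c))).mean()
```

Both sides of the Laplace identity agree because, conditionally on Θ, the count is Poisson and the inner expectation can be taken in closed form.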
https://en.wikipedia.org/wiki/Cox_process
In statistics and probability theory, a point process or point field is a set of a random number of mathematical points randomly located on a mathematical space such as the real line or Euclidean space.[1][2]

Point processes on the real line form an important special case that is particularly amenable to study,[3] because the points are ordered in a natural way, and the whole point process can be described completely by the (random) intervals between the points. These point processes are frequently used as models for random events in time, such as the arrival of customers in a queue (queueing theory), of impulses in a neuron (computational neuroscience), of particles in a Geiger counter, of the locations of radio stations in a telecommunication network,[4] or of searches on the world-wide web.

General point processes on a Euclidean space can be used for spatial data analysis,[5][6] which is of interest in such diverse disciplines as forestry, plant ecology, epidemiology, geography, seismology, materials science, astronomy, telecommunications, computational neuroscience,[7] economics[8] and others.

Since point processes were historically developed by different communities, there are different mathematical interpretations of a point process, such as a random counting measure or a random set,[9][10] and different notations. The notations are described in detail on the point process notation page.
Some authors regard a point process and a stochastic process as two different objects, such that a point process is a random object that arises from or is associated with a stochastic process,[11][12] though it has been remarked that the difference between point processes and stochastic processes is not clear.[12] Others consider a point process as a stochastic process, where the process is indexed by sets of the underlying space[a] on which it is defined, such as the real line or n-dimensional Euclidean space.[15][16] Other stochastic processes such as renewal and counting processes are studied in the theory of point processes.[17][12] Sometimes the term "point process" is not preferred, as historically the word "process" denoted an evolution of some system in time, so a point process is also called a random point field.[18]

In mathematics, a point process is a random element whose values are "point patterns" on a set S. While in the exact mathematical definition a point pattern is specified as a locally finite counting measure, it is sufficient for more applied purposes to think of a point pattern as a countable subset of S that has no limit points.

To define general point processes, we start with a probability space (Ω, F, P) and a measurable space (S, 𝒮), where S is a locally compact second countable Hausdorff space and 𝒮 is its Borel σ-algebra. Consider now an integer-valued locally finite kernel ξ from (Ω, F) into (S, 𝒮), that is, a mapping Ω × 𝒮 → Z₊ such that ξ(ω, ·) is a locally finite measure on S for each fixed ω, and ω ↦ ξ(ω, B) is measurable for each fixed B ∈ 𝒮. This kernel defines a random measure in the following way.
We would like to think of ξ as defining a mapping from ω ∈ Ω to a measure ξ_ω ∈ M(S) (namely, Ω → M(S)), where M(S) is the set of all locally finite measures on S. To make this mapping measurable, we need to define a σ-field over M(S). This σ-field is constructed as the minimal σ-algebra such that all evaluation maps of the form π_B : μ ↦ μ(B), where B ∈ 𝒮 is relatively compact, are measurable. Equipped with this σ-field, ξ is a random element which, for every ω ∈ Ω, gives a locally finite measure ξ_ω over S.

Now, by a point process on S we simply mean an integer-valued random measure (or equivalently, an integer-valued kernel) ξ constructed as above. The most common example of the state space S is the Euclidean space R^n or a subset thereof, where a particularly interesting special case is given by the real half-line [0, ∞). However, point processes are not limited to these examples, and may among other things also be used if the points themselves are compact subsets of R^n, in which case ξ is usually referred to as a particle process. The name "point process" can be misleading: since S need not be a subset of the real line, it may wrongly suggest that ξ is a stochastic process.

Every instance (or event) of a point process ξ can be represented as

ξ = Σ_{i=1}^{n} δ_{X_i},

where δ denotes the Dirac measure, n is an integer-valued random variable and the X_i are random elements of S.
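The counting-measure representation is concrete computationally: a realisation is just the random points themselves, and evaluating ξ(B) counts how many fall in B. A small sketch on S = [0, 1):

```python
import numpy as np

rng = np.random.default_rng(6)

# A realisation of a point process as a sum of Dirac measures:
# xi = sum_i delta_{X_i}, so xi(B) just counts the points in B.
n = rng.poisson(10)                       # random number of points
points = rng.uniform(0.0, 1.0, size=n)    # random locations X_i in [0, 1)

def xi(lo, hi):
    """Evaluate the counting measure on the interval B = [lo, hi)."""
    return int(np.sum((points >= lo) & (points < hi)))

total = xi(0.0, 1.0)                      # all n points
left, right = xi(0.0, 0.5), xi(0.5, 1.0)  # additivity over disjoint sets
```

Additivity over disjoint sets is exactly what makes ξ a (counting) measure for each realisation.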
If the X_i are almost surely distinct (or equivalently, almost surely ξ({x}) ≤ 1 for all x ∈ R^d), then the point process is known as simple.

Another different but useful representation of an event (an event in the event space, i.e. a series of points) is the counting notation, where each instance is represented as a function N : R → Z₀⁺, a right-continuous step function taking integer values:

N(t₂) − N(t₁),

which is the number of events in the observation interval (t₁, t₂]. It is sometimes denoted by N_{t₁,t₂}, and N_T or N(T) mean N_{0,T}.

The expectation measure Eξ (also known as the mean measure) of a point process ξ is a measure on S that assigns to every Borel subset B of S the expected number of points of ξ in B. That is,

(Eξ)(B) = E[ξ(B)] for every Borel set B.

The Laplace functional Ψ_N(f) of a point process N is a map from the set of all positive valued functions f on the state space of N to [0, ∞), defined as follows:

Ψ_N(f) = E[exp(−∫_S f(x) N(dx))].

Laplace functionals play a role similar to that of characteristic functions for random variables. One important theorem says that two point processes have the same law if their Laplace functionals are equal.

The nth power of a point process, ξⁿ, is defined on the product space Sⁿ as follows:

ξⁿ(A₁ × ⋯ × Aₙ) = ξ(A₁) ⋯ ξ(Aₙ).

By the monotone class theorem, this uniquely defines the product measure on (Sⁿ, B(Sⁿ)). The expectation E ξⁿ(⋅) is called the nth moment measure. The first moment measure is the mean measure.

Let S = R^d. The joint intensities of a point process ξ w.r.t.
the Lebesgue measure are functions ρ^(k) : (R^d)^k → [0, ∞) such that for any disjoint bounded Borel subsets B₁, …, B_k,

E[ξ(B₁) ⋯ ξ(B_k)] = ∫_{B₁} ⋯ ∫_{B_k} ρ^(k)(x₁, …, x_k) dx₁ ⋯ dx_k.

Joint intensities do not always exist for point processes. Given that moments of a random variable determine the random variable in many cases, a similar result is to be expected for joint intensities. Indeed, this has been shown in many cases.[2]

A point process ξ ⊂ R^d is said to be stationary if ξ + x := Σ_{i=1}^{N} δ_{X_i + x} has the same distribution as ξ for all x ∈ R^d. For a stationary point process, the mean measure satisfies Eξ(⋅) = λ‖⋅‖ for some constant λ ≥ 0, where ‖⋅‖ stands for the Lebesgue measure. This λ is called the intensity of the point process. A stationary point process on R^d has almost surely either 0 or an infinite number of points in total. For more on stationary point processes and random measures, refer to Chapter 12 of Daley & Vere-Jones.[2] Stationarity has been defined and studied for point processes in more general spaces than R^d.

A point process transformation is a function that maps a point process to another point process.

We shall now see some examples of point processes in R^d. The simplest and most ubiquitous example of a point process is the Poisson point process, which is a spatial generalisation of the Poisson process. A Poisson (counting) process on the line can be characterised by two properties: the numbers of points (or events) in disjoint intervals are independent and have a Poisson distribution. A Poisson point process can also be defined using these two properties.
Namely, we say that a point process ξ is a Poisson point process if the following two conditions hold:

1) ξ(B₁), …, ξ(Bₙ) are independent for disjoint subsets B₁, …, Bₙ.

2) For any bounded subset B, ξ(B) has a Poisson distribution with parameter λ‖B‖, where ‖⋅‖ denotes the Lebesgue measure.

The two conditions can be combined and written as follows: for any disjoint bounded subsets B₁, …, Bₙ and non-negative integers k₁, …, kₙ we have that

P(ξ(B₁) = k₁, …, ξ(Bₙ) = kₙ) = Π_i e^{−λ‖B_i‖} (λ‖B_i‖)^{k_i} / k_i!.

The constant λ is called the intensity of the Poisson point process. Note that the Poisson point process is characterised by the single parameter λ. It is a simple, stationary point process. To be more specific, one calls the above point process a homogeneous Poisson point process. An inhomogeneous Poisson process is defined as above, but with λ‖B‖ replaced by ∫_B λ(x) dx, where λ is a non-negative function on R^d.

A Cox process (named after Sir David Cox) is a generalisation of the Poisson point process, in that we use random measures in place of λ‖B‖. More formally, let Λ be a random measure. A Cox point process driven by the random measure Λ is a point process ξ with the following two properties:

1) Given Λ, the counts ξ(B₁), …, ξ(Bₙ) are conditionally independent for disjoint subsets B₁, …, Bₙ.

2) Given Λ, ξ(B) has a Poisson distribution with mean Λ(B) for any bounded subset B.

It is easy to see that Poisson point processes (homogeneous and inhomogeneous) arise as special cases of Cox point processes.
The mean measure of a Cox point process is Eξ(⋅) = EΛ(⋅), and thus in the special case of a Poisson point process it is λ‖⋅‖.

For a Cox point process, Λ(⋅) is called the intensity measure. Further, if Λ(⋅) has a (random) density (Radon–Nikodym derivative) λ(⋅), i.e.,

Λ(B) = ∫_B λ(x) dx almost surely for all bounded B,

then λ(⋅) is called the intensity field of the Cox point process. Stationarity of the intensity measure or intensity field implies the stationarity of the corresponding Cox point process. Many specific classes of Cox point processes have been studied in detail.

By Jensen's inequality, one can verify that Cox point processes satisfy the following inequality:

P(ξ(B) = 0) ≥ P(ξ_α(B) = 0)

for all bounded Borel subsets B, where ξ_α stands for a Poisson point process with intensity measure α(⋅) := Eξ(⋅) = EΛ(⋅). Thus points are distributed with greater variability in a Cox point process than in a Poisson point process. This is sometimes called the clustering or attractive property of the Cox point process.
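The clustering property can be verified numerically for a mixed Poisson process (a Cox process whose random intensity on a set B is gamma-distributed), comparing it with a Poisson process of matched mean:

```python
import numpy as np

rng = np.random.default_rng(7)

# Cox process counts in a set B: N | Lambda(B) ~ Poisson(Lambda(B)),
# here with a gamma-distributed random intensity (a mixed Poisson).
m = 200_000
lam_B = rng.gamma(2.0, 1.0, size=m)       # Lambda(B), with E[Lambda(B)] = 2
cox = rng.poisson(lam_B)
poi = rng.poisson(lam_B.mean(), size=m)   # Poisson with matched mean

void_cox = np.mean(cox == 0)              # P(xi(B) = 0)
void_poi = np.mean(poi == 0)              # P(xi_alpha(B) = 0)
```

The Cox process has the larger void probability (Jensen's inequality applied to E[exp(−Λ(B))]) and more variable counts than the matched Poisson process, which is the clustering property in action.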
An important class of point processes, with applications tophysics,random matrix theory, andcombinatorics, is that ofdeterminantal point processes.[25] A Hawkes processNt{\displaystyle N_{t}}, also known as a self-exciting counting process, is a simple point process whose conditional intensity can be expressed as λ(t) = μ(t) + Σ_{T_i < t} ν(t − T_i), whereν:R+→R+{\displaystyle \nu :\mathbb {R} ^{+}\rightarrow \mathbb {R} ^{+}}is a kernel function which expresses the positive influence of past eventsTi{\displaystyle T_{i}}on the current value of the intensity processλ(t){\displaystyle \lambda (t)},μ(t){\displaystyle \mu (t)}is a possibly non-stationary function representing the expected, predictable, or deterministic part of the intensity, and{Ti:Ti<Ti+1}∈R{\displaystyle \{T_{i}:T_{i}<T_{i+1}\}\in \mathbb {R} }is the time of occurrence of thei-th event of the process.[26] Given a sequence of non-negative random variables{Xk,k=1,2,…}{\textstyle \{X_{k},k=1,2,\dots \}}, if they are independent and the cdf ofXk{\displaystyle X_{k}}is given byF(ak−1x){\displaystyle F(a^{k-1}x)}fork=1,2,…{\displaystyle k=1,2,\dots }, wherea{\displaystyle a}is a positive constant, then{Xk,k=1,2,…}{\displaystyle \{X_{k},k=1,2,\ldots \}}is called a geometric process (GP).[27] The geometric process has several extensions, including theα-series process[28]and thedoubly geometric process.[29] Historically the first point processes that were studied had the real half lineR+= [0,∞) as their state space, which in this context is usually interpreted as time. These studies were motivated by the wish to model telecommunication systems,[30]in which the points represented events in time, such as calls to a telephone exchange. Point processes onR+are typically described by giving the sequence of their (random) inter-event times (T1,T2, ...), from which the actual sequence (X1,X2, ...)
of event times can be obtained as Xk=T1+T2+ ⋯ +Tk. If the inter-event times are independent and identically distributed, the point process obtained is called arenewal process. Theintensityλ(t|Ht) of a point process on the real half-line with respect to a filtrationHtis defined as λ(t|Ht) = lim_{Δt→0} P(one event occurs in [t, t + Δt] |Ht) / Δt, whereHtcan denote the history of event-point times preceding timetbut can also correspond to other filtrations (for example in the case of a Cox process). In theN(t){\displaystyle N(t)}-notation, this can be written in a more compact form: λ(t|Ht) = lim_{Δt→0} E[N(t + Δt) −N(t) |Ht] / Δt. Thecompensatorof a point process, also known as thedual-predictable projection, is the integrated conditional intensity function defined by Λ(t) = ∫_0^t λ(s|Hs) ds. ThePapangelou intensity functionof a point processN{\displaystyle N}in then{\displaystyle n}-dimensional Euclidean spaceRn{\displaystyle \mathbb {R} ^{n}}is defined as λp(x) = lim_{δ→0} P(one point ofNinBδ(x) | σ[N(Rn∖Bδ(x))]) / ‖Bδ(x)‖, whereBδ(x){\displaystyle B_{\delta }(x)}is the ball centered atx{\displaystyle x}of a radiusδ{\displaystyle \delta }, andσ[N(Rn∖Bδ(x))]{\displaystyle \sigma [N(\mathbb {R} ^{n}\setminus B_{\delta }(x))]}denotes the information of the point processN{\displaystyle N}outsideBδ(x){\displaystyle B_{\delta }(x)}. The logarithmic likelihood of a parameterized simple point process conditional upon some observed data is written as log L = Σ_i log λ(ti|Hti) − ∫_0^T λ(s|Hs) ds. The analysis of point pattern data in a compact subsetSofRnis a major object of study withinspatial statistics. Such data appear in a broad range of disciplines,[32]amongst them forestry, plant ecology, epidemiology, seismology, materials science and astronomy. The need to use point processes to model these kinds of data lies in their inherent spatial structure. Accordingly, a first question of interest is often whether the given data exhibitcomplete spatial randomness(i.e. are a realization of a spatialPoisson process) as opposed to exhibiting either spatial aggregation or spatial inhibition. In contrast, many datasets considered in classicalmultivariate statisticsconsist of independently generated datapoints that may be governed by one or several covariates (typically non-spatial).
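The conditional-intensity description lends itself directly to simulation. For a Hawkes process with an exponential kernel ν(u) = α·e^(−βu), the intensity only decays between events, so the current value bounds all later values until the next arrival, which is exactly what Ogata's thinning algorithm needs. A stdlib-only sketch (the kernel choice and all parameter values are ours):

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, rng):
    """Ogata thinning for a Hawkes process with conditional intensity
    lam(t) = mu + sum over past events T_i < t of alpha * exp(-beta * (t - T_i))."""
    events = []
    t = 0.0
    while True:
        # With an exponential kernel the intensity decays between events,
        # so its current value is a valid upper bound for thinning.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:  # accept with probability lam(t)/lam_bar
            events.append(t)
    return events

rng = random.Random(42)
events = simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, horizon=200.0, rng=rng)
# When alpha/beta < 1 the process is stable, with stationary rate
# mu / (1 - alpha/beta) = 0.5 / 0.6, about 0.83 events per unit time.
empirical_rate = len(events) / 200.0
```

Each accepted event raises the intensity by α, which then relaxes back toward μ, so events arrive in the characteristic self-excited bursts.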
Apart from the applications in spatial statistics, point processes are one of the fundamental objects instochastic geometry. Research has also focussed extensively on various models built on point processes such asVoronoi tessellations,random geometric graphs, andBoolean models.
https://en.wikipedia.org/wiki/Point_process
In mathematics,stochastic geometryis the study of random spatial patterns. At the heart of the subject lies the study of random point patterns. This leads to the theory ofspatial point processes, hence notions of Palm conditioning, which extend to the more abstract setting ofrandom measures. There are various models for point processes, typically based on but going beyond the classic homogeneousPoisson point process(the basic model forcomplete spatial randomness) to find expressive models which allow effective statistical methods. The point pattern theory provides a major building block for generation of random object processes, allowing construction of elaborate random spatial patterns. The simplest version, theBoolean model, places a random compact object at each point of a Poisson point process. More complex versions allow interactions based in various ways on the geometry of objects. Different directions of application include: the production of models for random images either as set-union of objects, or as patterns of overlapping objects; also the generation of geometrically inspired models for the underlying point process (for example, the point pattern distribution may be biased by an exponential factor involving the area of the union of the objects; this is related to the Widom–Rowlinson model[1]of statistical mechanics). What is meant by a random object? A complete answer to this question requires the theory ofrandom closed sets, which makes contact with advanced concepts from measure theory. The key idea is to focus on the probabilities of the given random closed set hitting specified test sets. There arise questions of inference (for example, estimate the set which encloses a given point pattern) and theories of generalizations of means etc. to apply to random sets. Connections are now being made between this latter work and recent developments in geometric mathematical analysis concerning general metric spaces and their geometry. 
Good parametrizations of specific random sets can allow us to refer random object processes to the theory of marked point processes; object-point pairs are viewed as points in a larger product space formed as the product of the original space and the space of parametrization. Suppose we are concerned no longer with compact objects, but with objects which are spatially extended: lines on the plane or flats in 3-space. This leads to consideration of line processes, and of processes of flats or hyper-flats. There can no longer be a preferred spatial location for each object; however the theory may be mapped back into point process theory by representing each object by a point in a suitable representation space. For example, in the case of directed lines in the plane one may take the representation space to be a cylinder. A complication is that the Euclidean motion symmetries will then be expressed on the representation space in a somewhat unusual way. Moreover, calculations need to take account of interesting spatial biases (for example, line segments are less likely to be hit by random lines to which they are nearly parallel) and this provides an interesting and significant connection to the hugely significant area ofstereology, which in some respects can be viewed as yet another theme of stochastic geometry. It is often the case that calculations are best carried out in terms of bundles of lines hitting various test-sets, rather than by working in representation space. Line and hyper-flat processes have their own direct applications, but also find application as one way of creatingtessellationsdividing space; hence for example one may speak of Poisson line tessellations. A notable result[2]proves that the cell at the origin of the Poisson line tessellation is approximately circular when conditioned to be large. 
Tessellations in stochastic geometry can of course be produced by other means, for example by usingVoronoiand variant constructions, and also by iterating various means of construction. The name appears to have been coined byDavid KendallandKlaus Krickeberg[3]while preparing for a June 1969Oberwolfachworkshop, though antecedents for the theory stretch back much further under the namegeometric probability. The term "stochastic geometry" was also used by Frisch andHammersleyin 1963[4]as one of two suggestions for names of a theory of "random irregular structures" inspired bypercolation theory. This brief description has focused on the theory[3][5]of stochastic geometry, which allows a view of the structure of the subject. However, much of the life and interest of the subject, and indeed many of its original ideas, flow from a very wide range of applications, for example: astronomy,[6]spatially distributed telecommunications,[7]wireless network modeling and analysis,[8]modeling ofchannel fading,[9][10]forestry,[11]the statistical theory of shape,[12]material science,[13]multivariate analysis, problems inimage analysis[14]andstereology. There are links to statistical mechanics,[15]Markov chain Monte Carlo, and implementations of the theory in statistical computing (for example, spatstat[16]inR). Most recently determinantal and permanental point processes (connected to random matrix theory) are beginning to play a role.[17]
https://en.wikipedia.org/wiki/Stochastic_geometry
Inmathematicsandtelecommunications,stochastic geometry models of wireless networksrefer tomathematical modelsbased onstochastic geometrythat are designed to represent aspects ofwireless networks. The related research consists of analyzing these models with the aim of better understanding wireless communication networks in order to predict and control various network performance metrics. The models require using techniques from stochastic geometry and related fields includingpoint processes,spatial statistics,geometric probability,percolationtheory, as well as methods from more general mathematical disciplines such asgeometry,probability theory,stochastic processes,queueing theory,information theory, andFourier analysis.[1][2][3][4] In the early 1960s a stochastic geometry model[5]was developed to study wireless networks. This model is considered to be pioneering and the origin ofcontinuum percolation.[6]Network models based ongeometric probabilitywere later proposed and used in the late 1970s[7]and continued throughout the 1980s[8][9]for examiningpacket radio networks. Later their use increased significantly for studying a number of wireless network technologies includingmobilead hocnetworks,sensor networks,vehicularad hocnetworks,cognitive radionetworks and several types ofcellular networks, such asheterogeneous cellular networks.[10][11][12]Key performance andquality of servicequantities are often based on concepts frominformation theorysuch as thesignal-to-interference-plus-noise ratio, which forms the mathematical basis for defining network connectivity and coverage.[4][11] The principal idea underlying the research of these stochastic geometry models, also known asrandom spatial models,[10]is that it is best to assume that the locations of nodes or the network structure and the aforementioned quantities arerandomin nature due to the size and unpredictability of users in wireless networks. 
The use of stochastic geometry can then allow for the derivation of closed-form or semi-closed-form expressions for these quantities without resorting to simulation methods or (possibly intractable or inaccurate)deterministic models.[10] The discipline of stochastic geometry entails the mathematical study ofrandomobjects defined on some (oftenEuclidean) space. In the context of wireless networks, the random objects are usually simple points (which may represent the locations of network nodes such as receivers and transmitters) or shapes (for example, the coverage area of a transmitter) and the Euclidean space is either 3-dimensional, or more often, the (2-dimensional) plane, which represents a geographical region. In wireless networks (for example, cellular networks) the underlying geometry (the relative locations of nodes) plays a fundamental role due to the interference of other transmitters, whereas in wired networks (for example, theInternet) the underlying geometry is less important. A wireless network can be seen as a collection of (information theoretic)channelssharing space and some common frequency band. Each channel consists of a set oftransmitterstrying to send data to a set of receivers. The simplest channel is thepoint-to-pointchannel which involves a single transmitter aiming at sending data to a single receiver. The broadcast channel, in information theory terminology,[13]is theone-to-manysituation with a single transmitter aiming at sending different data to different receivers and it arises in, for example, thedownlinkof a cellular network.[14]The multiple access channel is the converse, with several transmitters aiming at sending different data to a single receiver.[13]This many-to-one situation arises in, for example, theuplinkof cellular networks.[14]Other channel types exist such as the many-to-many situation. 
These (information theoretic) channels are also referred to as network links, many of which will be simultaneously active at any given time. There are a number of examples of geometric objects that can be of interest in wireless networks. For example, consider a collection ofpointsin the Euclidean plane. For each point, place in the plane a disk with its center located at the point. The disks are allowed to overlap with each other and the radius of each disk is random and (stochastically) independent of all the other radii. Themathematical objectconsisting of the union of all these disks is known as a Boolean (random disk) model[4][15][16]and may represent, for example, the sensing region of a sensor network. If all the radii are not random, but a common positive constant, then the resulting model is known as theGilbert disk(Boolean) model.[17] Instead of placing disks on the plane, one may assign adisjoint(or non-overlapping) subregion to each node. Then the plane is partitioned into a collection of disjoint subregions. For example, each subregion may consist of the collection of all the locations of this plane that are closer to some point of the underlying point pattern than any other point of the point pattern. This mathematical structure is known as aVoronoi tessellationand may represent, for example, the association cells in a cellular network where users associate with the closest base station. Instead of placing a disk or a Voronoi cell on a point, one could place a cell defined from the information theoretic channels described above. For instance, the point-to-point channel cell of a point was defined[18]as the collection of all the locations of the plane where a receiver could sustain a point-to-point channel with a certain quality from a transmitter located at this point. This, given that the other point is also an active transmitter, is a point-to-point channel in its own right.
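The Boolean (random disk) model is easy to probe by Monte Carlo; for Poisson germs with random radii the expected covered fraction of the plane has the well-known closed form 1 − exp(−λπE[R²]). A stdlib-only sketch (the parameter values, the padding trick against edge effects, and the helper names are ours):

```python
import math
import random

def sample_poisson(lam, rng):
    """Poisson random variate via Knuth's product method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def boolean_model_coverage(lam, rmax, rng, n_test=500):
    """One realization of a Boolean (random disk) model: Poisson germs with
    independent Uniform(0, rmax) radii; return the fraction of the unit square
    covered.  The sampling box is padded by rmax so every disk that could
    reach the unit square is included."""
    lo, hi = -rmax, 1.0 + rmax
    n = sample_poisson(lam * (hi - lo) ** 2, rng)
    disks = [(rng.uniform(lo, hi), rng.uniform(lo, hi), rng.uniform(0.0, rmax))
             for _ in range(n)]
    covered = 0
    for _ in range(n_test):
        x, y = rng.random(), rng.random()
        if any((x - cx) ** 2 + (y - cy) ** 2 <= r * r for cx, cy, r in disks):
            covered += 1
    return covered / n_test

rng = random.Random(7)
fractions = [boolean_model_coverage(5.0, 0.2, rng) for _ in range(200)]
est = sum(fractions) / len(fractions)
# Closed form: 1 - exp(-lam * pi * E[R^2]); with R ~ Uniform(0, 0.2),
# E[R^2] = 0.2**2 / 3, giving roughly 0.19 here.
theory = 1.0 - math.exp(-5.0 * math.pi * 0.2 ** 2 / 3.0)
```

Setting all radii to the same constant instead of `rng.uniform(0.0, rmax)` turns this into the Gilbert disk model mentioned above.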
In each case, the fact that the underlying point pattern is random (for example, a point process) or deterministic (for example, a lattice of points) or some combination of both, will influence the nature of the Boolean model, the Voronoi tessellation, and other geometrical structures such as the point-to-point channel cells constructed from it. In wired communication, the field of information theory (in particular, theShannon–Hartley theorem) motivates the need for studying thesignal-to-noise ratio(SNR). In a wireless communication, when a collection of channels is active at the same time, the interference from the other channels is considered as noise, which motivates the need for the quantity known as thesignal-to-interference-plus-noiseratio (SINR). For example, if we have a collection of point-to-point channels, the SINR of the channel of a particular transmitter–receiver pair is defined as: SINR =S/ (I+N), whereSis the power, at the receiver, of the incoming signal from said transmitter,Iis the combined power of all the other (interfering) transmitters in the network, andNis the power of some thermal noise term. TheSINRreduces toSNRwhen there is no interference (i.e.I= 0). In networks where the noise is negligible, also known as "interference limited" networks, we setN= 0, which gives thesignal-to-interference ratio(SIR). A common goal of stochastic geometry wireless network models is to derive expressions for the SINR or for the functions of the SINR which determine coverage (or outage) and connectivity. For example, the concept of the outage probabilitypout, which is informally the probability of not being able to successfully send a signal on a channel, is made more precise in the point-to-point case by defining it as the probability that the SINR of a channel is less than or equal to some network-dependent threshold.[19]The coverage probabilitypcis then the probability that the SINR is larger than the SINR threshold.
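The SINR definition is a one-liner in code; a tiny sketch showing the SNR and SIR special cases just mentioned (the numeric values are arbitrary):

```python
def sinr(signal, interferer_powers, noise):
    """SINR = S / (I + N), with I the summed power of the interfering transmitters."""
    return signal / (sum(interferer_powers) + noise)

snr = sinr(2.0, [], 0.5)          # no interference (I = 0): reduces to SNR
sir = sinr(2.0, [0.3, 0.2], 0.0)  # negligible noise (N = 0): reduces to SIR
```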
In short, given a SINR thresholdt, the outage and coverage probabilities are given by pout= P(SINR ≤t) and pc= P(SINR >t). One aim of the stochastic geometry models is to derive the probability laws of theShannon channel capacityor rate of a typical channel when taking into account the interference created by all other channels. In the point-to-point channel case, the interference created by other transmitters is considered as noise, and when thisnoiseisGaussian, the law of the typical Shannon channel capacity is then determined by that of the SINR through Shannon's formula (inbitsper second): C=Blog2(1 + SINR), whereBis thebandwidthof the channel inhertz. In other words, there is a direct relationship between the coverage or outage probability and the Shannon channel capacity. The problem of determining theprobability distributionofCunder such a random setting has been studied in several types of wireless network architectures or types. In general, the use of methods from the theories of probability and stochastic processes in communication systems has a long and interwoven history stretching back over a century to the pioneering teletraffic work ofAgner Erlang.[20]In the setting of stochastic geometry models,Edgar Gilbert[5]in the 1960s proposed a mathematical model for wireless networks, now known as a Gilbert disk model,[17]that gave rise to the field of continuum percolation theory, which in turn is a generalization of discrete percolation.[6]Starting in the late 1970s,Leonard Kleinrockand others used wireless models based on Poisson processes to study packet radio networks.[7][8][9]This work would continue until the 1990s where it would cross paths with the work on shot noise. The general theory and techniques of stochastic geometry and, in particular, point processes have often been motivated by the understanding of a type ofnoisethat arises in electronic systems known asshot noise.
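These definitions can be checked numerically: outage and coverage are complementary by construction, and every SINR sample maps through Shannon's formula to a rate. A stdlib-only sketch with Rayleigh fading and a fixed interference-plus-noise level (all parameter values are ours):

```python
import math
import random

rng = random.Random(3)
threshold = 1.0                 # SINR threshold t
bandwidth = 1.0e6               # channel bandwidth B in hertz
interference_plus_noise = 0.4   # fixed I + N for this toy example

# Rayleigh fading makes the received signal power exponentially distributed.
sinr_samples = [rng.expovariate(1.0) / interference_plus_noise for _ in range(50000)]

p_out = sum(s <= threshold for s in sinr_samples) / len(sinr_samples)
p_cov = sum(s > threshold for s in sinr_samples) / len(sinr_samples)

# Shannon rate (bits per second) for each fading state, averaged over fading.
mean_rate = sum(bandwidth * math.log2(1.0 + s) for s in sinr_samples) / len(sinr_samples)

# Here P(SINR <= t) = 1 - exp(-t * (I + N)) for Exp(1) signal power, about 0.33.
```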
For certain mathematical functions of a point process, a standard method for finding the average (orexpectation) of the sum of these functions isCampbell's formula[4][21]or theorem,[22]which has its origins in the pioneering work byNorman R. Campbellon shot noise over a century ago.[23][24]Much later in the 1960s, Gilbert alongsideHenry Pollakstudied the shot noise process[25]formed from a sum of response functions of a Poisson process and identically distributed random variables. The shot noise process inspired more formal mathematical work in the field of point processes,[26][27]often involving the use ofcharacteristic functions, and would later be used for models of signal interference from other nodes in the network. Around the early 1990s, shot noise based on a Poisson process and a power-law repulse function was studied and observed to have astable distribution.[28]Independently, researchers[19][29]successfully developedFourierandLaplace transformtechniques for the interference experienced by a user in a wireless network in which the locations of the (interfering) nodes or transmitters are positioned according to a Poisson process. It was independently shown again that Poisson shot noise, now as a model for interference, has a stable distribution[29]by use of characteristic functions or, equivalently, Laplace transforms, which are often easier to work with than the corresponding probability distributions.[1][2][30] Moreover, the assumption of the received (i.e. 
useful) signal power beingexponentially distributed(for example, due toRayleigh fading) and the Poisson shot noise (for which the Laplace transform is known) allows for an explicit closed-form expression for the coverage probability based on the SINR.[19][31]This observation helps to explain why the Rayleighfadingassumption is frequently made when constructing stochastic geometry models.[1][2][4] Later in the early 2000s researchers started examining the properties of the regions under SINR coverage in the framework of stochastic geometry and, in particular, coverage processes.[18]Connectivity in terms of the SINR was studied using techniques from continuum percolation theory. More specifically, the early results of Gilbert were generalized to the setting of the SINR case.[32][33] A wireless network consists of nodes (each of which is a transmitter, receiver or both, depending on the system) that produce, relay or consume data within the network. For example,base stationsand users in a cellular phone network or sensor nodes in a sensor network. Before developingstochastic geometrywireless models, models are required for mathematically representing the signal propagation and the node positioning. The propagation model captures how signals propagate from transmitters to receivers. The node location or positioning model (idealizes and) represents the positions of the nodes as a point process. The choice of these models depends on the nature of the wireless network and its environment. The network type depends on such factors as the specific architecture (for instance cellular) and the channel ormedium access control(MAC) protocol, which controls the channels and, hence, the communicating structures of the network. In particular, to prevent the collision of transmissions in the network, the MAC protocol dictates, based on certain rules, when transmitter-receiver pairs can access the network both in time and space, which also affects the active node positioning model.
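The role of the Laplace transform described above can be illustrated in a toy setting: with an exponentially distributed signal power, the coverage probability factors into a noise term times the Laplace transform of the interference, which is itself in closed form when the interferer powers are also exponential (Rayleigh-faded). A stdlib-only sketch (all numeric values are ours):

```python
import math
import random

rng = random.Random(11)
t = 0.8              # SINR threshold
s_mean = 2.0         # mean received signal power (Rayleigh fading -> exponential power)
noise = 0.1
i_means = [0.5, 0.3]  # mean powers of two Rayleigh-faded interferers

# With S ~ Exp(mean s_mean):  P(SINR > t) = E[exp(-t * (I + N) / s_mean)]
#   = exp(-t * noise / s_mean) * prod_k 1 / (1 + t * m_k / s_mean),
# the product being the Laplace transform of I evaluated at t / s_mean.
closed = math.exp(-t * noise / s_mean)
for m in i_means:
    closed /= 1.0 + t * m / s_mean

# Monte Carlo check of the same probability.
n = 200000
hits = 0
for _ in range(n):
    s = rng.expovariate(1.0 / s_mean)
    i = sum(rng.expovariate(1.0 / m) for m in i_means)
    if s / (i + noise) > t:
        hits += 1
mc = hits / n
```

The closed form needs only the Laplace transform of the interference, which is exactly why it extends to Poisson shot-noise interference in the cited results.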
Suitable and manageable models are needed for thepropagationofelectromagnetic signals(or waves) through variousmedia, such as air, taking into accountmultipath propagation(due to reflection, refraction, diffraction and dispersion) caused by signals colliding with obstacles such as buildings. The propagation model is a building block of the stochastic geometry wireless network model. A common approach is to consider propagation models with two separate parts consisting of the random and deterministic (or non-random) components of signal propagation. The deterministic component is usually represented by somepath-lossor attenuation function that uses the distance propagated by the signal (from its source) for modeling the power decay of electromagnetic signals. The distance-dependent path-loss function may be a simplepower-lawfunction (for example, theHata model), a fast-decaying exponential function, some combination of both, or another decreasing function. Owing to its tractability, models have often incorporated the power-law function ℓ(x,y) = |x−y|^(−α), where the path-loss exponentα> 2, and |x−y| denotes thedistancebetween pointyand the signal source at pointx. The random component seeks to capture certain types of signal fading associated with absorption and reflections by obstacles. Thefadingmodels in use include Rayleigh (implyingexponentialrandom variablesfor the power),log-normal,Rice, andNakagamidistributions. Both the deterministic and random components of signal propagation are usually considered detrimental to the overall performance of a wireless network. An important task in stochastic geometry network models is choosing a mathematical model for the location of the network nodes.
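The singular power-law path loss above is a one-line function; a quick sketch showing the 2^α drop in received power when the distance doubles (α = 4 is an arbitrary choice here):

```python
import math

def path_loss(tx, rx, alpha=4.0):
    """Singular power-law path loss |x - y|^(-alpha) between 2-D points."""
    d = math.hypot(tx[0] - rx[0], tx[1] - rx[1])
    return d ** (-alpha)

p1 = path_loss((0.0, 0.0), (1.0, 0.0))  # distance 1
p2 = path_loss((0.0, 0.0), (2.0, 0.0))  # distance 2: power falls by 2**alpha = 16
```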
The standard assumption is that the nodes are represented by (idealized) points in some space (often EuclideanRn, and even more often in the planeR2), which means they form a stochastic or random structure known as a (spatial) point process.[10] A number of point processes have been suggested to model the positioning of wireless network nodes. Among these, the most frequently used is thePoisson process, which gives a Poisson network model.[10]The Poisson process in general is commonly used as a mathematical model across numerous disciplines due to its highly tractable and well-studied nature.[15][22]It is often assumed that the Poisson process is homogeneous (implying it is astationary process) with some constant node densityλ. For a Poisson process in the plane, this implies that the probability of havingnpoints or nodes in a bounded regionBis given by P(npoints inB) = (λ|B|)^ne^(−λ|B|) /n!, where |B| is the area ofBandn! denotesnfactorial. The above equation quickly extends to theR3case by replacing the area term with avolumeterm. The mathematical tractability or ease of working with Poisson models is mostly because of its 'complete independence', which essentially says that disjoint (or non-overlapping) bounded regions contain Poisson numbers of points that are independent of each other.
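The count distribution can be written down directly; a short sketch that also checks it is a probability distribution with mean λ|B| (the parameter values are ours):

```python
import math

def poisson_pmf(n, lam, area):
    """P(N(B) = n) = (lam * |B|)^n * exp(-lam * |B|) / n! for a homogeneous
    Poisson process of density lam on a region B of the given area."""
    mean = lam * area
    return mean ** n * math.exp(-mean) / math.factorial(n)

lam, area = 2.0, 3.0
probs = [poisson_pmf(n, lam, area) for n in range(60)]
total = sum(probs)                                # ~1 (tail beyond n = 60 is negligible)
mean_n = sum(n * p for n, p in enumerate(probs))  # ~ lam * area = 6
```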
This important property characterizes the Poisson process and is often used as its definition.[22] The complete independence or 'randomness'[35]property of Poisson processes leads to some useful characteristics and results ofpoint process operationssuch as the superposition property: the superposition ofn{\displaystyle n}Poisson processes with densitiesλ1toλnis another Poisson process with densityλ1+λ2+ ⋯ +λn. Furthermore, randomly thinning a Poisson process (with densityλ), where each point is independently removed with some probabilityp(and kept with probability 1 −p), yields two new point processes: the removed points form a Poisson process (with densitypλ), while the kept points form another Poisson process (with density (1 −p)λ) that is independent of the Poisson process of removed points.[15][22] These properties and the definition of the homogeneous Poisson process extend to the case of the inhomogeneous (or non-homogeneous) Poisson process, which is a non-stationary stochastic process with a location-dependent densityλ(x) wherexis a point (usually in the plane,R2). For more information, see the articles on the Poisson process. Despite its simplifying nature, the independence property of the Poisson process has been criticized for not realistically representing the configuration of deployed networks.[34]For example, it does not capture node "repulsion" where two (or more) nodes in a wireless network may not be normally placed (arbitrarily) close to each other (for example, base stations in a cellular network). In addition to this, MAC protocols often induce correlations or non-Poisson configurations into the geometry of the simultaneously active transmitter pattern. Strong correlations also arise in the case of cognitive radio networks where secondary transmitters are only allowed to transmit if they are far from primary receivers.
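Both operations are easy to verify empirically on counts in a fixed region. A stdlib-only sketch of superposition and independent thinning (densities and probabilities are arbitrary choices of ours):

```python
import math
import random

def sample_poisson(lam, rng):
    """Poisson random variate via Knuth's product method (fine for moderate lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(5)
area, reps = 10.0, 3000
p_remove = 0.7

merged_counts, removed_counts, kept_counts = [], [], []
for _ in range(reps):
    # Superposition: pooling independent Poisson processes of densities 1 and 2
    # yields a Poisson process of density 1 + 2 = 3.
    merged_counts.append(sample_poisson(1.0 * area, rng) + sample_poisson(2.0 * area, rng))
    # Thinning a density-4 process: each point is removed independently with p_remove.
    n = sample_poisson(4.0 * area, rng)
    removed = sum(rng.random() < p_remove for _ in range(n))
    removed_counts.append(removed)   # density p * 4
    kept_counts.append(n - removed)  # density (1 - p) * 4

mean_merged = sum(merged_counts) / reps    # ~ 3 * area = 30
mean_removed = sum(removed_counts) / reps  # ~ 0.7 * 4 * area = 28
mean_kept = sum(kept_counts) / reps        # ~ 0.3 * 4 * area = 12
```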
To answer these and other criticisms, a number of point processes have been suggested to represent the positioning of nodes including the binomial process, cluster processes, Matérn hard-core processes,[2][4][36][37]and Strauss and Ginibre processes.[10][38][39]For example, Matérn hard-core processes are constructed by dependently thinning a Poisson point process. The dependent thinning is done in such a way that for any point in the resulting hard-core process, there are no other points within a certain set radius of it, thus creating a "hard-core" around each point in the process.[4][15]On the other hand, soft-core processes have point repulsion that ranges somewhere between that of the hard-core processes and Poisson processes (which have no repulsion). More specifically, the probability of a point existing near another point in a soft-core point process decreases in some way as it approaches the other point, thus creating a "soft-core" around each point where other points can exist, but are less likely to. Although models based on these and other point processes come closer to resembling reality in some situations, for example in the configuration of cellular base stations,[34][40]they often suffer from a loss of tractability, while the Poisson process greatly simplifies the mathematics and techniques, explaining its continued use for developing stochastic geometry models of wireless networks.[10]Also, it has been shown that the SIR distribution of non-Poisson cellular networks can be closely approximated by applying a horizontal shift to the SIR distribution of a Poisson network.[41] The type of network model is a combination of factors such as the network architectural organization (cellular,ad hoc, cognitive radio), themedium access control(MAC) protocol being used, the application running on it, and whether the network is mobile or static.
Around the beginning of the 21st century a number of new network technologies have arisen including mobilead hocnetworks and sensor networks. Stochastic geometry and percolation techniques have been used to develop models for these networks.[2][42]The increase in user traffic has resulted in stochastic geometry being applied to cellular networks.[43] ThePoisson bipolar network modelis a type of stochastic geometry model based on the Poisson process and is an early example of a model formobilead hocnetworks(MANETs),[2][31][44]which are self-organizing wireless communication networks in which mobile devices rely on no infrastructure (base stations or access points). In MANET models, the transmitters form a random point process and each transmitter has its receiver located at some random distance and orientation. The channels form a collection of transmitter-receiver pairs or "bipoles"; the signal of a channel is that transmitted over the associated bipole, whereas the interference is that created by all transmitters other than that of the bipole. The approach of considering transmitter–receiver bipoles led to the development and analysis of the Poisson bipolar network model. The choice of the medium access probability, which maximizes the mean number of successful transmissions per unit space, was in particular derived in [31]. Awireless sensor networkconsists of a spatially distributed collection of autonomous sensor nodes. Each node is designed to monitor physical or environmental conditions, such as temperature, sound, pressure, etc. and to cooperatively relay the collected data through the network to a main location. In unstructured sensor networks,[45]the deployment of nodes may be done in a random manner. A chief performance criterion of all sensor networks is the ability of the network to gather data, which motivates the need to quantify the coverage or sensing area of the network.
It is also important to gauge the connectivity of the network or its capability of relaying the collected data back to the main location. The random nature of unstructured sensor networks has motivated the use of stochastic geometry methods. For example, the tools of continuum percolation theory and coverage processes have been used to study the coverage and connectivity.[42][46]One model that is used to study these networks, and wireless networks in general, is thePoisson–Boolean model, which is a type of coverage process fromcontinuum percolation theory. One of the main limitations of sensor networks is energy consumption where usually each node has a battery and, perhaps, an embedded form of energy harvesting. To reduce energy consumption in sensor networks, various sleep schemes have been suggested that entail having a sub-collection of nodes go into a low energy-consuming sleep mode. These sleep schemes obviously affect the coverage and connectivity of sensor networks. Rudimentary power-saving models have been proposed such as the simple uncoordinated or decentralized "blinking" model where (at each time interval) each node independently powers down (or up) with some fixed probability. Using the tools of percolation theory, a new type of model, referred to as a blinking Boolean–Poisson model, was proposed to analyze the latency and connectivity performance of sensor networks with such sleep schemes.[42] Acellular networkis a radio network distributed over some region with subdivisions called cells, each served by at least one fixed-locationtransceiver, known as a cell base station. In cellular networks, each cell uses a different set of frequencies from neighboring cells, to mitigate interference and provide higher bandwidth within each cell.
The operators of cellular networks need to know certain performance or quality of service (QoS) metrics in order to dimension the networks, which means adjusting the density of the deployed base stations to meet the demand of user traffic for a required QoS level. In cellular networks, the channel from the users (or phones) to the base station(s) is known as the uplink channel. Conversely, the downlink channel is from the base station(s) to the users. The downlink channel is the most studied with stochastic geometry models, while models for the uplink case, which is a more difficult problem, are starting to be developed.[47] In the downlink case, the transmitters and the receivers can be considered as two separate point processes. In the simplest case, there is one point-to-point channel per receiver (i.e. the user), and for a given receiver, this channel is from the closest transmitter (i.e. the base station) to the receiver. Another option consists of selecting the transmitter with the best signal power to the receiver. In any case, there may be several channels with the same transmitter. A first approach for analyzing cellular networks is to consider the typical user, who can be assumed to be located anywhere on the plane. Under the assumption of point process ergodicity (satisfied when using homogeneous Poisson processes), the results for the typical user correspond to user averages. The coverage probability of the typical user is then interpreted as the proportion of network users who can connect to the cellular network.
Building off previous work done on an Aloha model,[44] the coverage probability for the typical user was derived for a Poisson network.[43][48] The Poisson model of a cellular network proves to be more tractable than a hexagonal model.[43] That said, a detailed and precise derivation of the channel attenuation probability distribution function between a random node and a reference base station for a hexagonal model has also been obtained,[49] and this result can be used to tractably derive the outage probability. In the presence of sufficiently strong and independent log-normal shadow fading (or shadowing) and a singular power-law attenuation function, it was observed by simulation[50] for hexagonal networks, and later mathematically proved[51][52] for general stationary (including hexagonal) networks, that quantities like the SINR and SIR of the typical user behave stochastically as though the underlying network were Poisson. In other words, given a path loss function, using a Poisson cellular network model with constant shadowing is equivalent (in terms of SIR, SINR, etc.) to assuming sufficiently large and independent fading or shadowing in the mathematical model with the base stations positioned according to either a deterministic or random configuration with a constant density. The results were originally derived for log-normal shadowing, but were then extended to a large family of fading and shadowing models.[52] For log-normal shadowing, it has also been mathematically shown that wireless networks can still appear Poisson if there is some correlation among the shadowing.[53] In the context of cellular networks, a heterogeneous network (sometimes known as a HetNet) is a network that uses several types of base stations (macro base stations, pico base stations, and/or femto base stations) in order to provide better coverage and bit rates.
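The typical-user coverage probability for a Poisson downlink is easy to estimate by Monte Carlo. The sketch below uses a simplified, interference-limited setup (nearest-base-station association, Rayleigh fading, power-law path loss); the density, SIR threshold and path-loss exponent are illustrative choices, not parameters from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(1)

def coverage_probability(lam=1.0, theta=1.0, alpha=4.0,
                         radius=20.0, reps=4000):
    """Monte Carlo SIR coverage of the typical user at the origin."""
    hits = 0
    for _ in range(reps):
        n = rng.poisson(lam * np.pi * radius ** 2)
        if n == 0:
            continue
        r = radius * np.sqrt(rng.random(n))  # BS distances, uniform in a disc
        h = rng.exponential(1.0, n)          # Rayleigh fading (exp. power)
        p = h * r ** (-alpha)
        served = np.argmin(r)                # nearest-base-station association
        sig = p[served]
        interference = p.sum() - sig
        hits += sig > theta * interference
    return hits / reps

cov = coverage_probability()
# For alpha = 4 and theta = 1, the interference-limited coverage of the
# typical user is known in closed form to be 1 / (1 + pi/4), about 0.56.
```

The finite simulation disc truncates far-away interferers, which is negligible here because the path loss decays as the fourth power of distance.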
This is in particular used to cope with the difficulty of covering open outdoor environments, office buildings, homes, and underground areas with macro base stations alone. Recent Poisson-based models have been developed to derive the coverage probability of such networks in the downlink case.[54][55][56] The general approach is to have a number of layers or "tiers" of networks, which are then combined or superimposed onto each other into one heterogeneous or multi-tier network. If each tier is a Poisson network, then the combined network is also a Poisson network, owing to the superposition property of Poisson processes.[22] Then the Laplace transform for this superimposed Poisson model is calculated, leading to the coverage probability (in the downlink channel) of a cellular network with multiple tiers when a user is connected to the instantaneously strongest base station[54] and when a user is connected to the strongest base station on average (not including small-scale fading).[55] In recent years, the model-formulating approach of considering a "typical user" in cellular (or other) networks has been used considerably. This is, however, just a first approach, which allows one to characterize only the spectral efficiency (or information rate) of the network. In other words, this approach captures the best possible service that can be given to a single user who does not need to share wireless network resources with other users. Models beyond the typical-user approach have been proposed with the aim of analyzing QoS metrics of a population of users, and not just a single user. Broadly speaking, these models can be classified into four types: static, semi-static, semi-dynamic and (fully) dynamic.[57] The ultimate goal when constructing these models consists of relating the following three key network parameters: user traffic demand per surface unit, network density and user QoS metric(s).
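The superposition property invoked for multi-tier networks can be checked numerically. A small sketch with two hypothetical tiers, whose densities are chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(2)
area = 100.0
lam_macro, lam_pico = 0.5, 2.0   # illustrative tier densities
reps = 20000

# Counts of the superimposed (two-tier) network in a region of the given
# area: the sum of independent Poisson counts is again Poisson, with
# density lam_macro + lam_pico.
counts = (rng.poisson(lam_macro * area, reps)
          + rng.poisson(lam_pico * area, reps))
mean, var = counts.mean(), counts.var()
# A Poisson count has variance equal to its mean (here 250).
```

The equality of the empirical mean and variance is the simplest fingerprint of the combined count remaining Poisson.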
These relations form part of the network dimensioning tools, which allow network operators to appropriately vary the density of the base stations to meet the traffic demands for a required performance level. The MAC protocol controls when transmitters can access the wireless medium. The aim is to reduce or prevent collisions by limiting the power of interference experienced by an active receiver. The MAC protocol determines the pattern of simultaneously active channels, given the underlying pattern of available channels. Different MAC protocols hence perform different thinning operations on the available channels, which results in different stochastic geometry models being needed. A slotted Aloha wireless network employs the Aloha MAC protocol, where the channels access the medium independently at each time interval with some probability p.[2] If the underlying channels (that is, their transmitters for the point-to-point case) are positioned according to a Poisson process (with density λ), then the nodes accessing the network also form a Poisson network (with density pλ), which allows the use of the Poisson model. ALOHA is not only one of the simplest and most classic MAC protocols, but it was also shown to achieve Nash equilibria when interpreted as a power control scheme.[71] Several early stochastic models of wireless networks were based on Poisson point processes with the aim of studying the performance of slotted Aloha.[7][72][73] Under Rayleigh fading and the power-law path-loss function, outage (or, equivalently, coverage) probability expressions were derived by treating the interference term as a shot noise and using Laplace transform methods,[19][74] which was later extended to a general path-loss function,[31][44][75] and then further extended to a pure or non-slotted Aloha case.[76] The carrier-sense multiple access (CSMA) MAC protocol controls the network in such a way that channels close to each other never access the medium simultaneously.
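The Aloha thinning just described is easy to verify by simulation; a sketch with an arbitrary density and access probability:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, p, area, reps = 4.0, 0.25, 50.0, 20000

# Independent thinning of a Poisson process: each transmitter accesses
# the medium with probability p, so the number of active transmitters in
# the region should again be Poisson, now with density p * lam.
active = np.array([
    (rng.random(rng.poisson(lam * area)) < p).sum()
    for _ in range(reps)
])
mean, var = active.mean(), active.var()  # both near p * lam * area = 50
```

As with superposition, the matching empirical mean and variance indicate the thinned process is still Poisson.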
When applied to a Poisson point process, this was shown to naturally lead to a Matérn-like hard-core (or soft-core, in the case of fading) point process, which exhibits the desired "repulsion".[2][36] The probability for a channel to be scheduled is known in closed form, as well as the so-called pair correlation function of the point process of scheduled nodes.[2] In a network with the code-division multiple access (CDMA) MAC protocol, each transmitter modulates its signal by a code that is orthogonal to that of the other signals, and which is known to its receiver. This mitigates the interference from other transmitters, and can be represented in a mathematical model by multiplying the interference by an orthogonality factor. Stochastic geometry models based on this type of representation were developed to analyze the coverage areas of transmitters positioned according to a Poisson process.[18] In the previous MAC-based models, point-to-point channels were assumed and the interference was considered as noise. In recent years, models have been developed to study more elaborate channels arising from the discipline of network information theory.[77] More specifically, a model was developed for one of the simplest settings: a collection of transmitter–receiver pairs represented as a Poisson point process.[78] In this model, the effects of an interference reduction scheme involving "point-to-point codes" were examined. These codes, consisting of randomly and independently generated codewords, determine when transmitter–receiver pairs may exchange information, thus acting as a MAC protocol. Furthermore, in this model a collection or "party" of channels was defined for each such pair. This party is a multiple access channel,[77] namely the many-to-one situation for channels. The receiver of the party is the same as that of the pair, and the transmitter of the pair belongs to the set of transmitters of the party, together with other transmitters.
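A hedged sketch of the CSMA-induced repulsion is the classical Matérn type II construction: each node carries a random mark (a back-off timer, say) and is retained only if no neighbor within the carrier-sense range has a smaller mark. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def matern_type_ii(lam=1.0, r_ex=1.0, size=20.0):
    """Return (scheduled, all) node positions under Matérn type II thinning."""
    n = rng.poisson(lam * size * size)
    pts = rng.uniform(0.0, size, (n, 2))
    marks = rng.random(n)                 # e.g. random back-off timers
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    # Keep a node iff no other node within r_ex carries a smaller mark.
    keep = np.array([
        not ((d2[i] < r_ex ** 2) & (marks < marks[i])).any()
        for i in range(n)
    ])
    return pts[keep], pts

scheduled, allpts = matern_type_ii()
# No two scheduled nodes can lie within the exclusion distance r_ex:
# of any pair that close, the one with the larger mark is removed.
```

This reproduces the hard-core property qualitatively; the closed-form scheduling probability mentioned in the text is not re-derived here.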
Using stochastic geometry, the probability of coverage was derived, as well as the geometric properties of the coverage cells.[78] It was also shown[77] that when using the point-to-point codes and simultaneous decoding, the statistical gain obtained over a Poisson configuration is arbitrarily large compared to the scenario where interference is treated as noise. Stochastic geometry wireless models have been proposed for several network types including cognitive radio networks,[79][80] relay networks,[81] and vehicular ad hoc networks. For further reading on stochastic geometry wireless network models, see the textbook by Haenggi,[4] the two-volume text by Baccelli and Błaszczyszyn[1][2] (available online), and the survey article.[11] For interference in wireless networks, see the monograph on interference by Ganti and Haenggi[30] (available online). For an introduction to stochastic geometry and spatial statistics in a more general setting, see the lecture notes by Baddeley[21] (available online with Springer subscription). For a complete and rigorous treatment of point processes, see the two-volume text by Daley and Vere-Jones[35][82] (available online with Springer subscription).
https://en.wikipedia.org/wiki/Stochastic_geometry_models_of_wireless_networks
In queueing theory, a discipline within the mathematical theory of probability, a Markovian arrival process (MAP or MArP[1]) is a mathematical model for the time between job arrivals to a system. The simplest such process is a Poisson process, where the time between each arrival is exponentially distributed.[2][3] The processes were first suggested by Marcel F. Neuts in 1979.[2][4] A Markov arrival process is defined by two matrices, D0 and D1, where the elements of D0 represent hidden transitions and the elements of D1 observable transitions. The block matrix Q below is a transition rate matrix for a continuous-time Markov chain.[5] The simplest example is a Poisson process, where D0 = −λ and D1 = λ: there is only one possible transition, it is observable, and it occurs at rate λ. For Q to be a valid transition rate matrix, the following restrictions apply to the Di. The phase-type renewal process is a Markov arrival process with phase-type distributed sojourn times between arrivals. For example, if an arrival process has an interarrival time distribution PH(α, S) with an exit vector denoted S0 = −S1, the arrival process has generator matrix, The batch Markovian arrival process (BMAP) is a generalisation of the Markovian arrival process that allows more than one arrival at a time.[6][7] The homogeneous case has rate matrix, An arrival of size k occurs every time a transition occurs in the sub-matrix Dk. Sub-matrices Dk have elements λi,j, the rate of a Poisson process, such that, and The Markov-modulated Poisson process (MMPP) is a process in which m Poisson processes are switched between by an underlying continuous-time Markov chain.[8] If each of the m Poisson processes has rate λi and the modulating continuous-time Markov chain has an m × m transition rate matrix R, then the MAP representation is A MAP can be fitted using an expectation–maximization algorithm.[9]
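As a small sketch of the MMPP-as-MAP representation (the rates and switching matrix are arbitrary illustrative numbers): with D1 = diag(λ1, …, λm) holding the observable, arrival-generating transitions and D0 = R − D1 the hidden ones, the sum D0 + D1 recovers the generator of the modulating chain.

```python
import numpy as np

# Illustrative MMPP(2): two Poisson rates modulated by a CTMC with
# transition rate matrix R (all numbers arbitrary).
lam = np.array([3.0, 0.5])
R = np.array([[-1.0, 1.0],
              [2.0, -2.0]])

D1 = np.diag(lam)    # observable (arrival-generating) transitions
D0 = R - D1          # hidden transitions of the modulating chain
Q = D0 + D1          # transition rate matrix of the underlying CTMC

# Validity checks for a MAP: rows of Q sum to zero, the off-diagonal
# entries of D0 and all entries of D1 are non-negative.
```

For the one-state case this collapses to the Poisson example in the text: D0 = −λ, D1 = λ, Q = 0.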
https://en.wikipedia.org/wiki/Markovian_arrival_processes
In statistics, a fixed-effect Poisson model is a Poisson regression model used for static panel data when the outcome variable is count data. Hausman, Hall, and Griliches pioneered the method in the mid-1980s. Their outcome of interest was the number of patents filed by firms, where they wanted to develop methods to control for the firm fixed effects.[1] Linear panel data models use the linear additivity of the fixed effects to difference them out and circumvent the incidental parameter problem. Even though Poisson models are inherently nonlinear, the use of the linear index and the exponential link function leads to multiplicative separability, more specifically[2] This formula looks very similar to the standard Poisson premultiplied by the term ai. As the conditioning set includes the observables over all periods, we are in the static panel data world and are imposing strict exogeneity.[3] Hausman, Hall, and Griliches then use Andersen's conditional maximum likelihood methodology to estimate b0. Using ni = Σt yit allows them to obtain the following nice distributional result of yi At this point, the estimation of the fixed-effect Poisson model is transformed in a useful way and can be estimated by maximum-likelihood estimation techniques for multinomial log likelihoods. This is computationally not necessarily very restrictive, but the distributional assumptions up to this point are fairly stringent. Wooldridge provided evidence that these models have nice robustness properties as long as the conditional mean assumption (i.e. equation 1) holds.[5] Chamberlain also provided semi-parametric efficiency bounds for these estimators under slightly weaker exogeneity assumptions. However, these bounds are practically difficult to attain, as the proposed methodology needs high-dimensional nonparametric regressions for attaining these bounds.
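A hedged sketch of the conditional (multinomial) likelihood idea: conditioning on ni = Σt yit removes the fixed effect, leaving multinomial cell probabilities pit = exp(xit b) / Σr exp(xir b). The simulated design below (a scalar regressor and normal fixed effects) is purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, T, b_true = 500, 4, 0.7

x = rng.normal(size=(N, T))
c = rng.normal(size=(N, 1))              # unobserved firm fixed effects
y = rng.poisson(np.exp(c + b_true * x))  # Poisson counts with fixed effects

def neg_cond_loglik(b):
    # Conditional on n_i = sum_t y_it, the fixed effect drops out and
    # y_i is multinomial with p_it = exp(x_it b) / sum_r exp(x_ir b).
    eta = b[0] * x
    logp = eta - np.log(np.exp(eta).sum(axis=1, keepdims=True))
    return -(y * logp).sum()

b_hat = minimize(neg_cond_loglik, x0=np.zeros(1),
                 method="Nelder-Mead").x[0]
```

Note that the fixed effects c never enter the objective, which is the whole point of the conditioning step.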
https://en.wikipedia.org/wiki/Fixed-effect_Poisson_model
Partial (pooled) likelihood estimation for panel data is a quasi-maximum likelihood method for panel analysis that assumes that the density of yit given xit is correctly specified for each time period, but allows for misspecification in the conditional density of yi = (yi1, …, yiT) given xi = (xi1, …, xiT). Concretely, partial likelihood estimation uses the product of conditional densities as the density of the joint conditional distribution. This generality facilitates maximum likelihood methods in the panel data setting, because fully specifying the conditional distribution of yi can be computationally demanding.[1] On the other hand, allowing for misspecification generally results in violation of the information equality and thus requires a robust standard error estimator for inference. In the following exposition, we follow the treatment in Wooldridge.[1] In particular, the asymptotic derivation is done under a fixed-T, growing-N setting. Writing the conditional density of yit given xit as ft(yit | xit; θ), the partial maximum likelihood estimator solves: In this formulation, the joint conditional density of yi given xi is modeled as Πt ft(yit | xit; θ). We assume that ft(yit | xit; θ) is correctly specified for each t = 1, ..., T and that there exists θ0 ∈ Θ that uniquely maximizes E[log ft(yit | xit; θ)]. But it is not assumed that the joint conditional density is correctly specified. Under some regularity conditions, the partial MLE is consistent and asymptotically normal. By the usual argument for M-estimators (details in Wooldridge[1]), the asymptotic variance of √N(θMLE − θ0) is A−1BA−1, where A = −E[Σt ∇θ2 log ft(yit | xit; θ0)] and B = E[(Σt ∇θ log ft(yit | xit; θ0))(Σt ∇θ log ft(yit | xit; θ0))T]. If the joint conditional density of yi given xi is correctly specified, the above formula for the asymptotic variance simplifies, because the information equality says B = A.
Yet, except for special circumstances, the joint density modeled by partial MLE is not correct. Therefore, for valid inference, the above formula for the asymptotic variance should be used. For the information equality to hold, one sufficient condition is that the scores of the densities for each time period are uncorrelated. In dynamically complete models, the condition holds, and thus the simplified asymptotic variance is valid.[1] Pooled QMLE is a technique that allows estimating parameters when panel data is available with Poisson outcomes. For instance, one might have information on the number of patents filed by a number of different firms over time. Pooled QMLE does not necessarily contain unobserved effects (which can be either random effects or fixed effects), and the estimation method is mainly proposed for these purposes. The computational requirements are less stringent, especially compared to fixed-effect Poisson models, but the trade-off is the possibly strong assumption of no unobserved heterogeneity. Pooled refers to pooling the data over the different time periods T, while QMLE refers to the quasi-maximum likelihood technique. The Poisson distribution of yi given xi is specified as follows:[2] The starting point for Poisson pooled QMLE is the conditional mean assumption. Specifically, we assume that for some b0 in a compact parameter space B, the conditional mean is given by[2] The compact parameter space condition is imposed to enable the use of M-estimation techniques, while the conditional mean reflects the fact that the population mean of a Poisson process is the parameter of interest.
In this particular case, the parameter governing the Poisson process is allowed to vary with respect to the vector xt.[2] The function m can, in principle, change over time, even though it is often specified as static over time.[3] Note that only the conditional mean function is specified, and we will get consistent estimates of b0 as long as this mean condition is correctly specified. This leads to the following first-order condition, which represents the quasi-log likelihood for the pooled Poisson estimation:[2] A popular choice is m(xt, b0) = exp(xt b0), as Poisson processes are defined over the positive real line.[3] This reduces the conditional moment to an exponential index function, where xt b0 is the linear index and exp is the link function.[4]
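A hedged sketch of pooled Poisson QMLE with the exponential mean m(xt, b) = exp(xt b): stack the panel and maximize the Poisson quasi-log likelihood Σi Σt [yit xit b − exp(xit b)] (the log yit! term does not involve b and is dropped). The data-generating values are illustrative, and the outcome is deliberately not Poisson (it is overdispersed) to show that only the correct conditional mean is needed:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 2000, 3
b_true = np.array([0.2, 0.5])          # intercept and slope, illustrative

x = rng.normal(size=(N, T))
u = rng.gamma(2.0, 0.5, size=(N, T))   # multiplicative noise with mean 1
y = rng.poisson(np.exp(b_true[0] + b_true[1] * x) * u)  # overdispersed

X = np.column_stack([np.ones(N * T), x.ravel()])        # pooled design
Y = y.ravel().astype(float)

# Newton iterations on the quasi-log likelihood; the score
# X'(Y - exp(X b)) = 0 is the first-order condition in the text.
b = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ b)
    b = b + np.linalg.solve((X * mu[:, None]).T @ X, X.T @ (Y - mu))
```

Because u has mean one and is independent of x, the conditional mean exp(b0 + b1 x) is correct even though the counts are not Poisson, so the QMLE remains consistent.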
https://en.wikipedia.org/wiki/Partial_likelihood_methods_for_panel_data#Pooled_QMLE_for_Poisson_models
Control functions (also known as two-stage residual inclusion) are statistical methods to correct for endogeneity problems by modelling the endogeneity in the error term. The approach thereby differs in important ways from other models that try to account for the same econometric problem. Instrumental variables, for example, attempt to model the endogenous variable X as an often invertible model with respect to a relevant and exogenous instrument Z. Panel analysis uses special data properties to difference out unobserved heterogeneity that is assumed to be fixed over time. Control functions were introduced by Heckman and Robb,[1] although the principle can be traced back to earlier papers.[2] A particular reason why they are popular is that they work for non-invertible models (such as discrete choice models) and allow for heterogeneous effects, where effects at the individual level can differ from effects at the aggregate.[3] A well-known example of the control function approach is the Heckman correction. Assume we start from a standard endogenous variable setup with additive errors, where X is an endogenous variable, and Z is an exogenous variable that can serve as an instrument. A popular instrumental variable approach is to use a two-step procedure and estimate equation (2) first, and then use the estimates of this first step to estimate equation (1) in a second step.
The control function approach, however, exploits the fact that this model implies The function h(V) is effectively the control function that models the endogeneity, and from which this econometric approach takes its name.[4] In a Rubin causal model potential outcomes framework, where Y1 is the outcome variable of people for whom the participation indicator D equals 1, the control function approach leads to the following model as long as the potential outcomes Y0 and Y1 are independent of D conditional on X and Z.[5] Since the second-stage regression includes generated regressors, its variance-covariance matrix needs to be adjusted.[6][7] Wooldridge and Terza provide a methodology to both deal with and test for endogeneity within the exponential regression framework, which the following discussion follows closely.[8] While the example focuses on a Poisson regression model, it is possible to generalize to other exponential regression models, although this may come at the cost of additional assumptions (e.g. for binary response or censored data models). Assume the following exponential regression model, where ai is an unobserved term in the latent variable. We allow for correlation between ai and xi (implying xi is possibly endogenous), but allow for no such correlation between ai and zi. The variables zi serve as instrumental variables for the potentially endogenous xi. One can assume a linear relationship between these two variables, or alternatively project the endogenous variable xi onto the instruments to get the following reduced form equation: The usual rank condition is needed to ensure identification. The endogeneity is then modeled in the following way, where ρ determines the severity of endogeneity and vi is assumed to be independent of ei.
Imposing these assumptions, assuming the models are correctly specified, and normalizing E[exp(ei)] = 1, we can rewrite the conditional mean as follows: If vi were known at this point, it would be possible to estimate the relevant parameters by quasi-maximum likelihood estimation (QMLE). Following the two-step procedure strategies, Wooldridge and Terza propose estimating equation (1) by ordinary least squares. The fitted residuals from this regression can then be plugged into the estimating equation (2), and QMLE methods will lead to consistent estimators of the parameters of interest. Significance tests on ρ̂ can then be used to test for endogeneity within the model. The original Heckit procedure makes distributional assumptions about the error terms; however, more flexible estimation approaches with weaker distributional assumptions have been established.[9] Furthermore, Blundell and Powell show how the control function approach can be particularly helpful in models with nonadditive errors, such as discrete choice models.[10] This latter approach, however, does implicitly make strong distributional and functional form assumptions.[5]
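A minimal simulated sketch of the two-step procedure (all data-generating values are illustrative and not from Wooldridge and Terza): the first-stage OLS of x on z yields residuals v̂, and the second stage is a Poisson QMLE of y on x and v̂, whose coefficient on v̂ estimates ρ.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
b_true, rho_true = 0.3, 0.5          # illustrative values

z = rng.normal(size=n)               # instrument
v = rng.normal(size=n)               # first-stage error
x = 0.8 * z + v                      # endogenous regressor (via v)
e = rng.normal(size=n)               # independent of (z, v)
y = rng.poisson(np.exp(b_true * x + rho_true * v + e)).astype(float)

# Step 1: OLS of x on (1, z); keep the residuals v_hat.
Z = np.column_stack([np.ones(n), z])
v_hat = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Step 2: Poisson QMLE of y on (1, x, v_hat), here via Newton
# iterations; the intercept absorbs the normalization E[exp(e_i)] = 1,
# and the coefficient on v_hat estimates rho.
X = np.column_stack([np.ones(n), x, v_hat])
b = np.zeros(3)
for _ in range(30):
    mu = np.exp(X @ b)
    b = b + np.linalg.solve((X * mu[:, None]).T @ X, X.T @ (y - mu))
```

A coefficient on v̂ that is significantly different from zero would indicate endogeneity, mirroring the test on ρ̂ described above (standard errors, which need the generated-regressor adjustment, are omitted from this sketch).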
https://en.wikipedia.org/wiki/Control_function_(econometrics)#Endogeneity_in_Poisson_regression
In the theory of finite population sampling, Bernoulli sampling is a sampling process where each element of the population is subjected to an independent Bernoulli trial which determines whether the element becomes part of the sample. An essential property of Bernoulli sampling is that all elements of the population have equal probability of being included in the sample.[1] Bernoulli sampling is therefore a special case of Poisson sampling. In Poisson sampling each element of the population may have a different probability of being included in the sample; in Bernoulli sampling, the probability is equal for all the elements. Because each element of the population is considered separately for the sample, the sample size is not fixed but rather follows a binomial distribution. The most basic Bernoulli method generates n random variates to extract a sample from a population of n items. Suppose you want to extract a given percentage pct of the population. The algorithm can be described as follows:[2] A percentage of 20%, say, is usually expressed as a probability p = 0.2. In that case, random variates are generated in the unit interval. After running the algorithm, a sample of size k will have been selected. One would expect to have k ≈ n·p, which is more and more likely as n grows. In fact, it is possible to calculate the probability of obtaining a sample size of k by the binomial distribution: f(k, n, p) = C(n, k) p^k (1 − p)^(n−k). On the left, this function is shown for four values of n and p = 0.2. In order to compare the values for different values of n, the k's in the abscissa are scaled from [0, n] to the unit interval, while the value of the function, in the ordinate, is multiplied by the inverse, so that the area under the graph maintains the same value; that area is related to the corresponding cumulative distribution function.
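The algorithm referenced above is not reproduced in this excerpt; a minimal sketch of the standard procedure is one uniform variate per element, keeping the element when the variate falls below p:

```python
import random

def bernoulli_sample(population, p):
    # One independent uniform variate in [0, 1) per element; the element
    # joins the sample when the variate falls below p.  The sample size
    # k is therefore Binomial(n, p) rather than fixed.
    return [item for item in population if random.random() < p]

random.seed(0)
sample = bernoulli_sample(range(100_000), 0.2)
k = len(sample)          # k / n should be close to p = 0.2
```

A single pass suffices, and no access to the population size is needed in advance, which is why the method also works for streaming data.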
The values are shown in logarithmic scale. On the right are the minimum values of n that satisfy given error bounds with 95% probability. Given an error, the set of k's within bounds can be described as follows: K(n, p) = { k ∈ ℕ : |k/n − p| < error }. The probability of ending up within K is given again by the binomial distribution as: Σ_{k ∈ K} f(k, n, p). The picture shows the lowest values of n such that the sum is at least 0.95. For p = 0 and p = 1 the algorithm delivers exact results for all n. The p's in between are obtained by bisection. Note that, if 100·p is an integer percentage, error = 0.005 guarantees that 100·k/n = 100·p. Values as high as n = 38400 can be required for such an exact match.
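The minimum n for a given error bound can be computed directly from the binomial distribution; a sketch (the parameter values are just examples):

```python
from math import comb

def coverage(n, p, err):
    # P(|k/n - p| < err) when k ~ Binomial(n, p).
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n + 1) if abs(k / n - p) < err)

def min_n(p, err, target=0.95):
    # Smallest n whose sample fraction k/n lies within err of p
    # with probability at least `target`.
    n = 1
    while coverage(n, p, err) < target:
        n += 1
    return n

m = min_n(0.2, 0.05)
```

Because the coverage probability is not monotone in n (the admissible set of k's shifts with n), the search simply walks upward until the target is first met.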
https://en.wikipedia.org/wiki/Bernoulli_sampling
In probability theory, statistics and related fields, a Poisson point process (also known as: Poisson random measure, Poisson random point field and Poisson point field) is a type of mathematical object that consists of points randomly located on a mathematical space with the essential feature that the points occur independently of one another.[1] The process's name derives from the fact that the number of points in any given finite region follows a Poisson distribution. The process and the distribution are named after the French mathematician Siméon Denis Poisson. The process itself was discovered independently and repeatedly in several settings, including experiments on radioactive decay, telephone call arrivals and actuarial science.[2][3] This point process is used as a mathematical model for seemingly random processes in numerous disciplines including astronomy,[4] biology,[5] ecology,[6] geology,[7] seismology,[8] physics,[9] economics,[10] image processing,[11][12] and telecommunications.[13][14] The Poisson point process is often defined on the real number line, where it can be considered a stochastic process. It is used, for example, in queueing theory[15] to model random events distributed in time, such as the arrival of customers at a store, phone calls at an exchange or occurrences of earthquakes.
In the plane, the point process, also known as a spatial Poisson process,[16] can represent the locations of scattered objects such as transmitters in a wireless network,[13][17][18][19] particles colliding into a detector, or trees in a forest.[20] The process is often used in mathematical models and in the related fields of spatial point processes,[21] stochastic geometry,[1] spatial statistics[21][22] and continuum percolation theory.[23] The point process depends on a single mathematical object, which, depending on the context, may be a constant, a locally integrable function or, in more general settings, a Radon measure.[24] In the first case, the constant, known as the rate or intensity, is the average density of the points in the Poisson process located in some region of space. The resulting point process is called a homogeneous or stationary Poisson point process.[25] In the second case, the point process is called an inhomogeneous or nonhomogeneous Poisson point process, and the average density of points depends on the location of the underlying space of the Poisson point process.[26] The word point is often omitted,[27] but there are other Poisson processes of objects, which, instead of points, consist of more complicated mathematical objects such as lines and polygons, and such processes can be based on the Poisson point process.[28] Both the homogeneous and nonhomogeneous Poisson point processes are particular cases of the generalized renewal process.
Depending on the setting, the process has several equivalent definitions[29] as well as definitions of varying generality, owing to its many applications and characterizations.[30] The Poisson point process can be defined, studied and used in one dimension, for example, on the real line, where it can be interpreted as a counting process or part of a queueing model;[31][32] in higher dimensions such as the plane, where it plays a role in stochastic geometry[1] and spatial statistics;[33] or on more general mathematical spaces.[34] Consequently, the notation, terminology and level of mathematical rigour used to define and study the Poisson point process and point processes in general vary according to the context.[35] Despite all this, the Poisson point process has two key properties (the Poisson property and the independence property) that play an essential role in all settings where the Poisson point process is used.[24][36] The two properties are not logically independent; indeed, the Poisson distribution of point counts implies the independence property,[a] while in the converse direction the assumptions that (i) the point process is simple, (ii) it has no fixed atoms, and (iii) it is a.s. boundedly finite are required.[37] A Poisson point process is characterized via the Poisson distribution. The Poisson distribution is the probability distribution of a random variable N (called a Poisson random variable) such that the probability that N equals n is given by: P(N = n) = (Λ^n / n!) e^(−Λ), where n! denotes the factorial and the parameter Λ determines the shape of the distribution. (In fact, Λ equals the expected value of N.) By definition, a Poisson point process has the property that the number of points in a bounded region of the process's underlying space is a Poisson-distributed random variable.[36] Consider a collection of disjoint and bounded subregions of the underlying space.
By definition, the number of points of a Poisson point process in each bounded subregion will be completely independent of all the others. This property is known under several names such as complete randomness, complete independence,[38] or independent scattering,[39][40] and is common to all Poisson point processes. In other words, there is a lack of interaction between different regions and the points in general,[41] which motivates the Poisson process being sometimes called a purely or completely random process.[38] If a Poisson point process has a parameter of the form Λ = νλ, where ν is Lebesgue measure (that is, it assigns length, area, or volume to sets) and λ is a constant, then the point process is called a homogeneous or stationary Poisson point process. The parameter, called the rate or intensity, is related to the expected (or average) number of Poisson points existing in some bounded region,[42][43] where rate is usually used when the underlying space has one dimension.[42] The parameter λ can be interpreted as the average number of points per some unit of extent such as length, area, volume, or time, depending on the underlying mathematical space, and it is also called the mean density or mean rate;[44] see Terminology. The homogeneous Poisson point process, when considered on the positive half-line, can be defined as a counting process, a type of stochastic process, which can be denoted as {N(t), t ≥ 0}.[29][32] A counting process represents the total number of occurrences or events that have happened up to and including time t.
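A homogeneous Poisson point process on a bounded window can be simulated with a standard two-step construction (density and window size below are arbitrary): draw the total count from a Poisson distribution with mean λ times the window's area, then place that many points uniformly and independently.

```python
import numpy as np

rng = np.random.default_rng(5)

def homogeneous_ppp(lam, width, height):
    # The count in the window is Poisson(lam * area); given the count,
    # the points are independent and uniform over the window.
    n = rng.poisson(lam * width * height)
    return rng.uniform([0.0, 0.0], [width, height], size=(n, 2))

counts = np.array([len(homogeneous_ppp(2.0, 5.0, 4.0))
                   for _ in range(20000)])
# Poisson counts: mean ~ variance ~ lam * area = 40.
```

The uniform placement given the count is exactly the "complete randomness" described above: the points carry no interaction with one another.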
A counting process is a homogeneous Poisson counting process with rate {\textstyle \lambda >0} if it has the following three properties:[29][32] The last property implies:

{\displaystyle \operatorname {E} [N(t)]=\lambda t}

In other words, the probability of the random variable {\textstyle N(t)} being equal to {\textstyle n} is given by:

{\displaystyle \Pr\{N(t)=n\}={\frac {(\lambda t)^{n}}{n!}}e^{-\lambda t}}

The Poisson counting process can also be defined by stating that the time differences between events of the counting process are exponential variables with mean {\textstyle 1/\lambda }.[45] The time differences between the events or arrivals are known as interarrival[46] or interoccurrence times.[45] Interpreted as a point process, a Poisson point process can be defined on the real line by considering the number of points of the process in the interval {\textstyle (a,b]}. For the homogeneous Poisson point process on the real line with parameter {\textstyle \lambda >0}, the probability of this random number of points, written here as {\textstyle N(a,b]}, being equal to some counting number {\textstyle n} is given by:[47]

{\displaystyle \Pr\{N(a,b]=n\}={\frac {(\lambda (b-a))^{n}}{n!}}e^{-\lambda (b-a)}}

For some positive integer {\textstyle k}, the homogeneous Poisson point process has the finite-dimensional distribution given by:[47]

{\displaystyle \Pr\{N(a_{i},b_{i}]=n_{i},\,i=1,\dots ,k\}=\prod _{i=1}^{k}{\frac {(\lambda (b_{i}-a_{i}))^{n_{i}}}{n_{i}!}}e^{-\lambda (b_{i}-a_{i})}}

where the real numbers {\textstyle a_{i}<b_{i}\leq a_{i+1}}. In other words, {\textstyle N(a,b]} is a Poisson random variable with mean {\textstyle \lambda (b-a)}, where {\textstyle a\leq b}.
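The characterization via exponential interarrival times gives a direct way to simulate the counting process on a half-line. A minimal sketch (the rate and horizon values are illustrative assumptions), using only Python's standard library:

```python
import random

def homogeneous_poisson_times(rate: float, horizon: float, rng: random.Random) -> list:
    """Arrival times of a homogeneous Poisson process on (0, horizon],
    generated from i.i.d. exponential interarrival times with mean 1/rate."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)  # exponential gap between consecutive arrivals
        if t > horizon:
            return times
        times.append(t)

rng = random.Random(0)
arrivals = homogeneous_poisson_times(rate=2.0, horizon=1000.0, rng=rng)
print(len(arrivals) / 1000.0)  # empirical rate; should be close to 2.0
```

By the defining properties, `len(arrivals)` is a Poisson random variable with mean rate × horizon, so the printed empirical rate concentrates around 2.0 for a long horizon.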
Furthermore, the numbers of points in any two disjoint intervals, say, {\textstyle (a_{1},b_{1}]} and {\textstyle (a_{2},b_{2}]}, are independent of each other, and this extends to any finite number of disjoint intervals.[47] In the queueing theory context, one can consider a point existing (in an interval) as an event, but this is different from the word event in the probability theory sense.[b] It follows that {\textstyle \lambda } is the expected number of arrivals that occur per unit of time.[32] The previous definition has two important features shared by Poisson point processes in general:[47][24] Furthermore, it has a third feature related to just the homogeneous Poisson point process:[48] In other words, for any finite {\textstyle t>0}, the distribution of the random variable {\textstyle N(a+t,b+t]} does not depend on {\textstyle t}, so it is also called a stationary Poisson process.[47] The quantity {\textstyle \lambda (b_{i}-a_{i})} can be interpreted as the expected or average number of points occurring in the interval {\textstyle (a_{i},b_{i}]}, namely:

{\displaystyle \operatorname {E} [N(a_{i},b_{i}]]=\lambda (b_{i}-a_{i}),}

where {\displaystyle \operatorname {E} } denotes the expectation operator. In other words, the parameter {\textstyle \lambda } of the Poisson process coincides with the density of points. Furthermore, the homogeneous Poisson point process adheres to its own form of the (strong) law of large numbers.[49] More specifically, with probability one:

{\displaystyle \lim _{t\rightarrow \infty }{\frac {N(t)}{t}}=\lambda ,}

where {\textstyle \lim } denotes the limit of a function, and {\displaystyle \lambda } is the expected number of arrivals per unit of time. The distance between two consecutive points of a point process on the real line will be an exponential random variable with parameter {\textstyle \lambda } (or equivalently, mean {\textstyle 1/\lambda }).
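Both the independence of counts on disjoint intervals and the law-of-large-numbers behaviour N(t)/t → λ can be checked empirically. A rough numerical check (all parameter values are illustrative assumptions): the sample mean of counts in disjoint unit intervals should be near λ, and the sample covariance between adjacent (disjoint) intervals should be near zero.

```python
import random

rng = random.Random(42)
lam, horizon = 3.0, 20000.0

# Simulate arrival times on (0, horizon] via exponential interarrival gaps.
times, t = [], 0.0
while True:
    t += rng.expovariate(lam)
    if t > horizon:
        break
    times.append(t)

# Counts in the consecutive disjoint unit intervals (k, k+1].
counts = [0] * int(horizon)
for s in times:
    counts[int(s)] += 1

n = len(counts)
mean = sum(counts) / n                      # should be near lam = 3.0
cov = sum((counts[i] - mean) * (counts[i + 1] - mean)
          for i in range(n - 1)) / (n - 1)  # disjoint intervals: near 0
print(round(mean, 2), round(cov, 2))
```

Note that `mean` is exactly N(horizon)/horizon, so it is simultaneously a check of the strong law of large numbers stated above.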
This implies that the points have the memoryless property: the presence of one point in a finite interval does not affect the probability (distribution) of other points existing,[50][51] but this property has no natural equivalence when the Poisson process is defined on a space with higher dimensions.[52] A point process with stationary increments is sometimes said to be orderly[53] or regular if:[54]

{\displaystyle \Pr\{N(t,t+\delta ]>1\}=o(\delta ){\text{ as }}\delta \rightarrow 0,}

where little-o notation is being used. A point process is called a simple point process when the probability of any two of its points coinciding in the same position, on the underlying space, is zero. For point processes in general on the real line, the property of orderliness implies that the process is simple,[55] which is the case for the homogeneous Poisson point process.[56] On the real line, the homogeneous Poisson point process has a connection to the theory of martingales via the following characterization: a point process is the homogeneous Poisson point process with rate {\textstyle \lambda } if and only if

{\displaystyle N(t)-\lambda t}

is a martingale.[57][58] On the real line, the Poisson process is a type of continuous-time Markov process known as a birth process, a special case of the birth–death process (with just births and zero deaths).[59][60] More complicated processes with the Markov property, such as Markov arrival processes, have been defined where the Poisson process is a special case.[45] If the homogeneous Poisson process is considered just on the half-line {\textstyle [0,\infty )}, which can be the case when {\textstyle t} represents time,[29] then the resulting process is not truly invariant under translation.[52] In that case the Poisson process is no longer stationary, according to some definitions of stationarity.[25] There have been many applications of the homogeneous Poisson process on the real line in an attempt to model seemingly random and independent events occurring.
It has a fundamental role in queueing theory, which is the probability field of developing suitable stochastic models to represent the random arrival and departure of certain phenomena.[15][45] For example, customers arriving and being served or phone calls arriving at a phone exchange can both be studied with techniques from queueing theory. The homogeneous Poisson process on the real line is considered one of the simplest stochastic processes for counting random numbers of points.[61][62] This process can be generalized in a number of ways. One possible generalization is to extend the distribution of interarrival times from the exponential distribution to other distributions, which introduces the stochastic process known as a renewal process. Another generalization is to define the Poisson point process on higher dimensional spaces such as the plane.[63] A spatial Poisson process is a Poisson point process defined in the plane {\textstyle \mathbb {R} ^{2}}.[57][64] For its mathematical definition, one first considers a bounded, open or closed (or more precisely, Borel measurable) region {\textstyle B} of the plane. The number of points of a point process {\textstyle N} existing in this region {\textstyle B\subset \mathbb {R} ^{2}} is a random variable, denoted by {\textstyle N(B)}. If the points belong to a homogeneous Poisson process with parameter {\textstyle \lambda >0}, then the probability of {\textstyle n} points existing in {\textstyle B} is given by:

{\displaystyle \Pr\{N(B)=n\}={\frac {(\lambda |B|)^{n}}{n!}}e^{-\lambda |B|}}

where {\textstyle |B|} denotes the area of {\textstyle B}. For some finite integer {\textstyle k\geq 1}, we can give the finite-dimensional distribution of the homogeneous Poisson point process by first considering a collection of disjoint, bounded Borel (measurable) sets {\textstyle B_{1},\dots ,B_{k}}.
The number of points of the point process {\textstyle N} existing in {\textstyle B_{i}} can be written as {\textstyle N(B_{i})}. Then the homogeneous Poisson point process with parameter {\textstyle \lambda >0} has the finite-dimensional distribution:[65]

{\displaystyle \Pr\{N(B_{1})=n_{1},\dots ,N(B_{k})=n_{k}\}=\prod _{i=1}^{k}{\frac {(\lambda |B_{i}|)^{n_{i}}}{n_{i}!}}e^{-\lambda |B_{i}|}}

The spatial Poisson point process features prominently in spatial statistics,[21][22] stochastic geometry, and continuum percolation theory.[23] This point process is applied in various physical sciences, for example in a model developed for the detection of alpha particles. In recent years, it has been frequently used to model seemingly disordered spatial configurations of certain wireless communication networks.[17][18][19] For example, models for cellular or mobile phone networks have been developed where it is assumed that the phone network transmitters, known as base stations, are positioned according to a homogeneous Poisson point process. The previous homogeneous Poisson point process immediately extends to higher dimensions by replacing the notion of area with (higher-dimensional) volume. For some bounded region {\textstyle B} of Euclidean space {\textstyle \mathbb {R} ^{d}}, if the points form a homogeneous Poisson process with parameter {\textstyle \lambda >0}, then the probability of {\textstyle n} points existing in {\textstyle B\subset \mathbb {R} ^{d}} is given by:

{\displaystyle \Pr\{N(B)=n\}={\frac {(\lambda |B|)^{n}}{n!}}e^{-\lambda |B|}}

where {\textstyle |B|} now denotes the {\textstyle d}-dimensional volume of {\textstyle B}. Furthermore, for a collection of disjoint, bounded Borel sets {\textstyle B_{1},\dots ,B_{k}\subset \mathbb {R} ^{d}}, let {\textstyle N(B_{i})} denote the number of points of {\textstyle N} existing in {\textstyle B_{i}}.
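The defining property — a Poisson number of points with mean λ|B|, placed independently — translates directly into a two-step sampling recipe for a rectangular window. A sketch (the Knuth multiplication sampler and all parameter values are assumptions of convenience, not from the source):

```python
import math
import random

def poisson_sample(mean: float, rng: random.Random) -> int:
    """Draw a Poisson variate via Knuth's multiplication method.
    Adequate for moderate means; exp(-mean) underflows for mean >~ 700."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def spatial_poisson(lam: float, width: float, height: float, rng: random.Random):
    """Homogeneous Poisson point process on [0, width] x [0, height]:
    first draw N ~ Poisson(lam * area), then place N points uniformly."""
    n = poisson_sample(lam * width * height, rng)
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]

rng = random.Random(7)
avg = sum(len(spatial_poisson(2.0, 1.0, 1.0, rng)) for _ in range(2000)) / 2000
print(round(avg, 2))  # E[N(B)] = lam * |B| = 2.0 for the unit square
```

The uniform placement step is what makes the counts in disjoint subregions independent Poisson variables, matching the finite-dimensional distribution above.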
Then the corresponding homogeneous Poisson point process with parameter {\textstyle \lambda >0} has the finite-dimensional distribution:[67]

{\displaystyle \Pr\{N(B_{1})=n_{1},\dots ,N(B_{k})=n_{k}\}=\prod _{i=1}^{k}{\frac {(\lambda |B_{i}|)^{n_{i}}}{n_{i}!}}e^{-\lambda |B_{i}|}}

Homogeneous Poisson point processes do not depend on the position of the underlying space through the parameter {\textstyle \lambda }, which implies the process is both a stationary process (invariant to translation) and an isotropic (invariant to rotation) stochastic process.[25] Similarly to the one-dimensional case, if the homogeneous point process is restricted to some bounded subset of {\textstyle \mathbb {R} ^{d}}, then, depending on some definitions of stationarity, the process is no longer stationary.[25][52] If the homogeneous point process is defined on the real line as a mathematical model for occurrences of some phenomenon, then it has the characteristic that the positions of these occurrences or events on the real line (often interpreted as time) will be uniformly distributed. More specifically, if an event occurs (according to this process) in an interval {\textstyle (a,b]} where {\textstyle a\leq b}, then its location will be a uniform random variable defined on that interval.[65] Furthermore, the homogeneous point process is sometimes called the uniform Poisson point process (see Terminology). This uniformity property extends to higher dimensions in Cartesian coordinates, but not in, for example, polar coordinates.[68][69] The inhomogeneous or nonhomogeneous Poisson point process (see Terminology) is a Poisson point process with its Poisson parameter set as some location-dependent function on the underlying space on which the Poisson process is defined.
For Euclidean space {\textstyle \mathbb {R} ^{d}}, this is achieved by introducing a locally integrable positive function {\displaystyle \lambda \colon \mathbb {R} ^{d}\to [0,\infty )}, such that for every bounded region {\textstyle B} the ({\textstyle d}-dimensional) volume integral of {\textstyle \lambda (x)} over the region {\textstyle B} is finite. In other words, this integral, denoted by {\textstyle \Lambda (B)}, satisfies:[43]

{\displaystyle \Lambda (B)=\int _{B}\lambda (x)\,\mathrm {d} x<\infty ,}

where {\textstyle \mathrm {d} x} is a ({\textstyle d}-dimensional) volume element.[c] Then for every collection of disjoint bounded Borel measurable sets {\textstyle B_{1},\dots ,B_{k}}, an inhomogeneous Poisson process with (intensity) function {\textstyle \lambda (x)} has the finite-dimensional distribution:[67]

{\displaystyle \Pr\{N(B_{1})=n_{1},\dots ,N(B_{k})=n_{k}\}=\prod _{i=1}^{k}{\frac {(\Lambda (B_{i}))^{n_{i}}}{n_{i}!}}e^{-\Lambda (B_{i})}}

Furthermore, {\textstyle \Lambda (B)} has the interpretation of being the expected number of points of the Poisson process located in the bounded region {\textstyle B}, namely

{\displaystyle \Lambda (B)=\operatorname {E} [N(B)].}

On the real line, the inhomogeneous or non-homogeneous Poisson point process has a mean measure given by a one-dimensional integral. For two real numbers {\textstyle a} and {\textstyle b}, where {\textstyle a\leq b}, denote by {\textstyle N(a,b]} the number of points of an inhomogeneous Poisson process with intensity function {\textstyle \lambda (t)} occurring in the interval {\textstyle (a,b]}. The probability of {\textstyle n} points existing in the above interval {\textstyle (a,b]} is given by:

{\displaystyle \Pr\{N(a,b]=n\}={\frac {[\Lambda (a,b)]^{n}}{n!}}e^{-\Lambda (a,b)}}

where the mean or intensity measure is:

{\displaystyle \Lambda (a,b)=\int _{a}^{b}\lambda (t)\,\mathrm {d} t,}

which means that the random variable {\textstyle N(a,b]} is a Poisson random variable with mean {\textstyle \operatorname {E} [N(a,b]]=\Lambda (a,b)}.
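The mean measure Λ(a, b] is just a one-dimensional integral of the intensity function, so it can be computed numerically. A sketch with a hypothetical intensity λ(t) = t² (the intensity and interval are illustrative assumptions, not from the source):

```python
import math

def mean_measure(intensity, a: float, b: float, steps: int = 100_000) -> float:
    """Numerically evaluate the mean measure Lambda(a, b] = integral of
    the intensity function over (a, b], using the midpoint rule."""
    h = (b - a) / steps
    return h * sum(intensity(a + (i + 0.5) * h) for i in range(steps))

intensity = lambda t: t * t                  # hypothetical intensity function
lam_ab = mean_measure(intensity, 0.0, 2.0)   # exact value: 8/3
# N(0, 2] is then a Poisson random variable with mean lam_ab, e.g.:
p_zero = math.exp(-lam_ab)                   # Pr{N(0, 2] = 0}
print(round(lam_ab, 4), round(p_zero, 4))
```

Once Λ(a, b] is in hand, every probability for N(a, b] follows from the ordinary Poisson probability mass function with that mean.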
A feature of the one-dimensional setting is that an inhomogeneous Poisson process can be transformed into a homogeneous one by a monotone transformation or mapping, which is achieved with the inverse of {\textstyle \Lambda }.[70][71] The inhomogeneous Poisson point process, when considered on the positive half-line, is also sometimes defined as a counting process. With this interpretation, the process, which is sometimes written as {\textstyle \{N(t),t\geq 0\}}, represents the total number of occurrences or events that have happened up to and including time {\textstyle t}. A counting process is said to be an inhomogeneous Poisson counting process if it has the four properties:[32][72] where {\textstyle o(h)} is asymptotic or little-o notation for {\textstyle o(h)/h\rightarrow 0} as {\textstyle h\rightarrow 0}. In the case of point processes with refractoriness (e.g., neural spike trains) a stronger version of property 4 applies:[73] {\displaystyle \Pr\{N(t+h)-N(t)\geq 2\}=o(h^{2})}. The above properties imply that {\textstyle N(t+h)-N(t)} is a Poisson random variable with the parameter (or mean)

{\displaystyle \Lambda (t,t+h)=\int _{t}^{t+h}\lambda (s)\,\mathrm {d} s,}

which implies

{\displaystyle \Pr\{N(t+h)-N(t)=n\}={\frac {[\Lambda (t,t+h)]^{n}}{n!}}e^{-\Lambda (t,t+h)}.}

An inhomogeneous Poisson process defined in the plane {\textstyle \mathbb {R} ^{2}} is called a spatial Poisson process.[16] It is defined with an intensity function, and its intensity measure is obtained by performing a surface integral of its intensity function over some region.[20][74] For example, its intensity function can be a function of the Cartesian coordinates {\textstyle x} and {\textstyle y}, so the corresponding intensity measure is given by the surface integral

{\displaystyle \Lambda (B)=\int _{B}\lambda (x,y)\,\mathrm {d} x\,\mathrm {d} y,}

where {\textstyle B} is some bounded region in the plane {\textstyle \mathbb {R} ^{2}}.
In the plane, {\textstyle \Lambda (B)} corresponds to a surface integral, while in {\textstyle \mathbb {R} ^{d}} the integral becomes a ({\textstyle d}-dimensional) volume integral. When the real line is interpreted as time, the inhomogeneous process is used in the fields of counting processes and in queueing theory.[72][75] Examples of phenomena which have been represented by or appear as an inhomogeneous Poisson point process include: In the plane, the Poisson point process is important in the related disciplines of stochastic geometry[1][33] and spatial statistics.[21][22] The intensity measure of this point process is dependent on the location of the underlying space, which means it can be used to model phenomena with a density that varies over some region. In other words, the phenomena can be represented as points that have a location-dependent density.[20] This process has been used in various disciplines, and its uses include the study of salmon and sea lice in the oceans,[78] forestry,[6] and search problems.[79] The Poisson intensity function {\textstyle \lambda (x)} has an interpretation, considered intuitive,[20] with the volume element {\textstyle \mathrm {d} x} in the infinitesimal sense: {\textstyle \lambda (x)\,\mathrm {d} x} is the infinitesimal probability of a point of a Poisson point process existing in a region of space with volume {\textstyle \mathrm {d} x} located at {\textstyle x}.[20] For example, given a homogeneous Poisson point process on the real line, the probability of finding a single point of the process in a small interval of width {\textstyle \delta } is approximately {\textstyle \lambda \delta }. In fact, such intuition is how the Poisson point process is sometimes introduced and its distribution derived.[80][41][81] If a Poisson point process has an intensity measure that is locally finite and diffuse (or non-atomic), then it is a simple point process.
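The infinitesimal intuition can be made concrete: for a homogeneous process the exact probability of exactly one point in an interval of width δ is λδ e^{−λδ}, which agrees with λδ to first order. A one-line numerical check (the values of λ and δ are illustrative):

```python
import math

lam, delta = 2.0, 1e-4
p_one = lam * delta * math.exp(-lam * delta)  # exact Pr{exactly one point}
approx = lam * delta                          # first-order approximation
print(p_one, approx)  # the two differ only at order (lam * delta)^2
```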
For a simple point process, the probability of a point existing at a single point or location in the underlying (state) space is either zero or one. This implies that, with probability one, no two (or more) points of a Poisson point process coincide in location in the underlying space.[82][18][83] Simulating a Poisson point process on a computer is usually done in a bounded region of space, known as a simulation window, and requires two steps: appropriately creating a random number of points and then suitably placing the points in a random manner. Both of these steps depend on the specific Poisson point process that is being simulated.[84][85] The number of points {\textstyle N} in the window, denoted here by {\textstyle W}, needs to be simulated, which is done by using a (pseudo-)random number generating function capable of simulating Poisson random variables. For the homogeneous case with the constant {\textstyle \lambda }, the mean of the Poisson random variable {\textstyle N} is set to {\textstyle \lambda |W|}, where {\textstyle |W|} is the length, area or ({\textstyle d}-dimensional) volume of {\textstyle W}. For the inhomogeneous case, {\textstyle \lambda |W|} is replaced with the ({\textstyle d}-dimensional) volume integral

{\displaystyle \Lambda (W)=\int _{W}\lambda (x)\,\mathrm {d} x.}

The second stage requires randomly placing the {\textstyle N} points in the window {\textstyle W}. For the homogeneous case in one dimension, all points are uniformly and independently placed in the window or interval {\textstyle W}. For higher dimensions in a Cartesian coordinate system, each coordinate is uniformly and independently placed in the window {\textstyle W}.
If the window is not a subspace of Cartesian space (for example, inside a unit sphere or on the surface of a unit sphere), then the points will not be uniformly placed in {\textstyle W}, and a suitable change of coordinates (from Cartesian) is needed.[84] For the inhomogeneous case, a couple of different methods can be used depending on the nature of the intensity function {\textstyle \lambda (x)}.[84] If the intensity function is sufficiently simple, then independent and random non-uniform (Cartesian or other) coordinates of the points can be generated. For example, simulating a Poisson point process on a circular window can be done for an isotropic intensity function (in polar coordinates {\textstyle r} and {\textstyle \theta }), implying it is rotationally invariant or independent of {\textstyle \theta } but dependent on {\textstyle r}, by a change of variable in {\textstyle r} if the intensity function is sufficiently simple.[84] For more complicated intensity functions, one can use an acceptance-rejection method, which consists of using (or 'accepting') only certain random points and not using (or 'rejecting') the other points, based on the ratio:[86]

{\displaystyle {\frac {\lambda (x_{i})}{\Lambda (W)}},}

where {\textstyle x_{i}} is the point under consideration for acceptance or rejection. That is, a location is uniformly randomly selected for consideration; then, to determine whether to place a sample at that location, a uniformly randomly drawn number in {\displaystyle [0,1]} is compared to the probability density function {\displaystyle {\frac {\lambda (x)}{\Lambda (W)}}}, accepting if it is smaller than the probability density function, and repeating until the previously chosen number of samples has been drawn.
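A closely related rejection-style technique for the one-dimensional inhomogeneous case is Lewis–Shedler thinning (a standard method, named here as a swap-in rather than taken from the source): simulate a homogeneous process whose rate dominates the intensity function, then keep each point t with probability λ(t)/λ_max. A sketch, with an illustrative intensity function and parameters:

```python
import math
import random

def inhomogeneous_poisson(intensity, lam_max: float, a: float, b: float,
                          rng: random.Random) -> list:
    """Lewis-Shedler thinning on (a, b]: generate a rate-lam_max homogeneous
    process and accept each point t with probability intensity(t) / lam_max.
    Requires intensity(t) <= lam_max throughout (a, b]."""
    t, points = a, []
    while True:
        t += rng.expovariate(lam_max)
        if t > b:
            return points
        if rng.random() < intensity(t) / lam_max:
            points.append(t)

rng = random.Random(1)
intensity = lambda t: 1.0 + math.sin(t) ** 2  # hypothetical intensity, bounded by 2
pts = inhomogeneous_poisson(intensity, 2.0, 0.0, 1000.0, rng)
print(len(pts))  # expected count = integral of intensity over (0, 1000], ~1500
```

Thinning avoids computing Λ(W) altogether; its cost is the wasted proposals, which grow with the gap between λ_max and the typical intensity value.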
Inmeasure theory, the Poisson point process can be further generalized to what is sometimes known as thegeneral Poisson point process[20][87]orgeneral Poisson process[74]by using aRadon measureΛ{\displaystyle \textstyle \Lambda }, which is alocally finite measure. In general, this Radon measureΛ{\displaystyle \textstyle \Lambda }can be atomic, which means multiple points of the Poisson point process can exist in the same location of the underlying space. In this situation, the number of points atx{\displaystyle \textstyle x}is a Poisson random variable with meanΛ(x){\displaystyle \textstyle \Lambda ({x})}.[87]But sometimes the converse is assumed, so the Radon measureΛ{\displaystyle \textstyle \Lambda }isdiffuseor non-atomic.[20] A point processN{\displaystyle \textstyle {N}}is a general Poisson point process with intensityΛ{\displaystyle \textstyle \Lambda }if it has the two following properties:[20] The Radon measureΛ{\displaystyle \textstyle \Lambda }maintains its previous interpretation of being the expected number of points ofN{\displaystyle \textstyle {N}}located in the bounded regionB{\displaystyle \textstyle B}, namely Furthermore, ifΛ{\displaystyle \textstyle \Lambda }is absolutely continuous such that it has a density (which is theRadon–Nikodym densityor derivative) with respect to the Lebesgue measure, then for all Borel setsB{\displaystyle \textstyle B}it can be written as: where the densityλ(x){\displaystyle \textstyle \lambda (x)}is known, among other terms, as the intensity function. Despite its name, the Poisson point process was neither discovered nor studied by its namesake. 
It is cited as an example of Stigler's law of eponymy.[2][3] The name arises from the process's inherent relation to the Poisson distribution, derived by Poisson as a limiting case of the binomial distribution,[88] which describes the probability distribution of the sum of {\textstyle n} Bernoulli trials with probability {\textstyle p}, often likened to the number of heads (or tails) after {\textstyle n} biased coin flips with the probability of a head (or tail) occurring being {\textstyle p}. For some positive constant {\textstyle \Lambda >0}, as {\textstyle n} increases towards infinity and {\textstyle p} decreases towards zero such that the product {\textstyle np=\Lambda } is fixed, the Poisson distribution more closely approximates that of the binomial.[89] Poisson derived the Poisson distribution, published in 1841, by examining the binomial distribution in the limit of {\textstyle p} (to zero) and {\textstyle n} (to infinity). It only appears once in all of Poisson's work,[90] and the result was not well known during his time. Over the following years others used the distribution without citing Poisson, including Philipp Ludwig von Seidel and Ernst Abbe.[91][2] At the end of the 19th century, Ladislaus Bortkiewicz studied the distribution, citing Poisson, using real data on the number of deaths from horse kicks in the Prussian army.[88][92] There are a number of claims for early uses or discoveries of the Poisson point process.[2][3] For example, John Michell in 1767, a decade before Poisson was born, was interested in the probability of a star being within a certain region of another star under the erroneous assumption that the stars were "scattered by mere chance", and studied an example consisting of the six brightest stars in the Pleiades, without deriving the Poisson distribution.
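The limiting relationship can be verified numerically: holding np = Λ fixed while n grows, the binomial probabilities approach the Poisson ones. A short check (the values of Λ and k are illustrative):

```python
import math

def binom_pmf(k: int, n: int, p: float) -> float:
    """Pr{X = k} for a binomial random variable with n trials, success prob p."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

Lam, k = 4.0, 3
poisson = Lam ** k * math.exp(-Lam) / math.factorial(k)
for n in (10, 100, 10_000):
    # binomial with np = Lam fixed approaches the Poisson pmf as n grows
    print(n, round(binom_pmf(k, n, Lam / n), 6))
print(round(poisson, 6))
```

The discrepancy shrinks roughly like 1/n, so already at n = 10,000 the two probabilities agree to several decimal places.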
This work inspired Simon Newcomb to study the problem and to calculate the Poisson distribution as an approximation for the binomial distribution in 1860.[3] At the beginning of the 20th century the Poisson process (in one dimension) would arise independently in different situations.[2][3] In Sweden in 1903, Filip Lundberg published a thesis containing work, now considered fundamental and pioneering, in which he proposed to model insurance claims with a homogeneous Poisson process.[93][94] In Denmark, A.K. Erlang derived the Poisson distribution in 1909 when developing a mathematical model for the number of incoming phone calls in a finite time interval. Erlang was unaware of Poisson's earlier work and assumed that the numbers of phone calls arriving in each interval of time were independent of each other. He then found the limiting case, which is effectively recasting the Poisson distribution as a limit of the binomial distribution.[2] In 1910 Ernest Rutherford and Hans Geiger published experimental results on counting alpha particles. Their experimental work had mathematical contributions from Harry Bateman, who derived Poisson probabilities as a solution to a family of differential equations, though the solution had been derived earlier, resulting in the independent discovery of the Poisson process.[2] The years after 1909 led to a number of studies and applications of the Poisson point process; however, its early history is complex, which has been explained by the various applications of the process in numerous fields by biologists, ecologists, engineers and others working in the physical sciences.[2]
The early results were published in different languages and in different settings, with no standard terminology and notation used.[2]For example, in 1922 SwedishchemistandNobel LaureateTheodor Svedbergproposed a model in which a spatial Poisson point process is the underlying process to study how plants are distributed in plant communities.[95]A number of mathematicians started studying the process in the early 1930s, and important contributions were made byAndrey Kolmogorov,William FellerandAleksandr Khinchin,[2]among others.[96]In the field ofteletraffic engineering, mathematicians and statisticians studied and used Poisson and other point processes.[97] The SwedeConny Palmin his 1943dissertationstudied the Poisson and other point processes in theone-dimensionalsetting by examining them in terms of the statistical or stochastic dependence between the points in time.[98][97]In his work exists the first known recorded use of the termpoint processesasPunktprozessein German.[98][3] It is believed[2]that William Feller was the first in print to refer to it as thePoisson processin a 1940 paper. 
Although the Swede Ove Lundberg used the term Poisson process in his 1940 PhD dissertation,[3] in which Feller was acknowledged as an influence,[99] it has been claimed that Feller coined the term before 1940.[89] It has been remarked that both Feller and Lundberg used the term as though it were well known, implying it was already in spoken use by then.[3] Feller worked from 1936 to 1939 alongside Harald Cramér at Stockholm University, where Lundberg was a PhD student under Cramér. Cramér did not use the term Poisson process in a book he finished in 1936, but did in subsequent editions, which has led to the speculation that the term Poisson process was coined sometime between 1936 and 1939 at Stockholm University.[3] The terminology of point process theory in general has been criticized for being too varied.[3] In addition to the word point often being omitted,[63][27] the homogeneous Poisson (point) process is also called a stationary Poisson (point) process,[47] as well as a uniform Poisson (point) process.[42] The inhomogeneous Poisson point process, as well as being called nonhomogeneous,[47] is also referred to as the non-stationary Poisson process.[72][100] The term point process has been criticized, as the term process can suggest evolution over time and space, so the alternative random point field is sometimes used,[101] resulting in the terms Poisson random point field or Poisson point field also being used.[102] A point process is considered, and sometimes called, a random counting measure,[103] hence the Poisson point process is also referred to as a Poisson random measure,[104] a term used in the study of Lévy processes,[104][105] though some choose to use the two terms for Poisson point processes defined on two different underlying spaces.[106] The underlying mathematical space of the Poisson point process is called a carrier space,[107][108] or state space, though the latter term has a different meaning in the context of stochastic processes.
In the context of point processes, the term "state space" can mean the space on which the point process is defined, such as the real line,[109][110] which corresponds to the index set[111] or parameter set[112] in stochastic process terminology. The measure {\textstyle \Lambda } is called the intensity measure,[113] mean measure,[36] or parameter measure,[67] as there are no standard terms.[36] If {\textstyle \Lambda } has a derivative or density, denoted by {\textstyle \lambda (x)}, it is called the intensity function of the Poisson point process.[20] For the homogeneous Poisson point process, the derivative of the intensity measure is simply a constant {\textstyle \lambda >0}, which can be referred to as the rate, usually when the underlying space is the real line, or the intensity.[42] It is also called the mean rate or the mean density.[114][32] For {\textstyle \lambda =1}, the corresponding process is sometimes referred to as the standard Poisson (point) process.[43][57][115] The extent of the Poisson point process is sometimes called the exposure.[116][117] The notation of the Poisson point process depends on its setting and the field in which it is being applied. For example, on the real line, the Poisson process, both homogeneous and inhomogeneous, is sometimes interpreted as a counting process, and the notation {\textstyle \{N(t),t\geq 0\}} is used to represent the Poisson process.[29][32] Another reason for varying notation is the theory of point processes, which has a couple of mathematical interpretations. For example, a simple Poisson point process may be considered as a random set, which suggests the notation {\textstyle x\in N}, implying that {\textstyle x} is a random point belonging to or being an element of the Poisson point process {\textstyle N}.
Another, more general, interpretation is to consider a Poisson or any other point process as a random counting measure, so one can write the number of points of a Poisson point process {\textstyle N} being found or located in some (Borel measurable) region {\textstyle B} as {\textstyle N(B)}, which is a random variable. These different interpretations result in notation being used from mathematical fields such as measure theory and set theory.[118] For general point processes, sometimes a subscript on the point symbol, for example {\textstyle x}, is included so one writes (with set notation) {\textstyle x_{i}\in N} instead of {\textstyle x\in N}, and {\textstyle x} can be used for the bound variable in integral expressions such as Campbell's theorem, instead of denoting random points.[18] Sometimes an uppercase letter denotes the point process, while a lowercase letter denotes a point from the process, so, for example, the point {\textstyle x} or {\textstyle x_{i}} belongs to or is a point of the point process {\textstyle X}, which can be written with set notation as {\textstyle x\in X} or {\textstyle x_{i}\in X}.[110] Furthermore, the set theory and integral or measure theory notation can be used interchangeably. For example, for a point process {\textstyle N} defined on the Euclidean state space {\textstyle {\mathbb {R} ^{d}}} and a (measurable) function {\textstyle f} on {\textstyle \mathbb {R} ^{d}}, the expression

{\displaystyle \int _{\mathbb {R} ^{d}}f(x)\,N(\mathrm {d} x)=\sum _{x_{i}\in N}f(x_{i})}

demonstrates two different ways to write a summation over a point process (see also Campbell's theorem (probability)).
More specifically, the integral notation on the left-hand side interprets the point process as a random counting measure, while the sum on the right-hand side suggests a random set interpretation.[118] In probability theory, operations are applied to random variables for different purposes. Sometimes these operations are regular expectations that produce the average or variance of a random variable. Others, such as characteristic functions (or Laplace transforms) of a random variable, can be used to uniquely identify or characterize random variables and prove results like the central limit theorem.[119] In the theory of point processes there exist analogous mathematical tools, which usually take the form of measures and functionals instead of moments and functions respectively.[120][121] For a Poisson point process {\textstyle N} with intensity measure {\textstyle \Lambda } on some space {\textstyle X}, the Laplace functional is given by:[18]

{\displaystyle L_{N}(f)=\operatorname {E} \left[e^{-\sum _{x\in N}f(x)}\right]=e^{-\int _{X}(1-e^{-f(x)})\,\Lambda (\mathrm {d} x)}}

One version of Campbell's theorem involves the Laplace functional of the Poisson point process. The probability generating function of a non-negative integer-valued random variable leads to the probability generating functional being defined analogously with respect to any non-negative bounded function {\textstyle v} on {\textstyle \mathbb {R} ^{d}} such that {\textstyle 0\leq v(x)\leq 1}. For a point process {\textstyle N} the probability generating functional is defined as:[122]

{\displaystyle G(v)=\operatorname {E} \left[\prod _{x\in N}v(x)\right]}

where the product is performed over all the points in {\textstyle N}. If the intensity measure {\textstyle \Lambda } of {\textstyle N} is locally finite, then {\textstyle G} is well-defined for any measurable function {\textstyle u} on {\textstyle \mathbb {R} ^{d}}.
For a Poisson point process with intensity measureΛ{\displaystyle \textstyle \Lambda }the generating functional is given by: G(v) = e^{−∫_{Rd} (1 − v(x)) Λ(dx)}, which in the homogeneous case reduces to G(v) = e^{−λ ∫_{Rd} (1 − v(x)) dx}. For a general Poisson point process with intensity measureΛ{\displaystyle \textstyle \Lambda }the firstmoment measureis its intensity measure:[18][19] E[N(B)] = Λ(B), which for a homogeneous Poisson point process withconstantintensityλ{\displaystyle \textstyle \lambda }means: E[N(B)] = λ|B|, where|B|{\displaystyle \textstyle |B|}is the length, area or volume (or more generally, theLebesgue measure) ofB{\displaystyle \textstyle B}. The Mecke equation characterizes the Poisson point process. LetNσ{\displaystyle \mathbb {N} _{\sigma }}be the space of allσ{\displaystyle \sigma }-finite measures on some general spaceQ{\displaystyle {\mathcal {Q}}}. A point processη{\displaystyle \eta }with intensityλ{\displaystyle \lambda }onQ{\displaystyle {\mathcal {Q}}}is a Poisson point process if and only if for all measurable functionsf:Q×Nσ→R+{\displaystyle f:{\mathcal {Q}}\times \mathbb {N} _{\sigma }\to \mathbb {R} _{+}}the following holds: E[∫_Q f(x, η) η(dx)] = ∫_Q E[f(x, η + δ_x)] λ(dx), whereδ_x denotes the Dirac measure atx. See[123] for further details. For a general Poisson point process with intensity measureΛ{\displaystyle \textstyle \Lambda }then{\displaystyle \textstyle n}-thfactorial moment measureis given by the expression:[124] M^{(n)}(B_1 × ⋯ × B_n) = ∏_{i=1}^{n} Λ(B_i), whereΛ{\displaystyle \textstyle \Lambda }is the intensity measure or first moment measure ofN{\displaystyle \textstyle {N}}, which for some Borel setB{\displaystyle \textstyle B}is given by Λ(B) = E[N(B)]. For a homogeneous Poisson point process then{\displaystyle \textstyle n}-th factorial moment measure is simply:[18][19] λ^n ∏_{i=1}^{n} |B_i|, where|Bi|{\displaystyle \textstyle |B_{i}|}is the length, area, or volume (or more generally, theLebesgue measure) ofBi{\displaystyle \textstyle B_{i}}.
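The homogeneous first-moment identity E[N(B)] = λ|B| is easy to check by simulation. Below is a minimal Python sketch; the intensity, window, and trial count are illustrative choices, not values from the text:

```python
import random

def poisson_process(lam, t_max, rng):
    """Points of a homogeneous Poisson process on [0, t_max] (exponential gaps)."""
    pts, t = [], rng.expovariate(lam)
    while t <= t_max:
        pts.append(t)
        t += rng.expovariate(lam)
    return pts

rng = random.Random(0)
lam, a, b = 2.0, 2.0, 5.0          # intensity, and the window B = [a, b]
trials = 20000
mean_count = sum(
    sum(1 for x in poisson_process(lam, 10.0, rng) if a <= x <= b)
    for _ in range(trials)
) / trials
print(mean_count)  # close to lam * (b - a) = 6, i.e. E[N(B)] = λ|B|
```

The same helper is reused in the later sketches; it simulates the process on an interval via its i.i.d. exponential interarrival gaps.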
Furthermore, then{\displaystyle \textstyle n}-th factorial moment density is:[124] ∏_{i=1}^{n} λ(x_i), whenΛ{\displaystyle \textstyle \Lambda }has an intensity functionλ(x). Theavoidance function[69]orvoid probability[118]v{\displaystyle \textstyle v}of a point processN{\displaystyle \textstyle {N}}is defined in relation to some setB{\displaystyle \textstyle B}, which is a subset of the underlying spaceRd{\displaystyle \textstyle \mathbb {R} ^{d}}, as the probability of no points ofN{\displaystyle \textstyle {N}}existing inB{\displaystyle \textstyle B}. More precisely,[125]for a test setB{\displaystyle \textstyle B}, the avoidance function is given by: v(B) = P(N(B) = 0). For a general Poisson point processN{\displaystyle \textstyle {N}}with intensity measureΛ{\displaystyle \textstyle \Lambda }, its avoidance function is given by: v(B) = e^{−Λ(B)}. Simple point processes are completely characterized by their void probabilities.[126]In other words, complete information of a simple point process is captured entirely in its void probabilities, and two simple point processes have the same void probabilities if and only if they are the same point processes. The case for the Poisson process is sometimes known asRényi's theorem, which is named afterAlfréd Rényiwho discovered the result for the case of a homogeneous point process in one-dimension.[127] In one form,[127]Rényi's theorem says that, for a diffuse (or non-atomic) Radon measureΛ{\displaystyle \textstyle \Lambda }onRd{\displaystyle \textstyle \mathbb {R} ^{d}}and a setA{\displaystyle \textstyle A}that is a finite union of rectangles (so not an arbitrary Borel set[d]), ifN{\displaystyle \textstyle N}is a countable subset ofRd{\displaystyle \textstyle \mathbb {R} ^{d}}such that: P(N(A) = 0) = e^{−Λ(A)} for all suchA, thenN{\displaystyle \textstyle {N}}is a Poisson point process with intensity measureΛ{\displaystyle \textstyle \Lambda }. Mathematical operations can be performed on point processes to get new point processes and develop new mathematical models for the locations of certain objects.
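The Poisson void probability e^{−Λ(B)} can likewise be checked empirically. A minimal sketch with illustrative parameters:

```python
import math
import random

def poisson_process(lam, t_max, rng):
    """Points of a homogeneous Poisson process on [0, t_max] (exponential gaps)."""
    pts, t = [], rng.expovariate(lam)
    while t <= t_max:
        pts.append(t)
        t += rng.expovariate(lam)
    return pts

rng = random.Random(1)
lam, t_max = 1.5, 1.0              # B = [0, 1], so Λ(B) = 1.5
trials = 50000
empty = sum(1 for _ in range(trials) if not poisson_process(lam, t_max, rng))
empirical = empty / trials
theoretical = math.exp(-lam * t_max)   # v(B) = e^{-Λ(B)} ≈ 0.2231
print(empirical, theoretical)
```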
One example of an operation is known as thinning which entails deleting or removing the points of some point process according to a rule, creating a new process with the remaining points (the deleted points also form a point process).[129] For the Poisson process, the independentp(x){\displaystyle \textstyle p(x)}-thinning operation results in another Poisson point process. More specifically, ap(x){\displaystyle \textstyle p(x)}-thinning operation applied to a Poisson point process with intensity measureΛ{\displaystyle \textstyle \Lambda }gives a point process of removed points that is also a Poisson point processNp{\displaystyle \textstyle {N}_{p}}with intensity measureΛp{\displaystyle \textstyle \Lambda _{p}}, which for a bounded Borel setB{\displaystyle \textstyle B}is given by: Λ_p(B) = ∫_B p(x) Λ(dx), wherep(x){\displaystyle \textstyle p(x)}is the probability of removing a point located atx{\displaystyle \textstyle x}. This thinning result of the Poisson point process is sometimes known asPrekopa's theorem.[130]Furthermore, after randomly thinning a Poisson point process, the kept or remaining points also form a Poisson point process, which has the intensity measure Λ(B) − Λ_p(B) = ∫_B (1 − p(x)) Λ(dx). The two separate Poisson point processes formed respectively from the removed and kept points are stochastically independent of each other.[129]In other words, if a region is known to containn{\displaystyle \textstyle n}kept points (from the original Poisson point process), then this will have no influence on the random number of removed points in the same region. This ability to randomly create two independent Poisson point processes from one is sometimes known assplitting[131][132]the Poisson point process. If there is a countable collection of point processesN1,N2,…{\displaystyle \textstyle N_{1},N_{2},\dots }, then their superposition, or, in set theory language, their union,[133] N = ⋃_{i} N_i, also forms a point process. In other words, any points located in any of the point processesN1,N2…{\displaystyle \textstyle N_{1},N_{2}\dots }will also be located in the superposition of these point processesN{\displaystyle \textstyle {N}}.
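The splitting property can be illustrated with a short simulation: the removed and kept points have mean counts p·λ|B| and (1 − p)·λ|B| respectively. The parameter values here are illustrative:

```python
import random

def poisson_process(lam, t_max, rng):
    """Points of a homogeneous Poisson process on [0, t_max] (exponential gaps)."""
    pts, t = [], rng.expovariate(lam)
    while t <= t_max:
        pts.append(t)
        t += rng.expovariate(lam)
    return pts

rng = random.Random(2)
lam, t_max, p = 3.0, 1.0, 0.4      # remove each point independently with probability p
trials = 20000
removed = kept = 0
for _ in range(trials):
    for x in poisson_process(lam, t_max, rng):
        if rng.random() < p:
            removed += 1
        else:
            kept += 1
mean_removed = removed / trials    # ≈ p * lam = 1.2 (intensity of removed points)
mean_kept = kept / trials          # ≈ (1 - p) * lam = 1.8 (intensity of kept points)
print(mean_removed, mean_kept)
```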
Thesuperposition theoremof the Poisson point process says that the superposition of independent Poisson point processesN1,N2…{\displaystyle \textstyle N_{1},N_{2}\dots }with mean measuresΛ1,Λ2,…{\displaystyle \textstyle \Lambda _{1},\Lambda _{2},\dots }will also be a Poisson point process with mean measure[134][89] Λ = Λ_1 + Λ_2 + ⋯. In other words, the union of two (or countably more) Poisson processes is another Poisson process. If a pointx{\textstyle x}is sampled from the union ofn{\textstyle n}independent Poisson processes, then the probability that the pointx{\displaystyle \textstyle x}belongs to thej{\textstyle j}th Poisson processNj{\textstyle N_{j}}is given by: For two homogeneous Poisson processes with intensitiesλ1{\textstyle \lambda _{1}}andλ2{\textstyle \lambda _{2}}, the two previous expressions reduce to λ_1 + λ_2 and λ_j/(λ_1 + λ_2). The operation of clustering is performed when each pointx{\displaystyle \textstyle x}of some point processN{\displaystyle \textstyle {N}}is replaced by another (possibly different) point process. If the original processN{\displaystyle \textstyle {N}}is a Poisson point process, then the resulting processNc{\displaystyle \textstyle {N}_{c}}is called a Poisson cluster point process. A mathematical model may require randomly moving points of a point process to other locations on the underlying mathematical space, which gives rise to a point process operation known as displacement[135]or translation.[136]The Poisson point process has been used to model, for example, the movement of plants between generations, owing to the displacement theorem,[135]which loosely says that the random independent displacement of points of a Poisson point process (on the same underlying space) forms another Poisson point process. One version of the displacement theorem[135]involves a Poisson point processN{\displaystyle \textstyle {N}}onRd{\displaystyle \textstyle \mathbb {R} ^{d}}with intensity functionλ(x){\displaystyle \textstyle \lambda (x)}.
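The superposition theorem can be sketched numerically: merging two independent homogeneous processes gives mean count λ1 + λ2, and a point of the union comes from the first process with probability λ1/(λ1 + λ2). The values below are illustrative:

```python
import random

def poisson_process(lam, t_max, rng):
    """Points of a homogeneous Poisson process on [0, t_max] (exponential gaps)."""
    pts, t = [], rng.expovariate(lam)
    while t <= t_max:
        pts.append(t)
        t += rng.expovariate(lam)
    return pts

rng = random.Random(3)
l1, l2, t_max = 1.0, 2.0, 1.0
trials = 20000
total = from_first = 0
for _ in range(trials):
    n1 = len(poisson_process(l1, t_max, rng))
    n2 = len(poisson_process(l2, t_max, rng))
    total += n1 + n2                 # counts of the union (superposition)
    from_first += n1
mean_union = total / trials          # ≈ l1 + l2 = 3
frac_first = from_first / total      # ≈ l1 / (l1 + l2) = 1/3
print(mean_union, frac_first)
```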
It is then assumed the points ofN{\displaystyle \textstyle {N}}are randomly displaced somewhere else inRd{\displaystyle \textstyle \mathbb {R} ^{d}}so that each point's displacement is independent and that the displacement of a point formerly atx{\displaystyle \textstyle x}is a random vector with a probability densityρ(x,⋅){\displaystyle \textstyle \rho (x,\cdot )}.[e]Then the new point processND{\displaystyle \textstyle N_{D}}is also a Poisson point process with intensity function λ_D(y) = ∫_{Rd} λ(x) ρ(x, y) dx. If the Poisson process is homogeneous withλ(x)=λ>0{\displaystyle \textstyle \lambda (x)=\lambda >0}and ifρ(x,y){\displaystyle \rho (x,y)}is a function ofy−x{\displaystyle y-x}, then λ_D(y) = λ. In other words, after each random and independent displacement of points, the original Poisson point process still exists. The displacement theorem can be extended such that the Poisson points are randomly displaced from one Euclidean spaceRd{\displaystyle \textstyle \mathbb {R} ^{d}}to another Euclidean spaceRd′{\displaystyle \textstyle \mathbb {R} ^{d'}}, whered′≥1{\displaystyle \textstyle d'\geq 1}is not necessarily equal tod{\displaystyle \textstyle d}.[18] Another property that is considered useful is the ability to map a Poisson point process from one underlying space to another space.[137] If the mapping (or transformation) adheres to some conditions, then the resulting mapped (or transformed) collection of points also forms a Poisson point process, and this result is sometimes referred to as themapping theorem.[137][138]The theorem involves some Poisson point process with mean measureΛ{\displaystyle \textstyle \Lambda }on some underlying space. If the locations of the points are mapped (that is, the point process is transformed) according to some function to another underlying space, then the resulting point process is also a Poisson point process but with a different mean measureΛ′{\displaystyle \textstyle \Lambda '}.
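A quick simulation of the displacement theorem: after independent Gaussian displacements, counts in a window away from the boundary still have mean λ|B|. The Gaussian displacement kernel and all numbers are illustrative choices:

```python
import random

def poisson_process(lam, t_max, rng):
    """Points of a homogeneous Poisson process on [0, t_max] (exponential gaps)."""
    pts, t = [], rng.expovariate(lam)
    while t <= t_max:
        pts.append(t)
        t += rng.expovariate(lam)
    return pts

rng = random.Random(4)
lam, sigma = 2.0, 0.5
trials = 20000
total = 0
for _ in range(trials):
    # displace every point independently by a N(0, sigma^2) random amount
    displaced = [x + rng.gauss(0.0, sigma) for x in poisson_process(lam, 10.0, rng)]
    total += sum(1 for y in displaced if 3.0 <= y <= 7.0)
mean_count = total / trials
print(mean_count)  # ≈ lam * 4 = 8: the interior window keeps intensity λ
```

The window [3, 7] sits several standard deviations inside [0, 10], so boundary effects are negligible in this sketch.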
More specifically, one can consider a (Borel measurable) functionf{\displaystyle \textstyle f}that maps a point processN{\displaystyle \textstyle {N}}with intensity measureΛ{\displaystyle \textstyle \Lambda }from one spaceS{\displaystyle \textstyle S}, to another spaceT{\displaystyle \textstyle T}in such a way that the new point processN′{\displaystyle \textstyle {N}'}has the intensity measure: Λ′(B) = Λ(f^{−1}(B)) with no atoms, whereB{\displaystyle \textstyle B}is a Borel set andf−1{\displaystyle \textstyle f^{-1}}denotes the inverse of the functionf{\displaystyle \textstyle f}. IfN{\displaystyle \textstyle {N}}is a Poisson point process, then the new processN′{\displaystyle \textstyle {N}'}is also a Poisson point process with the intensity measureΛ′{\displaystyle \textstyle \Lambda '}. The tractability of the Poisson process means that sometimes it is convenient to approximate a non-Poisson point process with a Poisson one. The overall aim is to approximate both the number of points of some point process and the location of each point by a Poisson point process.[139]There are a number of methods that can be used to justify, informally or rigorously, approximating the occurrence of random events or phenomena with suitable Poisson point processes. The more rigorous methods involve deriving upper bounds on the probability metrics between the Poisson and non-Poisson point processes, while other methods can be justified by less formal heuristics.[140] One method for approximating random events or phenomena with Poisson processes is called theclumping heuristic.[141]The general heuristic or principle involves using the Poisson point process (or Poisson distribution) to approximate events, which are considered rare or unlikely, of some stochastic process. In some cases these rare events are close to being independent, hence a Poisson point process can be used.
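A sketch of the mapping theorem with the illustrative map f(x) = x²: counts of the mapped process in [0, b] have mean Λ(f⁻¹([0, b])). All parameters are illustrative:

```python
import random

def poisson_process(lam, t_max, rng):
    """Points of a homogeneous Poisson process on [0, t_max] (exponential gaps)."""
    pts, t = [], rng.expovariate(lam)
    while t <= t_max:
        pts.append(t)
        t += rng.expovariate(lam)
    return pts

rng = random.Random(5)
lam = 4.0
trials = 20000
total = 0
for _ in range(trials):
    mapped = [x * x for x in poisson_process(lam, 1.0, rng)]  # the map f(x) = x^2
    total += sum(1 for y in mapped if y <= 0.25)
mean_count = total / trials
# Λ'([0, 0.25]) = Λ(f^{-1}([0, 0.25])) = Λ([0, 0.5]) = lam * 0.5 = 2
print(mean_count)
```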
When the events are not independent, but tend to occur in clusters orclumps, then if these clumps are suitably defined such that they are approximately independent of each other, then the number of clumps occurring will be close to a Poisson random variable[140]and the locations of the clumps will be close to a Poisson process.[141] Stein's methodis a mathematical technique originally developed for approximating random variables such asGaussianand Poisson variables, which has also been applied to point processes. Stein's method can be used to derive upper bounds onprobability metrics, which provide a way to quantify how much two random mathematical objects differ stochastically.[139][142]Upper bounds on probability metrics such astotal variationandWasserstein distancehave been derived.[139] Researchers have applied Stein's method to Poisson point processes in a number of ways,[139]such as usingPalm calculus.[108]Techniques based on Stein's method have been developed to incorporate into the upper bounds the effects of certainpoint process operationssuch as thinning and superposition.[143][144]Stein's method has also been used to derive upper bounds on metrics of Poisson and other processes such as theCox point process, which is a Poisson process with a random intensity measure.[139] In general, when an operation is applied to a general point process the resulting process is usually not a Poisson point process. For example, if a point process other than a Poisson process has its points randomly and independently displaced, then the resulting process would not necessarily be a Poisson point process.
However, under certain mathematical conditions for both the original point process and the random displacement, it has been shown via limit theorems that if the points of a point process are repeatedly displaced in a random and independent manner, then the finite-dimensional distributions of the point process will converge (weakly) to those of a Poisson point process.[145] Similar convergence results have been developed for thinning and superposition operations[145]that show that such repeated operations on point processes can, under certain conditions, result in the process converging to a Poisson point process, provided a suitable rescaling of the intensity measure (otherwise values of the intensity measure of the resulting point processes would approach zero or infinity). Such convergence work is directly related to the results known as the Palm–Khinchin[f]equations, which have their origins in the work ofConny PalmandAleksandr Khinchin,[146]and help explain why the Poisson process can often be used as a mathematical model of various random phenomena.[145] The Poisson point process can be generalized by, for example, changing its intensity measure or defining it on more general mathematical spaces. These generalizations can be studied mathematically as well as used to mathematically model or represent physical phenomena. ThePoisson-type random measures(PT) are a family of three random counting measures which are closed under restriction to a subspace, i.e. closed under thinning. These random measures are examples of themixed binomial processand share the distributional self-similarity property of thePoisson random measure. They are the only members of the canonical non-negativepower seriesfamily of distributions to possess this property and include thePoisson distribution,negative binomial distribution, andbinomial distribution.
The Poisson random measure has independent values on disjoint subspaces, whereas the other PT random measures (negative binomial and binomial) have positive and negative covariances. The PT random measures are discussed in[147]and include thePoisson random measure, the negative binomial random measure, and the binomial random measure. For mathematical models the Poisson point process is often defined in Euclidean space,[1][36]but has been generalized to more abstract spaces and plays a fundamental role in the study of random measures,[148][149]which requires an understanding of mathematical fields such as probability theory, measure theory and topology.[150] In general, the concept of distance is of practical interest for applications, while topological structure is needed for Palm distributions, meaning that point processes are usually defined on mathematical spaces with metrics.[151]Furthermore, a realization of a point process can be considered as a counting measure, so point processes are types of random measures known as random counting measures.[115]In this context, the Poisson and other point processes have been studied on a locally compact second countable Hausdorff space.[152] ACox point process,Cox processordoubly stochastic Poisson processis a generalization of the Poisson point process obtained by letting its intensity measureΛ{\displaystyle \textstyle \Lambda }itself be random and independent of the underlying Poisson process. The process is named afterDavid Coxwho introduced it in 1955, though other Poisson processes with random intensities had been independently introduced earlier by Lucien Le Cam and Maurice Quenouille.[3]The intensity measure may be a realization of a random variable or a random field. For example, if thelogarithmof the intensity measure is aGaussian random field, then the resulting process is known as alog Gaussian Cox process.[153]More generally, the intensity measure is a realization of a non-negative locally finite random measure.
Cox point processes exhibit aclusteringof points, which can be shown mathematically to be stronger than that of Poisson point processes. The generality and tractability of Cox processes have resulted in them being used as models in fields such as spatial statistics[154]and wireless networks.[19] For a given point process, each random point of a point process can have a random mathematical object, known as amark, randomly assigned to it. These marks can be as diverse as integers, real numbers, lines, geometrical objects or other point processes.[155][156]The pair consisting of a point of the point process and its corresponding mark is called a marked point, and all the marked points form amarked point process.[157]It is often assumed that the random marks are independent of each other and identically distributed, yet the mark of a point can still depend on the location of its corresponding point in the underlying (state) space.[158]If the underlying point process is a Poisson point process, then the resulting point process is amarked Poisson point process.[159] If a general point process is defined on somemathematical spaceand the random marks are defined on another mathematical space, then the marked point process is defined on theCartesian productof these two spaces. For a marked Poisson point process with independent and identically distributed marks, themarking theorem[158][160]states that this marked point process is also a (non-marked) Poisson point process defined on the aforementioned Cartesian product of the two mathematical spaces, which is not true for general point processes. Thecompound Poisson point processorcompound Poisson processis formed by adding random values or weights to each point of a Poisson point process defined on some underlying space, so the process is constructed from a marked Poisson point process, where the marks form a collection ofindependent and identically distributednon-negative random variables.
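The stronger clustering of Cox processes shows up as overdispersion: the count variance exceeds the mean, unlike a Poisson process, where they are equal. A sketch with an exponentially distributed random intensity (an illustrative choice of mixing distribution):

```python
import random

def poisson_process(lam, t_max, rng):
    """Points of a homogeneous Poisson process on [0, t_max] (exponential gaps)."""
    pts, t = [], rng.expovariate(lam)
    while t <= t_max:
        pts.append(t)
        t += rng.expovariate(lam)
    return pts

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(9)
mean_lam, trials = 2.0, 20000
# Cox: draw a random intensity per realization, then sample a Poisson process with it.
cox = [len(poisson_process(rng.expovariate(1.0 / mean_lam), 1.0, rng))
       for _ in range(trials)]
poi = [len(poisson_process(mean_lam, 1.0, rng)) for _ in range(trials)]
print(sum(cox) / trials, var(cox))  # mean ≈ 2, variance ≈ 6: overdispersed
print(sum(poi) / trials, var(poi))  # mean ≈ 2, variance ≈ 2: Poisson
```

With an Exp-distributed intensity the counts are geometric, so the variance is E[Λ] + Var(Λ) = 2 + 4 = 6, three times the Poisson value.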
In other words, for each point of the original Poisson process, there is an independent and identically distributed non-negative random variable, and then the compound Poisson process is formed from the sum of all the random variables corresponding to points of the Poisson process located in some region of the underlying mathematical space.[161] If there is a marked Poisson point process formed from a Poisson point processN{\displaystyle \textstyle N}(defined on, for example,Rd{\displaystyle \textstyle \mathbb {R} ^{d}}) and a collection of independent and identically distributed non-negative marks{Mi}{\displaystyle \textstyle \{M_{i}\}}such that for each pointxi{\displaystyle \textstyle x_{i}}of the Poisson processN{\displaystyle \textstyle N}there is a non-negative random variableMi{\displaystyle \textstyle M_{i}}, the resulting compound Poisson process is then:[162] C(B) = ∑_{i=1}^{N(B)} M_i, whereB⊂Rd{\displaystyle \textstyle B\subset \mathbb {R} ^{d}}is a Borel measurable set. If general random variables{Mi}{\displaystyle \textstyle \{M_{i}\}}take values in, for example,d{\displaystyle \textstyle d}-dimensional Euclidean spaceRd{\displaystyle \textstyle \mathbb {R} ^{d}}, the resulting compound Poisson process is an example of aLévy processprovided that it is formed from a homogeneous Poisson point processN{\displaystyle \textstyle N}defined on the non-negative numbers[0,∞){\displaystyle \textstyle [0,\infty )}.[163] The failure process with the exponential smoothing of intensity functions (FP-ESI) is an extension of the nonhomogeneous Poisson process. The intensity function of an FP-ESI is an exponential smoothing function of the intensity functions at the last time points of event occurrences. The FP-ESI model outperforms nine other stochastic processes on eight real-world failure datasets when the models are used to fit the datasets,[164]where the model performance is measured in terms of AIC (Akaike information criterion) and BIC (Bayesian information criterion).
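A compound Poisson process sketch: with i.i.d. Exp(1) marks, the sum over B has mean Λ(B)·E[M]. All parameter values are illustrative:

```python
import random

def poisson_process(lam, t_max, rng):
    """Points of a homogeneous Poisson process on [0, t_max] (exponential gaps)."""
    pts, t = [], rng.expovariate(lam)
    while t <= t_max:
        pts.append(t)
        t += rng.expovariate(lam)
    return pts

rng = random.Random(6)
lam, t_max = 2.0, 3.0              # Λ(B) = 6 for B = [0, 3]
trials = 20000
total = 0.0
for _ in range(trials):
    pts = poisson_process(lam, t_max, rng)
    # C(B) = sum of i.i.d. Exp(1) marks, one per point in B
    total += sum(rng.expovariate(1.0) for _ in pts)
mean_sum = total / trials
print(mean_sum)  # ≈ Λ(B) * E[M] = 6 * 1 = 6
```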
https://en.wikipedia.org/wiki/Poisson_process
In the theory of finitepopulationsampling, asampling designspecifies for every possiblesampleitsprobabilityof being drawn. Mathematically, a sampling design is denoted by the functionP(S){\displaystyle P(S)}which gives the probability of drawing a sampleS.{\displaystyle S.} DuringBernoulli sampling,P(S){\displaystyle P(S)}is given by P(S) = q^{N_sample(S)} (1 − q)^{N_pop − N_sample(S)}, whereq{\displaystyle q}is the probability of each element being included in the sample,Nsample(S){\displaystyle N_{\text{sample}}(S)}is the total number of elements in the sampleS{\displaystyle S}andNpop{\displaystyle N_{\text{pop}}}is the total number of elements in the population (before sampling commenced). In business research, companies must often generate samples of customers, clients, employees, and so forth to gather their opinions. Sample design is also a critical component of marketing research and employee research for many organizations. During sample design, firms must answer questions such as: These issues require very careful consideration, and good commentaries are provided in several sources.[1][2]
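The Bernoulli design probability can be sketched in a few lines; summing P(S) over all possible samples (grouped by sample size) must give 1. The helper name and parameter values are illustrative:

```python
import math

def bernoulli_design_probability(q, n_sample, n_pop):
    """P(S) under Bernoulli sampling: each of the n_pop elements enters the
    sample independently with probability q, so one specific sample with
    n_sample elements has probability q**n_sample * (1-q)**(n_pop - n_sample)."""
    return q ** n_sample * (1.0 - q) ** (n_pop - n_sample)

n_pop, q = 5, 0.3
# There are C(n_pop, k) distinct samples of size k; total probability must be 1.
total = sum(
    math.comb(n_pop, k) * bernoulli_design_probability(q, k, n_pop)
    for k in range(n_pop + 1)
)
print(total)  # 1.0 (up to floating-point rounding), by the binomial theorem
```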
https://en.wikipedia.org/wiki/Sampling_design
Inprobability theoryandstatistics,Campbell's theoremor theCampbell–Hardy theoremis either a particularequationor set of results relating to theexpectationof afunctionsummed over apoint processto anintegralinvolving themean measureof the point process, which allows for the calculation ofexpected valueandvarianceof therandomsum. One version of the theorem,[1]also known asCampbell's formula,[2]: 28entails an integral equation for the aforementioned sum over a general point process, and not necessarily a Poisson point process.[2]There also exist equations involvingmoment measuresandfactorial moment measuresthat are considered versions of Campbell's formula. All these results are employed in probability and statistics with a particular importance in the theory ofpoint processes[3]andqueueing theory[4]as well as the related fieldsstochastic geometry,[1]continuum percolation theory,[5]andspatial statistics.[2][6] Another result by the name of Campbell's theorem[7]is specifically for thePoisson point processand gives a method for calculatingmomentsas well as theLaplace functionalof a Poisson point process. The name of both theorems stems from the work[8][9]byNorman R. Campbellonthermionicnoise, also known asshot noise, invacuum tubes,[3][10]which was partly inspired by the work ofErnest RutherfordandHans Geigeronalpha particledetection, where thePoisson point processarose as a solution to a family of differential equations byHarry Bateman.[10]In Campbell's work, he presents the moments andgenerating functionsof the random sum of a Poisson process on the real line, but remarks that the main mathematical argument was due toG. H. 
Hardy, which has inspired the result to be sometimes called theCampbell–Hardy theorem.[10][11] For a point processN{\displaystyle N}defined ond-dimensionalEuclidean spaceRd{\displaystyle {\textbf {R}}^{d}},[a]Campbell's theorem offers a way to calculate expectations of a real-valued functionf{\displaystyle f}defined also onRd{\displaystyle {\textbf {R}}^{d}}and summed overN{\displaystyle N}, namely: E[∑_{x∈N} f(x)], whereE{\displaystyle E}denotes the expectation and set notation is used such thatN{\displaystyle N}is considered as a random set (seePoint process notation). For a point processN{\displaystyle N}, Campbell's theorem relates the above expectation with the intensity measureΛ{\displaystyle \Lambda }. In relation to aBorel setB, the intensity measure ofN{\displaystyle N}is defined as: Λ(B) = E[N(B)], where themeasurenotation is used such thatN{\displaystyle N}is considered a randomcounting measure. The quantityΛ(B){\displaystyle \Lambda (B)}can be interpreted as the average number of points of the point processN{\displaystyle N}located in the setB. One version of Campbell's theorem for a general (not necessarily simple) point processN{\displaystyle N}with intensity measureΛ{\displaystyle \Lambda }is known asCampbell's formula[2]orCampbell's theorem,[1][12][13]which gives a method for calculating expectations of sums ofmeasurable functionsf{\displaystyle f}withrangeson thereal line.
More specifically, for a point processN{\displaystyle N}and a measurable functionf:Rd→R{\displaystyle f:{\textbf {R}}^{d}\rightarrow {\textbf {R}}}, the sum off{\displaystyle f}over the point process is given by the equation: E[∑_{x∈N} f(x)] = ∫_{Rd} f(x) Λ(dx), where if one side of the equation is finite, then so is the other side.[14]This equation is essentially an application ofFubini's theorem[1]and it holds for a wide class of point processes, simple or not.[2]Depending on the integral notation,[b]this integral may also be written as:[14] E[∫_{Rd} f(x) N(dx)] = ∫_{Rd} f(x) Λ(dx). If the intensity measureΛ{\displaystyle \Lambda }of a point processN{\displaystyle N}has a densityλ(x){\displaystyle \lambda (x)}, then Campbell's formula becomes: E[∑_{x∈N} f(x)] = ∫_{Rd} f(x) λ(x) dx. For a stationary point processN{\displaystyle N}with constant densityλ>0{\displaystyle \lambda >0},Campbell's theoremorformulareduces to a volume integral: E[∑_{x∈N} f(x)] = λ ∫_{Rd} f(x) dx. This equation naturally holds for the homogeneous Poisson point processes, which is an example of astationary stochastic process.[1] Campbell's theorem for general point processes gives a method for calculating the expectation of a function of a point (of a point process) summed over all the points in the point process. These random sums over point processes have applications in many areas where they are used as mathematical models. Campbell originally studied a problem of random sums motivated by understanding thermionic noise in valves, which is also known as shot-noise. Consequently, the study of random sums of functions over point processes is known as shot noise in probability and, particularly, point process theory. In wireless network communication, when a transmitter is trying to send a signal to a receiver, all the other transmitters in the network can be considered as interference, which poses a similar problem as noise does in traditional wired telecommunication networks in terms of the ability to send data based on information theory.
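Campbell's formula is straightforward to verify numerically for a homogeneous process with f(x) = x on [0, 1], where λ∫f = λ/2. The values below are illustrative:

```python
import random

def poisson_process(lam, t_max, rng):
    """Points of a homogeneous Poisson process on [0, t_max] (exponential gaps)."""
    pts, t = [], rng.expovariate(lam)
    while t <= t_max:
        pts.append(t)
        t += rng.expovariate(lam)
    return pts

rng = random.Random(7)
lam, trials = 3.0, 20000
# f(x) = x, so the random sum is just the sum of the point locations;
# Campbell's formula gives E[sum] = lam * integral of x over [0, 1] = lam / 2.
total = sum(sum(poisson_process(lam, 1.0, rng)) for _ in range(trials))
mean_sum = total / trials
print(mean_sum)  # ≈ 1.5
```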
If the positioning of the interfering transmitters is assumed to form some point process, then shot noise can be used to model the sum of their interfering signals, which has led to stochastic geometry models of wireless networks.[15] The total input in neurons is the sum of many synaptic inputs with similar time courses. When the inputs are modeled as independent Poisson point processes, the mean current and its variance are given by Campbell's theorem. A common extension is to consider a sum with random amplitudes. In this case the cumulantsκi{\displaystyle \kappa _{i}}ofS{\displaystyle S}equal whereai¯{\displaystyle {\overline {a^{i}}}}are the raw moments of the distribution ofa{\displaystyle a}.[16] For general point processes, other more general versions of Campbell's theorem exist depending on the nature of the random sum and in particular the function being summed over the point process. If the function is a function of more than one point of the point process, themoment measuresorfactorial moment measuresof the point process are needed, which can be compared to the moments and factorial moments of random variables. The type of measure needed depends on whether the points of the point process in the random sum need to be distinct or may repeat. Moment measures are used when points are allowed to repeat. Factorial moment measures are used when points are not allowed to repeat, hence points are distinct. For general point processes, Campbell's theorem is only for sums of functions of a single point of the point process. To calculate the expectation of a sum of functions depending on both a single point and the entire point process, generalized Campbell's theorems are required using the Palm distribution of the point process, which is based on the branch of probability known as Palm theory orPalm calculus.
Another version of Campbell's theorem[7]says that for a Poisson point processN{\displaystyle N}with intensity measureΛ{\displaystyle \Lambda }and a measurable functionf:Rd→R{\displaystyle f:{\textbf {R}}^{d}\rightarrow {\textbf {R}}}, the random sum S = ∑_{x∈N} f(x) isabsolutely convergentwithprobability oneif and only ifthe integral ∫_{Rd} min(|f(x)|, 1) Λ(dx) is finite. Provided that this integral is finite, then the theorem further asserts that for anycomplexvalueθ{\displaystyle \theta }the equation E[e^{θS}] = exp(∫_{Rd} (e^{θf(x)} − 1) Λ(dx)) holds if the integral on the right-hand sideconverges, which is the case for purelyimaginaryθ{\displaystyle \theta }. Moreover, E[S] = ∫_{Rd} f(x) Λ(dx), and if this integral converges, then Var(S) = ∫_{Rd} f(x)^2 Λ(dx), whereVar(S){\displaystyle {\text{Var}}(S)}denotes thevarianceof the random sumS{\displaystyle S}. From this theorem some expectation results for thePoisson point processfollow, including itsLaplace functional.[7][c] For a Poisson point processN{\displaystyle N}with intensity measureΛ{\displaystyle \Lambda }, theLaplace functionalis a consequence of the above version of Campbell's theorem[7]and is given by:[15] L_N(f) = E[e^{−∑_{x∈N} f(x)}] = exp(−∫_{Rd} (1 − e^{−f(x)}) Λ(dx)), which for the homogeneous case is: L_N(f) = exp(−λ ∫_{Rd} (1 − e^{−f(x)}) dx).
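The mean and variance formulas of this version of Campbell's theorem can be checked together: with f(x) = x on [0, 1] and intensity λ, the theorem gives E[S] = λ/2 and Var(S) = λ/3. The values below are illustrative:

```python
import random

def poisson_process(lam, t_max, rng):
    """Points of a homogeneous Poisson process on [0, t_max] (exponential gaps)."""
    pts, t = [], rng.expovariate(lam)
    while t <= t_max:
        pts.append(t)
        t += rng.expovariate(lam)
    return pts

rng = random.Random(8)
lam, trials = 3.0, 40000
# S = sum of point locations (f(x) = x on [0, 1])
sums = [sum(poisson_process(lam, 1.0, rng)) for _ in range(trials)]
mean = sum(sums) / trials
var = sum((s - mean) ** 2 for s in sums) / trials
print(mean)  # ≈ lam * ∫ x  dx over [0, 1] = 1.5
print(var)   # ≈ lam * ∫ x² dx over [0, 1] = 1.0
```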
https://en.wikipedia.org/wiki/Campbell%27s_theorem_(probability)
Acontinuous-time Markov chain(CTMC) is a continuousstochastic processin which, for each state, the process will change state according to anexponential random variableand then move to a different state as specified by the probabilities of astochastic matrix. An equivalent formulation describes the process as changing state according to the least value of a set of exponential random variables, one for each possible state it can move to, with the parameters determined by the current state. An example of a CTMC with three states{0,1,2}{\displaystyle \{0,1,2\}}is as follows: the process makes a transition after the amount of time specified by theholding time—an exponential random variableEi{\displaystyle E_{i}}, whereiis its current state. Each random variable is independent and such thatE0∼Exp(6){\displaystyle E_{0}\sim {\text{Exp}}(6)},E1∼Exp(12){\displaystyle E_{1}\sim {\text{Exp}}(12)}andE2∼Exp(18){\displaystyle E_{2}\sim {\text{Exp}}(18)}. When a transition is to be made, the process moves according to thejump chain, adiscrete-time Markov chainwith stochastic matrix: Equivalently, by the property ofcompeting exponentials, this CTMC changes state from stateiaccording to the minimum of two random variables, which are independent and such thatEi,j∼Exp(qi,j){\displaystyle E_{i,j}\sim {\text{Exp}}(q_{i,j})}fori≠j{\displaystyle i\neq j}where the parameters are given by theQ-matrixQ=(qi,j){\displaystyle Q=(q_{i,j})} Each non-diagonal entryqi,j{\displaystyle q_{i,j}}can be computed as the probability that the jump chain moves from stateito statej, divided by the expected holding time of statei. The diagonal entries are chosen so that each row sums to 0. A CTMC satisfies theMarkov property, that its behavior depends only on its current state and not on its past behavior, due to the memorylessness of the exponential distribution and of discrete-time Markov chains. 
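The jump-chain-plus-holding-times description translates directly into a simulator. The holding rates below are the example's (6, 12, 18); the jump-chain matrix is a hypothetical stand-in, since the example's concrete matrix is not reproduced here:

```python
import random

RATES = [6.0, 12.0, 18.0]          # holding-time rates: E_i ~ Exp(RATES[i])
# Hypothetical jump-chain matrix for illustration only:
# zero diagonal, each row summing to 1.
JUMP = [[0.0, 0.5, 0.5],
        [0.5, 0.0, 0.5],
        [0.5, 0.5, 0.0]]

def simulate_ctmc(t_max, state, rng):
    """Return the (jump time, state) pairs of the chain up to time t_max."""
    t, path = 0.0, [(0.0, state)]
    while True:
        t += rng.expovariate(RATES[state])      # exponential holding time in `state`
        if t > t_max:
            return path
        r, acc = rng.random(), 0.0
        for j, pr in enumerate(JUMP[state]):    # move according to the jump chain
            acc += pr
            if r < acc:
                state = j
                break
        path.append((t, state))

rng = random.Random(0)
path = simulate_ctmc(1.0, 0, rng)
print(path[:3])
```

Starting from state 0, the expected number of jumps per unit time is large here (the slowest holding time averages 1/6), so a horizon of 1.0 already produces many transitions.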
Let(Ω,A,Pr){\displaystyle (\Omega ,{\cal {A}},\Pr )}be a probability space, letS{\displaystyle S}be a countable nonempty set, and letT=R≥0{\displaystyle T=\mathbb {R} _{\geq 0}}(T{\displaystyle T}for "time"). EquipS{\displaystyle S}with thediscrete metric, so that we can make sense ofright continuityof functionsR≥0→S{\displaystyle \mathbb {R} _{\geq 0}\to S}. A continuous-time Markov chain is defined by:[1] Note that the row sums ofQ{\displaystyle Q}are 0:∀i∈S,∑j∈Sqi,j=0,{\displaystyle \forall i\in S,~\sum _{j\in S}q_{i,j}=0,}or more succinctly,Q⋅1=0{\displaystyle Q\cdot 1=0}. This situation contrasts with the situation fordiscrete-time Markov chains, where all row sums of the transition matrix equal unity. Now, letX:T→SΩ{\displaystyle X:T\to S^{\Omega }}such that∀t∈TX(t){\displaystyle \forall t\in T~X(t)}is(A,P(S)){\displaystyle ({\cal {A}},{\cal {P}}(S))}-measurable. There are three equivalent ways to defineX{\displaystyle X}beingMarkov with initial distributionλ{\displaystyle \lambda }and rate matrixQ{\displaystyle Q}: via transition probabilities or via the jump chain and holding times.[5] As a prelude to a transition-probability definition, we first motivate the definition of aregularrate matrix. We will use the transition-rate matrixQ{\displaystyle Q}to specify the dynamics of the Markov chain by means of generating a collection oftransition matricesP(t){\displaystyle P(t)}onS{\displaystyle S}(t∈R≥0{\displaystyle t\in \mathbb {R} _{\geq 0}}), via the following theorem. 
Existence of solution toKolmogorov backward equations([6])—There existsP∈([0,1]S×S)T{\displaystyle P\in ([0,1]^{S\times S})^{T}}such that for alli,j∈S{\displaystyle i,j\in S}the entry(P(t)i,j)t∈T{\displaystyle (P(t)_{i,j})_{t\in T}}is differentiable andP{\displaystyle P}satisfies theKolmogorov backward equations: We sayQ{\displaystyle Q}isregularto mean that we do have uniqueness for the above system, i.e., that there exists exactly one solution.[7][8]We sayQ{\displaystyle Q}isirregularto meanQ{\displaystyle Q}is not regular. IfS{\displaystyle S}is finite, then there is exactly one solution, namelyP=(etQ)t∈T,{\displaystyle P=(e^{tQ})_{t\in T},}and henceQ{\displaystyle Q}is regular. Otherwise,S{\displaystyle S}is infinite, and there exist irregular transition-rate matrices onS{\displaystyle S}.[a]IfQ{\displaystyle Q}is regular, then for the unique solutionP{\displaystyle P}, for eacht∈T{\displaystyle t\in T},P(t){\displaystyle P(t)}will be astochastic matrix.[6]We will assumeQ{\displaystyle Q}is regular from the beginning of the following subsection up through the end of this section, even though it is conventional[10][11][12]to not include this assumption. (Note for the expert: thus we are not defining continuous-time Markov chains in general but onlynon-explosivecontinuous-time Markov chains.) LetP{\displaystyle P}be the (unique) solution of the system (0). (Uniqueness guaranteed by our assumption thatQ{\displaystyle Q}is regular.) 
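The stochastic-matrix claim above can be watched numerically: integrate the backward equations P′(t) = QP(t), P(0) = I, with explicit Euler steps for a small hypothetical rate matrix (not from the text) and observe that the row sums of P(t) stay at 1. A minimal sketch:

```python
# A small hypothetical rate matrix (each row sums to 0).
Q = [[-2.0,  1.0,  1.0],
     [ 1.0, -3.0,  2.0],
     [ 3.0,  1.0, -4.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Euler steps for the backward equations P'(t) = Q P(t), P(0) = I.
P = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
dt, steps = 1e-4, 10_000           # integrate up to t = 1
for _ in range(steps):
    QP = matmul(Q, P)
    P = [[P[i][j] + dt * QP[i][j] for j in range(3)] for i in range(3)]

row_sums = [sum(row) for row in P]
print(row_sums)                    # each row sum stays at 1: P(t) is stochastic
```

Because Q·1 = 0, the Euler update preserves the row sums exactly (up to floating-point roundoff), mirroring the fact that the exact solution P(t) is a stochastic matrix for each t.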
We sayX{\displaystyle X}isMarkov with initial distributionλ{\displaystyle \lambda }and rate matrixQ{\displaystyle Q}to mean: for any nonnegative integern≥0{\displaystyle n\geq 0}, for allt0,…,tn+1∈T{\displaystyle t_{0},\dots ,t_{n+1}\in T}such thatt0<⋯<tn+1,{\displaystyle t_{0}<\dots <t_{n+1},}for alli0,…,in+1∈S,{\displaystyle i_{0},\dots ,i_{n+1}\in S,} Using induction and the fact that∀A,B∈APr(B)≠0→Pr(A∩B)=Pr(A∣B)Pr(B),{\displaystyle \forall A,B\in {\cal {A}}~~\Pr(B)\neq 0\rightarrow \Pr(A\cap B)=\Pr(A\mid B)\Pr(B),}we can show the equivalence of the above statement containing (1) and the following statement: for alli∈S,Pr(X0=i)=λi{\displaystyle i\in S,~\Pr(X_{0}=i)=\lambda _{i}}and for any nonnegative integern≥0{\displaystyle n\geq 0}, for allt0,…,tn+1∈T{\displaystyle t_{0},\dots ,t_{n+1}\in T}such thatt0<⋯<tn+1,{\displaystyle t_{0}<\dots <t_{n+1},}for alli0,…,in+1∈S{\displaystyle i_{0},\dots ,i_{n+1}\in S}such that0<Pr(Xt0=i0,…,Xtn=in){\displaystyle 0<\Pr(X_{t_{0}}=i_{0},\dots ,X_{t_{n}}=i_{n})}(it follows that0<Pr(Xtn=in){\displaystyle 0<\Pr(X_{t_{n}}=i_{n})}), It follows from continuity of the functions(P(t)i,j)t∈T{\displaystyle (P(t)_{i,j})_{t\in T}}(i,j∈S{\displaystyle i,j\in S}) that the trajectory(Xt(ω))t∈T{\displaystyle (X_{t}(\omega ))_{t\in T}}is almost surelyright continuous(with respect to thediscrete metriconS{\displaystyle S}): there exists aPr{\displaystyle \Pr }-null setN{\displaystyle N}such that{ω∈Ω:(Xt(ω))t∈Tis not right continuous}⊆N{\displaystyle \{\omega \in \Omega :(X_{t}(\omega ))_{t\in T}{\text{ is not right continuous}}\}\subseteq N}.[13] Letf:T→S{\displaystyle f:T\to S}be right continuous (when we equipS{\displaystyle S}with thediscrete metric). Let(H(f)n)n∈Z≥0{\displaystyle (H(f)_{n})_{n\in \mathbb {Z} _{\geq 0}}}be theholding-time sequenceassociated tof{\displaystyle f}, i.e. the successive lengths of time thatf{\displaystyle f}spends in each state it visits; choose a default states∈S,{\displaystyle s\in S,}and let(y(f)n)n∈Z≥0{\displaystyle (y(f)_{n})_{n\in \mathbb {Z} _{\geq 0}}}be thestate sequenceassociated tof{\displaystyle f}, i.e. the successive states thatf{\displaystyle f}visits.
Thejump matrixΠ{\displaystyle \Pi }, alternatively writtenΠ(Q){\displaystyle \Pi (Q)}if we wish to emphasize the dependence onQ{\displaystyle Q}, is the matrixΠ=([i=j])i∈Z,j∈S∪⋃i∈S∖Z({((i,j),(−Qi,i)−1Qi,j):j∈S∖{i}}∪{((i,i),0)}),{\displaystyle \Pi =([i=j])_{i\in Z,j\in S}\cup \bigcup _{i\in S\setminus Z}(\{((i,j),(-Q_{i,i})^{-1}Q_{i,j}):j\in S\setminus \{i\}\}\cup \{((i,i),0)\}),}whereZ=Z(Q)={k∈S:qk,k=0}{\displaystyle Z=Z(Q)=\{k\in S:q_{k,k}=0\}}is thezero setof the function(qk,k)k∈S.{\displaystyle (q_{k,k})_{k\in S}.}[14] We sayX{\displaystyle X}isMarkov with initial distributionλ{\displaystyle \lambda }and rate matrixQ{\displaystyle Q}to mean: the trajectories ofX{\displaystyle X}are almost surely right continuous, letf{\displaystyle f}be a modification ofX{\displaystyle X}to have (everywhere) right-continuous trajectories,∑n∈Z≥0H(f(ω))n=+∞{\displaystyle \sum _{n\in \mathbb {Z} _{\geq 0}}H(f(\omega ))_{n}=+\infty }almost surely (note to experts: this condition saysX{\displaystyle X}is non-explosive), the state sequencey(f(ω)){\displaystyle y(f(\omega ))}is a discrete-time Markov chain with initial distributionλ{\displaystyle \lambda }(jump-chain property) and transition matrixΠ(Q),{\displaystyle \Pi (Q),}and∀n∈Z≥0∀B∈B(R≥0)Pr(Hn(f)∈B)=Exp⁡(−qYn,Yn)(B){\displaystyle \forall n\in \mathbb {Z} _{\geq 0}~\forall B\in {\cal {B}}(\mathbb {R} _{\geq 0})~\Pr(H_{n}(f)\in B)=\operatorname {Exp} (-q_{Y_{n},Y_{n}})(B)}(holding-time property). 
We sayX{\displaystyle X}isMarkov with initial distributionλ{\displaystyle \lambda }and rate matrixQ{\displaystyle Q}to mean: for alli∈S,{\displaystyle i\in S,}Pr(X(0)=i)=λi{\displaystyle \Pr(X(0)=i)=\lambda _{i}}and for alli,j{\displaystyle i,j}and for small strictly positive values ofh{\displaystyle h}, the following holds for allt∈T{\displaystyle t\in T}such that0<Pr(X(t)=i){\displaystyle 0<\Pr(X(t)=i)}: where the term[i=j]{\displaystyle [i=j]}is1{\displaystyle 1}ifi=j{\displaystyle i=j}and otherwise0{\displaystyle 0}, and thelittle-o termo(h){\displaystyle o(h)}depends in a certain way oni,j,h{\displaystyle i,j,h}.[15][16] The above equation shows thatqi,j{\displaystyle q_{i,j}}can be seen as measuring how quickly the transition fromi{\displaystyle i}toj{\displaystyle j}happens fori≠j{\displaystyle i\neq j}, and how quickly the transition away fromi{\displaystyle i}happens fori=j{\displaystyle i=j}. Communicating classes, transience, recurrence and positive and null recurrence are defined identically as fordiscrete-time Markov chains. Write P(t) for the matrix with entriespij= P(Xt=j|X0=i). Then the matrix P(t) satisfies the forward equation, afirst-order differential equationP′(t)=P(t)Q{\displaystyle P'(t)=P(t)Q}where the prime denotes differentiation with respect tot. The solution to this equation is given by amatrix exponentialP(t)=etQ.{\displaystyle P(t)=e^{tQ}.}Consider a simple case, a CTMC on the state space {1,2}. The generalQmatrix for such a process is the following 2 × 2 matrix withα,β> 0:Q=(−ααβ−β).{\displaystyle Q={\begin{pmatrix}-\alpha &\alpha \\\beta &-\beta \end{pmatrix}}.}The forward equation can be solved explicitly in this case to giveP(t)=(βα+β+αα+βe−(α+β)tαα+β−αα+βe−(α+β)tβα+β−βα+βe−(α+β)tαα+β+βα+βe−(α+β)t).{\displaystyle P(t)={\begin{pmatrix}{\frac {\beta }{\alpha +\beta }}+{\frac {\alpha }{\alpha +\beta }}e^{-(\alpha +\beta )t}&{\frac {\alpha }{\alpha +\beta }}-{\frac {\alpha }{\alpha +\beta }}e^{-(\alpha +\beta )t}\\{\frac {\beta }{\alpha +\beta }}-{\frac {\beta }{\alpha +\beta }}e^{-(\alpha +\beta )t}&{\frac {\alpha }{\alpha +\beta }}+{\frac {\beta }{\alpha +\beta }}e^{-(\alpha +\beta )t}\end{pmatrix}}.}Computing direct solutions is complicated for larger matrices; instead, the fact thatQis the generator for asemigroupof matrices is used. The stationary distribution for an irreducible recurrent CTMC is the probability distribution to which the process converges for large values oft.
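As a quick sanity check (a sketch assuming the standard closed form for the two-state generator, with illustrative values of α, β, t), one can compare a truncated Taylor series for exp(tQ) against the explicit formula:

```python
import math

a, b, t = 2.0, 1.0, 0.7            # illustrative alpha, beta and time
Q = [[-a, a], [b, -b]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# exp(tQ) via the truncated series sum_k (tQ)^k / k!
P = [[1.0, 0.0], [0.0, 1.0]]
term = [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 40):
    term = matmul(term, [[t * q / k for q in row] for row in Q])
    P = [[P[i][j] + term[i][j] for j in range(2)] for i in range(2)]

# Standard closed form for this generator.
e = math.exp(-(a + b) * t)
Pcf = [[(b + a * e) / (a + b), (a - a * e) / (a + b)],
       [(b - b * e) / (a + b), (a + b * e) / (a + b)]]
print(P)
print(Pcf)   # the two matrices agree
```

Letting t grow large in either computation drives both rows toward (β/(α+β), α/(α+β)), the stationary distribution of this chain.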
Observe that for the two-state process considered earlier, ast→ ∞ the distribution tends to(βα+βαα+ββα+βαα+β).{\displaystyle {\begin{pmatrix}{\frac {\beta }{\alpha +\beta }}&{\frac {\alpha }{\alpha +\beta }}\\{\frac {\beta }{\alpha +\beta }}&{\frac {\alpha }{\alpha +\beta }}\end{pmatrix}}.}Observe that each row has the same limiting distribution, since the limit does not depend on the starting state. The row vectorπmay be found by solvingπQ=0{\displaystyle \pi Q=0}with the constraint∑j∈Sπj=1.{\displaystyle \sum _{j\in S}\pi _{j}=1.}The image to the right describes a continuous-time Markov chain with state-space {Bull market, Bear market, Stagnant market} andtransition-rate matrix The stationary distribution of this chain can be found by solvingπQ=0{\displaystyle \pi Q=0}, subject to the constraint that elements must sum to 1, to obtain the stationary vector. The image to the right describes a continuous-time Markov chain modelingPac-Manwith state-space {1,2,3,4,5,6,7,8,9}. The player controls Pac-Man through a maze, eating pac-dots. Meanwhile, he is being hunted by ghosts. For convenience, the maze shall be a small 3x3-grid and the ghosts move randomly in horizontal and vertical directions. A secret passageway between states 2 and 8 can be used in both directions. Entries with rate zero are removed in the following transition-rate matrix:Q={\displaystyle Q={\begin{pmatrix}-1&{\frac {1}{2}}&&{\frac {1}{2}}\\{\frac {1}{4}}&-1&{\frac {1}{4}}&&{\frac {1}{4}}&&&{\frac {1}{4}}\\&{\frac {1}{2}}&-1&&&{\frac {1}{2}}\\{\frac {1}{3}}&&&-1&{\frac {1}{3}}&&{\frac {1}{3}}\\&{\frac {1}{4}}&&{\frac {1}{4}}&-1&{\frac {1}{4}}&&{\frac {1}{4}}\\&&{\frac {1}{3}}&&{\frac {1}{3}}&-1&&&{\frac {1}{3}}\\&&&{\frac {1}{2}}&&&-1&{\frac {1}{2}}\\&{\frac {1}{4}}&&&{\frac {1}{4}}&&{\frac {1}{4}}&-1&{\frac {1}{4}}\\&&&&&{\frac {1}{2}}&&{\frac {1}{2}}&-1\end{pmatrix}}} This Markov chain is irreducible, because the ghosts can fly from every state to every state in a finite amount of time. Due to the secret passageway, the Markov chain is also aperiodic, because the ghosts can move from any state to any state both in an even and in an odd number of state transitions.
Therefore, a unique stationary distribution exists and can be found by solvingπQ=0{\displaystyle \pi Q=0}, subject to the constraint that elements must sum to 1. The solution of this linear equation subject to the constraint isπ=(7.7,15.4,7.7,11.5,15.4,11.5,7.7,15.4,7.7)%.{\displaystyle \pi =(7.7,15.4,7.7,11.5,15.4,11.5,7.7,15.4,7.7)\%.}The central state and the border states 2 and 8 of the adjacent secret passageway are visited most and the corner states are visited least. For a CTMCXt, the time-reversed process is defined to beX^t=XT−t{\displaystyle {\hat {X}}_{t}=X_{T-t}}. ByKelly's lemmathis process has the same stationary distribution as the forward process. A chain is said to be reversible if the reversed process is the same as the forward process.Kolmogorov's criterionstates that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions. One method of finding thestationary probability distribution,π, of anergodiccontinuous-time Markov chain,Q, is by first finding itsembedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain. Each element of the one-step transition probability matrix of the EMC,S, is denoted bysij, and represents theconditional probabilityof transitioning from stateiinto statej. These conditional probabilities may be found by From this,Smay be written as whereIis theidentity matrixand diag(Q) is thediagonal matrixformed by selecting themain diagonalfrom the matrixQand setting all other elements to zero. To find the stationary probability distribution vector, we must next findφ{\displaystyle \varphi }such that withφ{\displaystyle \varphi }being a row vector, such that all elements inφ{\displaystyle \varphi }are greater than 0 and‖φ‖1{\displaystyle \|\varphi \|_{1}}= 1. From this,πmay be found as (Smay be periodic, even ifQis not. Onceπis found, it must be normalized to aunit vector.) 
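The embedded-chain recipe above runs nicely on the Pac-Man generator: every diagonal entry of Q is −1, so S = I − diag(Q)⁻¹Q collapses to S = I + Q, the fixed point φ of the EMC can be found by power iteration, and since every state has the same expected holding time, π coincides with φ. A sketch in plain Python:

```python
a, b, c = 0.5, 0.25, 1 / 3      # the rates 1/2, 1/4, 1/3 from the matrix above
Q = [
    [-1,  a,  0,  a,  0,  0,  0,  0,  0],
    [ b, -1,  b,  0,  b,  0,  0,  b,  0],
    [ 0,  a, -1,  0,  0,  a,  0,  0,  0],
    [ c,  0,  0, -1,  c,  0,  c,  0,  0],
    [ 0,  b,  0,  b, -1,  b,  0,  b,  0],
    [ 0,  0,  c,  0,  c, -1,  0,  0,  c],
    [ 0,  0,  0,  a,  0,  0, -1,  a,  0],
    [ 0,  b,  0,  0,  b,  0,  b, -1,  b],
    [ 0,  0,  0,  0,  0,  a,  0,  a, -1],
]
n = 9
# S = I - diag(Q)^{-1} Q reduces to S = I + Q: every diagonal rate is -1.
S = [[(1.0 if i == j else 0.0) + Q[i][j] for j in range(n)] for i in range(n)]

phi = [1.0 / n] * n             # power iteration for phi = phi S
for _ in range(5000):
    phi = [sum(phi[i] * S[i][j] for i in range(n)) for j in range(n)]

# All holding rates are equal, so pi coincides with phi.
print([round(100 * p, 1) for p in phi])
# -> [7.7, 15.4, 7.7, 11.5, 15.4, 11.5, 7.7, 15.4, 7.7]
```

The result matches the solution quoted above; it is also the degree-proportional stationary law of a random walk on the maze graph (each state's probability is its number of neighbours divided by 26).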
Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observingX(t) at intervals of δ units of time. The random variablesX(0),X(δ),X(2δ), ... give the sequence of states visited by the δ-skeleton.
https://en.wikipedia.org/wiki/Continuous-time_Markov_process
In mathematicalqueueing theory,Little's law(alsoresult,theorem,lemma, orformula[1][2]) is a theorem byJohn Littlewhich states that the long-term average numberLof customers in astationarysystem is equal to the long-term average effective arrival rateλmultiplied by the average timeWthat a customer spends in the system. Expressed algebraically the law isL=λW.{\displaystyle L=\lambda W.}The relationship is not influenced by the arrival process distribution, the service distribution, the service order, or practically anything else. In most queueing systems, service time is thebottleneckthat creates the queue.[3] The result applies to any system, and particularly, it applies to systems within systems.[4]For example in a bank branch, thecustomer linemight be one subsystem, and each of thetellersanother subsystem, and Little's result could be applied to each one, as well as to the whole system. The only requirements are that the system be stable and non-preemptive; this rules out transient states such as initial startup or shutdown. In some cases it is possible not only to mathematically relate theaveragenumber in the system to theaveragewait but even to relate the entireprobability distribution(and moments) of the number in the system to the wait.[5] In a 1954 paper, Little's law was assumed true and used without proof.[6][7]The formL=λWwas first published byPhilip M. Morsewhere he challenged readers to find a situation where the relationship did not hold.[6][8]Little published in 1961 his proof of the law, showing that no such situation existed.[9]Little's proof was followed by a simpler version by Jewell[10]and another by Eilon.[11]Shaler Stidham published a different and more intuitive proof in 1972.[12][13] Imagine an application that has no easy way to measureresponse time. If the mean number in the system and the throughput are known, the average response time can be found using Little's law:W=L/λ.{\displaystyle W=L/\lambda .}For example: A queue depth meter shows an average of nine jobs waiting to be serviced.
Add one for the job being serviced, so there is an average of ten jobs in the system. Another meter shows a mean throughput of 50 per second. The mean response time is calculated as 0.2 seconds = 10 / 50 per second. Imagine a small store with a single counter and an area for browsing, where only one person can be at the counter at a time, and no one leaves without buying something. So the system is: If the rate at which people enter the store (called the arrival rate) is the rate at which they exit (called the exit rate), the system is stable. By contrast, an arrival rate exceeding an exit rate would represent an unstable system, where the number of waiting customers in the store would gradually increase towards infinity. Little's Law tells us that the average number of customers in the storeL, is the effective arrival rateλ, times the average time that a customer spends in the storeW, or simply: Assume customers arrive at the rate of 10 per hour and stay an average of 0.5 hour. This means we should find the average number of customers in the store at any time to be 5. Now suppose the store is considering doing more advertising to raise the arrival rate to 20 per hour. The store must either be prepared to host an average of 10 occupants or must reduce the time each customer spends in the store to 0.25 hour. The store might achieve the latter by ringing up the bill faster or by adding more counters. We can apply Little's Law to systems within the store. For example, consider the counter and its queue. Assume we notice that there are on average 2 customers in the queue and at the counter. We know the arrival rate is 10 per hour, so customers must be spending 0.2 hours on average checking out. We can even apply Little's Law to the counter itself. The average number of people at the counter would be in the range (0, 1) since no more than one person can be at the counter at a time. 
In that case, the average number of people at the counter is also known as the utilisation of the counter. However, because a store in reality generally has a limited amount of space, it can eventually become unstable. If the arrival rate is much greater than the exit rate, the store will eventually start to overflow, and thus any new arriving customers will simply be rejected (and forced to go somewhere else or try again later) until there is once again free space available in the store. This is also the difference between thearrival rateand theeffective arrival rate, where the arrival rate roughly corresponds to the rate at which customers arrive at the store, whereas the effective arrival rate corresponds to the rate at which customersenterthe store. However, in a system with an infinite size and no loss, the two are equal. To use Little's law on data, formulas must be used to estimate the parameters, as the result does not necessarily directly apply over finite time intervals, due to problems like how to log customers already present at the start of the logging interval and those who have not yet departed when logging stops.[14] Little's law is widely used in manufacturing to predict lead time based on the production rate and the amount of work-in-process.[15] Software-performance testers have used Little's law to ensure that the observed performance results are not due to bottlenecks imposed by the testing apparatus.[16][17] Other applications include staffing emergency departments in hospitals.[18][19] Lastly, an equivalent version of Little's law also applies in the fields ofdemographyandpopulation biology, although not referred to as "Little's Law".[20][21]For example, Cohen (2008)[22]explains that in a homogeneous stationary population without migration,P=B×e{\displaystyle P=B\times e}, whereP{\displaystyle P}is the total population size,B{\displaystyle B}is the number of births per year, ande{\displaystyle e}is the life expectancy from birth. 
The formulaP=B×e{\displaystyle P=B\times e}is thus directly equivalent to Little's law (L=λ×W{\displaystyle L=\lambda \times W}). However, biological populations tend to be dynamic and therefore more complicated to model accurately.[23] An extension of Little's law provides a relationship between the steady state distribution of number of customers in the system and time spent in the system under afirst come, first servedservice discipline.[24]
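The store-and-counter reasoning above can be exercised in a few lines of simulation. The sketch below (illustrative rates, a single FIFO server) measures L directly as the time-average of the number-in-system curve N(t) and checks that it equals λW over a window containing every customer's complete sojourn:

```python
import random

random.seed(1)
lam, mu, n = 5.0, 8.0, 20_000               # illustrative arrival/service rates

t, finish = 0.0, 0.0
arrivals, departures = [], []
for _ in range(n):
    t += random.expovariate(lam)            # Poisson arrivals
    start = max(t, finish)                  # FIFO: wait for the server
    finish = start + random.expovariate(mu) # exponential service
    arrivals.append(t)
    departures.append(finish)

T = max(departures)                         # window containing every sojourn
events = sorted([(a, 1) for a in arrivals] + [(d, -1) for d in departures])
area, level, prev = 0.0, 0, 0.0
for when, step in events:
    area += level * (when - prev)           # accumulate the integral of N(t)
    level += step
    prev = when

L = area / T                                # time-average number in system
lam_eff = n / T                             # observed arrival rate
W = sum(d - a for a, d in zip(arrivals, departures)) / n
print(L, lam_eff * W)                       # equal up to rounding
```

Over a window in which every customer both arrives and departs, the integral of N(t) equals the sum of the sojourn times exactly, which is why the two printed numbers agree to floating-point precision rather than merely approximately.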
https://en.wikipedia.org/wiki/Little%27s_lemma
In the study of age-structured population growth, probably one of the most important equations is theEuler–Lotka equation. Based on the age demographic of females in the population and female births (since in many cases it is the females that are more limited in the ability to reproduce), this equation allows for an estimation of how a population is growing. The field of mathematicaldemographywas largely developed byAlfred J. Lotkain the early 20th century, building on the earlier work ofLeonhard Euler. The Euler–Lotka equation, derived and discussed below, is often attributed to either of its origins: Euler, who derived a special form in 1760, or Lotka, who derived a more general continuous version. The equation in discrete time is given by1=∑a=1ωλ−aℓ(a)b(a),{\displaystyle 1=\sum _{a=1}^{\omega }\lambda ^{-a}\ell (a)b(a),}whereλ{\displaystyle \lambda }is the discrete growth rate,ℓ(a) is the fraction of individuals surviving to ageaandb(a) is the number of offspring born to an individual of ageaduring the time step. The sum is taken over the entire life span of the organism. A.J. Lotka in 1911 developed a continuous model of population dynamics as follows. This model tracks only the females in the population. LetB(t)dtbe the number of births during the time interval fromttot+dt. Also define thesurvival functionℓ(a), the fraction of individuals surviving to agea. Finally defineb(a) to be the birth rate for mothers of agea. The productB(t-a)ℓ(a) therefore denotes thenumber densityof individuals born att-aand still alive att, whileB(t-a)ℓ(a)b(a) denotes the rate of births at timetfrom this cohort, which suggests the followingVolterra integral equationforB:B(t)=∫0tB(t−a)ℓ(a)b(a)da.{\displaystyle B(t)=\int _{0}^{t}B(t-a)\ell (a)b(a)\,da.}We integrate over all possible ages to find the total rate of births at timet. We are in effect finding the contributions of all individuals of age up tot. We need not consider individuals born before the start of this analysis since we can just set the base point low enough to incorporate all of them. Let us then guess anexponentialsolution of the formB(t) =Qert.
Plugging this into the integral equation gives:Qert=∫0tQer(t−a)ℓ(a)b(a)da{\displaystyle Qe^{rt}=\int _{0}^{t}Qe^{r(t-a)}\ell (a)b(a)\,da}or1=∫0∞e−raℓ(a)b(a)da.{\displaystyle 1=\int _{0}^{\infty }e^{-ra}\ell (a)b(a)\,da.}This can be rewritten in thediscretecase by turning the integral into a sum, producing1=∑a=αβe−raℓ(a)b(a),{\displaystyle 1=\sum _{a=\alpha }^{\beta }e^{-ra}\ell (a)b(a),}lettingα{\displaystyle \alpha }andβ{\displaystyle \beta }be the boundary ages for reproduction. Defining the discrete growth rateλ=erwe obtain the discrete time equation derived above:1=∑a=1ωλ−aℓ(a)b(a),{\displaystyle 1=\sum _{a=1}^{\omega }\lambda ^{-a}\ell (a)b(a),}whereω{\displaystyle \omega }is the maximum age; we can extend these ages sinceb(a) vanishes beyond the boundaries. Let us write theLeslie matrixas:L=(f1f2⋯fω−1fωs10⋯000s2⋯00⋮⋮⋱⋮⋮00⋯sω−10){\displaystyle L={\begin{pmatrix}f_{1}&f_{2}&\cdots &f_{\omega -1}&f_{\omega }\\s_{1}&0&\cdots &0&0\\0&s_{2}&\cdots &0&0\\\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&\cdots &s_{\omega -1}&0\end{pmatrix}}}wheresi{\displaystyle s_{i}}andfi{\displaystyle f_{i}}are survival to the next age class and per capita fecundity respectively. Note thatsi=ℓi+1/ℓi{\displaystyle s_{i}=\ell _{i+1}/\ell _{i}}whereℓiis the probability of surviving to agei{\displaystyle i}, andfi=sibi+1{\displaystyle f_{i}=s_{i}b_{i+1}}, the number of births at agei+1{\displaystyle i+1}weighted by the probability of surviving to agei+1{\displaystyle i+1}. Now if we have stable growth the growth of the system is aneigenvalueof thematrixsinceni+1=Lni=λni{\displaystyle \mathbf {n_{i+1}} =\mathbf {Ln_{i}} =\lambda \mathbf {n_{i}} }. Therefore, we can use this relationship row by row to derive expressions forni{\displaystyle n_{i}}in terms of the values in the matrix andλ{\displaystyle \lambda }. Introducing notationni,t{\displaystyle n_{i,t}}the population in age classi{\displaystyle i}at timet{\displaystyle t}, we haven1,t+1=λn1,t{\displaystyle n_{1,t+1}=\lambda n_{1,t}}. However alson1,t+1=s0n0,t{\displaystyle n_{1,t+1}=s_{0}n_{0,t}}. This implies that By the same argument we find that Continuinginductivelywe conclude that generally Considering the top row, we get Now we may substitute our previous work for theni,t{\displaystyle n_{i,t}}terms and obtain: First substitute the definition of the per-capita fertility and divide through by the left hand side: Now we note the following simplification. Sincesi=ℓi+1/ℓi{\displaystyle s_{i}=\ell _{i+1}/\ell _{i}}we note that This sum collapses to: which is the desired result.
From the above analysis we see that the Euler–Lotka equation is in fact thecharacteristic polynomialof the Leslie matrix. We can analyze its solutions to find information about the eigenvalues of the Leslie matrix (which has implications for the stability of populations). Considering the continuous expressionfas a function ofr, we can examine its roots. We notice that at negative infinity the function grows to positive infinity and at positive infinity the function approaches 0. The firstderivativeis clearly −afand the second derivative isa2f. This function is then decreasing, concave up and takes on all positive values. It is also continuous by construction, so by the intermediate value theorem it takes the value 1 at exactly one pointr{\displaystyle r}. Therefore, there is exactly one real solution, which corresponds to the dominant eigenvalue of the matrix: the equilibrium growth rate. This same derivation applies to the discrete case. If we letλ= 1 the discrete formula becomes thereplacement rateof the population.
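The claim that the Euler–Lotka root is the dominant eigenvalue of the Leslie matrix can be checked numerically. The sketch below uses made-up fecundities and survival rates for three age classes; it solves the discrete equation by bisection and compares the root with a power-iteration estimate of the Leslie matrix's leading eigenvalue:

```python
# Hypothetical life-history data for three age classes.
f = [1.0, 1.5, 1.0]            # per-capita fecundities (top row)
s = [0.5, 0.25]                # survival probabilities (subdiagonal)
l = [1.0, s[0], s[0] * s[1]]   # survivorship l(a) into each age class

def euler_lotka(lam):
    # 1 = sum_a lambda^-a l(a) f(a): the Leslie characteristic equation
    # written in Euler-Lotka form; strictly decreasing in lambda.
    return sum(l[a] * f[a] / lam ** (a + 1) for a in range(3)) - 1.0

lo, hi = 0.5, 5.0              # bisection bracket for the unique root
for _ in range(100):
    mid = (lo + hi) / 2.0
    if euler_lotka(mid) > 0.0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2.0

# Dominant eigenvalue of the Leslie matrix by power iteration.
M = [[f[0], f[1], f[2]],
     [s[0], 0.0,  0.0],
     [0.0,  s[1], 0.0]]
v = [1.0, 1.0, 1.0]
for _ in range(500):
    w = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
    lam_dom = max(w)
    v = [x / lam_dom for x in w]

print(root, lam_dom)           # the two estimates agree
```

Bisection is safe here precisely because of the monotonicity argument above: the left-hand side is a decreasing function of λ, so it crosses 1 exactly once.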
https://en.wikipedia.org/wiki/Lotka%27s_integral_equation
Inprobability theory, thePalm–Khintchine theorem, the work ofConny PalmandAleksandr Khinchin, expresses that a large number ofrenewal processes, not necessarilyPoissonian, when combined ("superimposed") will have Poissonian properties.[1] It is used to generalise the behaviour of users or clients inqueueing theory. It is also used in dependability and reliability modelling of computing andtelecommunications. According to Heyman and Sobel (2003),[1]the theorem states that the superposition of a large number of independent equilibrium renewal processes, each with a finite intensity, behaves asymptotically like a Poisson process: Let{Ni(t),t≥0},i=1,2,…,m{\displaystyle \{N_{i}(t),t\geq 0\},i=1,2,\ldots ,m}be independent renewal processes and{N(t),t>0}{\displaystyle \{N(t),t>0\}}be the superposition of these processes. Denote byXjm{\displaystyle X_{jm}}the time between the first and the second renewal epochs in processj{\displaystyle j}. DefineNjm(t){\displaystyle N_{jm}(t)}, thej{\displaystyle j}th counting process,Fjm(t)=P(Xjm≤t){\displaystyle F_{jm}(t)=P(X_{jm}\leq t)}andλjm=1/E(Xjm){\displaystyle \lambda _{jm}=1/E(X_{jm})}. If the following assumptions hold 1) For all sufficiently largem{\displaystyle m}:λ1m+λ2m+⋯+λmm=λ<∞{\displaystyle \lambda _{1m}+\lambda _{2m}+\cdots +\lambda _{mm}=\lambda <\infty } 2) Givenε>0{\displaystyle \varepsilon >0}, for everyt>0{\displaystyle t>0}and sufficiently largem{\displaystyle m}:Fjm(t)<ε{\displaystyle F_{jm}(t)<\varepsilon }for allj{\displaystyle j} then the superpositionN0m(t)=N1m(t)+N2m(t)+⋯+Nmm(t){\displaystyle N_{0m}(t)=N_{1m}(t)+N_{2m}(t)+\cdots +N_{mm}(t)}of the counting processes approaches a Poisson process asm→∞{\displaystyle m\to \infty }.
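A small simulation illustrates the theorem (illustrative parameters, not from the text): superpose m = 200 slow renewal processes with Uniform(0, 2m) inter-arrival times — decidedly non-exponential — so each one individually satisfies the "rarely fires" condition (2) while the pooled rate stays 1, and check that counts in unit windows look Poisson(1), i.e. variance roughly equal to the mean. (A single such process would instead give a variance-to-mean ratio near σ²/μ² = 1/3 over long windows.)

```python
import random

random.seed(2)
m = 200                     # number of superposed processes
horizon, burn = 4000.0, 2000.0

arrivals = []
for _ in range(m):
    t = random.uniform(0.0, 2 * m)      # rough equilibrium start
    while t < horizon:
        if t >= burn:
            arrivals.append(t)
        t += random.uniform(0.0, 2 * m) # Uniform(0, 2m) inter-arrivals

# Counts in unit-length windows after the burn-in period.
counts = [0] * int(horizon - burn)
for a in arrivals:
    counts[int(a - burn)] += 1

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(mean, var)            # both near 1, as for a Poisson process
```

The burn-in and randomised start are rough stand-ins for the "equilibrium renewal process" hypothesis; they only approximate stationarity, which is why the check is a loose statistical one rather than an exact identity.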
https://en.wikipedia.org/wiki/Palm%E2%80%93Khintchine_theorem
In the theory ofrenewal processes, a part of the mathematical theory of probability, theresidual timeor theforward recurrence timeis the time between any given timet{\displaystyle t}and the nextepochof the renewal process under consideration. In the context of random walks, it is also known asovershoot. Another way to phrase residual time is "how much more time is there to wait?". The residual time is very important in most of the practical applications of renewal processes: Consider a renewal process{N(t),t≥0}{\displaystyle \{N(t),t\geq 0\}}, withholding timesSi{\displaystyle S_{i}}andjump times(or renewal epochs)Ji{\displaystyle J_{i}}, andi∈N{\displaystyle i\in \mathbb {N} }. The holding timesSi{\displaystyle S_{i}}are non-negative, independent, identically distributed random variables and the renewal process is defined asN(t)=sup{n:Jn≤t}{\displaystyle N(t)=\sup\{n:J_{n}\leq t\}}. Then, to a given timet{\displaystyle t}, there corresponds uniquely anN(t){\displaystyle N(t)}, such that: Theresidual time(or excess time) is given by the timeY(t){\displaystyle Y(t)}fromt{\displaystyle t}to the next renewal epoch. Let thecumulative distribution functionof the holding timesSi{\displaystyle S_{i}}beF(t)=Pr[Si≤t]{\displaystyle F(t)=Pr[S_{i}\leq t]}and recall that therenewal functionof a process ism(t)=E[N(t)]{\displaystyle m(t)=\mathbb {E} [N(t)]}. Then, for a given timet{\displaystyle t}, the cumulative distribution function ofY(t){\displaystyle Y(t)}is calculated as:[2] Differentiating with respect tox{\displaystyle x}, the probability density function can be written as where we have substitutedu=t−y.{\displaystyle u=t-y.}From elementary renewal theory,m′(t)→1/μ{\displaystyle m'(t)\rightarrow 1/\mu }ast→∞{\displaystyle t\rightarrow \infty }, whereμ{\displaystyle \mu }is the mean of the distributionF{\displaystyle F}. 
If we consider the limiting distribution ast→∞{\displaystyle t\rightarrow \infty }, assuming thatf(t)→0{\displaystyle f(t)\rightarrow 0}ast→∞{\displaystyle t\rightarrow \infty }, we have the limiting pdf asfY(x)=1−F(x)μ.{\displaystyle f_{Y}(x)={\frac {1-F(x)}{\mu }}.}Likewise, the cumulative distribution of the residual time isFY(x)=1μ∫0x(1−F(u))du.{\displaystyle F_{Y}(x)={\frac {1}{\mu }}\int _{0}^{x}(1-F(u))\,du.}For larget{\displaystyle t}, the distribution is independent oft{\displaystyle t}, making it a stationary distribution. An interesting fact is that the limiting distribution of forward recurrence time (or residual time) has the same form as the limiting distribution of the backward recurrence time (or age). This distribution is always J-shaped, with mode at zero. The first two moments of this limiting distributionΦ{\displaystyle \Phi }areE[Y]=μ22μ=μ2+σ22μ,E[Y2]=μ33μ,{\displaystyle E[Y]={\frac {\mu _{2}}{2\mu }}={\frac {\mu ^{2}+\sigma ^{2}}{2\mu }},\qquad E[Y^{2}]={\frac {\mu _{3}}{3\mu }},}whereσ2{\displaystyle \sigma ^{2}}is the variance ofF{\displaystyle F}andμ2{\displaystyle \mu _{2}}andμ3{\displaystyle \mu _{3}}are its second and third moments. The fact thatE[Y]=μ2+σ22μ>μ2{\displaystyle E[Y]={\frac {\mu ^{2}+\sigma ^{2}}{2\mu }}>{\frac {\mu }{2}}}(forσ>0{\displaystyle \sigma >0}) is also known variously as the waiting time paradox, inspection paradox, or the paradox of renewal theory. The paradox arises from the fact that the average waiting time until the next renewal, assuming that the reference time pointt{\displaystyle t}is uniform randomly selected within the inter-renewal interval, is larger than half the average inter-renewal interval,μ2{\displaystyle {\frac {\mu }{2}}}. The average waiting isE[Y]=μ2{\displaystyle E[Y]={\frac {\mu }{2}}}only whenσ2=0{\displaystyle \sigma ^{2}=0}, that is when the renewals are always punctual or deterministic. When the holding timesSi{\displaystyle S_{i}}are exponentially distributed withF(t)=1−e−λt{\displaystyle F(t)=1-e^{-\lambda t}}, the residual times are also exponentially distributed. That is becausem(t)=λt{\displaystyle m(t)=\lambda t}and:Pr[Y(t)≤x]=1−e−λx.{\displaystyle \Pr[Y(t)\leq x]=1-e^{-\lambda x}.}This is a known characteristic of theexponential distribution, i.e., itsmemoryless property.
Intuitively, this means that it does not matter how long it has been since the last renewal epoch, the remaining time is still probabilistically the same as in the beginning of the holding time interval. Renewal theory texts usually also define thespent timeor thebackward recurrence time(or the current lifetime) asZ(t)=t−JN(t){\displaystyle Z(t)=t-J_{N(t)}}. Its distribution can be calculated in a similar way to that of the residual time. Likewise, the totallife timeis the sum of backward recurrence time and forward recurrence time.
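The inspection paradox is easy to reproduce numerically. Over one renewal interval of length S the residual time Y(t) integrates to S²/2, so the long-run time average of Y(t) is ΣSᵢ²/2 divided by ΣSᵢ, which converges to E[S²]/(2μ) = (μ² + σ²)/(2μ). The sketch below does this for Uniform(0, 2) holding times (μ = 1, σ² = 1/3), for which the limit is 2/3 > μ/2:

```python
import random

random.seed(3)
s = [random.uniform(0.0, 2.0) for _ in range(200_000)]  # holding times

# Time-average residual: each interval of length S contributes S^2 / 2
# to the integral of Y(t), and S to the elapsed time.
avg_residual = sum(x * x for x in s) / (2.0 * sum(s))
print(avg_residual)   # near 2/3, strictly above mu/2 = 0.5
```

The squared lengths in the numerator are what tilt the average upward: a uniformly chosen inspection time is more likely to land in a long interval than in a short one.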
https://en.wikipedia.org/wiki/Residual_time
TheMcKendrick–von Foerster equationis a linear first-orderpartial differential equationencountered in several areas ofmathematical biology– for example,demography[1]andcell proliferationmodeling; it is applied when age structure is an important feature in themathematical model.[2]It was first presented byAnderson Gray McKendrickin 1926 as a deterministic limit of lattice models applied toepidemiology,[3]and subsequently independently in 1959 bybiophysicsprofessorHeinz von Foersterfor describing cell cycles. The mathematical formula can be derived from first principles. It reads: ∂n∂t+∂n∂a=−m(a)n{\displaystyle {\frac {\partial n}{\partial t}}+{\frac {\partial n}{\partial a}}=-m(a)n} where the population densityn(t,a){\displaystyle n(t,a)}is a function of agea{\displaystyle a}and timet{\displaystyle t}, andm(a){\displaystyle m(a)}is the death function. Whenm(a)=0{\displaystyle m(a)=0}, we have:[2]∂n∂t+∂n∂a=0.{\displaystyle {\frac {\partial n}{\partial t}}+{\frac {\partial n}{\partial a}}=0.}This states that aging is the only influence on the change in population density: each cohort is simply transported along the age axis at unit speed. In the general equation, the negative sign of the death term shows that the density along each cohort can only decrease; births do not appear in the equation itself, entering only through a boundary condition ata=0{\displaystyle a=0}. Suppose that for a change in timedt{\displaystyle dt}and change in ageda{\displaystyle da}, the population density is:n(t+dt,a+da)=[1−m(a)dt]n(t,a){\displaystyle n(t+dt,a+da)=[1-m(a)dt]n(t,a)}That is, during a time perioddt{\displaystyle dt}the population density decreases by a percentagem(a)dt{\displaystyle m(a)dt}. Taking aTaylor seriesexpansion to orderdt{\displaystyle dt}gives us that:n(t+dt,a+da)≈n(t,a)+∂n∂tdt+∂n∂ada{\displaystyle n(t+dt,a+da)\approx n(t,a)+{\partial n \over {\partial t}}dt+{\partial n \over {\partial a}}da}We know thatda/dt=1{\textstyle da/dt=1}, since the change of age with time is 1.
Therefore, after collecting terms, we must have that:∂n∂t+∂n∂a=−m(a)n{\displaystyle {\partial n \over {\partial t}}+{\partial n \over {\partial a}}=-m(a)n} The von Foerster equation is acontinuity equation; it can be solved using themethod of characteristics.[2]Another way is bysimilarity solution; and a third is a numerical approach such asfinite differences. To get the solution, the following boundary conditions should be added: which states that the initial births should be conserved (see Sharpe–Lotka–McKendrick’s equation for otherwise), and that: which states that the initial population must be given; then it will evolve according to the partial differential equation. In Sebastian Aniţa, Viorel Arnăutu, Vincenzo Capasso.An Introduction to Optimal Control Problems in Life Sciences and Economics(Birkhäuser. 2011), this equation appears as a special case of theSharpe–Lotka–McKendrick’s equation; in the latter there is inflow, and the math is based ondirectional derivative. The McKendrick’s equation appears extensively in the context of cell biology as a good approach to model the eukaryotic cell cycle.[4]
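A short numerical experiment (illustrative constant mortality and initial cohort, not taken from the text) makes the method of characteristics visible: on an upwind grid with dt = da, the update is an exact shift along a − t = const combined with a (1 − m·dt) decay per step, so a bump placed at a = 2 arrives at a = 5 after t = 3 with height close to e^(−mt):

```python
import math

m, da = 0.1, 0.01          # constant mortality and age step (illustrative)
dt = da                    # CFL number 1: the upwind shift is exact
n = [math.exp(-(i * da - 2.0) ** 2) for i in range(1001)]  # bump at a = 2

steps = 300                # integrate to t = 3
for _ in range(steps):
    # shift one cell toward older ages (n(t, 0) = 0: no births here),
    # then apply one Euler step of the decay term -m n
    n = [0.0] + [v * (1.0 - m * dt) for v in n[:-1]]

# The bump now sits at a = 5 with height about exp(-m t).
print(n[500], math.exp(-m * steps * dt))
```

The small residual difference between the two printed values is the O(dt) error of the Euler treatment of the decay term; the transport part incurs no error at all at CFL number 1.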
https://en.wikipedia.org/wiki/Von_Foerster_equation
Aratio distribution(also known as aquotient distribution) is aprobability distributionconstructed as the distribution of theratioofrandom variableshaving two other known distributions. Given two (usuallyindependent) random variablesXandY, the distribution of the random variableZthat is formed as the ratioZ=X/Yis aratio distribution. An example is theCauchy distribution(also called thenormal ratio distribution), which comes about as the ratio of twonormally distributedvariables with zero mean. Two other distributions often used in test-statistics are also ratio distributions: thet-distributionarises from aGaussianrandom variable divided by an independentchi-distributedrandom variable, while theF-distributionoriginates from the ratio of two independentchi-squared distributedrandom variables. More general ratio distributions have been considered in the literature.[1][2][3][4][5][6][7][8][9] Often the ratio distributions areheavy-tailed, and it may be difficult to work with such distributions and develop an associatedstatistical test. A method based on themedianhas been suggested as a "work-around".[10] The ratio is one type of algebra for random variables: Related to the ratio distribution are theproduct distribution,sum distributionanddifference distribution. More generally, one may talk of combinations of sums, differences, products and ratios. Many of these distributions are described inMelvin D. Springer's book from 1979The Algebra of Random Variables.[8] The algebraic rules known with ordinary numbers do not apply for the algebra of random variables. For example, if a product isC = ABand a ratio isD=C/Ait does not necessarily mean that the distributions ofDandBare the same. 
Indeed, a peculiar effect is seen for the Cauchy distribution: the product and the ratio of two independent Cauchy distributions (with the same scale parameter and the location parameter set to zero) give the same distribution.[8] This becomes evident when regarding the Cauchy distribution as itself a ratio distribution of two Gaussian distributions of zero means: consider two Cauchy random variables, C1{\displaystyle C_{1}} and C2{\displaystyle C_{2}}, each constructed from two Gaussian distributions, C1=G1/G2{\displaystyle C_{1}=G_{1}/G_{2}} and C2=G3/G4{\displaystyle C_{2}=G_{3}/G_{4}}; then where C3=G4/G3{\displaystyle C_{3}=G_{4}/G_{3}}. The first term is the ratio of two Cauchy distributions while the last term is the product of two such distributions. A way of deriving the ratio distribution of Z=X/Y{\displaystyle Z=X/Y} from the joint distribution of the two other random variables X, Y, with joint pdf pX,Y(x,y){\displaystyle p_{X,Y}(x,y)}, is by integration of the following form[3] If the two variables are independent then pX,Y(x,y)=pX(x)pY(y){\displaystyle p_{X,Y}(x,y)=p_{X}(x)\,p_{Y}(y)} and this becomes This may not be straightforward. By way of example take the classical problem of the ratio of two standard Gaussian samples.
The joint pdf is Defining Z=X/Y{\displaystyle Z=X/Y} we have Using the known definite integral ∫0∞xexp⁡(−cx2)dx=12c{\textstyle \int _{0}^{\infty }\,x\,\exp \left(-cx^{2}\right)\,dx={\frac {1}{2c}}} we get which is the Cauchy distribution, or Student's t distribution with n = 1. The Mellin transform has also been suggested for derivation of ratio distributions.[8] In the case of positive independent variables, proceed as follows. The diagram shows a separable bivariate distribution fx,y(x,y)=fx(x)fy(y){\displaystyle f_{x,y}(x,y)=f_{x}(x)f_{y}(y)} which has support in the positive quadrant x,y>0{\displaystyle x,y>0} and we wish to find the pdf of the ratio R=X/Y{\displaystyle R=X/Y}. The hatched volume above the line y=x/R{\displaystyle y=x/R} represents the cumulative distribution of the function fx,y(x,y){\displaystyle f_{x,y}(x,y)} multiplied with the logical function X/Y≤R{\displaystyle X/Y\leq R}. The density is first integrated in horizontal strips; the horizontal strip at height y extends from x = 0 to x = Ry and has incremental probability fy(y)dy∫0Ryfx(x)dx{\textstyle f_{y}(y)dy\int _{0}^{Ry}f_{x}(x)\,dx}. Secondly, integrating the horizontal strips upward over all y yields the volume of probability above the line Finally, differentiate FR(R){\displaystyle F_{R}(R)} with respect to R{\displaystyle R} to get the pdf fR(R){\displaystyle f_{R}(R)}. Move the differentiation inside the integral: and since then As an example, find the pdf of the ratio R when We have thus Differentiation with respect to R yields the pdf of R From Mellin transform theory, for distributions existing only on the positive half-line x≥0{\displaystyle x\geq 0}, we have the product identity E⁡[(UV)p]=E⁡[Up]E⁡[Vp]{\displaystyle \operatorname {E} [(UV)^{p}]=\operatorname {E} [U^{p}]\;\;\operatorname {E} [V^{p}]} provided U,V{\displaystyle U,\;V} are independent.
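The result that the ratio of two independent standard Gaussians is Cauchy-distributed can be checked numerically. The sketch below (plain Python, standard library only; the helper names are illustrative) compares the empirical CDF of Z = X/Y against the standard Cauchy CDF F(z) = 1/2 + arctan(z)/π.

```python
import math
import random

random.seed(0)

N = 200_000
# Draw Z = X / Y with X, Y independent standard normals.
z_samples = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(N)]

def cauchy_cdf(z):
    """CDF of the standard Cauchy distribution."""
    return 0.5 + math.atan(z) / math.pi

def empirical_cdf(samples, z):
    """Fraction of samples at or below z."""
    return sum(1 for s in samples if s <= z) / len(samples)

# The empirical CDF should track the Cauchy CDF at a few test points.
for z in (-1.0, 0.0, 1.0, 3.0):
    emp, exact = empirical_cdf(z_samples, z), cauchy_cdf(z)
    print(f"z={z:+.1f}  empirical={emp:.4f}  Cauchy={exact:.4f}")
```

With 200,000 samples the two CDFs agree to roughly two decimal places at each test point.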
For the case of a ratio of samples like E⁡[(X/Y)p]{\displaystyle \operatorname {E} [(X/Y)^{p}]}, in order to make use of this identity it is necessary to use moments of the inverse distribution. Set 1/Y=Z{\displaystyle 1/Y=Z} such that E⁡[(XZ)p]=E⁡[Xp]E⁡[Y−p]{\displaystyle \operatorname {E} [(XZ)^{p}]=\operatorname {E} [X^{p}]\;\operatorname {E} [Y^{-p}]}. Thus, if the moments of Xp{\displaystyle X^{p}} and Y−p{\displaystyle Y^{-p}} can be determined separately, then the moments of X/Y{\displaystyle X/Y} can be found. The moments of Y−p{\displaystyle Y^{-p}} are determined from the inverse pdf of Y{\displaystyle Y}, often a tractable exercise. At simplest, E⁡[Y−p]=∫0∞y−pfy(y)dy{\textstyle \operatorname {E} [Y^{-p}]=\int _{0}^{\infty }y^{-p}f_{y}(y)\,dy}. To illustrate, let X{\displaystyle X} be sampled from a standard Gamma distribution Z=Y−1{\displaystyle Z=Y^{-1}} is sampled from an inverse Gamma distribution with parameter β{\displaystyle \beta } and has pdf Γ−1(β)z−(1+β)e−1/z{\displaystyle \;\Gamma ^{-1}(\beta )z^{-(1+\beta )}e^{-1/z}}.
The moments of this pdf are Multiplying the corresponding moments gives Independently, it is known that the ratio of the two Gamma samples R=X/Y{\displaystyle R=X/Y} follows the Beta Prime distribution: Substituting B(α,β)=Γ(α)Γ(β)Γ(α+β){\displaystyle \mathrm {B} (\alpha ,\beta )={\frac {\Gamma (\alpha )\Gamma (\beta )}{\Gamma (\alpha +\beta )}}} we have E⁡[Rp]=Γ(α+p)Γ(β−p)Γ(α+β)/Γ(α)Γ(β)Γ(α+β)=Γ(α+p)Γ(β−p)Γ(α)Γ(β){\displaystyle \operatorname {E} [R^{p}]={\frac {\Gamma (\alpha +p)\Gamma (\beta -p)}{\Gamma (\alpha +\beta )}}{\Bigg /}{\frac {\Gamma (\alpha )\Gamma (\beta )}{\Gamma (\alpha +\beta )}}={\frac {\Gamma (\alpha +p)\Gamma (\beta -p)}{\Gamma (\alpha )\Gamma (\beta )}}} which is consistent with the product of moments above. In the Product distribution section, and derived from Mellin transform theory (see section above), it is found that the mean of a product of independent variables is equal to the product of their means. In the case of ratios, we have which, in terms of probability distributions, is equivalent to Note that E⁡(1/Y)≠1/E⁡(Y){\displaystyle \operatorname {E} (1/Y)\neq {\frac {1}{\operatorname {E} (Y)}}}, i.e., ∫−∞∞y−1fy(y)dy≠1∫−∞∞yfy(y)dy{\displaystyle \int _{-\infty }^{\infty }y^{-1}f_{y}(y)\,dy\neq {\frac {1}{\int _{-\infty }^{\infty }yf_{y}(y)\,dy}}} The variance of a ratio of independent variables is When X and Y are independent and have a Gaussian distribution with zero mean, the form of their ratio distribution is a Cauchy distribution. This can be derived by setting Z=X/Y=tan⁡θ{\displaystyle Z=X/Y=\tan \theta } and then showing that θ{\displaystyle \theta } has circular symmetry.
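The moment identity E[R^p] = Γ(α+p)Γ(β−p)/(Γ(α)Γ(β)) for the ratio of two unit-scale Gamma samples can be sanity-checked by simulation. The sketch below (standard-library Python; the parameter values are illustrative) compares the empirical first moment with the formula value α/(β−1).

```python
import math
import random

random.seed(1)

alpha, beta = 2.0, 5.0   # illustrative shape parameters; both scales = 1
N = 200_000

# R = X / Y with X ~ Gamma(alpha, 1), Y ~ Gamma(beta, 1), independent.
ratios = [random.gammavariate(alpha, 1.0) / random.gammavariate(beta, 1.0)
          for _ in range(N)]

def moment_formula(p):
    """E[R^p] = Gamma(alpha+p) Gamma(beta-p) / (Gamma(alpha) Gamma(beta)),
    valid for -alpha < p < beta."""
    return (math.gamma(alpha + p) * math.gamma(beta - p)
            / (math.gamma(alpha) * math.gamma(beta)))

emp_mean = sum(ratios) / N
exact_mean = moment_formula(1)          # = alpha / (beta - 1) = 0.5 here
print(f"empirical E[R] = {emp_mean:.4f}, formula = {exact_mean:.4f}")
```

For α = 2, β = 5 the formula gives E[R] = 2/4 = 0.5, and the simulated mean lands within a couple of hundredths of it.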
For a bivariate uncorrelated Gaussian distribution we have If p(x,y){\displaystyle p(x,y)} is a function only of r, then θ{\displaystyle \theta } is uniformly distributed on [0,2π]{\displaystyle [0,2\pi ]} with density 1/2π{\displaystyle 1/2\pi }, so the problem reduces to finding the probability distribution of Z under the mapping We have, by conservation of probability and since dz/dθ=1/cos2⁡θ{\displaystyle dz/d\theta =1/\cos ^{2}\theta } and setting cos2⁡θ=11+(tan⁡θ)2=11+z2{\textstyle \cos ^{2}\theta ={\frac {1}{1+(\tan \theta )^{2}}}={\frac {1}{1+z^{2}}}} we get There is a spurious factor of 2 here. In fact, two values of θ{\displaystyle \theta } spaced by π{\displaystyle \pi } map onto the same value of z, the density is doubled, and the final result is When either of the two Normal distributions is non-central then the result for the distribution of the ratio is much more complicated and is given below in the succinct form presented by David Hinkley.[6] The trigonometric method for a ratio does however extend to radial distributions like bivariate normals or a bivariate Student t in which the density depends only on radius r=x2+y2{\textstyle r={\sqrt {x^{2}+y^{2}}}}. It does not extend to the ratio of two independent Student t distributions, which give the Cauchy ratio shown in a section below for one degree of freedom.
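The polar argument can itself be simulated: drawing θ uniformly on [0, 2π) and setting z = tan θ should reproduce the standard Cauchy CDF, since the two θ values spaced by π fold onto the same z. A minimal standard-library check:

```python
import math
import random

random.seed(8)

N = 200_000
# z = tan(theta) with theta uniform on [0, 2*pi)
z_vals = [math.tan(random.uniform(0.0, 2.0 * math.pi)) for _ in range(N)]

# Standard Cauchy CDF at z = 1 is 1/2 + arctan(1)/pi = 0.75.
frac_below_1 = sum(1 for z in z_vals if z <= 1.0) / N
print(f"P(Z <= 1) = {frac_below_1:.4f} (Cauchy value: 0.7500)")
```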
In the absence of correlation (cor⁡(X,Y)=0){\displaystyle (\operatorname {cor} (X,Y)=0)}, the probability density function of the ratio Z = X/Y of two normal variables X = N(μX, σX2) and Y = N(μY, σY2) is given exactly by the following expression, derived in several sources:[6] where The above expression becomes more complicated when the variables X and Y are correlated. If μx=μy=0{\displaystyle \mu _{x}=\mu _{y}=0} but σX≠σY{\displaystyle \sigma _{X}\neq \sigma _{Y}} and ρ≠0{\displaystyle \rho \neq 0}, the more general Cauchy distribution is obtained where ρ is the correlation coefficient between X and Y and The complex distribution has also been expressed with Kummer's confluent hypergeometric function or the Hermite function.[9] This was shown in Springer 1979 problem 4.28. A transformation to the log domain was suggested by Katz (1978) (see the binomial section below). Let the ratio be Take logs to get Since loge⁡(1+δ)=δ−δ22+δ33+⋯{\displaystyle \log _{e}(1+\delta )=\delta -{\frac {\delta ^{2}}{2}}+{\frac {\delta ^{3}}{3}}+\cdots } then asymptotically Alternatively, Geary (1930) suggested that has approximately a standard Gaussian distribution:[1] This transformation has been called the Geary–Hinkley transformation;[7] the approximation is good if Y is unlikely to assume negative values, essentially μy>3σy{\displaystyle \mu _{y}>3\sigma _{y}}. This is developed by Dale (Springer 1979 problem 4.28) and Hinkley 1969. Geary showed how the correlated ratio z{\displaystyle z} could be transformed into a near-Gaussian form and developed an approximation for t{\displaystyle t} dependent on the probability of negative denominator values x+μx<0{\displaystyle x+\mu _{x}<0} being vanishingly small. Fieller's later correlated ratio analysis is exact, but care is needed when combining modern math packages with the verbal conditions in the older literature.
Pham-Gia has exhaustively discussed these methods. Hinkley's correlated results are exact, but it is shown below that the correlated ratio condition can also be transformed into an uncorrelated one, so only the simplified Hinkley equations above are required, not the full correlated ratio version. Let the ratio be: in which x,y{\displaystyle x,y} are zero-mean correlated normal variables with variances σx2,σy2{\displaystyle \sigma _{x}^{2},\sigma _{y}^{2}} and X,Y{\displaystyle X,Y} have means μx,μy.{\displaystyle \mu _{x},\mu _{y}.} Write x′=x−ρyσx/σy{\displaystyle x'=x-\rho y\sigma _{x}/\sigma _{y}} such that x′,y{\displaystyle x',y} become uncorrelated and x′{\displaystyle x'} has standard deviation The ratio: is invariant under this transformation and retains the same pdf. The y{\displaystyle y} term in the numerator appears to be made separable by expanding: to get in which μx′=μx−ρμyσxσy{\textstyle \mu '_{x}=\mu _{x}-\rho \mu _{y}{\frac {\sigma _{x}}{\sigma _{y}}}} and z has now become a ratio of uncorrelated non-central normal samples with an invariant z-offset (this is not formally proven, though it appears to have been used by Geary). Finally, to be explicit, the pdf of the ratio z{\displaystyle z} for correlated variables is found by inputting the modified parameters σx′,μx′,σy,μy{\displaystyle \sigma _{x}',\mu _{x}',\sigma _{y},\mu _{y}} and ρ′=0{\displaystyle \rho '=0} into the Hinkley equation above, which returns the pdf for the correlated ratio with a constant offset −ρσxσy{\displaystyle -\rho {\frac {\sigma _{x}}{\sigma _{y}}}} on z{\displaystyle z}.
The figures above show an example of a positively correlated ratio with σx=σy=1,μx=0,μy=0.5,ρ=0.975{\displaystyle \sigma _{x}=\sigma _{y}=1,\mu _{x}=0,\mu _{y}=0.5,\rho =0.975} in which the shaded wedges represent the increment of area selected by given ratio x/y∈[r,r+δ]{\displaystyle x/y\in [r,r+\delta ]}, which accumulates probability where they overlap the distribution. The theoretical distribution, derived from the equations under discussion combined with Hinkley's equations, is highly consistent with a simulation result using 5,000 samples. In the top figure it is clear that for a ratio z=x/y≈1{\displaystyle z=x/y\approx 1} the wedge has almost bypassed the main distribution mass altogether, which explains the local minimum in the theoretical pdf pZ(x/y){\displaystyle p_{Z}(x/y)}. Conversely, as x/y{\displaystyle x/y} moves either toward or away from one, the wedge spans more of the central mass, accumulating a higher probability. The ratio of correlated zero-mean circularly symmetric complex normal distributed variables was determined by Baxley et al.[13] and has since been extended to the nonzero-mean and nonsymmetric case.[14] In the correlated zero-mean case, the joint distribution of x, y is where (⋅)H{\displaystyle (\cdot )^{H}} is an Hermitian transpose and The PDF of Z=X/Y{\displaystyle Z=X/Y} is found to be In the usual event that σx=σy{\displaystyle \sigma _{x}=\sigma _{y}} we get Further closed-form results for the CDF are also given. The graph shows the pdf of the ratio of two complex normal variables with a correlation coefficient of ρ=0.7exp⁡(iπ/4){\displaystyle \rho =0.7\exp(i\pi /4)}. The pdf peak occurs at roughly the complex conjugate of a scaled-down ρ{\displaystyle \rho }. The ratio of independent or correlated log-normals is log-normal.
This follows because, if X1{\displaystyle X_{1}} and X2{\displaystyle X_{2}} are log-normally distributed, then ln⁡(X1){\displaystyle \ln(X_{1})} and ln⁡(X2){\displaystyle \ln(X_{2})} are normally distributed. If they are independent or their logarithms follow a bivariate normal distribution, then the logarithm of their ratio is the difference of independent or correlated normally distributed random variables, which is normally distributed.[note 1] This is important for many applications requiring the ratio of random variables that must be positive, where the joint distribution of X1{\displaystyle X_{1}} and X2{\displaystyle X_{2}} is adequately approximated by a log-normal. This is a common result of the multiplicative central limit theorem, also known as Gibrat's law, when Xi{\displaystyle X_{i}} is the result of an accumulation of many small percentage changes and must be positive and approximately log-normally distributed.[15] With two independent random variables following a uniform distribution, e.g., the ratio distribution becomes If two independent random variables, X and Y, each follow a Cauchy distribution with median equal to zero and shape factor a{\displaystyle a} then the ratio distribution for the random variable Z=X/Y{\displaystyle Z=X/Y} is[16] This distribution does not depend on a{\displaystyle a}, and the result stated by Springer[8] (p. 158, Question 4.6) is not correct.
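The log-normal ratio result above is easy to verify empirically: the log of the ratio of two independent log-normal samples should be normal with mean μ1 − μ2 and variance σ1² + σ2². A small standard-library sketch with illustrative parameters:

```python
import math
import random

random.seed(3)

mu1, s1 = 0.5, 0.8   # parameters of ln(X1)
mu2, s2 = -0.2, 0.6  # parameters of ln(X2)
N = 100_000

# Log of the ratio of two independent log-normals.
logs = [math.log(random.lognormvariate(mu1, s1) / random.lognormvariate(mu2, s2))
        for _ in range(N)]

mean_log = sum(logs) / N
var_log = sum((v - mean_log)**2 for v in logs) / N
print(f"mean = {mean_log:.3f} (expect {mu1 - mu2:.3f}), "
      f"var = {var_log:.3f} (expect {s1**2 + s2**2:.3f})")
```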
The ratio distribution is similar to but not the same as the product distribution of the random variable W=XY{\displaystyle W=XY}: More generally, if two independent random variables X and Y each follow a Cauchy distribution with median equal to zero and shape factors a{\displaystyle a} and b{\displaystyle b} respectively, then: The result for the ratio distribution can be obtained from the product distribution by replacing b{\displaystyle b} with 1b.{\displaystyle {\frac {1}{b}}.} If X has a standard normal distribution and Y has a standard uniform distribution, then Z = X/Y has a distribution known as the slash distribution, with probability density function where φ(z) is the probability density function of the standard normal distribution.[17] Let G be a normal(0,1) distribution, Y and Z be chi-squared distributions with m and n degrees of freedom respectively, all independent, with fχ(x,k)=xk2−1e−x/22k/2Γ(k/2){\displaystyle f_{\chi }(x,k)={\frac {x^{{\frac {k}{2}}-1}e^{-x/2}}{2^{k/2}\Gamma (k/2)}}}.
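The slash density referenced above is, in its usual closed form, f(z) = [φ(0) − φ(z)]/z² for z ≠ 0, with limit φ(0)/2 at z = 0 (quoted here as an assumption, since the display is not reproduced). The sketch below checks it by comparing a Monte Carlo estimate of P(|Z| ≤ 1) against a numerical integral of that pdf.

```python
import math
import random

random.seed(4)

def slash_pdf(z):
    """Assumed slash pdf: (phi(0) - phi(z)) / z^2, limit phi(0)/2 at z = 0."""
    phi0 = 1.0 / math.sqrt(2.0 * math.pi)
    if abs(z) < 1e-8:
        return phi0 / 2.0
    phi_z = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
    return (phi0 - phi_z) / (z * z)

# Monte Carlo estimate of P(|Z| <= 1) for Z = X/U, X ~ N(0,1), U ~ Uniform(0,1).
N = 200_000
hits = sum(1 for _ in range(N)
           if abs(random.gauss(0, 1) / random.random()) <= 1.0)
p_mc = hits / N

# Numerical integral of the pdf over [-1, 1] (trapezoid rule).
steps = 2000
h = 2.0 / steps
xs = [-1.0 + i * h for i in range(steps + 1)]
p_int = h * (sum(slash_pdf(x) for x in xs) - 0.5 * (slash_pdf(-1.0) + slash_pdf(1.0)))

print(f"Monte Carlo P(|Z|<=1) = {p_mc:.4f}, integral of pdf = {p_int:.4f}")
```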
Then If V1∼χ′k12(λ){\displaystyle V_{1}\sim {\chi '}_{k_{1}}^{2}(\lambda )}, a noncentral chi-squared distribution, and V2∼χ′k22(0){\displaystyle V_{2}\sim {\chi '}_{k_{2}}^{2}(0)}, and V1{\displaystyle V_{1}} is independent of V2{\displaystyle V_{2}}, then mnFm,n′=β′(m2,n2) or Fm,n′=β′(m2,n2,1,nm){\displaystyle {\frac {m}{n}}F'_{m,n}=\beta '({\tfrac {m}{2}},{\tfrac {n}{2}}){\text{ or }}F'_{m,n}=\beta '({\tfrac {m}{2}},{\tfrac {n}{2}},1,{\tfrac {n}{m}})} defines Fm,n′{\displaystyle F'_{m,n}}, Fisher's F density distribution, the PDF of the ratio of two chi-squares with m, n degrees of freedom. The CDF of the Fisher density, found in F-tables, is defined in the beta prime distribution article. If we enter an F-test table with m = 3, n = 4 and 5% probability in the right tail, the critical value is found to be 6.59. This coincides with the integral For gamma distributions U and V with arbitrary shape parameters α1 and α2 and their scale parameters both set to unity, that is, U∼Γ(α1,1),V∼Γ(α2,1){\displaystyle U\sim \Gamma (\alpha _{1},1),V\sim \Gamma (\alpha _{2},1)}, where Γ(x;α,1)=xα−1e−xΓ(α){\displaystyle \Gamma (x;\alpha ,1)={\frac {x^{\alpha -1}e^{-x}}{\Gamma (\alpha )}}}, then If U∼Γ(x;α,1){\displaystyle U\sim \Gamma (x;\alpha ,1)}, then θU∼Γ(x;α,θ)=xα−1e−xθθαΓ(α){\displaystyle \theta U\sim \Gamma (x;\alpha ,\theta )={\frac {x^{\alpha -1}e^{-{\frac {x}{\theta }}}}{\theta ^{\alpha }\Gamma (\alpha )}}}. Note that here θ is a scale parameter, rather than a rate parameter.
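The quoted critical value can be checked by simulation: drawing F = (χ²m/m)/(χ²n/n) via gamma samples (a chi-squared with k degrees of freedom is Gamma(k/2, scale 2)), the upper-tail probability beyond 6.59 should be close to 5% for m = 3, n = 4. A standard-library sketch:

```python
import random

random.seed(5)

m, n = 3, 4
crit = 6.59         # tabulated 5% critical value quoted in the text
N = 400_000

def chi2(k):
    # A chi-squared sample with k dof is Gamma(k/2, scale=2).
    return random.gammavariate(k / 2, 2.0)

exceed = sum(1 for _ in range(N)
             if (chi2(m) / m) / (chi2(n) / n) > crit)
p_tail = exceed / N
print(f"upper-tail probability beyond {crit}: {p_tail:.4f}")  # close to 0.05
```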
If U∼Γ(α1,θ1),V∼Γ(α2,θ2){\displaystyle U\sim \Gamma (\alpha _{1},\theta _{1}),\;V\sim \Gamma (\alpha _{2},\theta _{2})}, then by rescaling the θ{\displaystyle \theta } parameter to unity we have Thus in which β′(α,β,p,q){\displaystyle \beta '(\alpha ,\beta ,p,q)} represents the generalised beta prime distribution. In the foregoing it is apparent that if X∼β′(α1,α2,1,1)≡β′(α1,α2){\displaystyle X\sim \beta '(\alpha _{1},\alpha _{2},1,1)\equiv \beta '(\alpha _{1},\alpha _{2})} then θX∼β′(α1,α2,1,θ){\displaystyle \theta X\sim \beta '(\alpha _{1},\alpha _{2},1,\theta )}. More explicitly, since if U∼Γ(α1,θ1),V∼Γ(α2,θ2){\displaystyle U\sim \Gamma (\alpha _{1},\theta _{1}),\;V\sim \Gamma (\alpha _{2},\theta _{2})} then where If X, Y are independent samples from the Rayleigh distribution fr(r)=(r/σ2)e−r2/2σ2,r≥0{\displaystyle f_{r}(r)=(r/\sigma ^{2})e^{-r^{2}/2\sigma ^{2}},\;\;r\geq 0}, the ratio Z = X/Y follows the distribution[18] and has cdf The Rayleigh distribution has scaling as its only parameter.
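The Rayleigh-ratio cdf referenced above is, in its usual closed form, F(z) = z²/(1 + z²) for z ≥ 0 (quoted here as an assumption, since the display is not reproduced). A quick simulation check, using the inverse-cdf sampler r = σ√(−2 ln U):

```python
import math
import random

random.seed(6)

def rayleigh(sigma=1.0):
    # Inverse-CDF sampling for the Rayleigh distribution.
    return sigma * math.sqrt(-2.0 * math.log(1.0 - random.random()))

N = 200_000
z_samples = [rayleigh() / rayleigh() for _ in range(N)]

def ratio_cdf(z):
    """Assumed closed-form cdf of the ratio of two iid Rayleigh variables."""
    return z * z / (1.0 + z * z)

for z in (0.5, 1.0, 2.0):
    emp = sum(1 for s in z_samples if s <= z) / N
    print(f"z={z}: empirical {emp:.4f} vs closed form {ratio_cdf(z):.4f}")
```

Note that the closed form is scale-free: the common σ cancels in the ratio, consistent with scaling being the Rayleigh distribution's only parameter.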
The distribution of Z=αX/Y{\displaystyle Z=\alpha X/Y} follows and has cdf The generalized gamma distribution is which includes the regular gamma, chi, chi-squared, exponential, Rayleigh, Nakagami and Weibull distributions involving fractional powers. Note that here a is a scale parameter, rather than a rate parameter; d is a shape parameter. In the ratios above, the Gamma samples U, V may have differing shape parameters α1,α2{\displaystyle \alpha _{1},\alpha _{2}} but must be drawn from the same distribution xα−1e−xθθαΓ(α){\displaystyle {\frac {x^{\alpha -1}e^{-{\frac {x}{\theta }}}}{\theta ^{\alpha }\Gamma (\alpha )}}} with equal scaling θ{\displaystyle \theta }. In situations where U and V are differently scaled, a variable transformation allows the modified random ratio pdf to be determined. Let X=UU+V=11+B{\displaystyle X={\frac {U}{U+V}}={\frac {1}{1+B}}} where U∼Γ(α1,θ),V∼Γ(α2,θ),θ{\displaystyle U\sim \Gamma (\alpha _{1},\theta ),V\sim \Gamma (\alpha _{2},\theta ),\theta } arbitrary and, from above, X∼Beta(α1,α2),B=V/U∼Beta′(α2,α1){\displaystyle X\sim Beta(\alpha _{1},\alpha _{2}),B=V/U\sim Beta'(\alpha _{2},\alpha _{1})}.
Rescale V arbitrarily, defining Y∼UU+φV=11+φB,0≤φ≤∞{\displaystyle Y\sim {\frac {U}{U+\varphi V}}={\frac {1}{1+\varphi B}},\;\;0\leq \varphi \leq \infty } We have B=1−XX{\displaystyle B={\frac {1-X}{X}}} and substitution into Y gives Y=Xφ+(1−φ)X,dY/dX=φ(φ+(1−φ)X)2{\displaystyle Y={\frac {X}{\varphi +(1-\varphi )X}},dY/dX={\frac {\varphi }{(\varphi +(1-\varphi )X)^{2}}}} Transforming X to Y gives fY(Y)=fX(X)|dY/dX|=β(X,α1,α2)φ/[φ+(1−φ)X]2{\displaystyle f_{Y}(Y)={\frac {f_{X}(X)}{|dY/dX|}}={\frac {\beta (X,\alpha _{1},\alpha _{2})}{\varphi /[\varphi +(1-\varphi )X]^{2}}}} Noting X=φY1−(1−φ)Y{\displaystyle X={\frac {\varphi Y}{1-(1-\varphi )Y}}} we finally have Thus, if U∼Γ(α1,θ1){\displaystyle U\sim \Gamma (\alpha _{1},\theta _{1})} and V∼Γ(α2,θ2){\displaystyle V\sim \Gamma (\alpha _{2},\theta _{2})} then Y=UU+V{\displaystyle Y={\frac {U}{U+V}}} is distributed as fY(Y,φ){\displaystyle f_{Y}(Y,\varphi )} with φ=θ2θ1{\displaystyle \varphi ={\frac {\theta _{2}}{\theta _{1}}}} The distribution of Y is limited here to the interval [0,1]. It can be generalized by scaling such that if Y∼fY(Y,φ){\displaystyle Y\sim f_{Y}(Y,\varphi )} then where fY(Y,φ,Θ)=φ/Θ[1−(1−φ)Y/Θ]2β(φY/Θ1−(1−φ)Y/Θ,α1,α2),0≤Y≤Θ{\displaystyle f_{Y}(Y,\varphi ,\Theta )={\frac {\varphi /\Theta }{[1-(1-\varphi )Y/\Theta ]^{2}}}\beta \left({\frac {\varphi Y/\Theta }{1-(1-\varphi )Y/\Theta }},\alpha _{1},\alpha _{2}\right),\;\;\;0\leq Y\leq \Theta } Though not ratio distributions of two variables, the following identities for one variable are useful: combining the latter two equations yields Corollary If U∼Γ(α,1),V∼Γ(β,1){\displaystyle U\sim \Gamma (\alpha ,1),V\sim \Gamma (\beta ,1)} then UV∼β′(α,β){\displaystyle {\frac {U}{V}}\sim \beta '(\alpha ,\beta )} and Further results can be found in the Inverse distribution article. This result was derived by Katz et al.[20] Suppose X∼Binomial(n,p1){\displaystyle X\sim {\text{Binomial}}(n,p_{1})} and Y∼Binomial(m,p2){\displaystyle Y\sim {\text{Binomial}}(m,p_{2})} and X{\displaystyle X}, Y{\displaystyle Y} are independent.
Let T=X/nY/m{\displaystyle T={\frac {X/n}{Y/m}}}. Then log⁡(T){\displaystyle \log(T)} is approximately normally distributed with mean log⁡(p1/p2){\displaystyle \log(p_{1}/p_{2})} and variance (1/p1)−1n+(1/p2)−1m{\displaystyle {\frac {(1/p_{1})-1}{n}}+{\frac {(1/p_{2})-1}{m}}}. The binomial ratio distribution is of significance in clinical trials: if the distribution of T is known as above, the probability of a given ratio arising purely by chance can be estimated, i.e. a false positive trial. A number of papers compare the robustness of different approximations for the binomial ratio.[citation needed] In the ratio of Poisson variables R = X/Y there is a problem that Y is zero with finite probability, so R is undefined. To counter this, consider the truncated, or censored, ratio R' = X/Y' where zero samples of Y are discounted. Moreover, in many medical-type surveys, there are systematic problems with the reliability of the zero samples of both X and Y and it may be good practice to ignore the zero samples anyway. The probability of a null Poisson sample being e−λ{\displaystyle e^{-\lambda }}, the generic pdf of a left-truncated Poisson distribution is which sums to unity.
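Katz's normal approximation for log(T) can be checked by simulation; the sketch below uses illustrative parameters chosen so that Y = 0 is vanishingly unlikely, and compares the sample mean and variance of log(T) with the stated approximations.

```python
import math
import random

random.seed(7)

n, p1 = 100, 0.3
m, p2 = 100, 0.5   # P(Y = 0) = 0.5**100, negligible
trials = 50_000

def binomial(k, p):
    # Simple Bernoulli-sum binomial sampler (standard library only).
    return sum(1 for _ in range(k) if random.random() < p)

log_t = []
for _ in range(trials):
    x, y = binomial(n, p1), binomial(m, p2)
    if x > 0 and y > 0:                  # guard the log; practically always true here
        log_t.append(math.log((x / n) / (y / m)))

mean_lt = sum(log_t) / len(log_t)
var_lt = sum((v - mean_lt)**2 for v in log_t) / len(log_t)
approx_mean = math.log(p1 / p2)                     # Katz mean approximation
approx_var = ((1 / p1) - 1) / n + ((1 / p2) - 1) / m  # Katz variance approximation
print(f"mean {mean_lt:.3f} vs {approx_mean:.3f}; var {var_lt:.4f} vs {approx_var:.4f}")
```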
Following Cohen,[21] for n independent trials, the multidimensional truncated pdf is and the log-likelihood becomes On differentiation we get and setting to zero gives the maximum likelihood estimate λ^ML{\displaystyle {\hat {\lambda }}_{ML}} Note that as λ^→0{\displaystyle {\hat {\lambda }}\to 0} then x¯→1{\displaystyle {\bar {x}}\to 1}, so the truncated maximum likelihood λ{\displaystyle \lambda } estimate, though correct for both truncated and untruncated distributions, gives a truncated mean x¯{\displaystyle {\bar {x}}} value which is highly biased relative to the untruncated one.
Nevertheless it appears that x¯{\displaystyle {\bar {x}}} is a sufficient statistic for λ{\displaystyle \lambda } since λ^ML{\displaystyle {\hat {\lambda }}_{ML}} depends on the data only through the sample mean x¯=1n∑i=1nxi{\displaystyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}} in the previous equation, which is consistent with the methodology of the conventional Poisson distribution. Absent any closed-form solutions, the following approximate reversion for truncated λ{\displaystyle \lambda } is valid over the whole range 0≤λ≤∞;1≤x¯≤∞{\displaystyle 0\leq \lambda \leq \infty ;\;1\leq {\bar {x}}\leq \infty }. which compares with the non-truncated version which is simply λ^=x¯{\displaystyle {\hat {\lambda }}={\bar {x}}}.
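The MLE condition for the zero-truncated Poisson mean is the standard relation x̄ = λ̂/(1 − e^{−λ̂}) (stated here as an assumption, since the display above is not reproduced). It has no closed-form inverse, but since the truncated mean is increasing in λ, a simple bisection recovers λ̂ from x̄:

```python
import math

def truncated_mean(lam):
    """Mean of a zero-truncated Poisson: lambda / (1 - exp(-lambda))."""
    return lam / (1.0 - math.exp(-lam))

def mle_lambda(xbar, tol=1e-10):
    """Invert xbar = lam / (1 - exp(-lam)) by bisection (requires xbar > 1)."""
    lo, hi = 1e-12, max(xbar, 1.0) + 50.0   # truncated mean is increasing in lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if truncated_mean(mid) < xbar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip: lambda = 2 gives xbar = 2/(1 - e^-2), and inverting recovers 2.
xbar = truncated_mean(2.0)
print(f"xbar = {xbar:.4f}, recovered lambda = {mle_lambda(xbar):.6f}")
```

This also makes the small-λ bias visible: truncated_mean(λ) → 1 as λ → 0, so a sample mean near 1 pins λ̂ near zero, matching the remark above.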
Taking the ratio R=λ^X/λ^Y{\displaystyle R={\hat {\lambda }}_{X}/{\hat {\lambda }}_{Y}} is a valid operation even though λ^X{\displaystyle {\hat {\lambda }}_{X}} may use a non-truncated model while λ^Y{\displaystyle {\hat {\lambda }}_{Y}} has a left-truncated one. The asymptotic large-nλ{\displaystyle n\lambda } variance of λ^{\displaystyle {\hat {\lambda }}} (and Cramér–Rao bound) is in which substituting L gives Then substituting x¯{\displaystyle {\bar {x}}} from the equation above, we get Cohen's variance estimate The variance of the point estimate of the mean λ{\displaystyle \lambda }, on the basis of n trials, decreases asymptotically to zero as n increases to infinity. For small λ{\displaystyle \lambda } it diverges from the truncated pdf variance in Springael,[22] for example, who quotes a variance of for n samples in the left-truncated pdf shown at the top of this section.
Cohen showed that the variance of the estimate relative to the variance of the pdf,Var⁡(λ^)/Var⁡(λ){\displaystyle \operatorname {Var} ({\hat {\lambda }})/\operatorname {Var} (\lambda )}, ranges from 1 for largeλ{\displaystyle \lambda }(100% efficient) up to 2 asλ{\displaystyle \lambda }approaches zero (50% efficient). These mean and variance parameter estimates, together with parallel estimates forX, can be applied to Normal or Binomial approximations for the Poisson ratio. Samples from trials may not be a good fit for the Poisson process; a further discussion of Poisson truncation is by Dietz and Bohning[23]and there is aZero-truncated Poisson distributionWikipedia entry. This distribution is the ratio of twoLaplace distributions.[24]LetXandYbe standard Laplace identically distributed random variables and letz=X/Y. Then the probability distribution ofzis Let the mean of theXandYbea. Then the standard double Lomax distribution is symmetric arounda. This distribution has an infinite mean and variance. IfZhas a standard double Lomax distribution, then 1/Zalso has a standard double Lomax distribution. The standard Lomax distribution is unimodal and has heavier tails than the Laplace distribution. For 0 <a< 1, thea-th moment exists. where Γ is thegamma function. Ratio distributions also appear inmultivariate analysis.[25]If the random matricesXandYfollow aWishart distributionthen the ratio of thedeterminants is proportional to the product of independentFrandom variables. In the case whereXandYare from independent standardizedWishart distributionsthen the ratio has aWilks' lambda distribution.
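The double Lomax ratio construction is easy to check by simulation. Working out the ratio density for two independent standard Laplace variables gives f(z) = 1/(2(1 + |z|)²), so P(|Z| ≤ 1) = 1/2; the sketch below (NumPy, with an illustrative sample size) verifies this and the stated 1/Z symmetry:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000
X = rng.laplace(0.0, 1.0, n)   # standard Laplace
Y = rng.laplace(0.0, 1.0, n)   # independent standard Laplace
Z = X / Y                      # standard double Lomax ratio

# f(z) = 1 / (2 (1 + |z|)^2)  =>  P(|Z| <= 1) = 1/2,
# and 1/Z has the same distribution as Z.
p_unit = np.mean(np.abs(Z) <= 1.0)
p_unit_inv = np.mean(np.abs(1.0 / Z) <= 1.0)
print(p_unit, p_unit_inv)
```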
In relation to Wishart matrix distributions ifS∼Wp(Σ,ν+1){\displaystyle S\sim W_{p}(\Sigma ,\nu +1)}is a sample Wishart matrix and vectorV{\displaystyle V}is arbitrary, but statistically independent, corollary 3.2.9 of Muirhead[26]states The discrepancy of one in the sample numbers arises from estimation of the sample mean when forming the sample covariance, a consequence ofCochran's theorem. Similarly which is Theorem 3.2.12 of Muirhead.[26]
https://en.wikipedia.org/wiki/Ratio_distribution#Poisson_and_truncated_Poisson_distributions
Ahurdle modelis a class ofstatistical modelswhere a random variable is modelled using two parts: the first models the probability of attaining the value 0, and the second models the probability of the non-zero values. The use of hurdle models is often motivated by an excess of zeroes in the data that is not sufficiently accounted for in more standard statistical models. In a hurdle model, a random variablexis modelled as wherepx≠0(x){\displaystyle p_{x\neq 0}(x)}is atruncated probability distributionfunction, truncated at 0. Hurdle models were introduced by John G. Cragg in 1971,[1]where the non-zero values ofxwere modelled using anormal model, and aprobitmodel was used to model the zeros. The probit part of the model was said to model the presence of "hurdles" that must be overcome for the values of x to attain non-zero values, hence the designationhurdle model. Hurdle models were later developed for count data, withPoisson,geometric,[2]andnegative binomial[3]models for the non-zero counts. Hurdle models differ fromzero-inflated modelsin that zero-inflated models model the zeros using a two-componentmixture model. With a mixture model, the probability of the variable being zero is determined by both the main distribution functionp(x=0){\displaystyle p(x=0)}and the mixture weightπ{\displaystyle \pi }. Specifically, a zero-inflated model for a random variablexis whereπ{\displaystyle \pi }is the mixture weight that determines the amount of zero-inflation. A zero-inflated model can only increase the probability ofPr(x=0){\displaystyle \Pr(x=0)}, but this is not a restriction in hurdle models.[4]
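As a concrete sketch of the two-part definition (assuming SciPy; the function and argument names pi0/lam are ours), a hurdle-Poisson pmf puts mass pi0 at zero and distributes the remaining 1 − pi0 over a zero-truncated Poisson:

```python
import numpy as np
from scipy.stats import poisson

def hurdle_poisson_pmf(x, pi0, lam):
    """P(X=0) = pi0; for x > 0, a zero-truncated Poisson(lam)
    scaled by the remaining mass (1 - pi0)."""
    x = np.asarray(x)
    trunc = poisson.pmf(x, lam) / (1.0 - poisson.pmf(0, lam))
    return np.where(x == 0, pi0, (1.0 - pi0) * trunc)

probs = hurdle_poisson_pmf(np.arange(60), pi0=0.3, lam=2.0)
print(probs[0], probs.sum())  # mass at zero is exactly pi0; total mass ≈ 1
```

Unlike a zero-inflated Poisson, pi0 here can be smaller than the Poisson's own zero probability, so the hurdle form can also model a zero deficit.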
https://en.wikipedia.org/wiki/Hurdle_model
Also known as the(Moran-)Gamma Process,[1]thegamma processis a random process studied inmathematics,statistics,probability theory, andstochastics. The gamma process is astochastic or random processconsisting of independently distributedgamma distributionswhereN(t){\displaystyle N(t)}represents the number of event occurrences from time 0 to timet{\displaystyle t}. Thegamma distributionhas shape parameterγ{\displaystyle \gamma }and rate parameterλ{\displaystyle \lambda }, often written asΓ(γ,λ){\displaystyle \Gamma (\gamma ,\lambda )}.[1]Bothγ{\displaystyle \gamma }andλ{\displaystyle \lambda }must be greater than 0. Thegamma processis often written asΓ(t,γ,λ){\displaystyle \Gamma (t,\gamma ,\lambda )}wheret{\displaystyle t}represents the time from 0. The process is a pure-jumpincreasingLévy processwith intensity measureν(x)=γx−1exp⁡(−λx),{\displaystyle \nu (x)=\gamma x^{-1}\exp(-\lambda x),}for all positivex{\displaystyle x}. Thus jumps whose size lies in the interval[x,x+dx){\displaystyle [x,x+dx)}occur as aPoisson processwith intensityν(x)dx.{\displaystyle \nu (x)\,dx.}The parameterγ{\displaystyle \gamma }controls the rate of jump arrivals and the scaling parameterλ{\displaystyle \lambda }inversely controls the jump size. It is assumed that the process starts from a value 0 att= 0 meaningN(0)=0{\displaystyle N(0)=0}. The gamma process is sometimes also parameterised in terms of the mean (μ{\displaystyle \mu }) and variance (v{\displaystyle v}) of the increase per unit time, which is equivalent toγ=μ2/v{\displaystyle \gamma =\mu ^{2}/v}andλ=μ/v{\displaystyle \lambda =\mu /v}. Thegamma processis a process which measures the number ofoccurrencesof independentgamma-distributed variablesover a span oftime. The image below displays two different gamma processes from time 0 until time 4. The red process has more occurrences in the timeframe compared to the blue process because itsshapeparameter is larger than that of the blue process.
We use theGamma functionin these properties, so the reader should distinguish betweenΓ(⋅){\displaystyle \Gamma (\cdot )}(the Gamma function) andΓ(t;γ,λ){\displaystyle \Gamma (t;\gamma ,\lambda )}(the Gamma process). We will sometimes abbreviate the process asXt≡Γ(t;γ,λ){\displaystyle X_{t}\equiv \Gamma (t;\gamma ,\lambda )}. Some basic properties of the gamma process are:[citation needed] Themarginal distributionof a gamma process at timet{\displaystyle t}is agamma distributionwith meanγt/λ{\displaystyle \gamma t/\lambda }and varianceγt/λ2.{\displaystyle \gamma t/\lambda ^{2}.} That is, the probability distributionf{\displaystyle f}of the random variableXt{\displaystyle X_{t}}is given by the densityf(x;t,γ,λ)=λγtΓ(γt)xγt−1e−λx.{\displaystyle f(x;t,\gamma ,\lambda )={\frac {\lambda ^{\gamma t}}{\Gamma (\gamma t)}}x^{\gamma t\,-\,1}e^{-\lambda x}.} Multiplication of a gamma process by a scalar constantα{\displaystyle \alpha }is again a gamma process with different mean increase rate. The sum of two independent gamma processes is again a gamma process. Correlationdisplays the statistical relationship between any two gamma processes. The gamma process is used as the distribution for random time change in thevariance gamma process.
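The marginal mean γt/λ and variance γt/λ² stated above can be checked by building paths from independent, stationary gamma-distributed increments (a NumPy sketch; the parameter values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma_, lam = 3.0, 2.0             # shape-rate parameters of the process
t, n_steps, n_paths = 4.0, 400, 20_000
dt = t / n_steps

# Increment over [u, u+dt) is Gamma(gamma*dt, rate lam), independent of the past.
increments = rng.gamma(shape=gamma_ * dt, scale=1.0 / lam, size=(n_paths, n_steps))
X_t = increments.sum(axis=1)       # each path's value at time t

print(X_t.mean(), X_t.var())       # ≈ gamma*t/lam = 6.0 and gamma*t/lam**2 = 3.0
```

Note that NumPy's gamma sampler takes a scale parameter, so the rate λ enters as scale = 1/λ.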
https://en.wikipedia.org/wiki/Gamma_process
Thegeneralized normal distribution(GND) orgeneralized Gaussian distribution(GGD) is either of two families ofparametriccontinuous probability distributionson therealline. Both families add ashape parameterto thenormal distribution. To distinguish the two families, they are referred to below as "symmetric" and "asymmetric"; however, this is not a standard nomenclature. The symmetric family has probability density functionβ2αΓ(1/β)e−(|x−μ|/α)β{\displaystyle {\frac {\beta }{2\alpha \Gamma (1/\beta )}}\;e^{-(|x-\mu |/\alpha )^{\beta }}}cumulative distribution function12+sign(x−μ)12Γ(1/β)γ(1/β,|x−μα|β){\displaystyle {\frac {1}{2}}+{\text{sign}}(x-\mu ){\frac {1}{2\Gamma (1/\beta )}}\gamma \left(1/\beta ,\left|{\frac {x-\mu }{\alpha }}\right|^{\beta }\right)}and quantile functionsign(p−0.5)[αβF−1(2|p−0.5|;1β)]1/β+μ{\displaystyle {\text{sign}}(p-0.5)\left[\alpha ^{\beta }F^{-1}\left(2|p-0.5|;{\frac {1}{\beta }}\right)\right]^{1/\beta }+\mu } Thesymmetric generalized normal distribution, also known as theexponential power distributionor thegeneralized error distribution, is a parametric family ofsymmetric distributions. It includes allnormalandLaplacedistributions, and as limiting cases it includes allcontinuous uniform distributionson bounded intervals of the real line. This family includes thenormal distributionwhenβ=2{\displaystyle \textstyle \beta =2}(with meanμ{\displaystyle \textstyle \mu }and varianceα22{\displaystyle \textstyle {\frac {\alpha ^{2}}{2}}}) and it includes theLaplace distributionwhenβ=1{\displaystyle \textstyle \beta =1}. Asβ→∞{\displaystyle \textstyle \beta \rightarrow \infty }, the densityconverges pointwiseto a uniform density on(μ−α,μ+α){\displaystyle \textstyle (\mu -\alpha ,\mu +\alpha )}. This family allows for tails that are either heavier than normal (whenβ<2{\displaystyle \beta <2}) or lighter than normal (whenβ>2{\displaystyle \beta >2}).
It is a useful way to parametrize a continuum of symmetric,platykurticdensities spanning from the normal (β=2{\displaystyle \textstyle \beta =2}) to the uniform density (β=∞{\displaystyle \textstyle \beta =\infty }), and a continuum of symmetric,leptokurticdensities spanning from the Laplace (β=1{\displaystyle \textstyle \beta =1}) to the normal density (β=2{\displaystyle \textstyle \beta =2}). The shape parameterβ{\displaystyle \beta }also controls thepeakednessin addition to the tails. Parameter estimation viamaximum likelihoodand themethod of momentshas been studied.[3]The estimates do not have a closed form and must be obtained numerically. Estimators that do not require numerical calculation have also been proposed.[4] The generalized normal log-likelihood function has infinitely many continuous derivatives (i.e. it belongs to the class C∞ofsmooth functions) only ifβ{\displaystyle \textstyle \beta }is a positive, even integer. Otherwise, the function has⌊β⌋{\displaystyle \textstyle \lfloor \beta \rfloor }continuous derivatives. As a result, the standard results for consistency and asymptotic normality ofmaximum likelihoodestimates ofβ{\displaystyle \beta }only apply whenβ≥2{\displaystyle \textstyle \beta \geq 2}. It is possible to fit the generalized normal distribution adopting an approximatemaximum likelihoodmethod.[5][6]Withμ{\displaystyle \mu }initially set to the sample first momentm1{\displaystyle m_{1}},β{\displaystyle \textstyle \beta }is estimated by using aNewton–Raphsoniterative procedure, starting from an initial guess ofβ=β0{\displaystyle \textstyle \beta =\textstyle \beta _{0}}, where is the first statisticalmomentof the absolute values andm2{\displaystyle m_{2}}is the second statisticalmoment. The iteration is where and and whereψ{\displaystyle \psi }andψ′{\displaystyle \psi '}are thedigamma functionandtrigamma function.
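In practice these numerical estimates can be obtained with off-the-shelf optimizers; for example, SciPy's `gennorm` implements the symmetric family (β as `beta`, μ as `loc`, α as `scale`), and its generic `fit` method performs the numerical maximum-likelihood estimation. A sketch on synthetic data (parameter values illustrative):

```python
import numpy as np
from scipy.stats import gennorm

# Draw synthetic data from a known symmetric generalized normal distribution.
data = gennorm.rvs(beta=1.5, loc=0.0, scale=2.0, size=20_000,
                   random_state=np.random.default_rng(1))

# Generic numerical MLE: no closed form exists, as noted above.
beta_hat, loc_hat, scale_hat = gennorm.fit(data)
print(beta_hat, loc_hat, scale_hat)
```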
Given a value forβ{\displaystyle \textstyle \beta }, it is possible to estimateμ{\displaystyle \mu }by finding the minimum of: Finallyα{\displaystyle \textstyle \alpha }is evaluated as Forβ≤1{\displaystyle \beta \leq 1}, the median is a more appropriate estimator ofμ{\displaystyle \mu }. Onceμ{\displaystyle \mu }is estimated,β{\displaystyle \beta }andα{\displaystyle \alpha }can be estimated as described above.[7] The symmetric generalized normal distribution has been used in modeling when the concentration of values around the mean and the tail behavior are of particular interest.[8][9]Other families of distributions can be used if the focus is on other deviations from normality. If theskewnessof the distribution is the main interest, theskew normalfamily or the asymmetric version of the generalized normal family discussed below can be used. If the tail behavior is the main interest, thestudent tfamily can be used, which approximates the normal distribution as the degrees of freedom grow to infinity. The t distribution, unlike this generalized normal distribution, obtains heavier than normal tails without acquiring acuspat the origin. It finds uses in plasma physics under the name of the Langdon distribution, resulting from inverse bremsstrahlung.[10] In alinear regressionproblem modeled asy∼GeneralizedNormal(X⋅θ,α,p){\displaystyle y\sim \mathrm {GeneralizedNormal} (X\cdot \theta ,\alpha ,p)}, theMLEwill be thearg⁡minθ‖X⋅θ−y‖p{\displaystyle \arg \min _{\theta }\|X\cdot \theta -y\|_{p}}where thep-normis used. LetXβ{\displaystyle X_{\beta }}be a zero mean generalized Gaussian distribution of shapeβ{\displaystyle \beta }and scaling parameterα{\displaystyle \alpha }. The moments ofXβ{\displaystyle X_{\beta }}exist and are finite for anykgreater than −1. For any non-negative integerk, the plain central moments are[2] From the viewpoint of theStable count distribution,β{\displaystyle \beta }can be regarded as Lévy's stability parameter.
This distribution can be decomposed to an integral of kernel density where the kernel is either aLaplace distributionor aGaussian distribution: whereNβ(ν){\displaystyle {\mathfrak {N}}_{\beta }(\nu )}is theStable count distributionandVβ(s){\displaystyle V_{\beta }(s)}is theStable vol distribution. The probability density function of the symmetric generalized normal distribution is apositive-definite functionforβ∈(0,2]{\displaystyle \beta \in (0,2]}.[11][12] The symmetric generalized Gaussian distribution is aninfinitely divisible distributionif and only ifβ∈(0,1]∪{2}{\displaystyle \beta \in (0,1]\cup \{2\}}.[11] The multivariate generalized normal distribution, i.e. the product ofn{\displaystyle n}exponential power distributions with the sameβ{\displaystyle \beta }andα{\displaystyle \alpha }parameters, is the only probability density that can be written in the formp(x)=g(‖x‖β){\displaystyle p(\mathbf {x} )=g(\|\mathbf {x} \|_{\beta })}and has independent marginals.[13]The results for the special case of theMultivariate normal distributionis originally attributed toMaxwell.[14] Theasymmetric generalized normal distributionis a family of continuous probability distributions in which the shape parameter can be used to introduce asymmetry or skewness.[15][16]When the shape parameter is zero, the normal distribution results. Positive values of the shape parameter yield left-skewed distributions bounded to the right, and negative values of the shape parameter yield right-skewed distributions bounded to the left. Only when the shape parameter is zero is the density function for this distribution positive over the whole real line: in this case the distribution is anormal distribution, otherwise the distributions are shifted and possibly reversedlog-normal distributions. Parameters can be estimated viamaximum likelihood estimationor the method of moments. The parameter estimates do not have a closed form, so numerical calculations must be used to compute the estimates. 
Since the sample space (the set of real numbers where the density is non-zero) depends on the true value of the parameter, some standard results about the performance of parameter estimates will not automatically apply when working with this family. The asymmetric generalized normal distribution can be used to model values that may be normally distributed, or that may be either right-skewed or left-skewed relative to the normal distribution. Theskew normal distributionis another distribution that is useful for modeling deviations from normality due to skew. Other distributions used to model skewed data include thegamma,lognormal, andWeibulldistributions, but these do not include the normal distributions as special cases. TheKullback-Leibler divergence(KLD) is a method used to compute the divergence or similarity between two probability density functions.[17] LetP(x){\displaystyle P(x)}andQ(x){\displaystyle Q(x)}be two generalized Gaussian distributions with parametersα1,β1,μ1{\displaystyle \alpha _{1},\beta _{1},\mu _{1}}andα2,β2,μ2{\displaystyle \alpha _{2},\beta _{2},\mu _{2}}subject to the constraintμ1=μ2=0{\displaystyle \mu _{1}=\mu _{2}=0}.[18]Then this divergence is given by: The two generalized normal families described here, like theskew normalfamily, are parametric families that extend the normal distribution by adding a shape parameter. Due to the central role of the normal distribution in probability and statistics, many distributions can be characterized in terms of their relationship to the normal distribution. For example, thelog-normal,folded normal, andinverse normaldistributions are defined as transformations of a normally-distributed value, but unlike the generalized normal and skew-normal families, these do not include the normal distributions as special cases. Indeed, in the limit, all distributions with finite variance are closely related to the normal distribution.
The Student-t distribution, theIrwin–Hall distributionand theBates distributionalso extend the normal distribution, and include the normal distribution in the limit. So there is no strong reason to prefer the "generalized" normal distribution of type 1, e.g. over a combination of Student-t and a normalized extended Irwin–Hall – this would include e.g. the triangular distribution (which cannot be modeled by the generalized Gaussian type 1). A symmetric distribution which can model both tail (long and short)andcenter behavior (like flat, triangular or Gaussian) completely independently could be derived e.g. by usingX= IH/chi. TheTukey g- and h-distributionalso allows for a deviation from normality, both through skewness and fat tails.[19]
https://en.wikipedia.org/wiki/Generalized_normal_distribution#Symmetric_version
In the mathematical theory of probability,multivariate Laplace distributionsare extensions of theLaplace distributionand theasymmetric Laplace distributionto multiple variables. Themarginal distributionsof symmetric multivariate Laplace distribution variables are Laplace distributions. The marginal distributions of asymmetric multivariate Laplace distribution variables are asymmetric Laplace distributions.[1] A typical characterization of the symmetric multivariate Laplace distribution has thecharacteristic function: whereμ{\displaystyle {\boldsymbol {\mu }}}is the vector ofmeansfor each variable andΣ{\displaystyle {\boldsymbol {\Sigma }}}is thecovariance matrix.[2] Unlike themultivariate normal distribution, even if the covariance matrix has zerocovarianceandcorrelation, the variables are not independent.[1]The symmetric multivariate Laplace distribution iselliptical.[1] Ifμ=0{\displaystyle {\boldsymbol {\mu }}=\mathbf {0} }, theprobability density function(pdf) for ak-dimensional multivariate Laplace distribution becomes: where: v=(2−k)/2{\displaystyle v=(2-k)/2}andKv{\displaystyle K_{v}}is themodified Bessel function of the second kind.[1] In the correlated bivariate case, i.e.,k= 2, withμ1=μ2=0{\displaystyle \mu _{1}=\mu _{2}=0}the pdf reduces to: where: σ1{\displaystyle \sigma _{1}}andσ2{\displaystyle \sigma _{2}}are thestandard deviationsofx1{\displaystyle x_{1}}andx2{\displaystyle x_{2}}, respectively, andρ{\displaystyle \rho }is thecorrelation coefficientofx1{\displaystyle x_{1}}andx2{\displaystyle x_{2}}.[1] For the uncorrelated bivariate Laplace case, that isk= 2,μ1=μ2=ρ=0{\displaystyle \mu _{1}=\mu _{2}=\rho =0}andσ1=σ2=1{\displaystyle \sigma _{1}=\sigma _{2}=1}, the pdf becomes: A typical characterization of the asymmetric multivariate Laplace distribution has thecharacteristic function: As with the symmetric multivariate Laplace distribution, the asymmetric multivariate Laplace distribution has meanμ{\displaystyle {\boldsymbol {\mu }}}, but the
covariance becomesΣ+μ′μ{\displaystyle {\boldsymbol {\Sigma }}+{\boldsymbol {\mu }}'{\boldsymbol {\mu }}}.[3]The asymmetric multivariate Laplace distribution is not elliptical unlessμ=0{\displaystyle {\boldsymbol {\mu }}=\mathbf {0} }, in which case the distribution reduces to the symmetric multivariate Laplace distribution withμ=0{\displaystyle {\boldsymbol {\mu }}=\mathbf {0} }.[1] Theprobability density function(pdf) for ak-dimensional asymmetric multivariate Laplace distribution is: where: v=(2−k)/2{\displaystyle v=(2-k)/2}andKv{\displaystyle K_{v}}is themodified Bessel function of the second kind.[1] The asymmetric Laplace distribution, including the special case ofμ=0{\displaystyle {\boldsymbol {\mu }}=\mathbf {0} }, is an example of ageometric stable distribution.[3]It represents the limiting distribution for a sum ofindependent, identically distributed random variableswith finite variance and covariance where the number of elements to be summed is itself an independent random variable distributed according to ageometric distribution.[1]Such geometric sums can arise in practical applications within biology, economics and insurance.[1]The distribution may also be applicable in broader situations to model multivariate data with heavier tails than a normal distribution but finitemoments.[1] The relationship between theexponential distributionand theLaplace distributionallows for a simple method for simulating bivariate asymmetric Laplace variables (including for the case ofμ=0{\displaystyle {\boldsymbol {\mu }}=\mathbf {0} }). Simulate a bivariate normal random variable vectorY{\displaystyle \mathbf {Y} }from a distribution withμ1=μ2=0{\displaystyle \mu _{1}=\mu _{2}=0}and covariance matrixΣ{\displaystyle {\boldsymbol {\Sigma }}}. 
Independently simulate an exponential random variableW{\displaystyle \mathbf {W} }from an Exp(1) distribution.X=WY+Wμ{\displaystyle \mathbf {X} ={\sqrt {W}}\mathbf {Y} +W{\boldsymbol {\mu }}}will be distributed (asymmetric) bivariate Laplace with meanμ{\displaystyle {\boldsymbol {\mu }}}and covariance matrixΣ{\displaystyle {\boldsymbol {\Sigma }}}.[1]
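The simulation recipe above can be sketched in NumPy as follows (illustrative parameter values); the sample mean should approach μ and the sample covariance should approach Σ plus the outer product of μ with itself:

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([0.5, -0.25])
Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])
n = 200_000

Y = rng.multivariate_normal(np.zeros(2), Sigma, size=n)  # step 1: N(0, Sigma)
W = rng.exponential(1.0, size=n)                         # step 2: Exp(1), independent
X = np.sqrt(W)[:, None] * Y + W[:, None] * mu            # asymmetric bivariate Laplace

print(X.mean(axis=0))           # ≈ mu
print(np.cov(X, rowvar=False))  # ≈ Sigma + outer(mu, mu)
```

The covariance identity follows from E[W] = Var(W) = 1 for an Exp(1) mixing variable.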
https://en.wikipedia.org/wiki/Multivariate_Laplace_distribution
Inmathematics— specifically, in the fields ofprobability theoryandinverse problems—Besov measuresand associatedBesov-distributed random variablesare generalisations of the notions ofGaussian measuresandrandom variables,Laplace distributions, and other classical distributions. They are particularly useful in the study ofinverse problemsonfunction spacesfor which a GaussianBayesian prioris an inappropriate model. The construction of a Besov measure is similar to the construction of aBesov space, hence the nomenclature. LetH{\displaystyle H}be aseparableHilbert spaceof functions defined on a domainD⊆Rd{\displaystyle D\subseteq \mathbb {R} ^{d}}, and let{en∣n∈N}{\displaystyle \{e_{n}\mid n\in \mathbb {N} \}}be acomplete orthonormal basisforH{\displaystyle H}. Lets∈R{\displaystyle s\in \mathbb {R} }and1≤p<∞{\displaystyle 1\leq p<\infty }. Foru=∑n∈Nunen∈H{\displaystyle u=\sum _{n\in \mathbb {N} }u_{n}e_{n}\in H}, define This defines anormon the subspace ofH{\displaystyle H}for which it is finite, and we letXs,p{\displaystyle X^{s,p}}denote thecompletionof this subspace with respect to this new norm. The motivation for these definitions arises from the fact that‖u‖Xs,p{\displaystyle \|u\|_{X^{s,p}}}is equivalent to the norm ofu{\displaystyle u}in the Besov spaceBpps(D){\displaystyle B_{pp}^{s}(D)}. Letκ>0{\displaystyle \kappa >0}be a scale parameter, similar to the precision (the reciprocal of thevariance) of a Gaussian measure. We now define aXs,p{\displaystyle X^{s,p}}-valued random variableu{\displaystyle u}by whereξ1,ξ2,…{\displaystyle \xi _{1},\xi _{2},\dots }are sampled independently and identically from the generalized Gaussian measure onR{\displaystyle \mathbb {R} }with Lebesgueprobability density functionproportional toexp⁡(−12|ξn|p){\displaystyle \exp(-{\tfrac {1}{2}}|\xi _{n}|^{p})}. 
Informally,u{\displaystyle u}can be said to have a probability density function proportional toexp⁡(−κ2‖u‖Xs,pp){\displaystyle \exp(-{\tfrac {\kappa }{2}}\|u\|_{X^{s,p}}^{p})}with respect to infinite-dimensional Lebesgue measure (which does not make rigorous sense), and is therefore a natural candidate for a "typical" element ofXs,p{\displaystyle X^{s,p}}(although this is not quite true; see below). It is easy to show that, whent≤s, theXt,pnorm is finite whenever theXs,pnorm is. Therefore, the spacesXs,pandXt,pare nested: This is consistent with the usual nesting of smoothness classes of functionsf:D→R: for example, theSobolev spaceH2(D) is a subspace ofH1(D) and in turn of theLebesgue spaceL2(D) =H0(D); theHölder spaceC1(D) of continuously differentiable functions is a subspace of the spaceC0(D) of continuous functions. It can be shown that the series defininguconverges inXt,palmost surelyfor anyt<s−d/p, and therefore gives a well-definedXt,p-valued random variable. Note thatXt,pis a larger space thanXs,p, and in fact the random variableuisalmost surelynotin the smaller spaceXs,p. The spaceXs,pis rather the Cameron-Martin space of this probability measure in the Gaussian casep= 2. The random variableuis said to beBesov distributedwith parameters (κ,s,p), and the inducedprobability measureis called aBesov measure.
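Purely as an illustrative sketch (not from the source): taking D = [0, 1], d = 1, the Fourier sine basis for {e_n}, and the commonly used diagonal construction in which the n-th coefficient decays like n^{−(s + 1/2 − 1/p)}, a truncated draw of u can be simulated as below. The function name, basis choice, and normalization are our assumptions; ξ_n is obtained by rescaling a `gennorm` draw so its density is proportional to exp(−|ξ|^p/2).

```python
import numpy as np
from scipy.stats import gennorm

def besov_sample(x, s=1.5, p=1.0, kappa=1.0, n_terms=256, rng=None):
    """One truncated draw of a (kappa, s, p) Besov-type random series on [0, 1]
    in the Fourier sine basis (illustrative construction; d = 1)."""
    rng = rng or np.random.default_rng()
    n = np.arange(1, n_terms + 1)
    # xi_n with density proportional to exp(-|xi|**p / 2): rescaled gennorm draw.
    xi = 2.0 ** (1.0 / p) * gennorm.rvs(beta=p, size=n_terms, random_state=rng)
    coeffs = kappa ** (-1.0 / p) * n ** (-(s + 0.5 - 1.0 / p)) * xi
    basis = np.sqrt(2.0) * np.sin(np.outer(n, np.pi * x))  # e_n(x), orthonormal in L2
    return coeffs @ basis

x = np.linspace(0.0, 1.0, 200)
u = besov_sample(x, rng=np.random.default_rng(3))
print(u.shape)
```

With p = 2 this reduces to a Gaussian random series; smaller p produces rougher, heavier-tailed coefficient draws.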
https://en.wikipedia.org/wiki/Besov_measure
Inmathematics, afunction spaceis asetoffunctionsbetween two fixed sets. Often, thedomainand/orcodomainwill have additionalstructurewhich is inherited by the function space. For example, the set of functions from any setXinto avector spacehas anaturalvector space structure given bypointwiseaddition and scalar multiplication. In other scenarios, the function space might inherit atopologicalormetricstructure, hence the name functionspace. LetFbe afieldand letXbe any set. The functionsX→Fcan be given the structure of a vector space overFwhere the operations are defined pointwise, that is, for anyf,g:X→F, anyxinX, and anycinF, define(f+g)(x)=f(x)+g(x)(c⋅f)(x)=c⋅f(x){\displaystyle {\begin{aligned}(f+g)(x)&=f(x)+g(x)\\(c\cdot f)(x)&=c\cdot f(x)\end{aligned}}}When the domainXhas additional structure, one might consider instead thesubset(orsubspace) of all such functions which respect that structure. For example, ifVand alsoXitself are vector spaces overF, the set oflinear mapsX→Vform a vector space overFwith pointwise operations (often denotedHom(X,V)). One such space is thedual spaceofX: the set oflinear functionalsX→Fwith addition and scalar multiplication defined pointwise. The cardinaldimensionof a function space with no extra structure can be found by theErdős–Kaplansky theorem. Function spaces appear in various areas of mathematics: Functional analysisis organized around adequate techniques to bring function spaces astopological vector spaceswithin reach of the ideas that would apply tonormed spacesof finite dimension. 
Here we use the real line as an example domain, but the spaces below exist on suitable open subsetsΩ⊆Rn{\displaystyle \Omega \subseteq \mathbb {R} ^{n}} Ifyis an element of the function spaceC(a,b){\displaystyle {\mathcal {C}}(a,b)}of allcontinuous functionsthat are defined on aclosed interval[a,b], thenorm‖y‖∞{\displaystyle \|y\|_{\infty }}defined onC(a,b){\displaystyle {\mathcal {C}}(a,b)}is the maximumabsolute valueofy(x)fora≤x≤b,[2]‖y‖∞≡maxa≤x≤b|y(x)|wherey∈C(a,b){\displaystyle \|y\|_{\infty }\equiv \max _{a\leq x\leq b}|y(x)|\qquad {\text{where}}\ \ y\in {\mathcal {C}}(a,b)} is called theuniform normorsupremum norm('sup norm').
https://en.wikipedia.org/wiki/Function_space
Inprobabilityandstatistics, acompound probability distribution(also known as amixture distributionorcontagious distribution) is theprobability distributionthat results from assuming that arandom variableis distributed according to some parametrized distribution, with (some of) the parameters of that distribution themselves being random variables. If the parameter is ascale parameter, the resulting mixture is also called ascale mixture. The compound distribution ("unconditional distribution") is the result ofmarginalizing(integrating) over thelatentrandom variable(s) representing the parameter(s) of the parametrized distribution ("conditional distribution"). Acompound probability distributionis the probability distribution that results from assuming that a random variableX{\displaystyle X}is distributed according to some parametrized distributionF{\displaystyle F}with an unknown parameterθ{\displaystyle \theta }that is again distributed according to some other distributionG{\displaystyle G}. The resulting distributionH{\displaystyle H}is said to be the distribution that results from compoundingF{\displaystyle F}withG{\displaystyle G}. The parameter's distributionG{\displaystyle G}is also called themixing distributionorlatent distribution. Technically, theunconditionaldistributionH{\displaystyle H}results frommarginalizingoverG{\displaystyle G}, i.e., from integrating out the unknown parameter(s)θ{\displaystyle \theta }. Itsprobability density functionis given by: The same formula applies analogously if some or all of the variables are vectors. From the above formula, one can see that a compound distribution essentially is a special case of amarginal distribution: Thejoint distributionofx{\displaystyle x}andθ{\displaystyle \theta }is given byp(x,θ)=p(x|θ)p(θ){\displaystyle p(x,\theta )=p(x|\theta )p(\theta )}, and the compound results as its marginal distribution:p(x)=∫p(x,θ)dθ{\displaystyle {\textstyle p(x)=\int p(x,\theta )\operatorname {d} \!\theta }}. 
If the domain ofθ{\displaystyle \theta }is discrete, then the distribution is again a special case of amixture distribution. The compound distributionH{\displaystyle H}will depend on the specific expression of each distribution, as well as which parameter ofF{\displaystyle F}is distributed according to the distributionG{\displaystyle G}, and the parameters ofH{\displaystyle H}will include any parameters ofG{\displaystyle G}that are not marginalized, or integrated, out. ThesupportofH{\displaystyle H}is the same as that ofF{\displaystyle F}, and if the latter is a two-parameter distribution parameterized with the mean and variance, some general properties exist. The compound distribution's first twomomentsare given by thelaw of total expectationand thelaw of total variance: EH⁡[X]=EG⁡[EF⁡[X|θ]]{\displaystyle \operatorname {E} _{H}[X]=\operatorname {E} _{G}{\bigl [}\operatorname {E} _{F}[X|\theta ]{\bigr ]}} VarH⁡(X)=EG⁡[VarF⁡(X|θ)]+VarG⁡(EF⁡[X|θ]){\displaystyle \operatorname {Var} _{H}(X)=\operatorname {E} _{G}{\bigl [}\operatorname {Var} _{F}(X|\theta ){\bigr ]}+\operatorname {Var} _{G}{\bigl (}\operatorname {E} _{F}[X|\theta ]{\bigr )}} If the mean ofF{\displaystyle F}is distributed asG{\displaystyle G}, which in turn has meanμ{\displaystyle \mu }and varianceσ2{\displaystyle \sigma ^{2}}, the expressions above implyEH⁡[X]=EG⁡[θ]=μ{\displaystyle \operatorname {E} _{H}[X]=\operatorname {E} _{G}[\theta ]=\mu }andVarH⁡(X)=VarF⁡(X|θ)+VarG⁡(θ)=τ2+σ2{\displaystyle \operatorname {Var} _{H}(X)=\operatorname {Var} _{F}(X|\theta )+\operatorname {Var} _{G}(\theta )=\tau ^{2}+\sigma ^{2}}, whereτ2{\displaystyle \tau ^{2}}is the variance ofF{\displaystyle F}.
LetF{\displaystyle F}andG{\displaystyle G}be probability distributions parameterized with mean and variance asx∼F(θ,τ2)θ∼G(μ,σ2){\displaystyle {\begin{aligned}x&\sim {\mathcal {F}}(\theta ,\tau ^{2})\\\theta &\sim {\mathcal {G}}(\mu ,\sigma ^{2})\end{aligned}}}then denoting the probability density functions asf(x|θ)=pF(x|θ){\displaystyle f(x|\theta )=p_{F}(x|\theta )}andg(θ)=pG(θ){\displaystyle g(\theta )=p_{G}(\theta )}respectively, andh(x){\displaystyle h(x)}being the probability density ofH{\displaystyle H}we haveEH⁡[X]=∫Fxh(x)dx=∫Fx∫Gf(x|θ)g(θ)dθdx=∫G∫Fxf(x|θ)dxg(θ)dθ=∫GEF⁡[X|θ]g(θ)dθ{\displaystyle {\begin{aligned}\operatorname {E} _{H}[X]=\int _{F}xh(x)dx&=\int _{F}x\int _{G}f(x|\theta )g(\theta )d\theta dx\\&=\int _{G}\int _{F}xf(x|\theta )dx\ g(\theta )d\theta \\&=\int _{G}\operatorname {E} _{F}[X|\theta ]g(\theta )d\theta \end{aligned}}}and we have from the parameterizationF{\displaystyle {\mathcal {F}}}andG{\displaystyle {\mathcal {G}}}thatEF⁡[X|θ]=∫Fxf(x|θ)dx=θEG⁡[θ]=∫Gθg(θ)dθ=μ{\displaystyle {\begin{aligned}\operatorname {E} _{F}[X|\theta ]&=\int _{F}xf(x|\theta )dx=\theta \\\operatorname {E} _{G}[\theta ]&=\int _{G}\theta g(\theta )d\theta =\mu \end{aligned}}}and therefore the mean of the compound distributionEH⁡[X]=μ{\displaystyle \operatorname {E} _{H}[X]=\mu }as per the expression for its first moment above.
The variance ofH{\displaystyle H}is given byEH⁡[X2]−(EH⁡[X])2{\displaystyle \operatorname {E} _{H}[X^{2}]-(\operatorname {E} _{H}[X])^{2}}, andEH⁡[X2]=∫Fx2h(x)dx=∫Fx2∫Gf(x|θ)g(θ)dθdx=∫Gg(θ)∫Fx2f(x|θ)dxdθ=∫Gg(θ)(τ2+θ2)dθ=τ2∫Gg(θ)dθ+∫Gg(θ)θ2dθ=τ2+(σ2+μ2),{\displaystyle {\begin{aligned}\operatorname {E} _{H}[X^{2}]=\int _{F}x^{2}h(x)dx&=\int _{F}x^{2}\int _{G}f(x|\theta )g(\theta )d\theta dx\\&=\int _{G}g(\theta )\int _{F}x^{2}f(x|\theta )dx\ d\theta \\&=\int _{G}g(\theta )(\tau ^{2}+\theta ^{2})d\theta \\&=\tau ^{2}\int _{G}g(\theta )d\theta +\int _{G}g(\theta )\theta ^{2}d\theta \\&=\tau ^{2}+(\sigma ^{2}+\mu ^{2}),\end{aligned}}}given the fact that∫Fx2f(x∣θ)dx=EF⁡[X2∣θ]=VarF⁡(X∣θ)+(EF⁡[X∣θ])2{\displaystyle \int _{F}x^{2}f(x\mid \theta )dx=\operatorname {E} _{F}[X^{2}\mid \theta ]=\operatorname {Var} _{F}(X\mid \theta )+(\operatorname {E} _{F}[X\mid \theta ])^{2}}and∫Gθ2g(θ)dθ=EG⁡[θ2]=VarG⁡(θ)+(EG⁡[θ])2{\displaystyle \int _{G}\theta ^{2}g(\theta )d\theta =\operatorname {E} _{G}[\theta ^{2}]=\operatorname {Var} _{G}(\theta )+(\operatorname {E} _{G}[\theta ])^{2}}. Finally we getVarH⁡(X)=EH⁡[X2]−(EH⁡[X])2=τ2+σ2{\displaystyle {\begin{aligned}\operatorname {Var} _{H}(X)&=\operatorname {E} _{H}[X^{2}]-(\operatorname {E} _{H}[X])^{2}\\&=\tau ^{2}+\sigma ^{2}\end{aligned}}} Distributions of commontest statisticsresult as compound distributions under their null hypothesis, for example inStudent's t-test(where the test statistic results as the ratio of anormaland achi-squaredrandom variable), or in theF-test(where the test statistic is the ratio of twochi-squaredrandom variables). Compound distributions are useful for modeling outcomes exhibitingoverdispersion, i.e., a greater amount of variability than would be expected under a certain model. For example, count data are commonly modeled using thePoisson distribution, whose variance is equal to its mean. 
The distribution may be generalized by allowing for variability in itsrate parameter, implemented via agamma distribution, which results in a marginalnegative binomial distribution. This distribution is similar in its shape to the Poisson distribution, but it allows for larger variances. Similarly, abinomial distributionmay be generalized to allow for additional variability by compounding it with abeta distributionfor its success probability parameter, which results in abeta-binomial distribution. Besides ubiquitous marginal distributions that may be seen as special cases of compound distributions, inBayesian inference, compound distributions arise when, in the notation above,Frepresents the distribution of future observations andGis theposterior distributionof the parameters ofF, given the information in a set of observed data. This gives aposterior predictive distribution. Correspondingly, for theprior predictive distribution,Fis the distribution of a new data point whileGis theprior distributionof the parameters. Convolutionof probability distributions (to derive the probability distribution of sums of random variables) may also be seen as a special case of compounding; here the sum's distribution essentially results from considering one summand as a randomlocation parameterfor the other summand.[1] Compound distributions derived fromexponential familydistributions often have a closed form. If analytical integration is not possible, numerical methods may be necessary. Compound distributions may relatively easily be investigated usingMonte Carlo methods, i.e., by generating random samples. It is often easy to generate random numbers from the distributionsp(θ){\displaystyle p(\theta )}as well asp(x|θ){\displaystyle p(x|\theta )}and then utilize these to performcollapsed Gibbs samplingto generate samples fromp(x){\displaystyle p(x)}. 
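The Poisson–gamma compound described above can be simulated exactly as the text suggests: draw a rate from the gamma distribution, then draw a count from a Poisson with that rate. A pure-Python sketch with hypothetical gamma parameters, using Knuth's simple multiplicative Poisson sampler; the marginal sample should show the negative binomial's signature overdispersion (variance exceeding the mean):

```python
import math
import random

def poisson_sample(lam):
    # Knuth's multiplicative method; adequate for the modest rates drawn here
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

random.seed(1)
shape, scale = 2.0, 1.5      # hypothetical gamma prior on the Poisson rate
n = 100_000

draws = []
for _ in range(n):
    lam = random.gammavariate(shape, scale)  # lambda ~ Gamma(shape, scale)
    draws.append(poisson_sample(lam))        # x | lambda ~ Poisson(lambda)

mean = sum(d for d in draws) / n
var = sum((d - mean) ** 2 for d in draws) / n
# The marginal is negative binomial: mean = shape*scale = 3.0 and
# variance = shape*scale*(1 + scale) = 7.5, so var > mean (overdispersion)
```

A plain Poisson sample with the same mean would instead have variance close to 3.0.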
A compound distribution may usually also be approximated to a sufficient degree by amixture distributionusing a finite number of mixture components, allowing one to derive approximate densities, distribution functions, etc.[1] Parameter estimation(maximum-likelihoodormaximum-a-posterioriestimation) within a compound distribution model may sometimes be simplified by utilizing theEM-algorithm.[2] The notion of "compound distribution" as used e.g. in the definition of aCompound Poisson distributionorCompound Poisson processis different from the definition found in this article. The meaning in this article corresponds to what is used in e.g.Bayesian hierarchical modeling. The special case for compound probability distributions where the parametrized distributionF{\displaystyle F}is thePoisson distributionis also called amixed Poisson distribution.
https://en.wikipedia.org/wiki/Scale_mixture
Infinance, thecapital asset pricing model(CAPM) is a model used to determine a theoretically appropriate requiredrate of returnof anasset, to make decisions about adding assets to awell-diversifiedportfolio. The model takes into account the asset's sensitivity to non-diversifiable risk (also known assystematic riskormarket risk), often represented by the quantitybeta(β) in the financial industry, as well as theexpected returnof the market and the expected return of a theoreticalrisk-free asset. CAPM assumes a particular form of utility functions (in which only first and secondmomentsmatter, that is, risk is measured by variance, for example a quadratic utility) or alternatively asset returns whose probability distributions are completely described by the first two moments (for example, thenormal distribution) and zero transaction costs (necessary for diversification to get rid of all idiosyncratic risk). Under these conditions, CAPM shows that the cost of equity capital is determined only by beta.[1][2]Despite failing numerous empirical tests[3]and the existence of more modern approaches to asset pricing and portfolio selection (such asarbitrage pricing theoryandMerton's portfolio problem), the CAPM still remains popular due to its simplicity and utility in a variety of situations. The CAPM was introduced byJack Treynor(1961, 1962),[4]William F. Sharpe(1964),John Lintner(1965a,b) andJan Mossin(1966) independently, building on the earlier work ofHarry Markowitzondiversificationandmodern portfolio theory. Sharpe, Markowitz andMerton Millerjointly received the 1990Nobel Memorial Prize in Economicsfor this contribution to the field offinancial economics.Fischer Black(1972) developed another version of CAPM, called Black CAPM or zero-beta CAPM, that does not assume the existence of a riskless asset. This version was more robust against empirical testing and was influential in the widespread adoption of the CAPM.
The CAPM is a model for pricing an individual security or portfolio. For individual securities, we make use of thesecurity market line(SML) and its relation to expected return andsystematic risk(beta) to show how the market must price individual securities in relation to their security risk class. The SML enables us to calculate thereward-to-risk ratiofor any security in relation to that of the overall market. Therefore, when the expected rate of return for any security is deflated by its beta coefficient, the reward-to-risk ratio for any individual security in the market is equal to the market reward-to-risk ratio, thus:E(Ri)−Rfβi=E(Rm)−Rf{\displaystyle {\frac {E(R_{i})-R_{f}}{\beta _{i}}}=E(R_{m})-R_{f}}The market reward-to-risk ratio is effectively the marketrisk premiumand by rearranging the above equation and solving forE(Ri){\displaystyle E(R_{i})}, we obtain the capital asset pricing model (CAPM):E(Ri)=Rf+βi(E(Rm)−Rf){\displaystyle E(R_{i})=R_{f}+\beta _{i}(E(R_{m})-R_{f})}where:E(Ri){\displaystyle E(R_{i})}is the expected return on the capital asset,Rf{\displaystyle R_{f}}is the risk-free rate of interest,βi{\displaystyle \beta _{i}}is the sensitivity of the expected excess asset returns to the expected excess market returns, andE(Rm){\displaystyle E(R_{m})}is the expected return of the market. Restated, in terms of risk premium, we find that:E(Ri)−Rf=βi(E(Rm)−Rf){\displaystyle E(R_{i})-R_{f}=\beta _{i}(E(R_{m})-R_{f})}which states that theindividual risk premiumequals themarket premiumtimesβ. Note 1: the expected market rate of return is usually estimated by measuring the arithmetic average of the historical returns on a market portfolio (e.g. S&P 500). Note 2: the risk free rate of return used for determining the risk premium is usually the arithmetic average of historical risk free rates of return and not the current risk free rate of return. For the full derivation seeModern portfolio theory. There has also been research into a mean-reverting beta often referred to as the adjusted beta, as well as the consumption beta. However, in empirical tests the traditional CAPM has been found to do as well as or outperform the modified beta models.[citation needed] TheSMLgraphs the results from the capital asset pricing model (CAPM) formula. Thex-axis represents the risk (beta), and they-axis represents the expected return. The market risk premium is determined from the slope of the SML.
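The CAPM relation E(Ri) = Rf + βi(E(Rm) − Rf) is a one-line computation. A minimal sketch; the rates below are illustrative, not market data:

```python
def capm_expected_return(risk_free, beta, market_return):
    """CAPM / SML: E(R_i) = R_f + beta_i * (E(R_m) - R_f)."""
    return risk_free + beta * (market_return - risk_free)

# Illustrative inputs: 3% risk-free rate, 8% expected market return, beta 1.2
r = capm_expected_return(0.03, 1.2, 0.08)
# individual risk premium = 1.2 * 0.05 = 0.06, so the required return is 9%
```

A beta above one scales up the market premium; a beta below one scales it down, exactly as the "individual risk premium equals the market premium times β" statement says.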
The relationship between β and required return is plotted on thesecurity market line(SML), which shows expected return as a function of β. The intercept is the nominal risk-free rate available for the market, while the slope is the market premium, E(Rm)−Rf. The security market line can be regarded as representing a single-factor model of the asset price, where β is the exposure to changes in the value of the Market. The equation of the SML is thus:SML:E(Ri)=Rf+βi(E(Rm)−Rf){\displaystyle \mathrm {SML} :E(R_{i})=R_{f}+\beta _{i}(E(R_{m})-R_{f})}It is a useful tool for determining if an asset being considered for a portfolio offers a reasonable expected return for its risk. Individual securities are plotted on the SML graph. If the security's expected return versus risk is plotted above the SML, it is undervalued since the investor can expect a greater return for the inherent risk. A security plotted below the SML is overvalued, since the investor would be accepting less return for the amount of risk assumed. Once the expected/required rate of returnE(Ri){\displaystyle E(R_{i})}is calculated using CAPM, we can compare this required rate of return to the asset's estimated rate of return over a specific investment horizon to determine whether it would be an appropriate investment. To make this comparison, you need an independent estimate of the return outlook for the security based on eitherfundamental or technical analysis techniques, including P/E, M/B etc. Assuming that the CAPM is correct, an asset is correctly priced when its estimated price is the same as the present value of future cash flows of the asset, discounted at the rate suggested by CAPM. If the estimated price is higher than the CAPM valuation, then the asset is overvalued (and undervalued when the estimated price is below the CAPM valuation).[5]When the asset does not lie on the SML, this could also suggest mis-pricing.
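The valuation rule described above (plots above the SML: undervalued; below: overvalued) amounts to comparing an analyst's independent return estimate with the CAPM-required return. A sketch with hypothetical figures:

```python
def sml_verdict(estimated_return, beta, risk_free, market_return):
    """Compare an independently estimated return with the SML-required return.
    Above the SML -> undervalued; below the SML -> overvalued."""
    required = risk_free + beta * (market_return - risk_free)
    if estimated_return > required:
        return "undervalued"
    if estimated_return < required:
        return "overvalued"
    return "fairly priced"

# A beta-0.8 security requires 0.03 + 0.8 * 0.05 = 7%; an analyst's 9%
# estimate plots above the SML, so the security looks undervalued
verdict = sml_verdict(0.09, 0.8, 0.03, 0.08)
```

The same comparison with a 5% estimate would place the security below the SML and flag it as overvalued.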
Since the expected return of the asset at timet{\displaystyle t}isE(Rt)=E(Pt+1)−PtPt{\displaystyle E(R_{t})={\frac {E(P_{t+1})-P_{t}}{P_{t}}}}, a higher expected return than what CAPM suggests indicates thatPt{\displaystyle P_{t}}is too low (the asset is currently undervalued), assuming that at timet+1{\displaystyle t+1}the asset returns to the CAPM suggested price.[6] The asset priceP0{\displaystyle P_{0}}using CAPM, sometimes called the certainty equivalent pricing formula, is a linear relationship given by wherePT{\displaystyle P_{T}}is the future price of the asset or portfolio.[5] The CAPM returns the asset-appropriaterequired returnor discount rate—i.e. the rate at which future cash flows produced by the asset should be discounted given that asset's relative riskiness. Betas exceeding one signify more than average "riskiness"; betas below one indicate lower than average. Thus, a more risky stock will have a higher beta and will be discounted at a higher rate; less sensitive stocks will have lower betas and be discounted at a lower rate. Given the accepted concaveutility function, the CAPM is consistent with intuition—investors (should) require a higher return for holding a more risky asset. Since beta reflects asset-specific sensitivity to non-diversifiable, i.e. marketrisk, the market as a whole, by definition, has a beta of one. Stock market indices are frequently used as local proxies for the market—and in that case (by definition) have a beta of one. An investor in a large, diversified portfolio (such as amutual funddesigned to track the total market), therefore, expects performance in line with the market. The risk of aportfoliocomprisessystematic risk, also known as undiversifiable risk, andunsystematic riskwhich is also known as idiosyncratic risk or diversifiable risk. Systematic risk refers to the risk common to all securities—i.e.market risk. Unsystematic risk is the risk associated with individual assets. 
Unsystematic risk can bediversifiedaway to smaller levels by including a greater number of assets in the portfolio (specific risks "average out"). The same is not possible for systematic risk within one market. Depending on the market, a portfolio of approximately 30–40 securities in developed markets such as the UK or US will render the portfolio sufficiently diversified such that risk exposure is limited to systematic risk only. This number may vary depending on the way securities are weighted in a portfolio which alters the overall risk contribution of each security. For example, market cap weighting means that securities of companies with larger market capitalization will take up a larger portion of the portfolio, making it effectively less diversified. In developing markets a larger number of securities is required for diversification, due to the higher asset volatilities. A rational investor should not take on any diversifiable risk, as only non-diversifiable risks are rewarded within the scope of this model. Therefore, the requiredreturnon an asset, that is, the return that compensates for risk taken, must be linked to its riskiness in a portfolio context—i.e. its contribution to overall portfolio riskiness—as opposed to its "stand alone risk". In the CAPM context, portfolio risk is represented by highervariancei.e. less predictability. In other words, the beta of the portfolio is the defining factor in rewarding the systematic exposure taken by an investor. The CAPM assumes that the risk-return profile of a portfolio can be optimized—an optimal portfolio displays the lowest possible level of risk for its level of return. Additionally, since each additional asset introduced into a portfolio further diversifies the portfolio, the optimal portfolio must comprise every asset, (assuming no trading costs) with each asset value-weighted to achieve the above (assuming that any asset isinfinitely divisible). 
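The "averaging out" of specific risk can be illustrated with the standard stylized model of an equally weighted portfolio of n assets that share a common variance and a common pairwise covariance. The numbers below are illustrative only:

```python
def equal_weight_portfolio_variance(n_assets, asset_var, avg_cov):
    """Stylized model: n equally weighted assets with common variance and
    common pairwise covariance. The idiosyncratic term asset_var / n
    vanishes as n grows; the variance floors at avg_cov, the
    undiversifiable (systematic) part."""
    return asset_var / n_assets + (1 - 1 / n_assets) * avg_cov

asset_var, avg_cov = 0.04, 0.01   # illustrative: 20% volatility, pairwise cov 0.01
variances = {n: equal_weight_portfolio_variance(n, asset_var, avg_cov)
             for n in (1, 10, 40, 1000)}
# by n = 40 the portfolio variance is already close to the 0.01 systematic floor
```

This is consistent with the rule of thumb above: a few dozen securities remove most diversifiable risk, and adding more assets cannot push the variance below the systematic component.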
All such optimal portfolios, i.e., one for each level of return, comprise the efficient frontier. Because the unsystematic risk isdiversifiable, the total risk of a portfolio can be viewed asbeta. All investors:[7] In their 2004 review, economistsEugene FamaandKenneth Frenchargue that "the failure of the CAPM in empirical tests implies that most applications of the model are invalid".[3] Roger Dayala[35]goes a step further and claims the CAPM is fundamentally flawed even within its own narrow assumption set, illustrating that the CAPM is either circular or irrational. The circularity refers to the price of total risk being a function of the price of covariance risk only (and vice versa). The irrationality refers to the CAPM's proclaimed ‘revision of prices’ resulting in identical discount rates for the (lower) amount of covariance risk only as for the (higher) amount of total risk (i.e. identical discount rates for different amounts of risk). Roger's findings have later been supported by Lai & Stohs.[36]
https://en.wikipedia.org/wiki/Capital_asset_pricing_model
Instatistics,ordinary least squares(OLS) is a type oflinear least squaresmethod for choosing the unknownparametersin alinear regressionmodel (with fixed level-one[clarification needed]effects of alinear functionof a set ofexplanatory variables) by the principle ofleast squares: minimizing the sum of the squares of the differences between the observeddependent variable(values of the variable being observed) in the inputdatasetand the output of the (linear) function of theindependent variable. Some sources consider OLS to be linear regression.[1] Geometrically, this is seen as the sum of the squared distances, parallel to the axis of the dependent variable, between each data point in the set and the corresponding point on the regression surface—the smaller the differences, the better the model fits the data. The resultingestimatorcan be expressed by a simple formula, especially in the case of asimple linear regression, in which there is a singleregressoron the right side of the regression equation. The OLS estimator isconsistentfor the level-one fixed effects when the regressors areexogenousand there is no perfectcollinearity(rank condition), consistent for the variance estimate of the residuals when regressors have finite fourth moments[2]and—by theGauss–Markov theorem—optimal in the class of linear unbiased estimatorswhen theerrorsarehomoscedasticandserially uncorrelated. Under these conditions, the method of OLS providesminimum-variance mean-unbiasedestimation when the errors have finitevariances. Under the additional assumption that the errors arenormally distributedwith zero mean, OLS is themaximum likelihood estimatorthat outperforms any non-linear unbiased estimator. Suppose the data consists ofn{\displaystyle n}observations{xi,yi}i=1n{\displaystyle \left\{\mathbf {x} _{i},y_{i}\right\}_{i=1}^{n}}.
Each observationi{\displaystyle i}includes a scalar responseyi{\displaystyle y_{i}}and a column vectorxi{\displaystyle \mathbf {x} _{i}}ofp{\displaystyle p}regressors, i.e.,xi=[xi1,xi2,…,xip]T{\displaystyle \mathbf {x} _{i}=\left[x_{i1},x_{i2},\dots ,x_{ip}\right]^{\operatorname {T} }}. In alinear regression model, the response variable,yi{\displaystyle y_{i}}, is a linear function of the regressors:yi=β1xi1+β2xi2+⋯+βpxip+εi{\displaystyle y_{i}=\beta _{1}x_{i1}+\beta _{2}x_{i2}+\cdots +\beta _{p}x_{ip}+\varepsilon _{i}}or invectorform,yi=xiTβ+εi,{\displaystyle y_{i}=\mathbf {x} _{i}^{\operatorname {T} }{\boldsymbol {\beta }}+\varepsilon _{i},}wherexi{\displaystyle \mathbf {x} _{i}}, as introduced previously, is a column vector of thei{\displaystyle i}-th observation of all the explanatory variables;β{\displaystyle {\boldsymbol {\beta }}}is ap×1{\displaystyle p\times 1}vector of unknown parameters; and the scalarεi{\displaystyle \varepsilon _{i}}represents unobserved random variables (errors) of thei{\displaystyle i}-th observation.εi{\displaystyle \varepsilon _{i}}accounts for the influences upon the responsesyi{\displaystyle y_{i}}from sources other than the explanatory variablesxi{\displaystyle \mathbf {x} _{i}}. This model can also be written in matrix notation asy=Xβ+ε,{\displaystyle \mathbf {y} =\mathbf {X} {\boldsymbol {\beta }}+{\boldsymbol {\varepsilon }},}wherey{\displaystyle \mathbf {y} }andε{\displaystyle {\boldsymbol {\varepsilon }}}aren×1{\displaystyle n\times 1}vectors of the response variables and the errors of then{\displaystyle n}observations, andX{\displaystyle \mathbf {X} }is ann×p{\displaystyle n\times p}matrix of regressors, also sometimes called thedesign matrix, whose rowi{\displaystyle i}isxiT{\displaystyle \mathbf {x} _{i}^{\operatorname {T} }}and contains thei{\displaystyle i}-th observations on all the explanatory variables. Typically, a constant term is included in the set of regressorsX{\displaystyle \mathbf {X} }, say, by takingxi1=1{\displaystyle x_{i1}=1}for alli=1,…,n{\displaystyle i=1,\dots ,n}. The coefficientβ1{\displaystyle \beta _{1}}corresponding to this regressor is called theintercept. Without the intercept, the fitted hyperplane is forced to pass through the origin atxi=0→{\displaystyle x_{i}={\vec {0}}}.
Regressors do not have to be independent for estimation to be consistent, e.g., they may be non-linearly dependent. Short of perfect multicollinearity, parameter estimates may still be consistent; however, as multicollinearity rises the standard error around such estimates increases and reduces the precision of such estimates. When there is perfect multicollinearity, it is no longer possible to obtain unique estimates for the coefficients to the related regressors; estimation for these parameters cannot converge (thus, it cannot be consistent). As a concrete example where regressors are non-linearly dependent yet estimation may still be consistent, we might suspect the response depends linearly both on a value and its square; in which case we would include one regressor whose value is just the square of another regressor. In that case, the model would bequadraticin the second regressor, but is nonetheless still considered alinearmodel because the modelisstill linear in the parameters (β{\displaystyle {\boldsymbol {\beta }}}). Consider anoverdetermined system ofn{\displaystyle n}linear equationsinp{\displaystyle p}unknowncoefficients,β1,β2,…,βp{\displaystyle \beta _{1},\beta _{2},\dots ,\beta _{p}}, withn>p{\displaystyle n>p}. This can be written inmatrixform asXβ=y,{\displaystyle \mathbf {X} {\boldsymbol {\beta }}=\mathbf {y} ,}where (Note: for a linear model as above, not all elements inX{\displaystyle \mathbf {X} }contain information on the data points. The first column is populated with ones,Xi1=1{\displaystyle X_{i1}=1}. Only the other columns contain actual data. So herep{\displaystyle p}is equal to the number of regressors plus one). Such a system usually has no exact solution, so the goal is instead to find the coefficientsβ{\displaystyle {\boldsymbol {\beta }}}which fit the equations "best", in the sense of solving thequadraticminimizationproblem where the objective functionS{\displaystyle S}is given byS(β)=∑i=1n|yi−∑j=1pXijβj|2=‖y−Xβ‖2.{\displaystyle S({\boldsymbol {\beta }})=\sum _{i=1}^{n}\left|y_{i}-\sum _{j=1}^{p}X_{ij}\beta _{j}\right|^{2}=\left\|\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }}\right\|^{2}.}A justification for choosing this criterion is given inPropertiesbelow.
This minimization problem has a unique solution, provided that thep{\displaystyle p}columns of the matrixX{\displaystyle \mathbf {X} }arelinearly independent, given by solving the so-callednormal equations:(XTX)β^=XTy.{\displaystyle \left(\mathbf {X} ^{\operatorname {T} }\mathbf {X} \right){\hat {\boldsymbol {\beta }}}=\mathbf {X} ^{\operatorname {T} }\mathbf {y} .}The matrixXTX{\displaystyle \mathbf {X} ^{\operatorname {T} }\mathbf {X} }is known as thenormal matrixorGram matrixand the matrixXTy{\displaystyle \mathbf {X} ^{\operatorname {T} }\mathbf {y} }is known as themoment matrixof regressand by regressors.[3]Finally,β^{\displaystyle {\hat {\boldsymbol {\beta }}}}is the coefficient vector of the least-squareshyperplane, expressed asy^=xTβ^.{\displaystyle {\hat {y}}=\mathbf {x} ^{\operatorname {T} }{\hat {\boldsymbol {\beta }}}.}Supposebis a "candidate" value for the parameter vectorβ. The quantityyi−xiTb, called theresidualfor thei-th observation, measures the vertical distance between the data point(xi,yi)and the hyperplaney=xTb, and thus assesses the degree of fit between the actual data and the model. Thesum of squared residuals(SSR) (also called theerror sum of squares(ESS) orresidual sum of squares(RSS))[4]is a measure of the overall model fit:S(b)=∑i=1n(yi−xiTb)2=(y−Xb)T(y−Xb),{\displaystyle S(b)=\sum _{i=1}^{n}(y_{i}-x_{i}^{\operatorname {T} }b)^{2}=(y-Xb)^{\operatorname {T} }(y-Xb),}whereTdenotes the matrixtranspose, and the rows ofX, denoting the values of all the independent variables associated with a particular value of the dependent variable, areXi= xiT. The value ofbwhich minimizes this sum is called theOLS estimator forβ. The functionS(b) is quadratic inbwith positive-definiteHessian, and therefore this function possesses a unique global minimum atb=β^{\displaystyle b={\hat {\beta }}}, which can be given by the explicit formula[5][proof]β^=(XTX)−1XTy.{\displaystyle {\hat {\beta }}=\left(\mathbf {X} ^{\operatorname {T} }\mathbf {X} \right)^{-1}\mathbf {X} ^{\operatorname {T} }\mathbf {y} .}The productN=XTXis aGram matrix, and its inverse,Q=N−1, is thecofactor matrixofβ,[6][7][8]closely related to itscovariance matrix,Cβ. The matrix (XTX)−1XT=QXTis called theMoore–Penrose pseudoinversematrix ofX. This formulation highlights the point that estimation can be carried out if, and only if, there is no perfectmulticollinearitybetween the explanatory variables (which would cause the Gram matrix to have no inverse).
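The normal equations can be solved with nothing more than Gaussian elimination. A minimal pure-Python sketch (a real implementation would use a numerical library, and preferably a QR factorization rather than forming XᵀX, which squares the condition number):

```python
def ols_normal_equations(X, y):
    """Solve the normal equations (X^T X) beta = X^T y by Gaussian
    elimination with partial pivoting (pure-Python sketch)."""
    p = len(X[0])
    # Build the normal (Gram) matrix X^T X and the moment vector X^T y
    A = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
    # Forward elimination with partial pivoting
    for col in range(p):
        pivot = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    return beta

# Exact data on the line y = 1 + 2x; the column of ones is the intercept
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
y = [1.0, 3.0, 5.0, 7.0]
beta = ols_normal_equations(X, y)   # approximately [1.0, 2.0]
```

Because the data lie exactly on a line, the recovered coefficients match the intercept and slope up to floating-point error.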
After we have estimatedβ, thefitted values(orpredicted values) from the regression will bey^=Xβ^=Py,{\displaystyle {\hat {y}}=X{\hat {\beta }}=Py,}whereP=X(XTX)−1XTis theprojection matrixonto the spaceVspanned by the columns ofX. This matrixPis also sometimes called thehat matrixbecause it "puts a hat" onto the variabley. Another matrix, closely related toP, is theannihilatormatrixM=In−P; this is a projection matrix onto the space orthogonal toV. Both matricesPandMaresymmetricandidempotent(meaning thatP2=PandM2=M), and relate to the data matrixXvia identitiesPX=XandMX= 0.[9]MatrixMcreates theresidualsfrom the regression:ε^=y−y^=My.{\displaystyle {\hat {\varepsilon }}=y-{\hat {y}}=My.}The variances of the predicted valuessy^i2{\displaystyle s_{{\hat {y}}_{i}}^{2}}are found in the main diagonal of thevariance-covariance matrixof predicted values:Var⁡(y^)=s2P,{\displaystyle \operatorname {Var} ({\hat {y}})=s^{2}P,}wherePis the projection matrix ands2is the sample variance.[10]The full matrix is very large; its diagonal elements can be calculated individually as:sy^i2=s2(Xi(XTX)−1XiT),{\displaystyle s_{{\hat {y}}_{i}}^{2}=s^{2}\left(X_{i}(\mathbf {X} ^{\operatorname {T} }\mathbf {X} )^{-1}X_{i}^{\operatorname {T} }\right),}whereXiis thei-th row of matrixX. Using these residuals we can estimate the sample variances2using thereduced chi-squaredstatistic:s2=ε^Tε^n−p.{\displaystyle s^{2}={\frac {{\hat {\varepsilon }}^{\operatorname {T} }{\hat {\varepsilon }}}{n-p}}.}The denominator,n−p, is thestatistical degrees of freedom. The first quantity,s2, is the OLS estimate forσ2, whereas the second,σ^2{\displaystyle \scriptstyle {\hat {\sigma }}^{2}}, is the MLE estimate forσ2. The two estimators are quite similar in large samples; the first estimator is alwaysunbiased, while the second estimator is biased but has a smallermean squared error. In practice,s2is used more often, since it is more convenient for hypothesis testing. The square root ofs2is called theregression standard error,[11]standard error of the regression,[12][13]orstandard error of the equation.[9] It is common to assess the goodness-of-fit of the OLS regression by comparing how much the initial variation in the sample can be reduced by regressing ontoX.
Thecoefficient of determinationR2is defined as a ratio of "explained" variance to the "total" variance of the dependent variabley, in the cases where the regression sum of squares equals the sum of squares of residuals:[14]R2=1−RSSTSS,{\displaystyle R^{2}=1-{\frac {\mathrm {RSS} }{\mathrm {TSS} }},}where TSS is thetotal sum of squaresfor the dependent variable,L=In−1nJn{\textstyle L=I_{n}-{\frac {1}{n}}J_{n}}, andJn{\textstyle J_{n}}is ann×nmatrix of ones. (L{\displaystyle L}is acentering matrixwhich is equivalent to regression on a constant; it simply subtracts the mean from a variable.) In order forR2to be meaningful, the matrixXof data on regressors must contain a column vector of ones to represent the constant whose coefficient is the regression intercept. In that case,R2will always be a number between 0 and 1, with values close to 1 indicating a good degree of fit. If the data matrixXcontains only two variables, a constant and a scalar regressorxi, then this is called the "simple regression model". This case is often considered in beginner statistics classes, as it provides much simpler formulas even suitable for manual calculation. The parameters are commonly denoted as(α,β):yi=α+βxi+εi.{\displaystyle y_{i}=\alpha +\beta x_{i}+\varepsilon _{i}.}The least squares estimates in this case are given by the simple formulasβ^=∑i=1n(xi−x¯)(yi−y¯)∑i=1n(xi−x¯)2,α^=y¯−β^x¯.{\displaystyle {\hat {\beta }}={\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}{\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}},\qquad {\hat {\alpha }}={\bar {y}}-{\hat {\beta }}{\bar {x}}.}In the previous section the least squares estimatorβ^{\displaystyle {\hat {\beta }}}was obtained as a value that minimizes the sum of squared residuals of the model. However it is also possible to derive the same estimator from other approaches. In all cases the formula for OLS estimator remains the same:^β= (XTX)−1XTy; the only difference is in how we interpret this result. For mathematicians, OLS is an approximate solution to an overdetermined system of linear equationsXβ≈y, whereβis the unknown. Assuming the system cannot be solved exactly (the number of equationsnis much larger than the number of unknownsp), we are looking for a solution that could provide the smallest discrepancy between the right- and left-hand sides.
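The simple-regression estimates and the R² definition fit in a few lines of Python. A sketch with made-up data points lying roughly on y = 2x:

```python
def simple_ols(x, y):
    """Simple regression y = alpha + beta*x via the closed-form formulas:
    beta_hat = cov(x, y) / var(x), alpha_hat = mean(y) - beta_hat * mean(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    alpha = my - beta * mx
    return alpha, beta

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]   # made-up data, roughly y = 2x
alpha, beta = simple_ols(x, y)

# R^2 = 1 - RSS/TSS (meaningful here because the model has an intercept)
my = sum(y) / len(y)
rss = sum((yi - (alpha + beta * xi)) ** 2 for xi, yi in zip(x, y))
tss = sum((yi - my) ** 2 for yi in y)
r2 = 1 - rss / tss
```

For this data the fitted slope is close to 2, the intercept close to 0, and R² is close to 1, reflecting a near-perfect linear fit.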
In other words, we are looking for the solution that satisfiesβ^=argminb⁡‖y−Xb‖,{\displaystyle {\hat {\boldsymbol {\beta }}}=\operatorname {arg\,min} _{\mathbf {b} }\lVert \mathbf {y} -\mathbf {X} \mathbf {b} \rVert ,}where‖·‖is the standardL2normin then-dimensionalEuclidean spaceRn. The predicted quantityXβis just a certain linear combination of the vectors of regressors. Thus, the residual vectory−Xβwill have the smallest length whenyisprojected orthogonallyonto thelinear subspacespannedby the columns ofX. The OLS estimatorβ^{\displaystyle {\hat {\beta }}}in this case can be interpreted as the coefficients ofvector decompositionof^y=Pyalong the basis ofX. In other words, the gradient equations at the minimum can be written as:XT(y−Xβ^)=0.{\displaystyle \mathbf {X} ^{\operatorname {T} }(\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}})=\mathbf {0} .}A geometrical interpretation of these equations is that the vector of residuals,y−Xβ^{\displaystyle \mathbf {y} -X{\hat {\boldsymbol {\beta }}}}is orthogonal to thecolumn spaceofX, since thedot product(y−Xβ^)⋅Xv{\displaystyle (\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}})\cdot \mathbf {X} \mathbf {v} }is equal to zero foranyconformal vector,v. This means thaty−Xβ^{\displaystyle \mathbf {y} -\mathbf {X} {\boldsymbol {\hat {\beta }}}}is the shortest of all possible vectorsy−Xβ{\displaystyle \mathbf {y} -\mathbf {X} {\boldsymbol {\beta }}}, that is, the variance of the residuals is the minimum possible. Introducingγ^{\displaystyle {\hat {\boldsymbol {\gamma }}}}and a matrixKwith the assumption that a matrix[XK]{\displaystyle [\mathbf {X} \ \mathbf {K} ]}is non-singular andKTX= 0 (cf.Orthogonal projections), the residual vector should satisfy the following equation: The equation and solution of linear least squares are thus described as follows: Another way of looking at it is to consider the regression line to be a weighted average of the lines passing through the combination of any two points in the dataset.[15]Although this way of calculation is more computationally expensive, it provides a better intuition on OLS.
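The orthogonality property (the residual vector is perpendicular to every column of X, so that Xᵀ(y − Xβ̂) = 0) is easy to verify numerically on a small simple-regression example with made-up data:

```python
# Verify that OLS residuals are orthogonal to each regressor column:
# the dot product with the column of ones and with the x column both vanish.
x = [0.0, 1.0, 2.0, 3.0]
y = [1.2, 2.9, 5.1, 6.8]   # made-up data
n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Closed-form simple-regression fit
beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        / sum((xi - mx) ** 2 for xi in x))
alpha = my - beta * mx
resid = [yi - (alpha + beta * xi) for xi, yi in zip(x, y)]

dot_ones = sum(resid)                             # residuals vs. the ones column
dot_x = sum(r * xi for r, xi in zip(resid, x))    # residuals vs. the x column
# both dot products are zero up to floating-point rounding
```

This is exactly the gradient condition of the minimization: any nonzero component of the residual inside the column space of X could be removed, contradicting minimality.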
The OLS estimator is identical to themaximum likelihood estimator(MLE) under the normality assumption for the error terms.[16][proof]This normality assumption has historical importance, as it provided the basis for the early work in linear regression analysis byYuleandPearson.[citation needed]From the properties of MLE, we can infer that the OLS estimator is asymptotically efficient (in the sense of attaining theCramér–Rao boundfor variance) if the normality assumption is satisfied.[17] In theiidcase the OLS estimator can also be viewed as aGMMestimator arising from the moment conditionsE⁡[xi(yi−xiTβ)]=0.{\displaystyle \operatorname {E} {\bigl [}\,x_{i}(y_{i}-x_{i}^{\operatorname {T} }\beta )\,{\bigr ]}=0.}These moment conditions state that the regressors should be uncorrelated with the errors. Sincexiis ap-vector, the number of moment conditions is equal to the dimension of the parameter vectorβ, and thus the system is exactly identified. This is the so-called classical GMM case, when the estimator does not depend on the choice of the weighting matrix. Note that the original strict exogeneity assumptionE[εi|xi] = 0implies a far richer set of moment conditions than stated above. In particular, this assumption implies that for any vector-functionƒ, the moment conditionE[ƒ(xi)·εi] = 0will hold. However it can be shown using theGauss–Markov theoremthat the optimal choice of functionƒis to takeƒ(x) =x, which results in the moment equation posted above. There are several different frameworks in which thelinear regression modelcan be cast in order to make the OLS technique applicable. Each of these settings produces the same formulas and same results. The only difference is the interpretation and the assumptions which have to be imposed in order for the method to give meaningful results. The choice of the applicable framework depends mostly on the nature of data in hand, and on the inference task which has to be performed. One of the lines of difference in interpretation is whether to treat the regressors as random variables, or as predefined constants.
In the first case (random design) the regressorsxiare random and sampled together with theyi's from somepopulation, as in anobservational study. This approach allows for more natural study of theasymptotic propertiesof the estimators. In the other interpretation (fixed design), the regressorsXare treated as known constants set by adesign, andyis sampled conditionally on the values ofXas in anexperiment. For practical purposes, this distinction is often unimportant, since estimation and inference is carried out while conditioning onX. All results stated in this article are within the random design framework. The classical model focuses on the "finite sample" estimation and inference, meaning that the number of observationsnis fixed. This contrasts with the other approaches, which study theasymptotic behaviorof OLS, and in which the behavior at a large number of samples is studied. In some applications, especially withcross-sectional data, an additional assumption is imposed — that all observations areindependent and identically distributed. This means that all observations are taken from arandom samplewhich makes all the assumptions listed earlier simpler and easier to interpret. Also this framework allows one to state asymptotic results (as the sample sizen→ ∞), which are understood as a theoretical possibility of fetching new independent observations from thedata generating process. The list of assumptions in this case is: First of all, under thestrict exogeneityassumption the OLS estimatorsβ^{\displaystyle \scriptstyle {\hat {\beta }}}ands2areunbiased, meaning that their expected values coincide with the true values of the parameters:[24][proof] If the strict exogeneity does not hold (as is the case with manytime seriesmodels, where exogeneity is assumed only with respect to the past shocks but not the future ones), then these estimators will be biased in finite samples. 
The variance-covariance matrix (or simply covariance matrix) of $\hat{\beta}$ is equal to[25]

In particular, the standard error of each coefficient $\hat{\beta}_j$ is equal to the square root of the $j$-th diagonal element of this matrix. The estimate of this standard error is obtained by replacing the unknown quantity $\sigma^2$ with its estimate $s^2$. Thus,

It can also be easily shown that the estimator $\hat{\beta}$ is uncorrelated with the residuals from the model:[25]

The Gauss–Markov theorem states that under the spherical errors assumption (that is, the errors should be uncorrelated and homoscedastic) the estimator $\hat{\beta}$ is efficient in the class of linear unbiased estimators. This is called the best linear unbiased estimator (BLUE). Efficiency should be understood as follows: if we were to find some other estimator $\tilde{\beta}$ which is linear in $y$ and unbiased, then[25]

in the sense that this is a nonnegative-definite matrix. This theorem establishes optimality only in the class of linear unbiased estimators, which is quite restrictive. Depending on the distribution of the error terms $\varepsilon$, other, non-linear estimators may provide better results than OLS.

The properties listed so far are all valid regardless of the underlying distribution of the error terms. However, if one is willing to assume that the normality assumption holds (that is, that $\varepsilon \sim N(0,\, \sigma^2 I_n)$), then additional properties of the OLS estimators can be stated.
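The covariance matrix and standard errors described above can be computed directly. A sketch on simulated data (the model and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([2.0, 3.0]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
residuals = y - X @ beta_hat

# Unbiased variance estimate s^2 = r'r / (n - p)
s2 = residuals @ residuals / (n - p)

# Estimated Var(beta_hat) = s^2 (X'X)^{-1}; standard errors are
# the square roots of its diagonal elements
cov = s2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))
```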
The estimator $\hat{\beta}$ is normally distributed, with mean and variance as given before:[26]

This estimator reaches the Cramér–Rao bound for the model, and thus is optimal in the class of all unbiased estimators.[17] Note that unlike the Gauss–Markov theorem, this result establishes optimality among both linear and non-linear estimators, but only in the case of normally distributed error terms.

The estimator $s^2$ will be proportional to the chi-squared distribution:[27]

The variance of this estimator is equal to $2\sigma^4/(n-p)$, which does not attain the Cramér–Rao bound of $2\sigma^4/n$. However it has been shown that there are no unbiased estimators of $\sigma^2$ with variance smaller than that of the estimator $s^2$.[28] If we are willing to allow biased estimators, and consider the class of estimators that are proportional to the sum of squared residuals (SSR) of the model, then the best (in the sense of the mean squared error) estimator in this class will be $\tilde{\sigma}^2 = \mathrm{SSR}/(n-p+2)$, which even beats the Cramér–Rao bound in the case when there is only one regressor ($p = 1$).[29]

Moreover, the estimators $\hat{\beta}$ and $s^2$ are independent,[30] a fact which comes in useful when constructing the t- and F-tests for the regression.

As was mentioned before, the estimator $\hat{\beta}$ is linear in $y$, meaning that it represents a linear combination of the dependent variables $y_i$. The weights in this linear combination are functions of the regressors $X$, and generally are unequal. The observations with high weights are called influential because they have a more pronounced effect on the value of the estimator. To analyze which observations are influential we remove a specific $j$-th observation and consider how much the estimated quantities are going to change (similarly to the jackknife method).
It can be shown that the change in the OLS estimator for $\beta$ will be equal to[31]

where $h_j = x_j^\mathrm{T}(X^\mathrm{T}X)^{-1}x_j$ is the $j$-th diagonal element of the hat matrix $P$, and $x_j$ is the vector of regressors corresponding to the $j$-th observation. Similarly, the change in the predicted value for the $j$-th observation resulting from omitting that observation from the dataset will be equal to[31]

From the properties of the hat matrix, $0 \le h_j \le 1$, and they sum up to $p$, so that on average $h_j \approx p/n$. These quantities $h_j$ are called the leverages, and observations with high $h_j$ are called leverage points.[32] Usually the observations with high leverage ought to be scrutinized more carefully, in case they are erroneous, or outliers, or in some other way atypical of the rest of the dataset.

Sometimes the variables and corresponding parameters in the regression can be logically split into two groups, so that the regression takes the form

where $X_1$ and $X_2$ have dimensions $n \times p_1$ and $n \times p_2$, and $\beta_1$, $\beta_2$ are $p_1 \times 1$ and $p_2 \times 1$ vectors, with $p_1 + p_2 = p$.

The Frisch–Waugh–Lovell theorem states that in this regression the residuals $\hat{\varepsilon}$ and the OLS estimate $\hat{\beta}_2$ will be numerically identical to the residuals and the OLS estimate for $\beta_2$ in the following regression:[33]

where $M_1$ is the annihilator matrix for the regressors $X_1$.

The theorem can be used to establish a number of theoretical results. For example, having a regression with a constant and another regressor is equivalent to subtracting the means from the dependent variable and the regressor and then running the regression for the de-meaned variables but without the constant term.

Suppose it is known that the coefficients in the regression satisfy a system of linear equations

where $Q$ is a $p \times q$ matrix of full rank, and $c$ is a $q \times 1$ vector of known constants, where $q < p$. In this case least squares estimation is equivalent to minimizing the sum of squared residuals of the model subject to the constraint $A$.
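The leverages $h_j$ and the closed-form effect of deleting one observation can be verified numerically against an actual refit. A sketch on simulated data (the leave-one-out update below is the standard formula $\hat{\beta}_{(j)} = \hat{\beta} - (X^\mathrm{T}X)^{-1}x_j\,\hat{\varepsilon}_j/(1-h_j)$):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.5, -1.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ (X.T @ y)
residuals = y - X @ beta_hat

# Leverages h_j: the diagonal of the hat matrix P = X (X'X)^{-1} X'
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)
print(np.isclose(h.sum(), p))  # True: leverages sum to p

# Closed-form change in beta_hat when observation j is deleted
j = 0
delta = XtX_inv @ X[j] * residuals[j] / (1 - h[j])
beta_loo = beta_hat - delta

# Compare with actually refitting without observation j
X_minus, y_minus = np.delete(X, j, axis=0), np.delete(y, j)
beta_refit = np.linalg.solve(X_minus.T @ X_minus, X_minus.T @ y_minus)
print(np.allclose(beta_loo, beta_refit))  # True
```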
The constrained least squares (CLS) estimator can be given by an explicit formula:[34]

This expression for the constrained estimator is valid as long as the matrix $X^\mathrm{T}X$ is invertible. It was assumed from the beginning of this article that this matrix is of full rank, and it was noted that when the rank condition fails, $\beta$ will not be identifiable. However it may happen that adding the restriction $A$ makes $\beta$ identifiable, in which case one would like to find the formula for the estimator. The estimator is equal to[35]

where $R$ is a $p \times (p-q)$ matrix such that the matrix $[Q\ R]$ is non-singular, and $R^\mathrm{T}Q = 0$. Such a matrix can always be found, although generally it is not unique. The second formula coincides with the first when $X^\mathrm{T}X$ is invertible.[35]

The least squares estimators are point estimates of the linear regression model parameters $\beta$. However, generally we also want to know how close those estimates might be to the true values of the parameters. In other words, we want to construct interval estimates.

Since we have not made any assumption about the distribution of the error term $\varepsilon_i$, it is impossible to infer the distribution of the estimators $\hat{\beta}$ and $\hat{\sigma}^2$. Nevertheless, we can apply the central limit theorem to derive their asymptotic properties as the sample size $n$ goes to infinity. While the sample size is necessarily finite, it is customary to assume that $n$ is "large enough" so that the true distribution of the OLS estimator is close to its asymptotic limit.
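The explicit CLS formula for a constraint of the form $Q^\mathrm{T}\beta = c$ can be sketched as follows (simulated data; the constraint chosen here, equality of the first two coefficients, is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, q = 100, 4, 1
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 2.0, 3.0, 4.0]) + rng.normal(size=n)

# Constraint Q' beta = c: force the first two coefficients to be equal
Q = np.array([[1.0], [-1.0], [0.0], [0.0]])   # p x q, full rank
c = np.zeros(q)

XtX_inv = np.linalg.inv(X.T @ X)
beta_ols = XtX_inv @ (X.T @ y)

# CLS: beta_c = beta_ols - (X'X)^{-1} Q [Q'(X'X)^{-1} Q]^{-1} (Q'beta_ols - c)
adj = XtX_inv @ Q @ np.linalg.solve(Q.T @ XtX_inv @ Q, Q.T @ beta_ols - c)
beta_cls = beta_ols - adj

print(np.allclose(Q.T @ beta_cls, c))  # True: the constraint holds exactly
```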
We can show that under the model assumptions, the least squares estimator for $\beta$ is consistent (that is, $\hat{\beta}$ converges in probability to $\beta$) and asymptotically normal:[proof]

where $Q_{xx} = X^\mathrm{T}X$.

Using this asymptotic distribution, approximate two-sided confidence intervals for the $j$-th component of the vector $\hat{\beta}$ can be constructed as

where $q$ denotes the quantile function of the standard normal distribution, and $[\cdot]_{jj}$ is the $j$-th diagonal element of a matrix.

Similarly, the least squares estimator for $\sigma^2$ is also consistent and asymptotically normal (provided that the fourth moment of $\varepsilon_i$ exists) with limiting distribution

These asymptotic distributions can be used for prediction, testing hypotheses, constructing other estimators, etc. As an example consider the problem of prediction. Suppose $x_0$ is some point within the domain of distribution of the regressors, and one wants to know what the response variable would have been at that point. The mean response is the quantity $y_0 = x_0^\mathrm{T}\beta$, whereas the predicted response is $\hat{y}_0 = x_0^\mathrm{T}\hat{\beta}$. Clearly the predicted response is a random variable; its distribution can be derived from that of $\hat{\beta}$:

which allows confidence intervals for the mean response $y_0$ to be constructed:

Two hypothesis tests are particularly widely used. First, one wants to know if the estimated regression equation is any better than simply predicting that all values of the response variable equal its sample mean (if not, it is said to have no explanatory power). The null hypothesis of no explanatory value of the estimated regression is tested using an F-test.
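The approximate two-sided confidence intervals described above take the form $\hat{\beta}_j \pm q(1-\alpha/2)\,\widehat{\mathrm{se}}_j$. A sketch on simulated data, using the hardcoded normal quantile $q(0.975) \approx 1.96$ to stay within NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 1.5])
y = X @ beta_true + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
s2 = resid @ resid / (n - 2)
se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))

# Approximate 95% confidence intervals: beta_hat_j +/- q(0.975) * se_j
z = 1.96  # standard normal quantile q(0.975)
lower, upper = beta_hat - z * se, beta_hat + z * se
```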
If the calculated F-value is found to be large enough to exceed its critical value for the pre-chosen level of significance, the null hypothesis is rejected and the alternative hypothesis, that the regression has explanatory power, is accepted. Otherwise, the null hypothesis of no explanatory power is accepted.

Second, for each explanatory variable of interest, one wants to know whether its estimated coefficient differs significantly from zero, that is, whether this particular explanatory variable in fact has explanatory power in predicting the response variable. Here the null hypothesis is that the true coefficient is zero. This hypothesis is tested by computing the coefficient's t-statistic, the ratio of the coefficient estimate to its standard error. If the t-statistic is larger than a predetermined value, the null hypothesis is rejected and the variable is found to have explanatory power, with its coefficient significantly different from zero. Otherwise, the null hypothesis of a zero value of the true coefficient is accepted.

In addition, the Chow test is used to test whether two subsamples both have the same underlying true coefficient values. The sum of squared residuals of regressions on each of the subsets and on the combined data set are compared by computing an F-statistic; if this exceeds a critical value, the null hypothesis of no difference between the two subsets is rejected; otherwise, it is accepted.

The following data set gives average heights and weights for American women aged 30–39 (source: The World Almanac and Book of Facts, 1975).

When only one dependent variable is being modeled, a scatterplot will suggest the form and strength of the relationship between the dependent variable and regressors. It might also reveal outliers, heteroscedasticity, and other aspects of the data that may complicate the interpretation of a fitted regression model. The scatterplot suggests that the relationship is strong and can be approximated as a quadratic function.
OLS can handle non-linear relationships by introducing the regressor HEIGHT². The regression model then becomes a multiple linear model:

The output from most popular statistical packages will look similar to this:

In this table:

Ordinary least squares analysis often includes the use of diagnostic plots designed to detect departures of the data from the assumed form of the model. These are some of the common diagnostic plots:

An important consideration when carrying out statistical inference using regression models is how the data were sampled. In this example, the data are averages rather than measurements on individual women. The fit of the model is very good, but this does not imply that the weight of an individual woman can be predicted with high accuracy based only on her height.

This example also demonstrates that coefficients determined by these calculations are sensitive to how the data are prepared. The heights were originally given rounded to the nearest inch and have been converted and rounded to the nearest centimetre. Since the conversion factor is one inch to 2.54 cm, this is not an exact conversion. The original inches can be recovered by Round(x/0.0254) and then re-converted to metric without rounding. If this is done the results become:

Using either of these equations to predict the weight of a 5' 6" (1.6764 m) woman gives similar values: 62.94 kg with rounding vs. 62.98 kg without rounding. Thus a seemingly small variation in the data has a real effect on the coefficients but a small effect on the results of the equation. While this may look innocuous in the middle of the data range, it could become significant at the extremes or in the case where the fitted model is used to project outside the data range (extrapolation).

This highlights a common error: this example is an abuse of OLS, which inherently requires that the errors in the independent variable (in this case height) are zero or at least negligible.
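The quadratic height–weight fit can be reproduced in a few lines. The 15 observations below are the averaged 1975 World Almanac values this example is based on (the table itself was not included in this extract, so treat the numbers as a reconstruction):

```python
import numpy as np

# Average heights (m) and weights (kg) for American women aged 30-39
height = np.array([1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
                   1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83])
weight = np.array([52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
                   63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46])

# Regressors 1, HEIGHT, HEIGHT^2 turn the quadratic fit into a linear model
X = np.column_stack([np.ones_like(height), height, height**2])
beta, *_ = np.linalg.lstsq(X, weight, rcond=None)

# Predicted weight of a 5'6" (1.6764 m) woman
pred = beta @ np.array([1.0, 1.6764, 1.6764**2])
print(pred)  # about 62.9 kg, matching the figure quoted in the text
```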
The initial rounding to the nearest inch plus any actual measurement errors constitute a finite and non-negligible error. As a result, the fitted parameters are not the best estimates they are presumed to be. Though not totally spurious, the error in the estimation will depend upon the relative size of the $x$ and $y$ errors.

We can use the least squares mechanism to figure out the equation of a two-body orbit in polar base coordinates. The equation typically used is $r(\theta) = \frac{p}{1 - e\cos(\theta)}$, where $r(\theta)$ is the distance of the object from one of the bodies. In the equation the parameters $p$ and $e$ determine the path of the orbit. We have measured the following data.

We need to find the least-squares approximation of $e$ and $p$ for the given data. First we need to represent $e$ and $p$ in a linear form, so we rewrite the equation as $\frac{1}{r(\theta)} = \frac{1}{p} - \frac{e}{p}\cos(\theta)$. Furthermore, one could fit for apsides by expanding $\cos(\theta)$ with an extra parameter as $\cos(\theta - \theta_0) = \cos(\theta)\cos(\theta_0) + \sin(\theta)\sin(\theta_0)$, which is linear in both $\cos(\theta)$ and the extra basis function $\sin(\theta)$, and can be used to extract $\tan\theta_0 = \sin(\theta_0)/\cos(\theta_0)$.
We use the original two-parameter form to represent our observational data as

$A^\mathrm{T}A\binom{x}{y} = A^\mathrm{T}b$

where $x$ is $\frac{1}{p}$, $y$ is $\frac{e}{p}$, the first column of $A$ holds the coefficient of $\frac{1}{p}$, the second column holds the coefficient of $\frac{e}{p}$, and $b$ holds the values of the respective $\frac{1}{r(\theta)}$, so

$A = \begin{bmatrix} 1 & -0.731354 \\ 1 & -0.707107 \\ 1 & -0.615661 \\ 1 & \ 0.052336 \\ 1 & 0.309017 \\ 1 & 0.438371 \end{bmatrix}$ and $b = \begin{bmatrix} 0.21220 \\ 0.21958 \\ 0.24741 \\ 0.45071 \\ 0.52883 \\ 0.56820 \end{bmatrix}.$

On solving we get $\binom{x}{y} = \binom{0.43478}{0.30435}$,

so $p = \frac{1}{x} = 2.3000$ and $e = p \cdot y = 0.70001$.
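The normal-equation solve above can be reproduced directly from the stated $A$ and $b$:

```python
import numpy as np

# Model: 1/r(theta) = 1/p - (e/p) cos(theta), linear in x = 1/p and y = e/p
A = np.array([[1, -0.731354],
              [1, -0.707107],
              [1, -0.615661],
              [1,  0.052336],
              [1,  0.309017],
              [1,  0.438371]])
b = np.array([0.21220, 0.21958, 0.24741, 0.45071, 0.52883, 0.56820])

# Solve the normal equations A'A [x, y]' = A'b
x, y = np.linalg.solve(A.T @ A, A.T @ b)
p_orbit, e_orbit = 1 / x, y / x   # p = 1/x, e = p*y
print(round(p_orbit, 4), round(e_orbit, 4))  # about 2.3 and 0.7
```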
https://en.wikipedia.org/wiki/Ordinary_least_squares#Estimation
Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it approximates. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible.

Numerical linear algebra aims to solve problems of continuous mathematics using finite precision computers, so its applications to the natural and social sciences are as vast as the applications of continuous mathematics. It is often a fundamental part of engineering and computational science problems, such as image and signal processing, telecommunication, computational finance, materials science simulations, structural biology, data mining, bioinformatics, and fluid dynamics. Matrix methods are particularly used in finite difference methods, finite element methods, and the modeling of differential equations.

Noting the broad applications of numerical linear algebra, Lloyd N. Trefethen and David Bau, III argue that it is "as fundamental to the mathematical sciences as calculus and differential equations",[1]: x even though it is a comparatively small field.[2] Because many properties of matrices and vectors also apply to functions and operators, numerical linear algebra can also be viewed as a type of functional analysis which has a particular emphasis on practical algorithms.[1]: ix

Common problems in numerical linear algebra include obtaining matrix decompositions like the singular value decomposition, the QR factorization, the LU factorization, or the eigendecomposition, which can then be used to answer common linear algebraic problems like solving linear systems of equations, locating eigenvalues, or least squares optimisation. Numerical linear algebra's central concern with developing algorithms that do not introduce errors when applied to real data on a finite precision computer is often achieved by iterative methods rather than direct ones.

Numerical linear algebra was developed by computer pioneers like John von Neumann, Alan Turing, James H. Wilkinson, Alston Scott Householder, George Forsythe, and Heinz Rutishauser, in order to apply the earliest computers to problems in continuous mathematics, such as ballistics problems and the solutions to systems of partial differential equations.[2] The first serious attempt to minimize computer error in the application of algorithms to real data was John von Neumann and Herman Goldstine's work in 1947.[3] The field has grown as technology has increasingly enabled researchers to solve complex problems on extremely large high-precision matrices, and some numerical algorithms have grown in prominence as technologies like parallel computing have made them practical approaches to scientific problems.[2]

For many problems in applied linear algebra, it is useful to adopt the perspective of a matrix as a concatenation of column vectors.
For example, when solving the linear system $x = A^{-1}b$, rather than understanding $x$ as the product of $A^{-1}$ with $b$, it is helpful to think of $x$ as the vector of coefficients in the linear expansion of $b$ in the basis formed by the columns of $A$.[1]: 8 Thinking of matrices as a concatenation of columns is also a practical approach for the purposes of matrix algorithms. This is because matrix algorithms frequently contain two nested loops: one over the columns of a matrix $A$, and another over the rows of $A$. For example, for matrices $A^{m \times n}$ and vectors $x^{n \times 1}$ and $y^{m \times 1}$, we could use the column partitioning perspective to compute $y := Ax + y$ as

The singular value decomposition of a matrix $A^{m \times n}$ is $A = U\Sigma V^{\ast}$ where $U$ and $V$ are unitary, and $\Sigma$ is diagonal. The diagonal entries of $\Sigma$ are called the singular values of $A$. Because singular values are the square roots of the eigenvalues of $AA^{\ast}$, there is a tight connection between the singular value decomposition and eigenvalue decompositions. This means that most methods for computing the singular value decomposition are similar to eigenvalue methods;[1]: 36 perhaps the most common method involves Householder procedures.[1]: 253

The QR factorization of a matrix $A^{m \times n}$ is a matrix $Q^{m \times m}$ and a matrix $R^{m \times n}$ so that $A = QR$, where $Q$ is orthogonal and $R$ is upper triangular.[1]: 50 [4]: 223 The two main algorithms for computing QR factorizations are the Gram–Schmidt process and the Householder transformation. The QR factorization is often used to solve linear least-squares problems, and eigenvalue problems (by way of the iterative QR algorithm).

An LU factorization of a matrix $A$ consists of a lower triangular matrix $L$ and an upper triangular matrix $U$ so that $A = LU$.
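The column-partitioned computation of $y := Ax + y$ mentioned above can be sketched as a loop that accumulates one scaled column of $A$ at a time (small illustrative values):

```python
import numpy as np

def matvec_by_columns(A, x, y):
    """Accumulate y := A x + y, one column of A at a time."""
    for j in range(A.shape[1]):
        y = y + x[j] * A[:, j]   # add x_j times the j-th column of A
    return y

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
x = np.array([10.0, 1.0])
y0 = np.zeros(3)
print(matvec_by_columns(A, x, y0))  # same as A @ x: [12. 34. 56.]
```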
The matrix $U$ is found by an upper triangularization procedure which involves left-multiplying $A$ by a series of matrices $M_1, \ldots, M_{n-1}$ to form the product $M_{n-1}\cdots M_1 A = U$, so that equivalently $L = M_1^{-1}\cdots M_{n-1}^{-1}$.[1]: 147 [4]: 96

The eigenvalue decomposition of a matrix $A^{m \times m}$ is $A = X\Lambda X^{-1}$, where the columns of $X$ are the eigenvectors of $A$, and $\Lambda$ is a diagonal matrix whose diagonal entries are the corresponding eigenvalues of $A$.[1]: 33 There is no direct method for finding the eigenvalue decomposition of an arbitrary matrix. Because it is not possible to write a program that finds the exact roots of an arbitrary polynomial in finite time, any general eigenvalue solver must necessarily be iterative.[1]: 192

From the numerical linear algebra perspective, Gaussian elimination is a procedure for factoring a matrix $A$ into its $LU$ factorization, which Gaussian elimination accomplishes by left-multiplying $A$ by a succession of matrices $L_{m-1}\cdots L_2 L_1 A = U$ until $U$ is upper triangular and $L$ is lower triangular, where $L \equiv L_1^{-1}L_2^{-1}\cdots L_{m-1}^{-1}$.[1]: 148 Naive programs for Gaussian elimination are notoriously highly unstable, and produce huge errors when applied to matrices with many significant digits.[2] The simplest solution is to introduce pivoting, which produces a modified Gaussian elimination algorithm that is stable.[1]: 151

Numerical linear algebra characteristically approaches matrices as a concatenation of column vectors. In order to solve the linear system $x = A^{-1}b$, the traditional algebraic approach is to understand $x$ as the product of $A^{-1}$ with $b$.
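The pivoted Gaussian elimination described above can be sketched as a short $PA = LU$ routine; this is a simplified teaching version of partial pivoting, not a production implementation:

```python
import numpy as np

def lu_partial_pivot(A):
    """Gaussian elimination with partial pivoting: returns P, L, U with PA = LU."""
    m = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(m)
    P = np.eye(m)
    for k in range(m - 1):
        # Pivot: bring the largest entry in column k (from row k down) to row k
        i = k + np.argmax(np.abs(U[k:, k]))
        U[[k, i]] = U[[i, k]]
        P[[k, i]] = P[[i, k]]
        L[[k, i], :k] = L[[i, k], :k]
        # Eliminate the entries below the pivot
        L[k+1:, k] = U[k+1:, k] / U[k, k]
        U[k+1:] -= np.outer(L[k+1:, k], U[k])
    return P, L, U

rng = np.random.default_rng(10)
A = rng.normal(size=(5, 5))
P, L, U = lu_partial_pivot(A)
print(np.allclose(P @ A, L @ U))  # True
```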
Numerical linear algebra instead interprets $x$ as the vector of coefficients of the linear expansion of $b$ in the basis formed by the columns of $A$.[1]: 8

Many different decompositions can be used to solve the linear problem, depending on the characteristics of the matrix $A$ and the vectors $x$ and $b$, which may make one factorization much easier to obtain than others. If $A = QR$ is a QR factorization of $A$, then equivalently $Rx = Q^{\ast}b$. This is as easy to compute as a matrix factorization.[1]: 54 If $A = X\Lambda X^{-1}$ is an eigendecomposition of $A$, and we seek to find $b$ so that $b = Ax$, with $b' = X^{-1}b$ and $x' = X^{-1}x$, then we have $b' = \Lambda x'$.[1]: 33 This is closely related to the solution of the linear system using the singular value decomposition, because the singular values of a normal matrix are the absolute values of its eigenvalues, which are also equivalent to the square roots of the eigenvalues of the Gram matrix $X^{\ast}X$. And if $A = LU$ is an $LU$ factorization of $A$, then $Ax = b$ can be solved using the triangular systems $Ly = b$ and $Ux = y$.[1]: 147 [4]: 99

Matrix decompositions suggest a number of ways to solve the least squares problem of minimizing the residual $r = b - Ax$, as in the regression problem. The QR approach solves this problem by computing the reduced QR factorization of $A$ and rearranging to obtain $\widehat{R}x = \widehat{Q}^{\ast}b$. This upper triangular system can then be solved for $x$. The SVD also suggests an algorithm for obtaining linear least squares.
By computing the reduced SVD decomposition $A = \widehat{U}\widehat{\Sigma}V^{\ast}$ and then computing the vector $\widehat{U}^{\ast}b$, we reduce the least squares problem to a simple diagonal system.[1]: 84 The fact that least squares solutions can be produced by the QR and SVD factorizations means that, in addition to the classical normal equations method for solving least squares problems, these problems can also be solved by methods that include the Gram–Schmidt algorithm and Householder methods.

Allow that a problem is a function $f : X \to Y$, where $X$ is a normed vector space of data and $Y$ is a normed vector space of solutions. For some data point $x \in X$, the problem is said to be ill-conditioned if a small perturbation in $x$ produces a large change in the value of $f(x)$. We can quantify this by defining a condition number which represents how well-conditioned a problem is, defined as

$\widehat{\kappa} = \lim_{\delta \to 0} \sup_{\|\delta x\| \le \delta} \frac{\|\delta f\|}{\|\delta x\|}.$

Instability is the tendency of computer algorithms, which depend on floating-point arithmetic, to produce results that differ dramatically from the exact mathematical solution to a problem. When a matrix contains real data with many significant digits, many algorithms for solving problems like linear systems of equations or least squares optimisation may produce highly inaccurate results. Creating stable algorithms for ill-conditioned problems is a central concern in numerical linear algebra. One example is that the stability of Householder triangularization makes it a particularly robust solution method for linear systems, whereas the instability of the normal equations method for solving least squares problems is a reason to favour matrix decomposition methods like using the singular value decomposition.
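The decomposition-based solves discussed above (QR and LU for a square system, the reduced SVD for least squares) can be sketched as follows on small random matrices:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 4))
b = rng.normal(size=4)

# Square system via QR: A = QR, so x solves the triangular system Rx = Q*b
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# Same system via LU (np.linalg.solve uses an LU factorization via LAPACK)
x_lu = np.linalg.solve(A, b)
print(np.allclose(x_qr, x_lu))  # True

# Overdetermined least squares via the reduced SVD: x = V Sigma^{-1} U*b,
# a simple diagonal system after projecting b onto the columns of U
C = rng.normal(size=(8, 3))
d = rng.normal(size=8)
U, s, Vt = np.linalg.svd(C, full_matrices=False)
x_svd = Vt.T @ ((U.T @ d) / s)
```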
Some matrix decomposition methods may be unstable, but have straightforward modifications that make them stable; one example is the unstable Gram–Schmidt process, which can easily be changed to produce the stable modified Gram–Schmidt.[1]: 140 Another classical problem in numerical linear algebra is the finding that Gaussian elimination is unstable, but becomes stable with the introduction of pivoting.

There are two reasons that iterative algorithms are an important part of numerical linear algebra. First, many important numerical problems have no direct solution; in order to find the eigenvalues and eigenvectors of an arbitrary matrix, we can only adopt an iterative approach. Second, noniterative algorithms for an arbitrary $m \times m$ matrix require $O(m^3)$ time, which is a surprisingly high floor given that matrices contain only $m^2$ numbers. Iterative approaches can take advantage of several features of some matrices to reduce this time. For example, when a matrix is sparse, an iterative algorithm can skip many of the steps that a direct approach would necessarily follow, even if they are redundant steps given a highly structured matrix.

The core of many iterative methods in numerical linear algebra is the projection of a matrix onto a lower-dimensional Krylov subspace, which allows features of a high-dimensional matrix to be approximated by iteratively computing the equivalent features of similar matrices starting in a low-dimensional space and moving to successively higher dimensions. When $A$ is symmetric and we wish to solve the linear problem $Ax = b$, the classical iterative approach is the conjugate gradient method. If $A$ is not symmetric, then examples of iterative solutions to the linear problem are the generalized minimal residual method and CGN. If $A$ is symmetric, then to solve the eigenvalue and eigenvector problem we can use the Lanczos algorithm, and if $A$ is non-symmetric, then we can use Arnoldi iteration.
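The conjugate gradient method mentioned above can be sketched in a few lines; note that it only touches $A$ through matrix-vector products, which is what makes it attractive for large sparse systems (the test matrix here is a small dense stand-in):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve Ax = b for symmetric positive definite A via conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # A-conjugate update of the direction
        rs = rs_new
    return x

rng = np.random.default_rng(7)
M = rng.normal(size=(20, 20))
A = M @ M.T + 20 * np.eye(20)   # symmetric positive definite test matrix
b = rng.normal(size=20)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))  # True
```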
Several programming languages use numerical linear algebra optimisation techniques and are designed to implement numerical linear algebra algorithms. These languages include MATLAB, Analytica, Maple, and Mathematica. Other programming languages which are not explicitly designed for numerical linear algebra have libraries that provide numerical linear algebra routines and optimisation; C and Fortran have packages like Basic Linear Algebra Subprograms and LAPACK, Python has the library NumPy, and Perl has the Perl Data Language. Many numerical linear algebra commands in R rely on these more fundamental libraries like LAPACK.[5] More libraries can be found on the List of numerical libraries.
https://en.wikipedia.org/wiki/Numerical_linear_algebra
Non-linear least squares is the form of least squares analysis used to fit a set of $m$ observations with a model that is non-linear in $n$ unknown parameters ($m \ge n$). It is used in some forms of nonlinear regression. The basis of the method is to approximate the model by a linear one and to refine the parameters by successive iterations. There are many similarities to linear least squares, but also some significant differences. In economic theory, the non-linear least squares method is applied in (i) probit regression, (ii) threshold regression, (iii) smooth regression, (iv) logistic link regression, and (v) Box–Cox transformed regressors ($m(x, \theta_i) = \theta_1 + \theta_2 x^{(\theta_3)}$).

Consider a set of $m$ data points, $(x_1, y_1), (x_2, y_2), \dots, (x_m, y_m)$, and a curve (model function) $\hat{y} = f(x, \boldsymbol{\beta})$ that in addition to the variable $x$ also depends on $n$ parameters, $\boldsymbol{\beta} = (\beta_1, \beta_2, \dots, \beta_n)$, with $m \ge n$. It is desired to find the vector $\boldsymbol{\beta}$ of parameters such that the curve best fits the given data in the least squares sense, that is, the sum of squares

$S = \sum_{i=1}^{m} r_i^2$

is minimized, where the residuals (in-sample prediction errors) $r_i$ are given by

$r_i = y_i - f(x_i, \boldsymbol{\beta}) \quad \text{for } i = 1, 2, \dots, m.$

The minimum value of $S$ occurs when the gradient is zero.
Since the model contains $n$ parameters, there are $n$ gradient equations:

$\frac{\partial S}{\partial \beta_j} = 2\sum_i r_i \frac{\partial r_i}{\partial \beta_j} = 0 \quad (j = 1, \ldots, n).$

In a nonlinear system, the derivatives $\frac{\partial r_i}{\partial \beta_j}$ are functions of both the independent variable and the parameters, so in general these gradient equations do not have a closed solution. Instead, initial values must be chosen for the parameters. Then, the parameters are refined iteratively, that is, the values are obtained by successive approximation,

$\beta_j \approx \beta_j^{k+1} = \beta_j^k + \Delta\beta_j.$

Here, $k$ is an iteration number and the vector of increments $\Delta\boldsymbol{\beta}$ is known as the shift vector. At each iteration the model is linearized by approximation to a first-order Taylor polynomial expansion about $\boldsymbol{\beta}^k$:

$f(x_i, \boldsymbol{\beta}) \approx f(x_i, \boldsymbol{\beta}^k) + \sum_j \frac{\partial f(x_i, \boldsymbol{\beta}^k)}{\partial \beta_j}\left(\beta_j - \beta_j^k\right) = f(x_i, \boldsymbol{\beta}^k) + \sum_j J_{ij}\,\Delta\beta_j.$

The Jacobian matrix, $J$, is a function of constants, the independent variable and the parameters, so it changes from one iteration to the next.
Thus, in terms of the linearized model,∂ri∂βj=−Jij{\displaystyle {\frac {\partial r_{i}}{\partial \beta _{j}}}=-J_{ij}}and the residuals are given byΔyi=yi−f(xi,βk),{\displaystyle \Delta y_{i}=y_{i}-f(x_{i},{\boldsymbol {\beta }}^{k}),}ri=yi−f(xi,β)=(yi−f(xi,βk))+(f(xi,βk)−f(xi,β))≈Δyi−∑s=1nJisΔβs.{\displaystyle r_{i}=y_{i}-f(x_{i},{\boldsymbol {\beta }})=\left(y_{i}-f(x_{i},{\boldsymbol {\beta }}^{k})\right)+\left(f(x_{i},{\boldsymbol {\beta }}^{k})-f(x_{i},{\boldsymbol {\beta }})\right)\approx \Delta y_{i}-\sum _{s=1}^{n}J_{is}\Delta \beta _{s}.} Substituting these expressions into the gradient equations, they become−2∑i=1mJij(Δyi−∑s=1nJisΔβs)=0,{\displaystyle -2\sum _{i=1}^{m}J_{ij}\left(\Delta y_{i}-\sum _{s=1}^{n}J_{is}\ \Delta \beta _{s}\right)=0,}which, on rearrangement, becomensimultaneous linear equations, thenormal equations∑i=1m∑s=1nJijJisΔβs=∑i=1mJijΔyi(j=1,…,n).{\displaystyle \sum _{i=1}^{m}\sum _{s=1}^{n}J_{ij}J_{is}\ \Delta \beta _{s}=\sum _{i=1}^{m}J_{ij}\ \Delta y_{i}\qquad (j=1,\dots ,n).} The normal equations are written in matrix notation as(JTJ)Δβ=JTΔy.{\displaystyle \left(\mathbf {J} ^{\mathsf {T}}\mathbf {J} \right)\Delta {\boldsymbol {\beta }}=\mathbf {J} ^{\mathsf {T}}\ \Delta \mathbf {y} .} These equations form the basis for theGauss–Newton algorithmfor a non-linear least squares problem. Note the sign convention in the definition of the Jacobian matrix in terms of the derivatives. Formulas linear inJ{\displaystyle J}may appear with factor of−1{\displaystyle -1}in other articles or the literature. 
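The normal-equations iteration above can be sketched directly. This is a hedged illustration, not a production implementation: the two-parameter exponential model, the synthetic data, and the starting values are assumptions made for the example, and the analytic Jacobian is hand-coded for that model.

```python
import math

def solve2(A, b):
    # solve a 2x2 linear system A x = b by Cramer's rule
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def gauss_newton_step(xs, ys, beta):
    # model f = b0 * exp(b1 * x); J_ij = df(x_i)/db_j
    m = len(xs)
    J = [[math.exp(beta[1] * x), beta[0] * x * math.exp(beta[1] * x)] for x in xs]
    dy = [y - beta[0] * math.exp(beta[1] * x) for x, y in zip(xs, ys)]
    # normal equations (J^T J) dbeta = J^T dy
    JTJ = [[sum(J[i][a] * J[i][c] for i in range(m)) for c in range(2)] for a in range(2)]
    JTdy = [sum(J[i][a] * dy[i] for i in range(m)) for a in range(2)]
    d = solve2(JTJ, JTdy)
    return [beta[0] + d[0], beta[1] + d[1]]

# noise-free data generated from b0 = 2, b1 = 0.5
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * math.exp(0.5 * x) for x in xs]
beta = [1.5, 0.6]  # starting values chosen reasonably close to the optimum
for _ in range(20):
    beta = gauss_newton_step(xs, ys, beta)
print(beta)  # converges near [2.0, 0.5]
```

With noise-free data and a nearby starting point the iteration settles quickly; as the article notes, a poor starting point can make it diverge.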
When the observations are not equally reliable, a weighted sum of squares may be minimized,S=∑i=1mWiiri2.{\displaystyle S=\sum _{i=1}^{m}W_{ii}r_{i}^{2}.} Each element of thediagonalweight matrixWshould, ideally, be equal to the reciprocal of the errorvarianceof the measurement.[1]The normal equations are then, more generally,(JTWJ)Δβ=JTWΔy.{\displaystyle \left(\mathbf {J} ^{\mathsf {T}}\mathbf {WJ} \right)\Delta {\boldsymbol {\beta }}=\mathbf {J} ^{\mathsf {T}}\mathbf {W} \ \Delta \mathbf {y} .} In linear least squares theobjective function,S, is aquadratic functionof the parameters.S=∑iWii(yi−∑jXijβj)2{\displaystyle S=\sum _{i}W_{ii}\left(y_{i}-\sum _{j}X_{ij}\beta _{j}\right)^{2}}When there is only one parameter the graph ofSwith respect to that parameter will be aparabola. With two or more parameters the contours ofSwith respect to any pair of parameters will be concentricellipses(assuming that the normal equations matrixXTWX{\displaystyle \mathbf {X} ^{\mathsf {T}}\mathbf {WX} }ispositive definite). The minimum parameter values are to be found at the centre of the ellipses. The geometry of the general objective function can be described as paraboloid elliptical. In NLLSQ the objective function is quadratic with respect to the parameters only ina region closeto its minimum value, where the truncated Taylor series is a good approximation to the model.S≈∑iWii(yi−∑jJijβj)2{\displaystyle S\approx \sum _{i}W_{ii}\left(y_{i}-\sum _{j}J_{ij}\beta _{j}\right)^{2}}The more the parameter values differ from their optimal values, the more the contours deviate from elliptical shape. A consequence of this is that initial parameter estimates should be as close as practicable to their (unknown!) optimal values. It also explains how divergence can come about as the Gauss–Newton algorithm is convergent only when the objective function is approximately quadratic in the parameters. 
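For the simplest possible weighted case, a one-parameter constant model makes the effect of the weight matrix visible; the data and variances below are illustrative assumptions:

```python
# weighted least squares for the one-parameter model f(x, b) = b (a constant):
# minimizing S = sum W_ii (y_i - b)^2 gives b = sum(W y) / sum(W), the weighted mean
ys = [1.0, 2.0, 4.0]
variances = [1.0, 1.0, 4.0]      # assumed measurement error variances
w = [1.0 / v for v in variances]  # W_ii = 1 / variance
b = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
print(b)  # the less precise observation (variance 4) pulls the estimate less
```

Setting all weights equal recovers the ordinary (unweighted) mean.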
Some problems of ill-conditioning and divergence can be corrected by finding initial parameter estimates that are near to the optimal values. A good way to do this is bycomputer simulation. Both the observed and calculated data are displayed on a screen. The parameters of the model are adjusted by hand until the agreement between observed and calculated data is reasonably good. Although this will be a subjective judgment, it is sufficient to find a good starting point for the non-linear refinement. Initial parameter estimates can be created using transformations or linearizations. Better still evolutionary algorithms such as the Stochastic Funnel Algorithm can lead to the convex basin of attraction that surrounds the optimal parameter estimates.[citation needed]Hybrid algorithms that use randomization and elitism, followed by Newton methods have been shown to be useful and computationally efficient[citation needed]. Any method among the ones describedbelowcan be applied to find a solution. The common sense criterion for convergence is that the sum of squares does not increase from one iteration to the next. However this criterion is often difficult to implement in practice, for various reasons. A useful convergence criterion is|Sk−Sk+1Sk|<0.0001.{\displaystyle \left|{\frac {S^{k}-S^{k+1}}{S^{k}}}\right|<0.0001.}The value 0.0001 is somewhat arbitrary and may need to be changed. In particular it may need to be increased when experimental errors are large. An alternative criterion is|Δβjβj|<0.001,j=1,…,n.{\displaystyle \left|{\frac {\Delta \beta _{j}}{\beta _{j}}}\right|<0.001,\qquad j=1,\dots ,n.} Again, the numerical value is somewhat arbitrary; 0.001 is equivalent to specifying that each parameter should be refined to 0.1% precision. This is reasonable when it is less than the largest relative standard deviation on the parameters. 
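The relative-decrease criterion above translates into a one-line test; the tolerance default mirrors the 0.0001 in the text:

```python
def converged(S_prev, S_new, tol=1e-4):
    # |(S^k - S^{k+1}) / S^k| < tol; tol may need loosening when errors are large
    return abs((S_prev - S_new) / S_prev) < tol

print(converged(10.0, 9.0))      # False: S still dropping by 10% per iteration
print(converged(10.0, 9.9995))   # True: relative change 5e-5 is below 1e-4
```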
There are models for which it is either very difficult or even impossible to derive analytical expressions for the elements of the Jacobian. Then, the numerical approximation∂f(xi,β)∂βj≈δf(xi,β)δβj{\displaystyle {\frac {\partial f(x_{i},{\boldsymbol {\beta }})}{\partial \beta _{j}}}\approx {\frac {\delta f(x_{i},{\boldsymbol {\beta }})}{\delta \beta _{j}}}}is obtained by calculation off(xi,β){\displaystyle f(x_{i},{\boldsymbol {\beta }})}forβj{\displaystyle \beta _{j}}andβj+δβj{\displaystyle \beta _{j}+\delta \beta _{j}}. The increment,δβj{\displaystyle \delta \beta _{j}}, size should be chosen so the numerical derivative is not subject to approximation error by being too large, orround-offerror by being too small. Some information is given inthe corresponding sectionon theWeighted least squarespage. Multiple minima can occur in a variety of circumstances some of which are: Not all multiple minima have equal values of the objective function. False minima, also known as local minima, occur when the objective function value is greater than its value at the so-called global minimum. To be certain that the minimum found is the global minimum, the refinement should be started with widely differing initial values of the parameters. When the same minimum is found regardless of starting point, it is likely to be the global minimum. When multiple minima exist there is an important consequence: the objective function will have a stationary point (e.g. a maximum or asaddle point) somewhere between two minima. The normal equations matrix is not positive definite at a stationary point in the objective function, because the gradient vanishes and no unique direction of descent exists. Refinement from a point (a set of parameter values) close to a stationary point will be ill-conditioned and should be avoided as a starting point. 
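A forward-difference Jacobian with a scaled increment can be sketched as follows; the relative step size and the test model are illustrative assumptions:

```python
import math

def numeric_jacobian(f, xs, beta, rel_step=1e-6):
    # forward differences; the step trades truncation error (too large)
    # against round-off error (too small)
    J = []
    for x in xs:
        row = []
        for j in range(len(beta)):
            db = rel_step * max(abs(beta[j]), 1.0)
            bp = list(beta)
            bp[j] += db
            row.append((f(x, bp) - f(x, beta)) / db)
        J.append(row)
    return J

f = lambda x, b: b[0] * math.exp(b[1] * x)
J = numeric_jacobian(f, [1.0], [2.0, 0.5])
# analytic row at x = 1: [exp(0.5), 2 * exp(0.5)] ~ [1.6487, 3.2974]
print(J[0])
```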
For example, when fitting a Lorentzian the normal equations matrix is not positive definite when the half-width of the Lorentzian is zero.[2] A non-linear model can sometimes be transformed into a linear one. Such an approximation is, for instance, often applicable in the vicinity of the best estimator, and it is one of the basic assumption in most iterative minimization algorithms. When a linear approximation is valid, the model can directly be used for inference with ageneralized least squares, where the equations of theLinear Template Fit[3]apply. Another example of a linear approximation would be when the model is a simple exponential function,f(xi,β)=αeβxi,{\displaystyle f(x_{i},{\boldsymbol {\beta }})=\alpha e^{\beta x_{i}},}which can be transformed into a linear model by taking logarithms.log⁡f(xi,β)=log⁡α+βxi{\displaystyle \log f(x_{i},{\boldsymbol {\beta }})=\log \alpha +\beta x_{i}}Graphically this corresponds to working on asemi-log plot. The sum of squares becomesS=∑i(log⁡yi−log⁡α−βxi)2.{\displaystyle S=\sum _{i}(\log y_{i}-\log \alpha -\beta x_{i})^{2}.}This procedure should be avoided unless the errors are multiplicative andlog-normally distributedbecause it can give misleading results. This comes from the fact that whatever the experimental errors onymight be, the errors onlogyare different. Therefore, when the transformed sum of squares is minimized, different results will be obtained both for the parameter values and their calculated standard deviations. However, with multiplicative errors that are log-normally distributed, this procedure gives unbiased and consistent parameter estimates. 
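The logarithmic transformation of the exponential model reduces to an ordinary linear regression; the sketch below uses noise-free data (an assumption that sidesteps the bias caveat just discussed, which only bites when real errors are present):

```python
import math

# fit y = a * exp(b x) by linearizing: log y = log a + b x
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * math.exp(0.5 * x) for x in xs]  # noise-free data from a = 2, b = 0.5

n = len(xs)
L = [math.log(y) for y in ys]
xbar = sum(xs) / n
Lbar = sum(L) / n
# ordinary least-squares slope and intercept on the semi-log data
b = sum((x - xbar) * (l - Lbar) for x, l in zip(xs, L)) / \
    sum((x - xbar) ** 2 for x in xs)
a = math.exp(Lbar - b * xbar)
print(a, b)  # recovers (2, 0.5); with additive noise this estimate would be biased
```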
Another example is furnished byMichaelis–Menten kinetics, used to determine two parametersVmax{\displaystyle V_{\max }}andKm{\displaystyle K_{m}}:v=Vmax[S]Km+[S].{\displaystyle v={\frac {V_{\max }[S]}{K_{m}+[S]}}.}TheLineweaver–Burk plot1v=1Vmax+KmVmax[S]{\displaystyle {\frac {1}{v}}={\frac {1}{V_{\max }}}+{\frac {K_{m}}{V_{\max }[S]}}}of1v{\textstyle {\frac {1}{v}}}against1[S]{\textstyle {\frac {1}{[S]}}}is linear in the parameters1Vmax{\textstyle {\frac {1}{V_{\max }}}}andKmVmax{\textstyle {\frac {K_{m}}{V_{\max }}}}but very sensitive to data error and strongly biased toward fitting the data in a particular range of the independent variable[S]{\displaystyle [S]}. The normal equations(JTWJ)Δβ=(JTW)Δy{\displaystyle \left(\mathbf {J} ^{\mathsf {T}}\mathbf {WJ} \right)\Delta {\boldsymbol {\beta }}=\left(\mathbf {J} ^{\mathsf {T}}\mathbf {W} \right)\Delta \mathbf {y} }may be solved forΔβ{\displaystyle \Delta {\boldsymbol {\beta }}}byCholesky decomposition, as described inlinear least squares. The parameters are updated iterativelyβk+1=βk+Δβ{\displaystyle {\boldsymbol {\beta }}^{k+1}={\boldsymbol {\beta }}^{k}+\Delta {\boldsymbol {\beta }}}wherekis an iteration number. While this method may be adequate for simple models, it will fail if divergence occurs. Therefore, protection against divergence is essential. If divergence occurs, a simple expedient is to reduce the length of the shift vector,Δβ{\displaystyle \Delta {\boldsymbol {\beta }}}, by a fraction,fβk+1=βk+fΔβ.{\displaystyle {\boldsymbol {\beta }}^{k+1}={\boldsymbol {\beta }}^{k}+f\ \Delta {\boldsymbol {\beta }}.}For example, the length of the shift vector may be successively halved until the new value of the objective function is less than its value at the last iteration. The fraction,fcould be optimized by aline search.[4]As each trial value offrequires the objective function to be re-calculated it is not worth optimizing its value too stringently. 
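Successive halving of the shift vector can be sketched in a few lines; the quadratic objective and the deliberately overshooting step are illustrative assumptions:

```python
def shift_cut(S, beta, delta, max_halvings=10):
    # try fractions f = 1, 1/2, 1/4, ... of the shift vector until S decreases
    S0 = S(beta)
    frac = 1.0
    for _ in range(max_halvings):
        trial = [b + frac * d for b, d in zip(beta, delta)]
        if S(trial) < S0:
            return trial
        frac *= 0.5
    return beta  # no improving fraction found; caller should change strategy

# objective with minimum at (1, 2); the proposed full step overshoots it
S = lambda b: (b[0] - 1) ** 2 + (b[1] - 2) ** 2
result = shift_cut(S, [0.0, 0.0], [4.0, 8.0])
print(result)  # the quartered step lands exactly on the minimum: [1.0, 2.0]
```

Note that only the length of the step changes; its direction is fixed, which is the limitation the next paragraph addresses.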
When using shift-cutting, the direction of the shift vector remains unchanged. This limits the applicability of the method to situations where the direction of the shift vector is not very different from what it would be if the objective function were approximately quadratic in the parameters,βk.{\displaystyle {\boldsymbol {\beta }}^{k}.} If divergence occurs and the direction of the shift vector is so far from its "ideal" direction that shift-cutting is not very effective, that is, the fraction,frequired to avoid divergence is very small, the direction must be changed. This can be achieved by using theMarquardtparameter.[5]In this method the normal equations are modified(JTWJ+λI)Δβ=(JTW)Δy{\displaystyle \left(\mathbf {J} ^{\mathsf {T}}\mathbf {WJ} +\lambda \mathbf {I} \right)\Delta {\boldsymbol {\beta }}=\left(\mathbf {J} ^{\mathsf {T}}\mathbf {W} \right)\Delta \mathbf {y} }whereλ{\displaystyle \lambda }is the Marquardt parameter andIis an identity matrix. Increasing the value ofλ{\displaystyle \lambda }has the effect of changing both the direction and the length of the shift vector. The shift vector is rotated towards the direction ofsteepest descentwhenλI≫JTWJ,Δβ≈1λJTWΔy.{\displaystyle \lambda \mathbf {I} \gg \mathbf {J} ^{\mathsf {T}}\mathbf {WJ} ,\ {\Delta {\boldsymbol {\beta }}}\approx {\frac {1}{\lambda }}\mathbf {J} ^{\mathsf {T}}\mathbf {W} \ \Delta \mathbf {y} .}JTWΔy{\displaystyle \mathbf {J} ^{\mathsf {T}}\mathbf {W} \,\Delta \mathbf {y} }is the steepest descent vector. So, whenλ{\displaystyle \lambda }becomes very large, the shift vector becomes a small fraction of the steepest descent vector. Various strategies have been proposed for the determination of the Marquardt parameter. As with shift-cutting, it is wasteful to optimize this parameter too stringently. 
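The effect of the Marquardt parameter on the shift vector can be seen numerically; the small normal-equations matrix below is an arbitrary illustrative example:

```python
def solve2(A, b):
    # solve a 2x2 linear system by Cramer's rule
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def marquardt_step(JTWJ, JTWdy, lam):
    # solve (J^T W J + lambda I) dbeta = J^T W dy
    A = [[JTWJ[0][0] + lam, JTWJ[0][1]],
         [JTWJ[1][0], JTWJ[1][1] + lam]]
    return solve2(A, JTWdy)

JTWJ = [[4.0, 1.0], [1.0, 3.0]]
JTWdy = [1.0, 2.0]
gn = marquardt_step(JTWJ, JTWdy, 0.0)    # lambda = 0: plain Gauss-Newton step
sd = marquardt_step(JTWJ, JTWdy, 1e6)    # large lambda: ~ (1/lambda) J^T W dy
print(gn)  # [1/11, 7/11]
print(sd)  # tiny step along the steepest-descent direction (ratio 1 : 2)
```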
Rather, once a value has been found that brings about a reduction in the value of the objective function, that value of the parameter is carried to the next iteration, reduced if possible, or increased if need be. When reducing the value of the Marquardt parameter, there is a cut-off value below which it is safe to set it to zero, that is, to continue with the unmodified Gauss–Newton method. The cut-off value may be set equal to the smallest singular value of the Jacobian.[6]A bound for this value is given by1/tr⁡(JTWJ)−1{\displaystyle 1/\operatorname {tr} \left(\mathbf {J} ^{\mathsf {T}}\mathbf {WJ} \right)^{-1}}wheretris thetrace function.[7] The minimum in the sum of squares can be found by a method that does not involve forming the normal equations. The residuals with the linearized model can be written asr=Δy−JΔβ.{\displaystyle \mathbf {r} =\Delta \mathbf {y} -\mathbf {J} \,\Delta {\boldsymbol {\beta }}.}The Jacobian is subjected to an orthogonal decomposition; theQR decompositionwill serve to illustrate the process.J=QR{\displaystyle \mathbf {J} =\mathbf {QR} }whereQis anorthogonalm×m{\displaystyle m\times m}matrix andRis anm×n{\displaystyle m\times n}matrix which ispartitionedinto ann×n{\displaystyle n\times n}block,Rn{\displaystyle \mathbf {R} _{n}}, and a(m−n)×n{\displaystyle (m-n)\times n}zero block.Rn{\displaystyle \mathbf {R} _{n}}is upper triangular. R=[Rn0]{\displaystyle \mathbf {R} ={\begin{bmatrix}\mathbf {R} _{n}\\\mathbf {0} \end{bmatrix}}} The residual vector is left-multiplied byQT{\displaystyle \mathbf {Q} ^{\mathsf {T}}}. 
QTr=QTΔy−RΔβ=[(QTΔy−RΔβ)n(QTΔy)m−n]{\displaystyle \mathbf {Q} ^{\mathsf {T}}\mathbf {r} =\mathbf {Q} ^{\mathsf {T}}\ \Delta \mathbf {y} -\mathbf {R} \ \Delta {\boldsymbol {\beta }}={\begin{bmatrix}\left(\mathbf {Q} ^{\mathsf {T}}\ \Delta \mathbf {y} -\mathbf {R} \ \Delta {\boldsymbol {\beta }}\right)_{n}\\\left(\mathbf {Q} ^{\mathsf {T}}\ \Delta \mathbf {y} \right)_{m-n}\end{bmatrix}}} This has no effect on the sum of squares sinceS=rTQQTr=rTr{\displaystyle S=\mathbf {r} ^{\mathsf {T}}\mathbf {Q} \mathbf {Q} ^{\mathsf {T}}\mathbf {r} =\mathbf {r} ^{\mathsf {T}}\mathbf {r} }becauseQisorthogonal. The minimum value ofSis attained when the upper block is zero. Therefore, the shift vector is found by solvingRnΔβ=(QTΔy)n.{\displaystyle \mathbf {R} _{n}\ \Delta {\boldsymbol {\beta }}=\left(\mathbf {Q} ^{\mathsf {T}}\ \Delta \mathbf {y} \right)_{n}.} These equations are easily solved asRis upper triangular. A variant of the method of orthogonal decomposition involvessingular value decomposition, in whichRis diagonalized by further orthogonal transformations. J=UΣVT{\displaystyle \mathbf {J} =\mathbf {U} {\boldsymbol {\Sigma }}\mathbf {V} ^{\mathsf {T}}}whereU{\displaystyle \mathbf {U} }is orthogonal,Σ{\displaystyle {\boldsymbol {\Sigma }}}is a diagonal matrix of singular values andV{\displaystyle \mathbf {V} }is the orthogonal matrix of the eigenvectors ofJTJ{\displaystyle \mathbf {J} ^{\mathsf {T}}\mathbf {J} }or equivalently the right singular vectors ofJ{\displaystyle \mathbf {J} }. In this case the shift vector is given byΔβ=VΣ−1(UTΔy)n.{\displaystyle \Delta {\boldsymbol {\beta }}=\mathbf {V} {\boldsymbol {\Sigma }}^{-1}\left(\mathbf {U} ^{\mathsf {T}}\ \Delta \mathbf {y} \right)_{n}.} The relative simplicity of this expression is very useful in theoretical analysis of non-linear least squares. 
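The QR route can be sketched with a thin Gram-Schmidt factorization and back-substitution; the small Jacobian and residual vector below are illustrative assumptions (production code would use Householder reflections for stability):

```python
import math

def qr_solve(J, dy):
    # least-squares solution of J dbeta ~ dy via thin QR, avoiding J^T J
    m, n = len(J), len(J[0])
    Q = [[0.0] * n for _ in range(m)]   # thin Q (m x n)
    R = [[0.0] * n for _ in range(n)]   # upper-triangular R_n
    for k in range(n):
        v = [J[i][k] for i in range(m)]
        for j in range(k):
            R[j][k] = sum(Q[i][j] * J[i][k] for i in range(m))
            v = [v[i] - R[j][k] * Q[i][j] for i in range(m)]
        R[k][k] = math.sqrt(sum(vi * vi for vi in v))
        for i in range(m):
            Q[i][k] = v[i] / R[k][k]
    # solve R_n dbeta = (Q^T dy)_n by back-substitution
    qt = [sum(Q[i][k] * dy[i] for i in range(m)) for k in range(n)]
    db = [0.0] * n
    for k in reversed(range(n)):
        db[k] = (qt[k] - sum(R[k][j] * db[j] for j in range(k + 1, n))) / R[k][k]
    return db

# fit a line through (0, 1), (1, 2), (2, 3): columns of J are [1, x]
J = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
dy = [1.0, 2.0, 3.0]
print(qr_solve(J, dy))  # intercept 1, slope 1
```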
The application of singular value decomposition is discussed in detail in Lawson and Hanson.[6] There are many examples in the scientific literature where different methods have been used for non-linear data-fitting problems. Direct search methods depend on evaluations of the objective function at a variety of parameter values and do not use derivatives at all. They offer alternatives to the use of numerical derivatives in the Gauss–Newton method and gradient methods. More detailed descriptions of these and other methods are available in Numerical Recipes, together with computer code in various languages.
Inmathematics, theHessian matrix,Hessianor (less commonly)Hesse matrixis asquare matrixof second-orderpartial derivativesof a scalar-valuedfunction, orscalar field. It describes the localcurvatureof a function of many variables. The Hessian matrix was developed in the 19th century by the German mathematicianLudwig Otto Hesseand later named after him. Hesse originally used the term "functional determinants". The Hessian is sometimes denoted by H or,∇∇{\displaystyle \nabla \nabla }or∇⊗∇{\displaystyle \nabla \otimes \nabla }orD2{\displaystyle D^{2}}. Supposef:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }is a function taking as input a vectorx∈Rn{\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}and outputting a scalarf(x)∈R.{\displaystyle f(\mathbf {x} )\in \mathbb {R} .}If all second-orderpartial derivativesoff{\displaystyle f}exist, then the Hessian matrixH{\displaystyle \mathbf {H} }off{\displaystyle f}is a squaren×n{\displaystyle n\times n}matrix, usually defined and arranged asHf=[∂2f∂x12∂2f∂x1∂x2⋯∂2f∂x1∂xn∂2f∂x2∂x1∂2f∂x22⋯∂2f∂x2∂xn⋮⋮⋱⋮∂2f∂xn∂x1∂2f∂xn∂x2⋯∂2f∂xn2].{\displaystyle \mathbf {H} _{f}={\begin{bmatrix}{\dfrac {\partial ^{2}f}{\partial x_{1}^{2}}}&{\dfrac {\partial ^{2}f}{\partial x_{1}\,\partial x_{2}}}&\cdots &{\dfrac {\partial ^{2}f}{\partial x_{1}\,\partial x_{n}}}\\[2.2ex]{\dfrac {\partial ^{2}f}{\partial x_{2}\,\partial x_{1}}}&{\dfrac {\partial ^{2}f}{\partial x_{2}^{2}}}&\cdots &{\dfrac {\partial ^{2}f}{\partial x_{2}\,\partial x_{n}}}\\[2.2ex]\vdots &\vdots &\ddots &\vdots \\[2.2ex]{\dfrac {\partial ^{2}f}{\partial x_{n}\,\partial x_{1}}}&{\dfrac {\partial ^{2}f}{\partial x_{n}\,\partial x_{2}}}&\cdots &{\dfrac {\partial ^{2}f}{\partial x_{n}^{2}}}\end{bmatrix}}.}That is, the entry of theith row and thejth column is(Hf)i,j=∂2f∂xi∂xj.{\displaystyle (\mathbf {H} _{f})_{i,j}={\frac {\partial ^{2}f}{\partial x_{i}\,\partial x_{j}}}.} If furthermore the second partial derivatives are all continuous, the Hessian matrix is asymmetric matrixby 
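The definition translates directly into a finite-difference sketch; the central-difference formula, step size, and test function below are illustrative assumptions:

```python
def hessian(f, x, h=1e-5):
    # central-difference approximation of the n x n Hessian of f: R^n -> R
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp = list(x); xpp[i] += h; xpp[j] += h
            xpm = list(x); xpm[i] += h; xpm[j] -= h
            xmp = list(x); xmp[i] -= h; xmp[j] += h
            xmm = list(x); xmm[i] -= h; xmm[j] -= h
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

f = lambda x: x[0] ** 2 * x[1] + x[1] ** 3   # smooth, so the Hessian is symmetric
H = hessian(f, [1.0, 2.0])
# analytic Hessian at (1, 2): [[2y, 2x], [2x, 6y]] = [[4, 2], [2, 12]]
print(H)
```

Because `f` has continuous second partials, `H[0][1]` and `H[1][0]` agree, illustrating the symmetry of second derivatives.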
thesymmetry of second derivatives. Thedeterminantof the Hessian matrix is called theHessian determinant.[1] The Hessian matrix of a functionf{\displaystyle f}is theJacobian matrixof thegradientof the functionf{\displaystyle f}; that is:H(f(x))=J(∇f(x)).{\displaystyle \mathbf {H} (f(\mathbf {x} ))=\mathbf {J} (\nabla f(\mathbf {x} )).} Iff{\displaystyle f}is ahomogeneous polynomialin three variables, the equationf=0{\displaystyle f=0}is theimplicit equationof aplane projective curve. Theinflection pointsof the curve are exactly the non-singular points where the Hessian determinant is zero. It follows byBézout's theoremthat acubic plane curvehas at most 9 inflection points, since the Hessian determinant is a polynomial of degree 3. The Hessian matrix of aconvex functionispositive semi-definite. Refining this property allows us to test whether acritical pointx{\displaystyle x}is a local maximum, local minimum, or a saddle point, as follows: If the Hessian ispositive-definiteatx,{\displaystyle x,}thenf{\displaystyle f}attains an isolated local minimum atx.{\displaystyle x.}If the Hessian isnegative-definiteatx,{\displaystyle x,}thenf{\displaystyle f}attains an isolated local maximum atx.{\displaystyle x.}If the Hessian has both positive and negativeeigenvalues, thenx{\displaystyle x}is asaddle pointforf.{\displaystyle f.}Otherwise the test is inconclusive. This implies that at a local minimum the Hessian is positive-semidefinite, and at a local maximum the Hessian is negative-semidefinite. For positive-semidefinite and negative-semidefinite Hessians the test is inconclusive (a critical point where the Hessian is semidefinite but not definite may be a local extremum or a saddle point). However, more can be said from the point of view ofMorse theory. Thesecond-derivative testfor functions of one and two variables is simpler than the general case. 
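The eigenvalue test can be sketched for the two-variable case, where the eigenvalues of a symmetric 2x2 Hessian follow from its trace and determinant; the example function is an illustrative choice:

```python
import math

def classify_critical_point(H):
    # eigenvalues of a symmetric 2x2 Hessian via the characteristic polynomial
    tr = H[0][0] + H[1][1]
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    if l1 > 0 and l2 > 0:
        return "local minimum"
    if l1 < 0 and l2 < 0:
        return "local maximum"
    if l1 * l2 < 0:
        return "saddle point"
    return "inconclusive"   # a zero eigenvalue: semidefinite but not definite

# f(x, y) = x^2 - y^2 has Hessian [[2, 0], [0, -2]] at its critical point (0, 0)
print(classify_critical_point([[2.0, 0.0], [0.0, -2.0]]))  # saddle point
```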
In one variable, the Hessian contains exactly one second derivative; if it is positive, thenx{\displaystyle x}is a local minimum, and if it is negative, thenx{\displaystyle x}is a local maximum; if it is zero, then the test is inconclusive. In two variables, thedeterminantcan be used, because the determinant is the product of the eigenvalues. If it is positive, then the eigenvalues are both positive, or both negative. If it is negative, then the two eigenvalues have different signs. If it is zero, then the second-derivative test is inconclusive. Equivalently, the second-order conditions that are sufficient for a local minimum or maximum can be expressed in terms of the sequence of principal (upper-leftmost)minors(determinants of sub-matrices) of the Hessian; these conditions are a special case of those given in the next section for bordered Hessians for constrained optimization—the case in which the number of constraints is zero. Specifically, the sufficient condition for a minimum is that all of these principal minors be positive, while the sufficient condition for a maximum is that the minors alternate in sign, with the1×1{\displaystyle 1\times 1}minor being negative. If thegradient(the vector of the partial derivatives) of a functionf{\displaystyle f}is zero at some pointx,{\displaystyle \mathbf {x} ,}thenf{\displaystyle f}has acritical point(orstationary point) atx.{\displaystyle \mathbf {x} .}Thedeterminantof the Hessian atx{\displaystyle \mathbf {x} }is called, in some contexts, adiscriminant. 
If this determinant is zero thenx{\displaystyle \mathbf {x} }is called adegenerate critical pointoff,{\displaystyle f,}or anon-Morse critical pointoff.{\displaystyle f.}Otherwise it is non-degenerate, and called aMorse critical pointoff.{\displaystyle f.} The Hessian matrix plays an important role inMorse theoryandcatastrophe theory, because itskernelandeigenvaluesallow classification of the critical points.[2][3][4] The determinant of the Hessian matrix, when evaluated at a critical point of a function, is equal to theGaussian curvatureof the function considered as a manifold. The eigenvalues of the Hessian at that point are the principal curvatures of the function, and the eigenvectors are the principal directions of curvature. (SeeGaussian curvature § Relation to principal curvatures.) Hessian matrices are used in large-scaleoptimizationproblems withinNewton-type methods because they are the coefficient of the quadratic term of a localTaylor expansionof a function. That is,y=f(x+Δx)≈f(x)+∇f(x)TΔx+12ΔxTH(x)Δx{\displaystyle y=f(\mathbf {x} +\Delta \mathbf {x} )\approx f(\mathbf {x} )+\nabla f(\mathbf {x} )^{\mathsf {T}}\Delta \mathbf {x} +{\frac {1}{2}}\,\Delta \mathbf {x} ^{\mathsf {T}}\mathbf {H} (\mathbf {x} )\,\Delta \mathbf {x} }where∇f{\displaystyle \nabla f}is thegradient(∂f∂x1,…,∂f∂xn).{\displaystyle \left({\frac {\partial f}{\partial x_{1}}},\ldots ,{\frac {\partial f}{\partial x_{n}}}\right).}Computing and storing the full Hessian matrix takesΘ(n2){\displaystyle \Theta \left(n^{2}\right)}memory, which is infeasible for high-dimensional functions such as theloss functionsofneural nets,conditional random fields, and otherstatistical modelswith large numbers of parameters. For such situations,truncated-Newtonandquasi-Newtonalgorithms have been developed. 
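A Newton-type step uses exactly this quadratic model: it jumps to the minimizer of the local Taylor expansion, x - H(x)^{-1} grad(x). The sketch below hand-codes the 2x2 solve; the quadratic test function is an illustrative assumption (for a quadratic, one step lands exactly on the minimum):

```python
def newton_step(grad, hess, x):
    # x_new = x - H(x)^{-1} grad(x), minimizing the local quadratic model
    g = grad(x)
    H = hess(x)
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    dx = [(H[1][1] * g[0] - H[0][1] * g[1]) / det,
          (H[0][0] * g[1] - H[1][0] * g[0]) / det]
    return [x[0] - dx[0], x[1] - dx[1]]

# f(x, y) = (x - 1)^2 + 2 (y + 3)^2, minimum at (1, -3)
grad = lambda x: [2 * (x[0] - 1), 4 * (x[1] + 3)]
hess = lambda x: [[2.0, 0.0], [0.0, 4.0]]
step = newton_step(grad, hess, [10.0, 10.0])
print(step)  # -> [1.0, -3.0]
```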
The latter family of algorithms use approximations to the Hessian; one of the most popular quasi-Newton algorithms isBFGS.[5] Such approximations may use the fact that an optimization algorithm uses the Hessian only as alinear operatorH(v),{\displaystyle \mathbf {H} (\mathbf {v} ),}and proceed by first noticing that the Hessian also appears in the local expansion of the gradient:∇f(x+Δx)=∇f(x)+H(x)Δx+O(‖Δx‖2){\displaystyle \nabla f(\mathbf {x} +\Delta \mathbf {x} )=\nabla f(\mathbf {x} )+\mathbf {H} (\mathbf {x} )\,\Delta \mathbf {x} +{\mathcal {O}}(\|\Delta \mathbf {x} \|^{2})} LettingΔx=rv{\displaystyle \Delta \mathbf {x} =r\mathbf {v} }for some scalarr,{\displaystyle r,}this givesH(x)Δx=H(x)rv=rH(x)v=∇f(x+rv)−∇f(x)+O(r2),{\displaystyle \mathbf {H} (\mathbf {x} )\,\Delta \mathbf {x} =\mathbf {H} (\mathbf {x} )r\mathbf {v} =r\mathbf {H} (\mathbf {x} )\mathbf {v} =\nabla f(\mathbf {x} +r\mathbf {v} )-\nabla f(\mathbf {x} )+{\mathcal {O}}(r^{2}),}that is,H(x)v=1r[∇f(x+rv)−∇f(x)]+O(r){\displaystyle \mathbf {H} (\mathbf {x} )\mathbf {v} ={\frac {1}{r}}\left[\nabla f(\mathbf {x} +r\mathbf {v} )-\nabla f(\mathbf {x} )\right]+{\mathcal {O}}(r)}so if the gradient is already computed, the approximate Hessian can be computed by a linear (in the size of the gradient) number of scalar operations. (While simple to program, this approximation scheme is not numerically stable sincer{\displaystyle r}has to be made small to prevent error due to theO(r){\displaystyle {\mathcal {O}}(r)}term, but decreasing it loses precision in the first term.[6]) Notably regarding Randomized Search Heuristics, theevolution strategy's covariance matrix adapts to the inverse of the Hessian matrix,up toa scalar factor and small random fluctuations. 
This result has been formally proven for a single-parent strategy and a static model, as the population size increases, relying on the quadratic approximation.[7] The Hessian matrix is commonly used for expressing image processing operators inimage processingandcomputer vision(see theLaplacian of Gaussian(LoG) blob detector,the determinant of Hessian (DoH) blob detectorandscale space). It can be used innormal modeanalysis to calculate the different molecular frequencies ininfrared spectroscopy.[8]It can also be used in local sensitivity and statistical diagnostics.[9] Abordered Hessianis used for the second-derivative test in certain constrained optimization problems. Given the functionf{\displaystyle f}considered previously, but adding a constraint functiong{\displaystyle g}such thatg(x)=c,{\displaystyle g(\mathbf {x} )=c,}the bordered Hessian is the Hessian of theLagrange functionΛ(x,λ)=f(x)+λ[g(x)−c]{\displaystyle \Lambda (\mathbf {x} ,\lambda )=f(\mathbf {x} )+\lambda [g(\mathbf {x} )-c]}:[10]H(Λ)=[∂2Λ∂λ2∂2Λ∂λ∂x(∂2Λ∂λ∂x)T∂2Λ∂x2]=[0∂g∂x1∂g∂x2⋯∂g∂xn∂g∂x1∂2Λ∂x12∂2Λ∂x1∂x2⋯∂2Λ∂x1∂xn∂g∂x2∂2Λ∂x2∂x1∂2Λ∂x22⋯∂2Λ∂x2∂xn⋮⋮⋮⋱⋮∂g∂xn∂2Λ∂xn∂x1∂2Λ∂xn∂x2⋯∂2Λ∂xn2]=[0∂g∂x(∂g∂x)T∂2Λ∂x2]{\displaystyle \mathbf {H} (\Lambda )={\begin{bmatrix}{\dfrac {\partial ^{2}\Lambda }{\partial \lambda ^{2}}}&{\dfrac {\partial ^{2}\Lambda }{\partial \lambda \partial \mathbf {x} }}\\\left({\dfrac {\partial ^{2}\Lambda }{\partial \lambda \partial \mathbf {x} }}\right)^{\mathsf {T}}&{\dfrac {\partial ^{2}\Lambda }{\partial \mathbf {x} ^{2}}}\end{bmatrix}}={\begin{bmatrix}0&{\dfrac {\partial g}{\partial x_{1}}}&{\dfrac {\partial g}{\partial x_{2}}}&\cdots &{\dfrac {\partial g}{\partial x_{n}}}\\[2.2ex]{\dfrac {\partial g}{\partial x_{1}}}&{\dfrac {\partial ^{2}\Lambda }{\partial x_{1}^{2}}}&{\dfrac {\partial ^{2}\Lambda }{\partial x_{1}\,\partial x_{2}}}&\cdots &{\dfrac {\partial ^{2}\Lambda }{\partial x_{1}\,\partial x_{n}}}\\[2.2ex]{\dfrac {\partial g}{\partial x_{2}}}&{\dfrac {\partial ^{2}\Lambda 
}{\partial x_{2}\,\partial x_{1}}}&{\dfrac {\partial ^{2}\Lambda }{\partial x_{2}^{2}}}&\cdots &{\dfrac {\partial ^{2}\Lambda }{\partial x_{2}\,\partial x_{n}}}\\[2.2ex]\vdots &\vdots &\vdots &\ddots &\vdots \\[2.2ex]{\dfrac {\partial g}{\partial x_{n}}}&{\dfrac {\partial ^{2}\Lambda }{\partial x_{n}\,\partial x_{1}}}&{\dfrac {\partial ^{2}\Lambda }{\partial x_{n}\,\partial x_{2}}}&\cdots &{\dfrac {\partial ^{2}\Lambda }{\partial x_{n}^{2}}}\end{bmatrix}}={\begin{bmatrix}0&{\dfrac {\partial g}{\partial \mathbf {x} }}\\\left({\dfrac {\partial g}{\partial \mathbf {x} }}\right)^{\mathsf {T}}&{\dfrac {\partial ^{2}\Lambda }{\partial \mathbf {x} ^{2}}}\end{bmatrix}}} If there are, say,m{\displaystyle m}constraints then the zero in the upper-left corner is anm×m{\displaystyle m\times m}block of zeros, and there arem{\displaystyle m}border rows at the top andm{\displaystyle m}border columns at the left. The above rules stating that extrema are characterized (among critical points with a non-singular Hessian) by a positive-definite or negative-definite Hessian cannot apply here since a bordered Hessian can neither be negative-definite nor positive-definite, aszTHz=0{\displaystyle \mathbf {z} ^{\mathsf {T}}\mathbf {H} \mathbf {z} =0}ifz{\displaystyle \mathbf {z} }is any vector whose sole non-zero entry is its first. The second derivative test consists here of sign restrictions of the determinants of a certain set ofn−m{\displaystyle n-m}submatrices of the bordered Hessian.[11]Intuitively, them{\displaystyle m}constraints can be thought of as reducing the problem to one withn−m{\displaystyle n-m}free variables. (For example, the maximization off(x1,x2,x3){\displaystyle f\left(x_{1},x_{2},x_{3}\right)}subject to the constraintx1+x2+x3=1{\displaystyle x_{1}+x_{2}+x_{3}=1}can be reduced to the maximization off(x1,x2,1−x1−x2){\displaystyle f\left(x_{1},x_{2},1-x_{1}-x_{2}\right)}without constraint.) 
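Assembling the bordered Hessian for one constraint can be sketched as follows; the example problem (maximize xy on the line x + y = 2, with maximum at (1, 1)) is an illustrative assumption:

```python
def bordered_hessian(grad_g, hess_L, x):
    # bordered Hessian for one constraint g(x) = c:
    # zero in the top-left corner, grad g as the border, Hessian of the
    # Lagrangian in the lower-right block
    gg = grad_g(x)
    HL = hess_L(x)
    n = len(gg)
    B = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        B[0][i + 1] = B[i + 1][0] = gg[i]
        for j in range(n):
            B[i + 1][j + 1] = HL[i][j]
    return B

# Lagrangian for f = x*y with constraint x + y = 2: its Hessian in (x, y)
# is [[0, 1], [1, 0]], and grad g = (1, 1)
grad_g = lambda x: [1.0, 1.0]
hess_L = lambda x: [[0.0, 1.0], [1.0, 0.0]]
B = bordered_hessian(grad_g, hess_L, [1.0, 1.0])
det = (B[0][0] * (B[1][1] * B[2][2] - B[1][2] * B[2][1])
       - B[0][1] * (B[1][0] * B[2][2] - B[1][2] * B[2][0])
       + B[0][2] * (B[1][0] * B[2][1] - B[1][1] * B[2][0]))
print(det)  # positive, the sign of (-1)^{m+1} for m = 1: a constrained maximum
```

With n = 2 variables and m = 1 constraint there is only n - m = 1 minor to check, namely the full 3x3 determinant.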
Specifically, sign conditions are imposed on the sequence of leading principal minors (determinants of upper-left-justified sub-matrices) of the bordered Hessian, for which the first2m{\displaystyle 2m}leading principal minors are neglected, the smallest minor consisting of the truncated first2m+1{\displaystyle 2m+1}rows and columns, the next consisting of the truncated first2m+2{\displaystyle 2m+2}rows and columns, and so on, with the last being the entire bordered Hessian; if2m+1{\displaystyle 2m+1}is larger thann+m,{\displaystyle n+m,}then the smallest leading principal minor is the Hessian itself.[12]There are thusn−m{\displaystyle n-m}minors to consider, each evaluated at the specific point being considered as acandidate maximum or minimum. A sufficient condition for a localmaximumis that these minors alternate in sign with the smallest one having the sign of(−1)m+1.{\displaystyle (-1)^{m+1}.}A sufficient condition for a localminimumis that all of these minors have the sign of(−1)m.{\displaystyle (-1)^{m}.}(In the unconstrained case ofm=0{\displaystyle m=0}these conditions coincide with the conditions for the unbordered Hessian to be negative definite or positive definite respectively). Iff{\displaystyle f}is instead avector fieldf:Rn→Rm,{\displaystyle \mathbf {f} :\mathbb {R} ^{n}\to \mathbb {R} ^{m},}that is,f(x)=(f1(x),f2(x),…,fm(x)),{\displaystyle \mathbf {f} (\mathbf {x} )=\left(f_{1}(\mathbf {x} ),f_{2}(\mathbf {x} ),\ldots ,f_{m}(\mathbf {x} )\right),}then the collection of second partial derivatives is not an×n{\displaystyle n\times n}matrix, but rather a third-ordertensor. 
This can be thought of as an array ofm{\displaystyle m}Hessian matrices, one for each component off{\displaystyle \mathbf {f} }:H(f)=(H(f1),H(f2),…,H(fm)).{\displaystyle \mathbf {H} (\mathbf {f} )=\left(\mathbf {H} (f_{1}),\mathbf {H} (f_{2}),\ldots ,\mathbf {H} (f_{m})\right).}This tensor degenerates to the usual Hessian matrix whenm=1.{\displaystyle m=1.} In the context ofseveral complex variables, the Hessian may be generalized. Supposef:Cn→C,{\displaystyle f\colon \mathbb {C} ^{n}\to \mathbb {C} ,}and writef(z1,…,zn).{\displaystyle f\left(z_{1},\ldots ,z_{n}\right).}IdentifyingCn{\displaystyle {\mathbb {C} }^{n}}withR2n{\displaystyle {\mathbb {R} }^{2n}}, the normal "real" Hessian is a2n×2n{\displaystyle 2n\times 2n}matrix. As the object of study in several complex variables areholomorphic functions, that is, solutions to the n-dimensionalCauchy–Riemann conditions, we usually look on the part of the Hessian that contains information invariant under holomorphic changes of coordinates. This "part" is the so-called complex Hessian, which is the matrix(∂2f∂zj∂z¯k)j,k.{\displaystyle \left({\frac {\partial ^{2}f}{\partial z_{j}\partial {\bar {z}}_{k}}}\right)_{j,k}.}Note that iff{\displaystyle f}is holomorphic, then its complex Hessian matrix is identically zero, so the complex Hessian is used to study smooth but not holomorphic functions, see for exampleLevi pseudoconvexity. When dealing with holomorphic functions, we could consider the Hessian matrix(∂2f∂zj∂zk)j,k.{\displaystyle \left({\frac {\partial ^{2}f}{\partial z_{j}\partial z_{k}}}\right)_{j,k}.} Let(M,g){\displaystyle (M,g)}be aRiemannian manifoldand∇{\displaystyle \nabla }itsLevi-Civita connection. Letf:M→R{\displaystyle f:M\to \mathbb {R} }be a smooth function. 
Define the Hessian tensor byHess⁡(f)∈Γ(T∗M⊗T∗M)byHess⁡(f):=∇∇f=∇df,{\displaystyle \operatorname {Hess} (f)\in \Gamma \left(T^{*}M\otimes T^{*}M\right)\quad {\text{ by }}\quad \operatorname {Hess} (f):=\nabla \nabla f=\nabla df,}where this takes advantage of the fact that the first covariant derivative of a function is the same as its ordinary differential. Choosing local coordinates{xi}{\displaystyle \left\{x^{i}\right\}}gives a local expression for the Hessian asHess⁡(f)=∇i∂jfdxi⊗dxj=(∂2f∂xi∂xj−Γijk∂f∂xk)dxi⊗dxj{\displaystyle \operatorname {Hess} (f)=\nabla _{i}\,\partial _{j}f\ dx^{i}\!\otimes \!dx^{j}=\left({\frac {\partial ^{2}f}{\partial x^{i}\partial x^{j}}}-\Gamma _{ij}^{k}{\frac {\partial f}{\partial x^{k}}}\right)dx^{i}\otimes dx^{j}}whereΓijk{\displaystyle \Gamma _{ij}^{k}}are theChristoffel symbolsof the connection. Other equivalent forms for the Hessian are given byHess⁡(f)(X,Y)=⟨∇Xgrad⁡f,Y⟩andHess⁡(f)(X,Y)=X(Yf)−df(∇XY).{\displaystyle \operatorname {Hess} (f)(X,Y)=\langle \nabla _{X}\operatorname {grad} f,Y\rangle \quad {\text{ and }}\quad \operatorname {Hess} (f)(X,Y)=X(Yf)-df(\nabla _{X}Y).}
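The bordered-Hessian sign conditions described earlier can be checked mechanically. Below is a minimal Python sketch on a hypothetical problem chosen for illustration (the problem, the candidate point, and the helper `det3` are not from the text): maximize f(x, y) = x·y subject to g(x, y) = x + y − 2 = 0, so n = 2, m = 1 and only n − m = 1 leading principal minor (the full bordered Hessian) needs to be examined.

```python
# Hypothetical example: maximize f(x, y) = x*y subject to x + y - 2 = 0.
# With n = 2 variables and m = 1 constraint, n - m = 1 minor is checked:
# the full (n + m) x (n + m) bordered Hessian itself.

def det3(M):
    """Determinant of a 3x3 matrix via cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# The Lagrange conditions grad f = lam * grad g give x = y = 1, lam = 1.
# The Lagrangian L = x*y - lam*(x + y - 2) has Hessian [[0, 1], [1, 0]],
# and grad g = (1, 1), so the bordered Hessian at the candidate point is:
H = [[0.0, 1.0, 1.0],
     [1.0, 0.0, 1.0],
     [1.0, 1.0, 0.0]]

d = det3(H)
# Sufficient condition for a constrained local maximum with m = 1: the
# smallest (here, the only) minor has the sign of (-1)**(m + 1) = +1.
print(d > 0)   # True: (1, 1) is a constrained local maximum of x*y
```

Indeed, on the line x + y = 2 the product x·y = x(2 − x) peaks at x = 1, which agrees with the sign test.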
https://en.wikipedia.org/wiki/Hessian_matrix#Bordered_Hessian
Inmathematics, adifferentiable functionof onerealvariable is afunctionwhosederivativeexists at each point in itsdomain. In other words, thegraphof a differentiable function has a non-verticaltangent lineat each interior point in its domain. A differentiable function issmooth(the function is locally well approximated as alinear functionat each interior point) and does not contain any break, angle, orcusp. Ifx0is an interior point in the domain of a functionf, thenfis said to bedifferentiable atx0if the derivativef′(x0){\displaystyle f'(x_{0})}exists. In other words, the graph offhas a non-vertical tangent line at the point(x0,f(x0)).fis said to be differentiable onUif it is differentiable at every point ofU.fis said to becontinuously differentiableif its derivative is also a continuous function over the domain of the functionf{\textstyle f}. Generally speaking,fis said to be of classCk{\displaystyle C^{k}}if its firstk{\displaystyle k}derivativesf′(x),f′′(x),…,f(k)(x){\textstyle f^{\prime }(x),f^{\prime \prime }(x),\ldots ,f^{(k)}(x)}exist and are continuous over the domain of the functionf{\textstyle f}. For a multivariable function, as shown below, differentiability is more subtle than the mere existence of its partial derivatives. A functionf:U→R{\displaystyle f:U\to \mathbb {R} }, defined on an open setU⊂R{\textstyle U\subset \mathbb {R} }, is said to bedifferentiableata∈U{\displaystyle a\in U}if the derivative exists. This implies that the function iscontinuousata. This functionfis said to bedifferentiableonUif it is differentiable at every point ofU. In this case, the derivative offis thus a function fromUintoR.{\displaystyle \mathbb {R} .} A continuous function is not necessarily differentiable, but a differentiable function is necessarilycontinuous(at every point where it is differentiable) as is shown below (in the sectionDifferentiability and continuity).
A function is said to becontinuously differentiableif its derivative is also a continuous function; there exist functions that are differentiable but not continuously differentiable (an example is given in the sectionDifferentiability classes). The above definition can be extended to define the derivative atboundary points. The derivative of a functionf:A→R{\textstyle f:A\to \mathbb {R} }defined on a closed subsetA⊊R{\textstyle A\subsetneq \mathbb {R} }of the real numbers, evaluated at a boundary pointc{\textstyle c}, can be defined as the following one-sided limit, where the argumentx{\textstyle x}approachesc{\textstyle c}such that it is always withinA{\textstyle A}: Forx{\textstyle x}to remain withinA{\textstyle A}, which is a subset of the reals, it follows that this limit will be defined as either Iffis differentiable at a pointx0, thenfmust also becontinuousatx0. In particular, any differentiable function must be continuous at every point in its domain.The converse does not hold: a continuous function need not be differentiable. For example, a function with a bend,cusp, orvertical tangentmay be continuous, but fails to be differentiable at the location of the anomaly. Most functions that occur in practice have derivatives at all points or atalmost everypoint. However, a result ofStefan Banachstates that the set of functions that have a derivative at some point is ameagre setin the space of all continuous functions.[1]Informally, this means that differentiable functions are very atypical among continuous functions. The first known example of a function that is continuous everywhere but differentiable nowhere is theWeierstrass function. A functionf{\textstyle f}is said to becontinuously differentiableif the derivativef′(x){\textstyle f^{\prime }(x)}exists and is itself a continuous function. Although the derivative of a differentiable function never has ajump discontinuity, it is possible for the derivative to have anessential discontinuity. 
For example, the functionf(x)={x2sin⁡(1/x)ifx≠00ifx=0{\displaystyle f(x)\;=\;{\begin{cases}x^{2}\sin(1/x)&{\text{ if }}x\neq 0\\0&{\text{ if }}x=0\end{cases}}}is differentiable at 0, sincef′(0)=limε→0(ε2sin⁡(1/ε)−0ε)=0{\displaystyle f'(0)=\lim _{\varepsilon \to 0}\left({\frac {\varepsilon ^{2}\sin(1/\varepsilon )-0}{\varepsilon }}\right)=0}exists. However, forx≠0,{\displaystyle x\neq 0,}differentiation rulesimplyf′(x)=2xsin⁡(1/x)−cos⁡(1/x),{\displaystyle f'(x)=2x\sin(1/x)-\cos(1/x)\;,}which has no limit asx→0.{\displaystyle x\to 0.}Thus, this example shows the existence of a function that is differentiable but not continuously differentiable (i.e., the derivative is not a continuous function). Nevertheless,Darboux's theoremimplies that the derivative of any function satisfies the conclusion of theintermediate value theorem. Similarly to howcontinuous functionsare said to be ofclassC0,{\displaystyle C^{0},}continuously differentiable functions are sometimes said to be ofclassC1{\displaystyle C^{1}}. A function is ofclassC2{\displaystyle C^{2}}if the first andsecond derivativeof the function both exist and are continuous. More generally, a function is said to be ofclassCk{\displaystyle C^{k}}if the firstk{\displaystyle k}derivativesf′(x),f′′(x),…,f(k)(x){\textstyle f^{\prime }(x),f^{\prime \prime }(x),\ldots ,f^{(k)}(x)}all exist and are continuous. If derivativesf(n){\displaystyle f^{(n)}}exist for all positive integersn,{\textstyle n,}the function issmoothor equivalently, ofclassC∞.{\displaystyle C^{\infty }.} Afunction of several real variablesf:Rm→Rnis said to be differentiable at a pointx0ifthere existsalinear mapJ:Rm→Rnsuch that If a function is differentiable atx0, then all of thepartial derivativesexist atx0, and the linear mapJis given by theJacobian matrix, ann×mmatrix in this case. A similar formulation of the higher-dimensional derivative is provided by thefundamental increment lemmafound in single-variable calculus. 
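The function f(x) = x²·sin(1/x) discussed above can be checked numerically. The following is a minimal Python sketch (the sample points are chosen for illustration): the difference quotients at 0 shrink like |h|, so f′(0) = 0, while f′(x) keeps oscillating between roughly −1 and 1 near 0 and therefore has no limit.

```python
import math

# f(x) = x^2 sin(1/x) for x != 0, f(0) = 0: differentiable at 0,
# but its derivative is not continuous there.

def f(x):
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

# Difference quotients at 0 satisfy |f(h)/h| = |h sin(1/h)| <= |h|,
# so they converge to 0 and f'(0) = 0:
steps = (1e-2, 1e-4, 1e-6)
quotients = [abs(f(h) / h) for h in steps]
print(all(q <= h for q, h in zip(quotients, steps)))   # True

# Away from 0, f'(x) = 2x sin(1/x) - cos(1/x) oscillates without settling:
def fprime(x):
    return 2 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

a = fprime(1.0 / (2 * math.pi * 100))        # cos term ~ +1 -> f' near -1
b = fprime(1.0 / (math.pi * (2 * 100 + 1)))  # cos term ~ -1 -> f' near +1
print(round(a), round(b))
```

Sampling f′ ever closer to 0 yields values near both −1 and +1, confirming that the derivative exists everywhere yet is discontinuous at 0.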
If all the partial derivatives of a function exist in aneighborhoodof a pointx0and are continuous at the pointx0, then the function is differentiable at that pointx0. However, the existence of the partial derivatives (or even of all thedirectional derivatives) does not guarantee that a function is differentiable at a point. For example, the functionf:R2→Rdefined by is not differentiable at(0, 0), but all of the partial derivatives and directional derivatives exist at this point. For a continuous example, the function is not differentiable at(0, 0), but again all of the partial derivatives and directional derivatives exist. Incomplex analysis, complex-differentiability is defined using the same definition as single-variable real functions. This is allowed by the possibility of dividingcomplex numbers. So, a functionf:C→C{\textstyle f:\mathbb {C} \to \mathbb {C} }is said to be differentiable atx=a{\textstyle x=a}when Although this definition looks similar to the differentiability of single-variable real functions, it is however a more restrictive condition. A functionf:C→C{\textstyle f:\mathbb {C} \to \mathbb {C} }, that is complex-differentiable at a pointx=a{\textstyle x=a}is automatically differentiable at that point, when viewed as a functionf:R2→R2{\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}. This is because the complex-differentiability implies that However, a functionf:C→C{\textstyle f:\mathbb {C} \to \mathbb {C} }can be differentiable as a multi-variable function, while not being complex-differentiable. For example,f(z)=z+z¯2{\displaystyle f(z)={\frac {z+{\overline {z}}}{2}}}is differentiable at every point, viewed as the 2-variablereal functionf(x,y)=x{\displaystyle f(x,y)=x}, but it is not complex-differentiable at any point because the limitlimh→0h+h¯2h{\textstyle \lim _{h\to 0}{\frac {h+{\bar {h}}}{2h}}}gives different values for different approaches to 0. 
Any function that is complex-differentiable in a neighborhood of a point is calledholomorphicat that point. Such a function is necessarily infinitely differentiable, and in factanalytic. IfMis adifferentiable manifold, a real or complex-valued functionfonMis said to be differentiable at a pointpif it is differentiable with respect to some (or any) coordinate chart defined aroundp. IfMandNare differentiable manifolds, a functionf:M→Nis said to be differentiable at a pointpif it is differentiable with respect to some (or any) coordinate charts defined aroundpandf(p).
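The failure of complex-differentiability for f(z) = (z + z̄)/2 mentioned above is easy to see numerically: the difference quotient (h + h̄)/(2h) depends on the direction from which h approaches 0. A minimal Python sketch (the step size 1e-8 is an arbitrary choice):

```python
# f(z) = (z + conj(z)) / 2 = Re(z) is real-differentiable everywhere,
# but the complex difference quotient depends on the direction of h.

def quotient(h):
    return (h + h.conjugate()) / (2 * h)

real_dir = quotient(1e-8 + 0j)   # h real:      (h + h) / (2h) = 1
imag_dir = quotient(1e-8j)       # h imaginary: (h - h) / (2h) = 0

print(real_dir, imag_dir)
```

Since the limit along the real axis is 1 but along the imaginary axis is 0, no single complex derivative exists at any point.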
https://en.wikipedia.org/wiki/Differentiability
Inmathematics, theinterior extremum theorem, also known asFermat's theorem, is a theorem which states that at thelocal extremaof adifferentiable function, itsderivativeis always zero. It belongs to the mathematical field ofreal analysisand is named after French mathematicianPierre de Fermat. By using the interior extremum theorem, the potential extrema of a functionf{\displaystyle f}, with derivativef′{\displaystyle f'}, can be found by solving anequationinvolvingf′{\displaystyle f'}. The interior extremum theorem gives only anecessary conditionfor extreme function values, as some stationary points areinflection points(not a maximum or minimum). The function'ssecond derivative, if it exists, can sometimes be used to determine whether a stationary point is a maximum or minimum. Pierre de Fermatproposed in a collection of treatises titledMaxima et minimaa method for finding maxima and minima, similar to the modern interior extremum theorem, albeit with the use ofinfinitesimalsrather than derivatives.[1]: 456–457[2]: 2AfterMarin Mersennepassed the treatises ontoRené Descartes, Descartes was doubtful, remarking "if [...] he speaks of wanting to send you still more papers, I beg of you to ask him to think them out more carefully than those preceding".[2]: 3Descartes later agreed that the method was valid.[2]: 8 One way to state the interior extremum theorem is that, if a function has a localextremumat some point and isdifferentiablethere, then the function's derivative at that point must be zero. In precise mathematical language: Another way to understand the theorem is via thecontrapositivestatement: if the derivative of a function at any point is not zero, then there is not a local extremum at that point. Formally: The global extrema of a functionfon adomainAoccur only atboundaries, non-differentiable points, and stationary points.
Ifx0{\displaystyle x_{0}}is a global extremum off, then one of the following is true:[2]: 1 A similar statement holds for thepartial derivativesofmultivariate functions. Suppose that some real-valued function of the real numbersf=f(t1,t2,…,tk){\displaystyle f=f(t_{1},t_{2},\ldots ,t_{k})}has an extremum at a pointC{\displaystyle C}, defined byC=(a1,a2,…,ak){\displaystyle C=(a_{1},a_{2},\ldots ,a_{k})}. Iff{\displaystyle f}is differentiable atC{\displaystyle C}, then:∂∂tif(ai)=0{\displaystyle {\frac {\partial }{\partial t_{i}}}f(a_{i})=0}wherei=1,2,…,k{\displaystyle i=1,2,\ldots ,k}.[4]: 16 The statement can also be extended todifferentiable manifolds. Iff:M→R{\displaystyle f:M\to \mathbb {R} }is adifferentiable functionon a manifoldM{\displaystyle M}, then its local extrema must becritical pointsoff{\displaystyle f}, in particular points where theexterior derivativedf{\displaystyle df}is zero.[5] The interior extremum theorem is central for determiningmaxima and minimaofpiecewise differentiable functionsof one variable: an extremum is either astationary point(that is, azeroof the derivative), a non-differentiable point (that is, a point where the function is notdifferentiable), or aboundary pointof thedomain of the function. Since the number of these points is typically finite, the computation of the values of the function at these points provides the maximum and the minimum, simply by comparing the obtained values.[6]: 25[2]: 1 Suppose thatx0{\displaystyle x_{0}}is a local maximum. (A similar argument applies ifx0{\displaystyle x_{0}}is a local minimum.) Then there is someneighbourhoodaroundx0{\displaystyle x_{0}}such thatf(x0)≥f(x){\displaystyle f(x_{0})\geq f(x)}for allx{\displaystyle x}within that neighborhood. Ifx>x0{\displaystyle x>x_{0}}, then thedifference quotientf(x)−f(x0)x−x0{\displaystyle {\frac {f(x)-f(x_{0})}{x-x_{0}}}}is non-positive forx{\displaystyle x}in this neighborhood.
This implieslimx→x0+f(x)−f(x0)x−x0≤0.{\displaystyle \lim _{x\rightarrow x_{0}^{+}}{\frac {f(x)-f(x_{0})}{x-x_{0}}}\leq 0.}Similarly, ifx<x0{\displaystyle x<x_{0}}, then the difference quotient is non-negative, and solimx→x0−f(x)−f(x0)x−x0≥0.{\displaystyle \lim _{x\rightarrow x_{0}^{-}}{\frac {f(x)-f(x_{0})}{x-x_{0}}}\geq 0.}Sincef{\displaystyle f}is differentiable, the above limits must both be equal tof′(x0){\displaystyle f'(x_{0})}. This is only possible if both limits are equal to 0, sof′(x0)=0{\displaystyle f'(x_{0})=0}.[7]: 182
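The theorem's use as a search procedure can be sketched in a few lines of Python. The function f(x) = x³ − 3x below is a hypothetical example, not from the text: its derivative 3x² − 3 vanishes exactly at x = ±1, and since the theorem is only a necessary condition, a neighbour check distinguishes maximum from minimum.

```python
# Hypothetical example: f(x) = x**3 - 3*x has f'(x) = 3*x**2 - 3,
# which vanishes only at x = -1 and x = 1 (the extremum candidates).

def f(x):
    return x**3 - 3*x

def fprime(x):
    return 3*x**2 - 3

candidates = [-1.0, 1.0]
print([fprime(c) for c in candidates])   # derivative is 0 at both

# The theorem gives only a necessary condition; compare neighbours:
eps = 1e-3
kinds = []
for c in candidates:
    if f(c) >= f(c - eps) and f(c) >= f(c + eps):
        kinds.append("local max")
    else:
        kinds.append("local min")
print(kinds)   # x = -1 is a local maximum, x = 1 a local minimum
```

This matches the second-derivative check: f″(−1) = −6 < 0 (maximum) and f″(1) = 6 > 0 (minimum).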
https://en.wikipedia.org/wiki/Fermat%27s_theorem_(stationary_points)
Indifferential calculusanddifferential geometry, aninflection point,point of inflection,flex, orinflection(rarelyinflexion) is a point on asmooth plane curveat which thecurvaturechanges sign. In particular, in the case of thegraph of a function, it is a point where the function changes from beingconcave(concave downward) toconvex(concave upward), or vice versa. For the graph of a functionfofdifferentiability classC2(its first derivativef', and itssecond derivativef'', exist and are continuous), the conditionf'' =0can also be used to find an inflection point since a point off'' =0must be passed to changef''from a positive value (concave upward) to a negative value (concave downward) or vice versa asf''is continuous; an inflection point of the curve is wheref'' =0and changes its sign at the point (from positive to negative or from negative to positive).[1]A point where the second derivative vanishes but does not change its sign is sometimes called apoint of undulationorundulation point. In algebraic geometry an inflection point is defined slightly more generally, as aregular pointwhere the tangent meets the curve toorderat least 3, and an undulation point orhyperflexis defined as a point where the tangent meets the curve to order at least 4. Inflection points in differential geometry are the points of the curve where thecurvaturechanges its sign.[2][3] For example, the graph of thedifferentiable functionhas an inflection point at(x,f(x))if and only if itsfirst derivativef'has anisolatedextremumatx. (this is not the same as saying thatfhas an extremum). That is, in some neighborhood,xis the one and only point at whichf'has a (local) minimum or maximum. If allextremaoff'areisolated, then an inflection point is a point on the graph offat which thetangentcrosses the curve. Afalling point of inflectionis an inflection point where the derivative is negative on both sides of the point; in other words, it is an inflection point near which the function is decreasing. 
Arising point of inflectionis a point where the derivative is positive on both sides of the point; in other words, it is an inflection point near which the function is increasing. For a smooth curve given byparametric equations, a point is an inflection point if itssigned curvaturechanges from plus to minus or from minus to plus, i.e., changessign. For a smooth curve which is a graph of a twice differentiable function, an inflection point is a point on the graph at which thesecond derivativehas an isolated zero and changes sign. Inalgebraic geometry, a non-singular point of analgebraic curveis aninflection pointif and only if theintersection numberof the tangent line and the curve (at the point of tangency) is greater than 2. The main motivation of this different definition is that otherwise the set of the inflection points of a curve would not be analgebraic set. In fact, the set of the inflection points of a plane algebraic curve is exactly itsnon-singular pointsthat are zeros of theHessian determinantof itsprojective completion. For a functionf, if its second derivativef″(x)exists atx0andx0is an inflection point forf, thenf″(x0) = 0, but this condition is notsufficientfor having a point of inflection, even if derivatives of any order exist. In this case, one also needs the lowest-order (above the second) non-zero derivative to be of odd order (third, fifth, etc.). If the lowest-order non-zero derivative is of even order, the point is not a point of inflection, but anundulation point. However, in algebraic geometry, both inflection points and undulation points are usually calledinflection points. An example of an undulation point isx= 0for the functionfgiven byf(x) =x4. In the preceding assertions, it is assumed thatfhas some higher-order non-zero derivative atx, which is not necessarily the case. If it is the case, the condition that the first nonzero derivative has an odd order implies that the sign off'(x)is the same on either side ofxin aneighborhoodofx.
If this sign ispositive, the point is arising point of inflection; if it isnegative, the point is afalling point of inflection. Points of inflection can also be categorized according to whetherf'(x)is zero or nonzero. A stationary point of inflection is not alocal extremum. More generally, in the context offunctions of several real variables, a stationary point that is not a local extremum is called asaddle point. An example of a stationary point of inflection is the point(0, 0)on the graph ofy=x3. The tangent is thex-axis, which cuts the graph at this point. An example of a non-stationary point of inflection is the point(0, 0)on the graph ofy=x3+ax, for any nonzeroa. The tangent at the origin is the liney=ax, which cuts the graph at this point. Some functions change concavity without having points of inflection. Instead, they can change concavity around vertical asymptotes or discontinuities. For example, the functionx↦1x{\displaystyle x\mapsto {\frac {1}{x}}}is concave for negativexand convex for positivex, but it has no points of inflection because 0 is not in the domain of the function. Some continuous functions have an inflection point even though the second derivative is never 0. For example, the cube root function is concave upward when x is negative, and concave downward when x is positive, but has no derivatives of any order at the origin.
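The distinction between an inflection point and an undulation point comes down to whether f″ actually changes sign where it vanishes. A minimal Python sketch (the helper name and the step size eps are hypothetical choices) contrasts y = x³ with the undulation example y = x⁴ mentioned above:

```python
# Test whether f''(x0) = 0 corresponds to a genuine inflection point
# (f'' changes sign) or merely an undulation point (it does not).

def second_derivative_changes_sign(fpp, x0, eps=1e-6):
    return fpp(x0 - eps) * fpp(x0 + eps) < 0

# y = x**3: f''(x) = 6x changes sign at 0   -> inflection point
print(second_derivative_changes_sign(lambda x: 6*x, 0.0))       # True

# y = x**4: f''(x) = 12x**2 stays >= 0 at 0 -> undulation point
print(second_derivative_changes_sign(lambda x: 12*x*x, 0.0))    # False
```

The product-of-signs test is the numerical counterpart of requiring the lowest-order non-zero derivative above the second to be of odd order.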
https://en.wikipedia.org/wiki/Inflection_point
Inmathematical analysis, themaximumandminimum[a]of afunctionare, respectively, the greatest and least value taken by the function. Known generically asextremum,[b]they may be defined either within a givenrange(thelocalorrelativeextrema) or on the entiredomain(theglobalorabsoluteextrema) of a function.[1][2][3]Pierre de Fermatwas one of the first mathematicians to propose a general technique,adequality, for finding the maxima and minima of functions. As defined inset theory, the maximum and minimum of asetare thegreatest and least elementsin the set, respectively. Unboundedinfinite sets, such as the set ofreal numbers, have no minimum or maximum. Instatistics, the corresponding concept is thesample maximum and minimum. A real-valuedfunctionfdefined on adomainXhas aglobal(orabsolute)maximum pointatx∗, iff(x∗) ≥f(x)for allxinX. Similarly, the function has aglobal(orabsolute)minimum pointatx∗, iff(x∗) ≤f(x)for allxinX. The value of the function at a maximum point is called themaximum valueof the function, denotedmax(f(x)){\displaystyle \max(f(x))}, and the value of the function at a minimum point is called theminimum valueof the function, (denotedmin(f(x)){\displaystyle \min(f(x))}for clarity). Symbolically, this can be written as follows: The definition of global minimum point also proceeds similarly. If the domainXis ametric space, thenfis said to have alocal(orrelative)maximum pointat the pointx∗, if there exists someε> 0 such thatf(x∗) ≥f(x)for allxinXwithin distanceεofx∗. Similarly, the function has alocal minimum pointatx∗, iff(x∗) ≤f(x) for allxinXwithin distanceεofx∗. A similar definition can be used whenXis atopological space, since the definition just given can be rephrased in terms of neighbourhoods. Mathematically, the given definition is written as follows: The definition of local minimum point can also proceed similarly. In both the global and local cases, the concept of astrict extremumcan be defined. 
For example,x∗is astrict global maximum pointif for allxinXwithx≠x∗, we havef(x∗) >f(x), andx∗is astrict local maximum pointif there exists someε> 0such that, for allxinXwithin distanceεofx∗withx≠x∗, we havef(x∗) >f(x). Note that a point is a strict global maximum point if and only if it is the unique global maximum point, and similarly for minimum points. Acontinuousreal-valued function with acompactdomain always has a maximum point and a minimum point. An important example is a function whose domain is a closed and boundedintervalofreal numbers(see the graph above). Finding global maxima and minima is the goal ofmathematical optimization. If a function is continuous on a closed interval, then by theextreme value theorem, global maxima and minima exist. Furthermore, a global maximum (or minimum) either must be a local maximum (or minimum) in the interior of the domain, or must lie on the boundary of the domain. So a method of finding a global maximum (or minimum) is to look at all the local maxima (or minima) in the interior, and also look at the maxima (or minima) of the points on the boundary, and take the greatest (or least) one. Fordifferentiable functions,Fermat's theoremstates that local extrema in the interior of a domain must occur atcritical points(or points where the derivative equals zero).[4]However, not all critical points are extrema. One can often distinguish whether a critical point is a local maximum, a local minimum, or neither by using thefirst derivative test,second derivative test, orhigher-order derivative test, given sufficient differentiability.[5] For any function that is definedpiecewise, one finds a maximum (or minimum) by finding the maximum (or minimum) of each piece separately, and then seeing which one is greatest (or least). 
For a practical example,[6]assume a situation where someone has200{\displaystyle 200}feet of fencing and is trying to maximize the square footage of a rectangular enclosure, wherex{\displaystyle x}is the length,y{\displaystyle y}is the width, andxy{\displaystyle xy}is the area: The derivative with respect tox{\displaystyle x}is: Setting this equal to0{\displaystyle 0} reveals thatx=50{\displaystyle x=50}is our onlycritical point. Now retrieve theendpointsby determining the interval to whichx{\displaystyle x}is restricted. Since width is positive, thenx>0{\displaystyle x>0}, and sincex=100−y{\displaystyle x=100-y},that implies thatx<100{\displaystyle x<100}.Plug in critical point50{\displaystyle 50},as well as endpoints0{\displaystyle 0}and100{\displaystyle 100},intoxy=x(100−x){\displaystyle xy=x(100-x)},and the results are2500,0,{\displaystyle 2500,0,}and0{\displaystyle 0}respectively. Therefore, the greatest area attainable with a rectangle of200{\displaystyle 200}feet of fencing is50×50=2500{\displaystyle 50\times 50=2500}.[6] For functions of more than one variable, similar conditions apply. For example, in the (enlargeable) figure on the right, the necessary conditions for alocalmaximum are similar to those of a function with only one variable. The firstpartial derivativesas toz(the variable to be maximized) are zero at the maximum (the glowing dot on top in the figure). The second partial derivatives are negative. These are only necessary, not sufficient, conditions for a local maximum, because of the possibility of asaddle point. For use of these conditions to solve for a maximum, the functionzmust also bedifferentiablethroughout. Thesecond partial derivative testcan help classify the point as a relative maximum or relative minimum. In contrast, there are substantial differences between functions of one variable and functions of more than one variable in the identification of global extrema. 
For example, if a bounded differentiable functionfdefined on a closed interval in the real line has a single critical point, which is a local minimum, then it is also a global minimum (use theintermediate value theoremandRolle's theoremto prove this bycontradiction). In two and more dimensions, this argument fails. This is illustrated by the function whose only critical point is at (0,0), which is a local minimum withf(0,0) = 0. However, it cannot be a global one, becausef(2,3) = −5. If the domain of a function for which an extremum is to be found consists itself of functions (i.e. if an extremum is to be found of afunctional), then the extremum is found using thecalculus of variations. Maxima and minima can also be defined for sets. In general, if anordered setShas agreatest elementm, thenmis amaximal elementof the set, also denoted asmax(S){\displaystyle \max(S)}. Furthermore, ifSis a subset of an ordered setTandmis the greatest element ofSwith (respect to order induced byT), thenmis aleast upper boundofSinT. Similar results hold forleast element,minimal elementandgreatest lower bound. The maximum and minimum function for sets are used indatabases, and can be computed rapidly, since the maximum (or minimum) of a set can be computed from the maxima of a partition; formally, they are self-decomposable aggregation functions. In the case of a generalpartial order, aleast element(i.e., one that is less than all others) should not be confused with theminimal element(nothing is lesser). Likewise, agreatest elementof apartially ordered set(poset) is anupper boundof the set which is contained within the set, whereas themaximal elementmof a posetAis an element ofAsuch that ifm≤b(for anybinA), thenm=b. Any least element or greatest element of a poset is unique, but a poset can have several minimal or maximal elements. If a poset has more than one maximal element, then these elements will not be mutually comparable. 
In atotally orderedset, orchain, all elements are mutually comparable, so such a set can have at most one minimal element and at most one maximal element. Then, due to mutual comparability, the minimal element will also be the least element, and the maximal element will also be the greatest element. Thus in a totally ordered set, we can simply use the termsminimumandmaximum. If a chain is finite, then it will always have a maximum and a minimum. If a chain is infinite, then it need not have a maximum or a minimum. For example, the set ofnatural numbershas no maximum, though it has a minimum. If an infinite chainSis bounded, then theclosureCl(S) of the set occasionally has a minimum and a maximum, in which case they are called thegreatest lower boundand theleast upper boundof the setS, respectively.
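The fencing example earlier in this section can be verified with a few lines of Python: compare the function's value at the single critical point with its values at the interval's endpoints, exactly as the closed-interval method prescribes.

```python
# Fencing example: 200 feet of fencing, x + y = 100, so the enclosed
# area as a function of the length x is A(x) = x * (100 - x) on [0, 100].

def area(x):
    return x * (100 - x)

# A'(x) = 100 - 2x = 0 gives the single critical point x = 50.
critical = 100 / 2

candidates = [0, critical, 100]      # endpoints plus the critical point
best = max(candidates, key=area)

print(best, area(best))   # 50.0 2500.0
```

The maximum area of 2500 square feet is attained by the 50-by-50 square, in agreement with the calculation in the text.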
https://en.wikipedia.org/wiki/Maxima_and_minima
Inmathematics, aphase lineis a diagram that shows the qualitative behaviour of anautonomousordinary differential equationin a single variable,dydx=f(y){\displaystyle {\tfrac {dy}{dx}}=f(y)}. The phase line is the 1-dimensional form of the generaln{\displaystyle n}-dimensionalphase space, and can be readily analyzed. A line, usually vertical, represents an interval of the domain of thederivative. Thecritical points(i.e.,rootsof the derivativedydx{\displaystyle {\tfrac {dy}{dx}}}, pointsy{\displaystyle y}such thatf(y)=0{\displaystyle f(y)=0}) are indicated, and the intervals between the critical points have their signs indicated with arrows: an interval over which the derivative is positive has an arrow pointing in the positive direction along the line (up or right), and an interval over which the derivative is negative has an arrow pointing in the negative direction along the line (down or left). The phase line is identical in form to the line used in thefirst derivative test, other than being drawn vertically instead of horizontally, and the interpretation is virtually identical, with the same classification of critical points. The simplest examples of a phase line are the trivial phase lines, corresponding to functionsf(y){\displaystyle f(y)}which do not change sign: iff(y)=0{\displaystyle f(y)=0}, every point is a stable equilibrium (y{\displaystyle y}does not change); iff(y)>0{\displaystyle f(y)>0}for ally{\displaystyle y}, theny{\displaystyle y}is always increasing, and iff(y)<0{\displaystyle f(y)<0}theny{\displaystyle y}is always decreasing. The simplest non-trivial examples are theexponential growth model/decay (one unstable/stable equilibrium) and thelogistic growth model(two equilibria, one stable, one unstable). A critical point can be classified as stable, unstable, or semi-stable (equivalently, sink, source, or node), by inspection of its neighbouring arrows. 
If both arrows point toward the critical point, it is stable (a sink): nearby solutions will convergeasymptoticallyto the critical point, and the solution is stable under small perturbations, meaning that if the solution is disturbed, it will return to (converge to) the solution. If both arrows point away from the critical point, it is unstable (a source): nearby solutions will diverge from the critical point, and the solution is unstable under small perturbations, meaning that if the solution is disturbed, it willnotreturn to the solution. Otherwise – if one arrow points towards the critical point, and one points away – it is semi-stable (a node): it is stable in one direction (where the arrow points towards the point), and unstable in the other direction (where the arrow points away from the point).
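Reading off the arrows of a phase line amounts to checking the sign of f on either side of each root. A minimal Python sketch (the helper `classify` and the step size eps are hypothetical), applied to the logistic model dy/dx = y(1 − y) mentioned above:

```python
# Classify an equilibrium y0 of dy/dx = f(y) by the sign of f nearby,
# exactly as one reads the arrows on a phase line.

def classify(f, y0, eps=1e-6):
    left, right = f(y0 - eps), f(y0 + eps)
    if left > 0 and right < 0:
        return "stable (sink)"       # both arrows point toward y0
    if left < 0 and right > 0:
        return "unstable (source)"   # both arrows point away from y0
    return "semi-stable (node)"      # arrows agree in direction

f = lambda y: y * (1 - y)
print(classify(f, 0.0))   # unstable (source)
print(classify(f, 1.0))   # stable (sink)
```

For a semi-stable case, f(y) = y² gives a node at 0, since f is positive on both sides.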
https://en.wikipedia.org/wiki/Phase_line_(mathematics)
Inmathematics, thesecond partial derivative testis a method inmultivariable calculusused to determine if acritical pointof a function is alocal minimum, maximum orsaddle point. Suppose thatf(x,y)is a differentiablereal functionof two variables whose secondpartial derivativesexist and arecontinuous. TheHessian matrixHoffis the 2 × 2 matrix of partial derivatives off:H(x,y)=[fxx(x,y)fxy(x,y)fyx(x,y)fyy(x,y)].{\displaystyle H(x,y)={\begin{bmatrix}f_{xx}(x,y)&f_{xy}(x,y)\\f_{yx}(x,y)&f_{yy}(x,y)\end{bmatrix}}.} DefineD(x,y)to be thedeterminantD(x,y)=det(H(x,y))=fxx(x,y)fyy(x,y)−(fxy(x,y))2{\displaystyle D(x,y)=\det(H(x,y))=f_{xx}(x,y)f_{yy}(x,y)-\left(f_{xy}(x,y)\right)^{2}}ofH. Finally, suppose that(a,b)is a critical point off, that is, thatfx(a,b) =fy(a,b) = 0. Then the second partial derivative test asserts the following:[1] Sometimes other equivalent versions of the test are used. In cases 1 and 2, the requirement thatfxxfyy−fxy2is positive at(x,y)implies thatfxxandfyyhave the same sign there. Therefore, the second condition, thatfxxbe greater (or less) than zero, could equivalently be thatfyyortr(H) =fxx+fyybe greater (or less) than zero at that point. A condition implicit in the statement of the test is that iffxx=0{\displaystyle f_{xx}=0}orfyy=0{\displaystyle f_{yy}=0}, it must be the case thatD(a,b)≤0,{\displaystyle D(a,b)\leq 0,}and therefore only cases 3 or 4 are possible. For a functionfof three or more variables, there is a generalization of the rule shown above. In this context, instead of examining the determinant of the Hessian matrix, one must look at theeigenvaluesof the Hessian matrix at the critical point. 
The following test can be applied at any critical point a for which the Hessian matrix is invertible:

1. If the Hessian is positive definite (all of its eigenvalues are positive) at a, then f attains a local minimum at a.
2. If the Hessian is negative definite (all of its eigenvalues are negative) at a, then f attains a local maximum at a.
3. If the Hessian has both positive and negative eigenvalues, then a is a saddle point of f.

In those cases not listed above, the test is inconclusive.[2]

For functions of three or more variables, the determinant of the Hessian does not provide enough information to classify the critical point, because the number of jointly sufficient second-order conditions is equal to the number of variables, and the sign condition on the determinant of the Hessian is only one of the conditions. Note that in the one-variable case, the Hessian condition simply gives the usual second derivative test.

In the two-variable case, D(a, b) and fxx(a, b) are the principal minors of the Hessian. The first two conditions listed above on the signs of these minors are the conditions for the positive or negative definiteness of the Hessian. For the general case of an arbitrary number n of variables, there are n sign conditions on the n principal minors of the Hessian matrix that together are equivalent to positive or negative definiteness of the Hessian (Sylvester's criterion): for a local minimum, all the principal minors need to be positive, while for a local maximum, the minors with an odd number of rows and columns need to be negative and the minors with an even number of rows and columns need to be positive. See Hessian matrix#Bordered Hessian for a discussion that generalizes these rules to the case of equality-constrained optimization.

To find and classify the critical points of the function

z = f(x, y) = (x + y)(xy + xy²),

we first set the partial derivatives ∂z/∂x = y(1 + y)(2x + y) and ∂z/∂y = x(x + 2xy + 2y + 3y²) equal to zero and solve the resulting equations simultaneously to find the four critical points

(0, 0), (0, −1), (1, −1) and (3/8, −3/4).

In order to classify the critical points, we examine the value of the determinant D(x, y) of the Hessian of f at each of the four critical points.
We have

fxx = 2y + 2y², fyy = 2x² + 2x + 6xy, fxy = 2x + 4xy + 2y + 3y²,

so that D(x, y) = fxx fyy − fxy². Now we plug in all the different critical values we found to label them; we have

D(0, 0) = 0, D(0, −1) = −1, D(1, −1) = −1, D(3/8, −3/4) = 27/128.

Thus, the second partial derivative test indicates that f(x, y) has saddle points at (0, −1) and (1, −1) and has a local maximum at (3/8, −3/4) since fxx = −3/8 < 0. At the remaining critical point (0, 0) the second derivative test is insufficient, and one must use higher-order tests or other tools to determine the behavior of the function at this point. (In fact, one can show that f takes both positive and negative values in small neighborhoods around (0, 0), and so this point is a saddle point of f.)
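The test can be cross-checked numerically with central finite differences for the second partials. A minimal sketch, assuming the example function is f(x, y) = (x + y)(xy + xy²), which reproduces the four critical points and classifications stated in the text; the helper name `classify` and the tolerance `tol` are choices of this illustration:

```python
def classify(f, x, y, h=1e-4, tol=1e-6):
    """Second partial derivative test with central finite differences."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    D = fxx * fyy - fxy**2
    if abs(D) < tol:                 # D ~ 0: the test is inconclusive
        return "inconclusive"
    if D < 0:
        return "saddle point"
    return "local minimum" if fxx > 0 else "local maximum"

f = lambda x, y: (x + y) * (x * y + x * y**2)

for pt in [(0, -1), (1, -1), (3/8, -3/4), (0, 0)]:
    print(pt, classify(f, *pt))
# (0, -1) and (1, -1): saddle point; (3/8, -3/4): local maximum; (0, 0): inconclusive
```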
https://en.wikipedia.org/wiki/Second_partial_derivative_test
In mathematics, particularly in calculus, a stationary point of a differentiable function of one variable is a point on the graph of the function where the function's derivative is zero.[1][2][3] Informally, it is a point where the function "stops" increasing or decreasing (hence the name). For a differentiable function of several real variables, a stationary point is a point on the surface of the graph where all its partial derivatives are zero (equivalently, the gradient has zero norm). The notion of stationary points of a real-valued function is generalized as critical points for complex-valued functions.

Stationary points are easy to visualize on the graph of a function of one variable: they correspond to the points on the graph where the tangent is horizontal (i.e., parallel to the x-axis). For a function of two variables, they correspond to the points on the graph where the tangent plane is parallel to the xy-plane.

The notion of a stationary point allows the mathematical description of an astronomical phenomenon that was unexplained before the time of Copernicus. A stationary point is the point in the apparent trajectory of a planet on the celestial sphere where the motion of the planet seems to stop before restarting in the other direction (see apparent retrograde motion). This occurs because of the projection of the planet's orbit onto the ecliptic circle.

A turning point of a differentiable function is a point at which the derivative has an isolated zero and changes sign at the point.[2] A turning point may be either a relative maximum or a relative minimum (also known as a local minimum or maximum). A turning point is thus a stationary point, but not all stationary points are turning points. If the function is twice differentiable, the isolated stationary points that are not turning points are horizontal inflection points.
For example, the function x ↦ x³ has a stationary point at x = 0, which is also an inflection point, but is not a turning point.[3]

Isolated stationary points of a C¹ real-valued function f : ℝ → ℝ are classified into four kinds by the first derivative test:

- a local maximum, where the derivative changes sign from positive to negative;
- a local minimum, where the derivative changes sign from negative to positive;
- a rising point of inflection, where the derivative is positive on both sides of the stationary point;
- a falling point of inflection, where the derivative is negative on both sides of the stationary point.

The first two options are collectively known as "local extrema". Similarly, a point that is either a global (or absolute) maximum or a global (or absolute) minimum is called a global (or absolute) extremum. The last two options, stationary points that are not local extrema, are known as saddle points.

By Fermat's theorem, global extrema must occur (for a C¹ function) on the boundary or at stationary points.

Determining the position and nature of stationary points aids in curve sketching of differentiable functions. Solving the equation f′(x) = 0 returns the x-coordinates of all stationary points; the y-coordinates are trivially the function values at those x-coordinates. The specific nature of a stationary point at x can in some cases be determined by examining the second derivative f″(x):

- if f″(x) < 0, the stationary point at x is a local maximum;
- if f″(x) > 0, the stationary point at x is a local minimum;
- if f″(x) = 0, the second derivative test is inconclusive and other methods, such as the first derivative test, must be used.

A more straightforward way of determining the nature of a stationary point is by examining the function values between the stationary points (if the function is defined and continuous between them).

A simple example of a point of inflection is the function f(x) = x³. There is a clear change of concavity about the point x = 0, and we can prove this by means of calculus. The second derivative of f is the everywhere-continuous 6x; at x = 0, f″ = 0, and the sign changes about this point. So x = 0 is a point of inflection.

More generally, the stationary points of a real-valued function f : ℝⁿ → ℝ are those points x₀ where the derivative in every direction equals zero, or equivalently, the gradient is zero.

For the function f(x) = x⁴ we have f′(0) = 0 and f″(0) = 0.
Even though f″(0) = 0, this point is not a point of inflection, because the sign of f′(x) changes from negative to positive; it is in fact a local minimum.

For the function f(x) = sin(x) we have f′(0) ≠ 0 and f″(0) = 0. This is not a stationary point, but rather a point of inflection: the concavity changes from concave upwards to concave downwards while the sign of f′(x) does not change; it stays positive.

For the function f(x) = x³ we have f′(0) = 0 and f″(0) = 0. This is both a stationary point and a point of inflection. This is because the concavity changes from concave downwards to concave upwards and the sign of f′(x) does not change; it stays positive.

For the function f(x) = 0, one has f′(0) = 0 and f″(0) = 0. The point 0 is a non-isolated stationary point which is neither a turning point nor a horizontal point of inflection, as the signs of f′(x) and f″(x) do not change.

The function f(x) = x⁵ sin(1/x) for x ≠ 0, and f(0) = 0, gives an example where f′(x) and f″(x) are both continuous, f′(0) = 0 and f″(0) = 0, and yet f(x) has neither a local maximum, a local minimum, nor a point of inflection at 0. So 0 is a stationary point that is not isolated.
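The first derivative test described above can be sketched directly: sample the sign of f′ just left and right of a stationary point. A minimal illustration (the helper name and step size are our choices):

```python
def classify_stationary(df, x0, h=1e-6):
    """First derivative test: classify a stationary point x0 of f
    from the sign of f' just to the left and right of x0."""
    left, right = df(x0 - h), df(x0 + h)
    if left > 0 and right < 0:
        return "local maximum"
    if left < 0 and right > 0:
        return "local minimum"
    if left > 0 and right > 0:
        return "rising point of inflection"
    return "falling point of inflection"

# f(x) = x**3 - 3*x has f'(x) = 3*x**2 - 3, stationary at x = -1 and x = 1
df = lambda x: 3 * x**2 - 3
print(classify_stationary(df, -1.0))  # local maximum
print(classify_stationary(df, 1.0))   # local minimum
# f(x) = x**3 has f'(x) = 3*x**2, a rising inflection at x = 0
print(classify_stationary(lambda x: 3 * x**2, 0.0))  # rising point of inflection
```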
https://en.wikipedia.org/wiki/Stationary_point
In mathematics, the concepts of essential infimum and essential supremum are related to the notions of infimum and supremum, but adapted to measure theory and functional analysis, where one often deals with statements that are not valid for all elements in a set, but rather almost everywhere, that is, except on a set of measure zero.

While the exact definition is not immediately straightforward, intuitively the essential supremum of a function is the smallest value that is greater than or equal to the function values everywhere while ignoring what the function does at a set of points of measure zero. For example, if one takes the function f(x) that is equal to zero everywhere except at x = 0, where f(0) = 1, then the supremum of the function equals one. However, its essential supremum is zero, since (under the Lebesgue measure) one can ignore what the function does at the single point where f is peculiar. The essential infimum is defined in a similar way.

As is often the case in measure-theoretic questions, the definition of essential supremum and infimum does not start by asking what a function f does at points x (that is, the image of f), but rather by asking for the set of points x where f equals a specific value y (that is, the preimage of y under f).
Let f : X → ℝ be a real-valued function defined on a set X. The supremum of a function f is characterized by the following property: f(x) ≤ sup f ≤ ∞ for all x ∈ X, and if for some a ∈ ℝ ∪ {+∞} we have f(x) ≤ a for all x ∈ X, then sup f ≤ a. More concretely, a real number a is called an upper bound for f if f(x) ≤ a for all x ∈ X; that is, if the set

f⁻¹(a, ∞) = {x ∈ X : f(x) > a}

is empty. Let

U_f = {a ∈ ℝ : f⁻¹(a, ∞) = ∅}

be the set of upper bounds of f, and define the infimum of the empty set by inf ∅ = +∞. Then the supremum of f is sup f = inf U_f if the set of upper bounds U_f is nonempty, and sup f = +∞ otherwise.

Now assume in addition that (X, Σ, μ) is a measure space and, for simplicity, assume that the function f is measurable.
Similar to the supremum, the essential supremum of a function is characterized by the following property: f(x) ≤ ess sup f ≤ ∞ for μ-almost all x ∈ X, and if for some a ∈ ℝ ∪ {+∞} we have f(x) ≤ a for μ-almost all x ∈ X, then ess sup f ≤ a. More concretely, a number a is called an essential upper bound of f if the measurable set f⁻¹(a, ∞) is a set of μ-measure zero,[a] that is, if f(x) ≤ a for μ-almost all x in X. Let

U_f^ess = {a ∈ ℝ : μ(f⁻¹(a, ∞)) = 0}

be the set of essential upper bounds. Then the essential supremum is defined similarly as ess sup f = inf U_f^ess if U_f^ess ≠ ∅, and ess sup f = +∞ otherwise.

Exactly in the same way one defines the essential infimum as the supremum of the essential lower bounds, that is,

ess inf f = sup{b ∈ ℝ : μ({x : f(x) < b}) = 0}

if the set of essential lower bounds is nonempty, and as −∞ otherwise; again there is an alternative expression as

ess inf f = sup{a ∈ ℝ : f(x) ≥ a for almost all x ∈ X}

(with this being −∞ if the set is empty).
On the real line consider the Lebesgue measure and its corresponding σ-algebra Σ. Define a function f by the formula f(x) = 5 if x = 1, f(x) = −4 if x = −1, and f(x) = 2 otherwise.

The supremum of this function (largest value) is 5, and the infimum (smallest value) is −4. However, the function takes these values only on the sets {1} and {−1}, respectively, which are of measure zero. Everywhere else, the function takes the value 2. Thus, the essential supremum and the essential infimum of this function are both 2.

As another example, consider the function f(x) = x³ if x ∈ ℚ and f(x) = arctan x if x ∈ ℝ ∖ ℚ, where ℚ denotes the rational numbers. This function is unbounded both from above and from below, so its supremum and infimum are ∞ and −∞, respectively.
However, from the point of view of the Lebesgue measure, the set of rational numbers is of measure zero; thus, what really matters is what happens in the complement of this set, where the function is given as arctan x. It follows that the essential supremum is π/2 while the essential infimum is −π/2.

On the other hand, consider the function f(x) = x³ defined for all real x. Its essential supremum is +∞, and its essential infimum is −∞.

Lastly, consider the function f(x) = 1/x if x ≠ 0 and f(0) = 0. Then for any a ∈ ℝ, μ({x ∈ ℝ : 1/x > a}) ≥ 1/|a|, and so U_f^ess = ∅ and ess sup f = +∞.

If μ(X) > 0, then

inf f ≤ ess inf f ≤ ess sup f ≤ sup f,

and otherwise, if X has measure zero, then[1]

+∞ = ess inf f ≥ ess sup f = −∞.

If f and g are measurable, then

ess sup(f + g) ≤ ess sup f + ess sup g

and

ess inf(f + g) ≥ ess inf f + ess inf g.

If f and g are measurable and f ≤ g almost everywhere, then

ess sup f ≤ ess sup g

and

ess inf f ≤ ess inf g.

If the essential suprema of two functions f and g are both nonnegative, then

ess sup(fg) ≤ (ess sup f)(ess sup g).

The essential supremum of a function is not just the infimum of the essential upper bounds, but also their minimum. A similar result holds for the essential infimum.
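The first example can be explored numerically: points sampled uniformly at random almost surely miss the measure-zero set {−1, 1}, so the sampled maximum and minimum recover the essential supremum and infimum rather than the supremum and infimum. A sketch, assuming Lebesgue-uniform sampling on [−2, 2]; the interval and sample count are arbitrary choices:

```python
import random

def f(x):
    """The step-function example: 5 at x = 1, -4 at x = -1, 2 elsewhere."""
    if x == 1:
        return 5
    if x == -1:
        return -4
    return 2

random.seed(0)
# A uniform random float hits the exact points 1.0 or -1.0 with probability ~0,
# so the extreme values 5 and -4 are (almost surely) never observed.
samples = [f(random.uniform(-2, 2)) for _ in range(100_000)]
print(max(samples), min(samples))  # almost surely: 2 2
```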
Given a measure space (S, Σ, μ), the space L^∞(S, μ) consisting of all measurable functions that are bounded almost everywhere is a seminormed space whose seminorm

‖f‖∞ = inf{C ∈ ℝ≥0 : |f(x)| ≤ C for almost every x},

which equals ess sup |f| if μ(S) > 0 and equals 0 if μ(S) = 0, is the essential supremum of a function's absolute value when μ(S) ≠ 0.[b]

This article incorporates material from Essential supremum on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
https://en.wikipedia.org/wiki/Essential_supremum_and_essential_infimum
In mathematics, especially in order theory, the greatest element of a subset S of a partially ordered set (poset) is an element of S that is greater than every other element of S. The term least element is defined dually, that is, it is an element of S that is smaller than every other element of S.

Let (P, ≤) be a preordered set and let S ⊆ P. An element g ∈ P is said to be a greatest element of S if g ∈ S and if it also satisfies:

- s ≤ g for all s ∈ S.

By switching the side of the relation that s is on in the above definition, the definition of a least element of S is obtained. Explicitly, an element l ∈ P is said to be a least element of S if l ∈ S and if it also satisfies:

- l ≤ s for all s ∈ S.

If (P, ≤) is also a partially ordered set, then S can have at most one greatest element and at most one least element. Whenever a greatest element of S exists and is unique, this element is called the greatest element of S. The terminology the least element of S is defined similarly.

If (P, ≤) has a greatest element (resp. a least element), then this element is also called a top (resp. a bottom) of (P, ≤).

Greatest elements are closely related to upper bounds.
Let (P, ≤) be a preordered set and let S ⊆ P. An upper bound of S in (P, ≤) is an element u such that u ∈ P and s ≤ u for all s ∈ S. Importantly, an upper bound of S in P is not required to be an element of S.

If g ∈ P, then g is a greatest element of S if and only if g is an upper bound of S in (P, ≤) and g ∈ S. In particular, any greatest element of S is also an upper bound of S (in P), but an upper bound of S in P is a greatest element of S if and only if it belongs to S. In the particular case where P = S, the definition of "u is an upper bound of S in S" becomes: u is an element such that u ∈ S and s ≤ u for all s ∈ S, which is completely identical to the definition of a greatest element given before. Thus g is a greatest element of S if and only if g is an upper bound of S in S.

If u is an upper bound of S in P that is not an upper bound of S in S (which can happen if and only if u ∉ S), then u cannot be a greatest element of S (however, it may be possible that some other element is a greatest element of S). In particular, it is possible for S to simultaneously not have a greatest element and for there to exist some upper bound of S in P.
Even if a set has some upper bounds, it need not have a greatest element, as shown by the example of the negative real numbers. This example also demonstrates that the existence of a least upper bound (the number 0 in this case) does not imply the existence of a greatest element either.

A greatest element of a subset of a preordered set should not be confused with a maximal element of the set, which is an element that is not strictly smaller than any other element in the set. Let (P, ≤) be a preordered set and let S ⊆ P. An element m ∈ S is said to be a maximal element of S if the following condition is satisfied:

- whenever s ∈ S satisfies m ≤ s, then necessarily s ≤ m.

If (P, ≤) is a partially ordered set, then m ∈ S is a maximal element of S if and only if there does not exist any s ∈ S such that m ≤ s and s ≠ m. A maximal element of (P, ≤) is defined to mean a maximal element of the subset S := P.

A set can have several maximal elements without having a greatest element. Like upper bounds and maximal elements, greatest elements may fail to exist.

In a totally ordered set the maximal element and the greatest element coincide; it is then also called the maximum; in the case of function values it is also called the absolute maximum, to avoid confusion with a local maximum.[1] The dual terms are minimum and absolute minimum. Together they are called the absolute extrema. Similar conclusions hold for least elements.

One of the most important differences between a greatest element g and a maximal element m of a preordered set (P, ≤) has to do with what elements they are comparable to. Two elements x, y ∈ P are said to be comparable if x ≤ y or y ≤ x; they are called incomparable if they are not comparable.
Because preorders are reflexive (which means that x ≤ x is true for all elements x), every element x is always comparable to itself. Consequently, the only pairs of elements that could possibly be incomparable are distinct pairs. In general, however, preordered sets (and even directed partially ordered sets) may have elements that are incomparable.

By definition, an element g ∈ P is a greatest element of (P, ≤) if s ≤ g for every s ∈ P; so by its very definition, a greatest element of (P, ≤) must, in particular, be comparable to every element in P. This is not required of maximal elements. Maximal elements of (P, ≤) are not required to be comparable to every element in P. This is because, unlike the definition of "greatest element", the definition of "maximal element" includes an important if statement. The defining condition for m ∈ P to be a maximal element of (P, ≤) can be reworded as:

- for all s ∈ P, if m ≤ s then s ≤ m (elements that are incomparable to m impose no condition at all).

Suppose that S is a set containing at least two (distinct) elements and define a partial order ≤ on S by declaring that i ≤ j if and only if i = j. If i ≠ j belong to S, then neither i ≤ j nor j ≤ i holds, which shows that all pairs of distinct (i.e. non-equal) elements in S are incomparable. Consequently, (S, ≤) cannot possibly have a greatest element (because a greatest element of S would, in particular, have to be comparable to every element of S, but S has no such element).
However, every element m ∈ S is a maximal element of (S, ≤), because there is exactly one element in S that is both comparable to m and ≥ m, that element being m itself (which, of course, is ≤ m).[note 1]

In contrast, if a preordered set (P, ≤) does happen to have a greatest element g, then g will necessarily be a maximal element of (P, ≤); moreover, as a consequence of the greatest element g being comparable to every element of P, if (P, ≤) is also partially ordered, then it is possible to conclude that g is the only maximal element of (P, ≤). However, the uniqueness conclusion is no longer guaranteed if the preordered set (P, ≤) is not also partially ordered. For example, suppose that R is a non-empty set and define a preorder ≤ on R by declaring that i ≤ j always holds for all i, j ∈ R. The directed preordered set (R, ≤) is partially ordered if and only if R has exactly one element. All pairs of elements from R are comparable and every element of R is a greatest element (and thus also a maximal element) of (R, ≤). So in particular, if R has at least two elements, then (R, ≤) has multiple distinct greatest elements.

Throughout, let (P, ≤) be a partially ordered set and let S ⊆ P.

The least and greatest element of the whole partially ordered set play a special role and are also called bottom (⊥) and top (⊤), or zero (0) and unit (1), respectively. If both exist, the poset is called a bounded poset.
The notation of 0 and 1 is used preferably when the poset is a complemented lattice, and when no confusion is likely, i.e. when one is not talking about partial orders of numbers that already contain elements 0 and 1 different from bottom and top. The existence of least and greatest elements is a special completeness property of a partial order.

Further introductory information is found in the article on order theory.
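The distinction between greatest and maximal elements can be exercised on a small finite poset. A sketch using divisibility as the partial order; the helpers `greatest` and `maximal` are illustrative, not a standard API:

```python
def greatest(S, leq):
    """Return the greatest element of S (every s in S satisfies leq(s, g)),
    or None if there is none."""
    for g in S:
        if all(leq(s, g) for s in S):
            return g
    return None

def maximal(S, leq):
    """Return all maximal elements of S: no s in S is strictly above them."""
    return {m for m in S if not any(leq(m, s) and not leq(s, m) for s in S)}

divides = lambda a, b: b % a == 0  # a <= b  iff  a divides b

print(greatest({2, 3, 6}, divides))  # 6: every element divides 6
print(greatest({2, 3}, divides))     # None: 2 and 3 are incomparable
print(maximal({2, 3}, divides))      # both 2 and 3 are maximal
```

This mirrors the remark in the text: {2, 3} has two maximal elements and therefore no greatest element.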
https://en.wikipedia.org/wiki/Greatest_element_and_least_element
In mathematics, especially in order theory, a maximal element of a subset S of some preordered set is an element of S that is not smaller than any other element in S. A minimal element of a subset S of some preordered set is defined dually as an element of S that is not greater than any other element in S.

The notions of maximal and minimal elements are weaker than those of greatest element and least element, which are also known, respectively, as maximum and minimum. The maximum of a subset S of a preordered set is an element of S which is greater than or equal to any other element of S, and the minimum of S is again defined dually. In the particular case of a partially ordered set, while there can be at most one maximum and at most one minimum, there may be multiple maximal or minimal elements.[1][2] Specializing further to totally ordered sets, the notions of maximal element and maximum coincide, and the notions of minimal element and minimum coincide.

As an example, in the collection

S := {{d, o}, {d, o, g}, {g, o, a, d}, {o, a, f}}

ordered by containment, the element {d, o} is minimal as it contains no other set in the collection, the element {g, o, a, d} is maximal as there is no set in the collection which contains it, the element {d, o, g} is neither, and the element {o, a, f} is both minimal and maximal. By contrast, neither a maximum nor a minimum exists for S.

Zorn's lemma states that every partially ordered set for which every totally ordered subset has an upper bound contains at least one maximal element.
This lemma is equivalent to the well-ordering theorem and the axiom of choice[3] and implies major results in other mathematical areas like the Hahn–Banach theorem, the Kirszbraun theorem, Tychonoff's theorem, the existence of a Hamel basis for every vector space, and the existence of an algebraic closure for every field.

Let (P, ≤) be a preordered set and let S ⊆ P. A maximal element of S with respect to ≤ is an element m ∈ S such that

- if s ∈ S satisfies m ≤ s, then necessarily s ≤ m.

Similarly, a minimal element of S with respect to ≤ is an element m ∈ S such that

- if s ∈ S satisfies s ≤ m, then necessarily m ≤ s.

Equivalently, m ∈ S is a minimal element of S with respect to ≤ if and only if m is a maximal element of S with respect to ≥, where by definition, q ≥ p if and only if p ≤ q (for all p, q ∈ P).

If the subset S is not specified, then it should be assumed that S := P. Explicitly, a maximal element (respectively, minimal element) of (P, ≤) is a maximal (resp. minimal) element of S := P with respect to ≤.

If the preordered set (P, ≤) also happens to be a partially ordered set (or more generally, if the restriction (S, ≤) is a partially ordered set), then m ∈ S is a maximal element of S if and only if S contains no element strictly greater than m; explicitly, this means that there does not exist any element s ∈ S such that m ≤ s and m ≠ s. The characterization for minimal elements is obtained by using ≥ in place of ≤.

Maximal elements need not exist.
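When maximal and minimal elements do exist, they can be computed directly in a finite poset. The containment collection from the introduction, checked mechanically; note that Python's `<` on frozensets is exactly strict containment:

```python
S = [frozenset("do"), frozenset("dog"), frozenset("goad"), frozenset("oaf")]

# Maximal: no member of S strictly contains it.
maximal = [a for a in S if not any(a < b for b in S)]
# Minimal: it strictly contains no member of S.
minimal = [a for a in S if not any(b < a for b in S)]

print([''.join(sorted(a)) for a in maximal])  # ['adgo', 'afo']
print([''.join(sorted(a)) for a in minimal])  # ['do', 'afo']
```

As the text states, {g, o, a, d} and {o, a, f} are maximal, {d, o} and {o, a, f} are minimal, and {o, a, f} is both; the collection has neither a maximum nor a minimum.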
In general ≤ is only a partial order on S. If m is a maximal element and s ∈ S, then it remains possible that neither s ≤ m nor m ≤ s. This leaves open the possibility that there exists more than one maximal element.

For a partially ordered set (P, ≤), the irreflexive kernel of ≤ is denoted as < and is defined by x < y if x ≤ y and x ≠ y. For arbitrary members x, y ∈ P, exactly one of the following cases applies:

- x < y;
- x = y;
- y < x;
- x and y are incomparable.

Given a subset S ⊆ P and some x ∈ S:

- x is a maximal element of S if there is no s ∈ S with x < s;
- x is a greatest element of S if s ≤ x for every s ∈ S.

Thus the definition of a greatest element is stronger than that of a maximal element. Equivalently, a greatest element of a subset S can be defined as an element of S that is greater than every other element of S. A subset may have at most one greatest element.[proof 1]

The greatest element of S, if it exists, is also a maximal element of S,[proof 2] and the only one.[proof 3] By contraposition, if S has several maximal elements, it cannot have a greatest element; see example 3. If P satisfies the ascending chain condition, a subset S of P has a greatest element if, and only if, it has one maximal element.[proof 4]

When the restriction of ≤ to S is a total order (S = {1, 2, 4} in the topmost picture is an example), then the notions of maximal element and greatest element coincide.[proof 5] This is not a necessary condition: whenever S has a greatest element, the notions coincide, too, as stated above.
If the notions of maximal element and greatest element coincide on every two-element subset S of P, then ≤ is a total order on P.[proof 6]

Dual to greatest is the notion of least element, which relates to minimal in the same way as greatest to maximal.

In a totally ordered set, the terms maximal element and greatest element coincide, which is why both terms are used interchangeably in fields like analysis where only total orders are considered. This observation applies not only to totally ordered subsets of any partially ordered set, but also to their order-theoretic generalization via directed sets. In a directed set, every pair of elements (particularly pairs of incomparable elements) has a common upper bound within the set. If a directed set has a maximal element, it is also its greatest element,[proof 7] and hence its only maximal element. For a directed set without maximal or greatest elements, see examples 1 and 2 above.

Similar conclusions are true for minimal elements. Further introductory information is found in the article on order theory.

In economics, one may relax the axiom of antisymmetry, using preorders (generally total preorders) instead of partial orders; the notion analogous to maximal element is very similar, but different terminology is used, as detailed below.

In consumer theory the consumption space is some set X, usually the positive orthant of some vector space, so that each x ∈ X represents a quantity of consumption specified for each existing commodity in the economy. Preferences of a consumer are usually represented by a total preorder ⪯, so that for x, y ∈ X, x ⪯ y reads: x is at most as preferred as y.
Whenx⪯y{\displaystyle x\preceq y}andy⪯x{\displaystyle y\preceq x}it is interpreted that the consumer is indifferent betweenx{\displaystyle x}andy{\displaystyle y}, but there is no reason to conclude thatx=y;{\displaystyle x=y;}preference relations are never assumed to be antisymmetric. In this context, for anyB⊆X,{\displaystyle B\subseteq X,}an elementx∈B{\displaystyle x\in B}is said to be amaximal elementif there is noy∈B{\displaystyle y\in B}such thatx≺y,{\displaystyle x\prec y,}that isx⪯y{\displaystyle x\preceq y}and noty⪯x;{\displaystyle y\preceq x;}it is interpreted as a consumption bundle that is not dominated by any other bundle. It should be remarked that the formal definition looks very much like that of a greatest element for an ordered set. However, when⪯{\displaystyle \preceq }is only a preorder, an elementx{\displaystyle x}with the property above behaves very much like a maximal element in an ordering. For instance, a maximal elementx∈B{\displaystyle x\in B}is not unique, fory⪯x{\displaystyle y\preceq x}does not preclude the possibility thatx⪯y{\displaystyle x\preceq y}(whiley⪯x{\displaystyle y\preceq x}andx⪯y{\displaystyle x\preceq y}do not implyx=y{\displaystyle x=y}but simply indifferencex∼y{\displaystyle x\sim y}). The notion of greatest element for a preference preorder would be that ofmost preferredchoice. That is, somex∈B{\displaystyle x\in B}such thaty∈B{\displaystyle y\in B}impliesy⪯x.{\displaystyle y\preceq x.} An obvious application is to the definition of demand correspondence. LetP{\displaystyle P}be the class of functionals onX{\displaystyle X}. An elementp∈P{\displaystyle p\in P}is called aprice functionalorprice systemand maps every consumption bundlex∈X{\displaystyle x\in X}into its market valuep(x)∈R+{\displaystyle p(x)\in \mathbb {R} _{+}}.
Thebudget correspondenceis a correspondenceΓ:P×R+→X{\displaystyle \Gamma \colon P\times \mathbb {R} _{+}\rightarrow X}mapping any price system and any level of income into a subsetΓ(p,m)={x∈X:p(x)≤m}.{\displaystyle \Gamma (p,m)=\{x\in X~:~p(x)\leq m\}.} Thedemand correspondencemaps any pricep{\displaystyle p}and any level of incomem{\displaystyle m}into the set of⪯{\displaystyle \preceq }-maximal elements ofΓ(p,m){\displaystyle \Gamma (p,m)}.D(p,m)={x∈X:xis a maximal element ofΓ(p,m)}.{\displaystyle D(p,m)=\left\{x\in X~:~x{\text{ is a maximal element of }}\Gamma (p,m)\right\}.} It is called demand correspondence because the theory predicts that forp{\displaystyle p}andm{\displaystyle m}given, therational choiceof a consumerx∗{\displaystyle x^{*}}will be some elementx∗∈D(p,m).{\displaystyle x^{*}\in D(p,m).} A subsetQ{\displaystyle Q}of a partially ordered setP{\displaystyle P}is said to becofinalif for everyx∈P{\displaystyle x\in P}there exists somey∈Q{\displaystyle y\in Q}such thatx≤y.{\displaystyle x\leq y.}Every cofinal subset of a partially ordered set with maximal elements must contain all maximal elements. A subsetL{\displaystyle L}of a partially ordered setP{\displaystyle P}is said to be alower setofP{\displaystyle P}if it is downward closed: ify∈L{\displaystyle y\in L}andx≤y{\displaystyle x\leq y}thenx∈L.{\displaystyle x\in L.}Every lower setL{\displaystyle L}of a finite ordered setP{\displaystyle P}is equal to the smallest lower set containing all maximal elements ofL.{\displaystyle L.}
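The budget and demand correspondences can be sketched on a finite grid of consumption bundles. The utility function `u` below is a hypothetical stand-in for a total preorder (x ⪯ y is modeled as u(x) ≤ u(y)); all names and the chosen prices are illustrative, not from the article.

```python
from itertools import product

def u(bundle):
    """Stand-in utility; x is at most as preferred as y iff u(x) <= u(y)."""
    x1, x2 = bundle
    return x1 * x2

def budget(p, m, grid):
    """Budget correspondence: bundles whose market value does not exceed m."""
    return [x for x in grid if p[0] * x[0] + p[1] * x[1] <= m]

def demand(p, m, grid):
    """Demand correspondence: the preference-maximal elements of the budget set."""
    B = budget(p, m, grid)
    best = max(u(x) for x in B)
    return [x for x in B if u(x) == best]

grid = list(product(range(11), repeat=2))  # bundles (x1, x2) with 0..10 units
print(demand((1, 1), 10, grid))            # [(5, 5)]
```

Because the preorder here is total, the maximal elements of the budget set are exactly the bundles attaining the highest utility.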
https://en.wikipedia.org/wiki/Maximal_and_minimal_elements
Inmathematics, particularly inorder theory, anupper boundormajorant[1]of asubsetSof somepreordered set(K, ≤)is anelementofKthat isgreater than or equal toevery element ofS.[2][3]Dually, alower boundorminorantofSis defined to be an element ofKthat is less than or equal to every element ofS. A set with an upper (respectively, lower) bound is said to bebounded from aboveormajorized[1](respectivelybounded from beloworminorized) by that bound. The termsbounded above(bounded below) are also used in the mathematical literature for sets that have upper (respectively lower) bounds.[4] For example, 5 is a lower bound for the setS= {5, 8, 42, 34, 13934}(as a subset of theintegersor of thereal numbers, etc.), and so is 4. On the other hand, 6 is not a lower bound forSsince it is not smaller than every element inS. 13934 and other numbersxsuch thatx≥ 13934 would be an upper bound forS. The setS= {42}has 42 as both an upper bound and a lower bound; all other numbers are either an upper bound or a lower bound for thatS. Every subset of thenatural numbershas a lower bound since the natural numbers have a least element (0 or 1, depending on convention). An infinite subset of the natural numbers cannot be bounded from above. An infinite subset of the integers may be bounded from below or bounded from above, but not both. An infinite subset of therational numbersmay or may not be bounded from below, and may or may not be bounded from above. Every finite subset of a non-emptytotally ordered sethas both upper and lower bounds. The definitions can be generalized tofunctionsand even to sets of functions. Given a functionfwithdomainDand a preordered set(K, ≤)ascodomain, an elementyofKis an upper bound offify≥f(x)for eachxinD. The upper bound is calledsharpif equality holds for at least one value ofx. It indicates that the constraint is optimal, and thus cannot be further reduced without invalidating the inequality.
Similarly, a functiongdefined on domainDand having the same codomain(K, ≤)is an upper bound off, ifg(x) ≥f(x)for eachxinD. The functiongis further said to be an upper bound of a set of functions, if it is an upper bound ofeachfunction in that set. The notion of lower bound for (sets of) functions is defined analogously, by replacing ≥ with ≤. An upper bound is said to be atight upper bound, aleast upper bound, or asupremum, if no smaller value is an upper bound. Similarly, a lower bound is said to be atight lower bound, agreatest lower bound, or aninfimum, if no greater value is a lower bound. An upper bounduof a subsetSof a preordered set(K, ≤)is said to be anexact upper boundforSif every element ofKthat is strictly majorized byuis also majorized by some element ofS. Exact upper bounds ofreduced productsoflinear ordersplay an important role inPCF theory.[5]
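The bound checks from the examples above can be written directly, assuming the usual order on numbers; the helper names are illustrative.

```python
def is_lower_bound(b, S):
    """True iff b <= s for every element s of S."""
    return all(b <= s for s in S)

def is_upper_bound(b, S):
    """True iff s <= b for every element s of S."""
    return all(s <= b for s in S)

S = {5, 8, 42, 34, 13934}
print(is_lower_bound(5, S), is_lower_bound(4, S), is_lower_bound(6, S))
# True True False

# A function g bounds f from above on D when g(x) >= f(x) for each x in D.
D = range(-10, 11)
f = lambda x: x * x
g = lambda x: x * x + 1
print(is_upper_bound(13934, S), all(g(x) >= f(x) for x in D))  # True True
```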
https://en.wikipedia.org/wiki/Upper_and_lower_bounds
Incombinatorics, theinclusion–exclusion principleis a counting technique which generalizes the familiar method of obtaining the number of elements in theunionof twofinite sets; symbolically expressed as |A ∪ B| = |A| + |B| − |A ∩ B|, whereAandBare two finite sets and |S| indicates thecardinalityof a setS(which may be considered as the number of elements of the set, if the set isfinite). The formula expresses the fact that the sum of the sizes of the two sets may be too large since some elements may be counted twice. The double-counted elements are those in theintersectionof the two sets and the count is corrected by subtracting the size of the intersection. The inclusion–exclusion principle, being a generalization of the two-set case, is perhaps more clearly seen in the case of three sets, which for the setsA,BandCis given by |A ∪ B ∪ C| = |A| + |B| + |C| − |A ∩ B| − |A ∩ C| − |B ∩ C| + |A ∩ B ∩ C|. This formula can be verified by counting how many times each region in theVenn diagramfigure is included in the right-hand side of the formula. In this case, when removing the contributions of over-counted elements, the number of elements in the mutual intersection of the three sets has been subtracted too often, so must be added back in to get the correct total. Generalizing the results of these examples gives the principle of inclusion–exclusion. To find the cardinality of the union ofnsets: include the cardinalities of the sets, exclude the cardinalities of the pairwise intersections, include the cardinalities of the triple-wise intersections, exclude the cardinalities of the quadruple-wise intersections, and so on, up to the intersection of allnsets. The name comes from the idea that the principle is based on over-generousinclusion, followed by compensatingexclusion. This concept is attributed toAbraham de Moivre(1718),[1]although it first appears in a paper ofDaniel da Silva(1854)[2]and later in a paper byJ. J. Sylvester(1883).[3]Sometimes the principle is referred to as the formula of Da Silva or Sylvester, due to these publications.
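The two- and three-set counts, and the general alternating sum over all non-empty index subsets, can be checked against Python's built-in set union. The function name and the sets chosen are arbitrary illustrations; the subset enumeration is exponential in the number of sets, so this is a sketch, not an efficient algorithm.

```python
from itertools import combinations
from functools import reduce

def union_size(sets):
    """|A1 ∪ … ∪ An| as the alternating sum over non-empty subfamilies."""
    total = 0
    for k in range(1, len(sets) + 1):
        for J in combinations(sets, k):
            # sign (−1)^(k+1): add for odd-sized subfamilies, subtract for even
            total += (-1) ** (k + 1) * len(reduce(set.__and__, J))
    return total

A, B, C = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}

two = len(A) + len(B) - len(A & B)
three = (len(A) + len(B) + len(C)
         - len(A & B) - len(A & C) - len(B & C)
         + len(A & B & C))
print(two == len(A | B), three == len(A | B | C), union_size([A, B, C]))
# True True 7
```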
The principle can be viewed as an example of thesieve methodextensively used innumber theoryand is sometimes referred to as thesieve formula.[4] As finite probabilities are computed as counts relative to the cardinality of theprobability space, the formulas for the principle of inclusion–exclusion remain valid when the cardinalities of the sets are replaced by finite probabilities. More generally, both versions of the principle can be put under the common umbrella ofmeasure theory. In a very abstract setting, the principle of inclusion–exclusion can be expressed as the calculation of the inverse of a certain matrix.[5]This inverse has a special structure, making the principle an extremely valuable technique in combinatorics and related areas of mathematics. AsGian-Carlo Rotaput it:[6] "One of the most useful principles of enumeration in discrete probability and combinatorial theory is the celebrated principle of inclusion–exclusion. When skillfully applied, this principle has yielded the solution to many a combinatorial problem." In its general formula, the principle of inclusion–exclusion states that for finite setsA1, ...,An, one has the identity |A1 ∪ ⋯ ∪ An| = Σ |Ai| − Σ |Ai ∩ Aj| + Σ |Ai ∩ Aj ∩ Ak| − ⋯ + (−1)^(n+1) |A1 ∩ ⋯ ∩ An|, where the second sum runs over pairs i < j, the third over triples i < j < k, and so on. This can be compactly written as |⋃ Ai| = Σ_{∅ ≠ J ⊆ {1, ..., n}} (−1)^(|J|+1) |⋂_{j∈J} Aj| or, grouping terms by subset size, |⋃ Ai| = Σ_{k=1}^{n} (−1)^(k+1) Σ_{|J|=k} |⋂_{j∈J} Aj|. In words, to count the number of elements in a finite union of finite sets, first sum the cardinalities of the individual sets, then subtract the number of elements that appear in at least two sets, then add back the number of elements that appear in at least three sets, then subtract the number of elements that appear in at least four sets, and so on. This process always ends since there can be no elements that appear in more than the number of sets in the union. (For example, ifn=4,{\displaystyle n=4,}there can be no elements that appear in more than4{\displaystyle 4}sets; equivalently, there can be no elements that appear in at least5{\displaystyle 5}sets.) In applications it is common to see the principle expressed in its complementary form.
That is, lettingSbe a finiteuniversal setcontaining all of theAiand lettingAi¯{\displaystyle {\bar {A_{i}}}}denote the complement ofAiinS, byDe Morgan's lawswe have |A1¯ ∩ ⋯ ∩ An¯| = |S| − |A1 ∪ ⋯ ∪ An|. As another variant of the statement, letP1, ...,Pnbe a list of properties that elements of a setSmay or may not have, then the principle of inclusion–exclusion provides a way to calculate the number of elements ofSthat have none of the properties. Just letAibe the subset of elements ofSwhich have the propertyPiand use the principle in its complementary form. This variant is due toJ. J. Sylvester.[1] Notice that if you take into account only the firstm<nsums on the right (in the general form of the principle), then you will get an overestimate ifmis odd and an underestimate ifmis even. A more complex example is the following. Suppose there is a deck ofncards numbered from 1 ton. Suppose a card numberedmis in the correct position if it is themthcard in the deck. How many ways,W, can the cards be shuffled with at least 1 card being in the correct position? Begin by defining setAm, which is all of the orderings of cards with themthcard correct. Then the number of orders,W, withat leastone card being in the correct position,m, is W = |A1 ∪ A2 ∪ ⋯ ∪ An|. Apply the principle of inclusion–exclusion: W = Σ |Am1| − Σ |Am1 ∩ Am2| + ⋯ + (−1)^(p−1) Σ |Am1 ∩ ⋯ ∩ Amp| + ⋯, where each inner sum runs over indices m1 < ⋯ < mp. Each valueAm1∩⋯∩Amp{\displaystyle A_{m_{1}}\cap \cdots \cap A_{m_{p}}}represents the set of shuffles having at leastpvaluesm1, ...,mpin the correct position. Note that the number of shuffles with at leastpvalues correct only depends onp, not on the particular values ofm{\displaystyle m}. For example, the number of shuffles having the 1st, 3rd, and 17th cards in the correct position is the same as the number of shuffles having the 2nd, 5th, and 13th cards in the correct positions. It only matters that of thencards, 3 were chosen to be in the correct position. Thus there are(np){\textstyle {n \choose p}}equal terms in thepthsummation (seecombination).
|A1∩⋯∩Ap|{\displaystyle |A_{1}\cap \cdots \cap A_{p}|}is the number of orderings havingpelements in the correct position, which is equal to the number of ways of ordering the remainingn−pelements, or (n−p)!. Thus we finally get: W = Σ_{p=1}^{n} (−1)^(p−1) (n choose p) (n − p)! = Σ_{p=1}^{n} (−1)^(p−1) n!/p!. A permutation wherenocard is in the correct position is called aderangement. Takingn! to be the total number of permutations, the probabilityQthat a random shuffle produces a derangement is given by Q = 1 − W/n! = Σ_{p=0}^{n} (−1)^p/p!, a truncation ton+ 1 terms of theTaylor expansionofe−1. Thus the probability of guessing an order for a shuffled deck of cards and being incorrect about every card is approximatelye−1or 37%. The situation that appears in the derangement example above occurs often enough to merit special attention.[7]Namely, when the size of the intersection sets appearing in the formulas for the principle of inclusion–exclusion depends only on the number of sets in the intersections and not on which sets appear. More formally, if the intersection has the same cardinality, sayαk= |AJ|, for everyk-element subsetJof {1, ...,n}, then |A1 ∪ ⋯ ∪ An| = Σ_{k=1}^{n} (−1)^(k−1) (n choose k) αk. Or, in the complementary form, where the universal setShas cardinalityα0, |A1¯ ∩ ⋯ ∩ An¯| = Σ_{k=0}^{n} (−1)^k (n choose k) αk. Given afamily (repeats allowed) of subsetsA1,A2, ...,Anof a universal setS, the principle of inclusion–exclusion calculates the number of elements ofSin none of these subsets. A generalization of this concept would calculate the number of elements ofSwhich appear in exactly some fixedmof these sets. LetN= [n] = {1,2,...,n}.
If we defineA∅=S{\displaystyle A_{\emptyset }=S}, then the principle of inclusion–exclusion can be written as, using the notation of the previous section; the number of elements ofScontained in none of theAiis: IfIis a fixed subset of the index setN, then the number of elements which belong toAifor alliinIand for no other values is:[8] Define the sets We seek the number of elements in none of theBkwhich, by the principle of inclusion–exclusion (withB∅=AI{\displaystyle B_{\emptyset }=A_{I}}), is The correspondenceK↔J=I∪Kbetween subsets ofN\Iand subsets ofNcontainingIis a bijection and ifJandKcorrespond under this map thenBK=AJ, showing that the result is valid. Inprobability, for eventsA1, ...,Anin aprobability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},\mathbb {P} )}, the inclusion–exclusion principle becomes forn= 2 forn= 3 and in general which can be written in closed form as where the last sum runs over all subsetsIof the indices 1, ...,nwhich contain exactlykelements, and denotes the intersection of all thoseAiwith index inI. According to theBonferroni inequalities, the sum of the first terms in the formula is alternately an upper bound and a lower bound for theLHS. This can be used in cases where the full formula is too cumbersome. For a generalmeasure space(S,Σ,μ) andmeasurablesubsetsA1, ...,Anoffinite measure, the above identities also hold when the probability measureP{\displaystyle \mathbb {P} }is replaced by the measureμ. If, in the probabilistic version of the inclusion–exclusion principle, the probability of the intersectionAIonly depends on the cardinality ofI, meaning that for everykin {1, ...,n} there is anaksuch that then the above formula simplifies to due to the combinatorial interpretation of thebinomial coefficient(nk){\textstyle {\binom {n}{k}}}. 
For example, if the eventsAi{\displaystyle A_{i}}areindependent and identically distributed, thenP(Ai)=p{\displaystyle \mathbb {P} (A_{i})=p}for alli, and we haveak=pk{\displaystyle a_{k}=p^{k}}, in which case the expression above simplifies to (This result can also be derived more simply by considering the intersection of the complements of the eventsAi{\displaystyle A_{i}}.) An analogous simplification is possible in the case of a general measure space(S,Σ,μ){\displaystyle (S,\Sigma ,\mu )}and measurable subsetsA1,…,An{\displaystyle A_{1},\dots ,A_{n}}of finite measure. There is another formula used inpoint processes. LetS{\displaystyle S}be a finite set andP{\displaystyle P}be a random subset ofS{\displaystyle S}. LetA{\displaystyle A}be any subset ofS{\displaystyle S}, then P(P=A)=P(P⊃A)−∑j1∈S∖AP(P⊃A∪j1)+∑j1,j2∈S∖Aj1≠j2P(P⊃A∪j1,j2)+…+(−1)|S|−|A|P(P⊃S)=∑A⊂J⊂S(−1)|J|−|A|P(P⊃J).{\displaystyle {\begin{aligned}\mathbb {P} (P=A)&=\mathbb {P} (P\supset A)-\sum _{j_{1}\in S\setminus A}\mathbb {P} (P\supset A\cup {j_{1}})\\&+\sum _{j_{1},j_{2}\in S\setminus A\ j_{1}\neq j_{2}}\mathbb {P} (P\supset A\cup {j_{1},j_{2}})+\dots \\&+(-1)^{|S|-|A|}\mathbb {P} (P\supset S)\\&=\sum _{A\subset J\subset S}(-1)^{|J|-|A|}\mathbb {P} (P\supset J).\end{aligned}}} The principle is sometimes stated in the form[9]that says that if then The combinatorial and the probabilistic version of the inclusion–exclusion principle are instances of (2). Takem_={1,2,…,m}{\displaystyle {\underline {m}}=\{1,2,\ldots ,m\}},f(m_)=0{\displaystyle f({\underline {m}})=0}, and respectively for allsetsS{\displaystyle S}withS⊊m_{\displaystyle S\subsetneq {\underline {m}}}. Then we obtain respectively for all setsA{\displaystyle A}withA⊊m_{\displaystyle A\subsetneq {\underline {m}}}. 
This is becauseelementsa{\displaystyle a}of∩i∈m_∖AAi{\displaystyle \cap _{i\in {\underline {m}}\smallsetminus A}A_{i}}can becontainedin otherAi{\displaystyle A_{i}}(Ai{\displaystyle A_{i}}withi∈A{\displaystyle i\in A}) as well, and the∩∖∪{\displaystyle \cap \smallsetminus \cup }-formularuns exactly through all possible extensions of the sets{Ai∣i∈m_∖A}{\displaystyle \{A_{i}\mid i\in {\underline {m}}\smallsetminus A\}}with otherAi{\displaystyle A_{i}}, countinga{\displaystyle a}only for the set that matches the membership behavior ofa{\displaystyle a}, ifS{\displaystyle S}runs through allsubsetsofA{\displaystyle A}(as in the definition ofg(A){\displaystyle g(A)}). Sincef(m_)=0{\displaystyle f({\underline {m}})=0}, we obtain from (2) withA=m_{\displaystyle A={\underline {m}}}that and by interchanging sides, the combinatorial and the probabilistic version of the inclusion–exclusion principle follow. If one sees a numbern{\displaystyle n}as a set of its prime factors, then (2) is a generalization ofMöbius inversion formulaforsquare-freenatural numbers. Therefore, (2) is seen as the Möbius inversion formula for theincidence algebraof thepartially ordered setof all subsets ofA. For a generalization of the full version of Möbius inversion formula, (2) must be generalized tomultisets. For multisets instead of sets, (2) becomes whereA−S{\displaystyle A-S}is the multiset for which(A−S)⊎S=A{\displaystyle (A-S)\uplus S=A}, and Notice thatμ(A−S){\displaystyle \mu (A-S)}is just the(−1)|A|−|S|{\displaystyle (-1)^{|A|-|S|}}of (2) in caseA−S{\displaystyle A-S}is a set. Substituteg(S)=∑T⊆Sf(T){\displaystyle g(S)=\sum _{T\subseteq S}f(T)}on the right hand side of (3). Notice thatf(A){\displaystyle f(A)}appears once on both sides of (3). So we must show that for allT{\displaystyle T}withT⊊A{\displaystyle T\subsetneq A}, the termsf(T){\displaystyle f(T)}cancel out on the right hand side of (3). 
For that purpose, take a fixedT{\displaystyle T}such thatT⊊A{\displaystyle T\subsetneq A}and take an arbitrary fixeda∈A{\displaystyle a\in A}such thata∉T{\displaystyle a\notin T}. Notice thatA−S{\displaystyle A-S}must be a set for eachpositiveornegativeappearance off(T){\displaystyle f(T)}on the right hand side of (3) that is obtained by way of the multisetS{\displaystyle S}such thatT⊆S⊆A{\displaystyle T\subseteq S\subseteq A}. Now each appearance off(T){\displaystyle f(T)}on the right hand side of (3) that is obtained by way ofS{\displaystyle S}such thatA−S{\displaystyle A-S}is a set that containsa{\displaystyle a}cancels out with the one that is obtained by way of the correspondingS{\displaystyle S}such thatA−S{\displaystyle A-S}is a set that does not containa{\displaystyle a}. This gives the desired result. The inclusion–exclusion principle is widely used and only a few of its applications can be mentioned here. A well-known application of the inclusion–exclusion principle is to the combinatorial problem of counting allderangementsof a finite set. Aderangementof a setAis abijectionfromAinto itself that has no fixed points. Via the inclusion–exclusion principle one can show that if the cardinality ofAisn, then the number of derangements is [n! /e] where [x] denotes thenearest integertox; a detailed proof is availablehereand also seethe examples sectionabove. The first occurrence of the problem of counting the number of derangements is in an early book on games of chance:Essai d'analyse sur les jeux de hazardby P. R. de Montmort (1678 – 1719) and was known as either "Montmort's problem" or by the name he gave it, "problème des rencontres."[10]The problem is also known as thehatcheck problem. The number of derangements is also known as thesubfactorialofn, written !n. It follows that if all bijections are assigned the same probability then the probability that a random bijection is a derangement quickly approaches 1/easngrows. 
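The derangement count discussed above can be cross-checked numerically: the alternating-sum formula n!·Σ(−1)^p/p!, a brute-force enumeration, and the nearest-integer expression [n!/e] all agree for small n. The function name is illustrative.

```python
from itertools import permutations
from math import e, factorial

def derangements(n):
    """!n via inclusion–exclusion: n! · Σ_{p=0..n} (−1)^p / p!."""
    # factorial(n) // factorial(p) is an exact integer for p <= n
    return sum((-1) ** p * (factorial(n) // factorial(p)) for p in range(n + 1))

n = 6
brute = sum(1 for s in permutations(range(n))
            if all(s[i] != i for i in range(n)))
print(derangements(n), brute, round(factorial(n) / e))  # 265 265 265
```

The ratio derangements(n)/n! approaches 1/e, matching the probability statement above.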
The principle of inclusion–exclusion, combined withDe Morgan's law, can be used to count the cardinality of the intersection of sets as well. LetAk¯{\displaystyle {\overline {A_{k}}}}represent the complement ofAkwith respect to some universal setAsuch thatAk⊆A{\displaystyle A_{k}\subseteq A}for eachk. Then we have thereby turning the problem of finding an intersection into the problem of finding a union. The inclusion exclusion principle forms the basis of algorithms for a number of NP-hard graph partitioning problems, such asgraph coloring.[11] A well known application of the principle is the construction of thechromatic polynomialof a graph.[12] The number ofperfect matchingsof abipartite graphcan be calculated using the principle.[13] Given finite setsAandB, how manysurjective functions(onto functions) are there fromAtoB?Without any loss of generalitywe may takeA= {1, ...,k} andB= {1, ...,n}, since only the cardinalities of the sets matter. By usingSas the set of allfunctionsfromAtoB, and defining, for eachiinB, the propertyPias "the function misses the elementiinB" (iis not in theimageof the function), the principle of inclusion–exclusion gives the number of onto functions betweenAandBas:[14] Apermutationof the setS= {1, ...,n} where each element ofSis restricted to not being in certain positions (here the permutation is considered as an ordering of the elements ofS) is called apermutation with forbidden positions. For example, withS= {1,2,3,4}, the permutations with the restriction that the element 1 can not be in positions 1 or 3, and the element 2 can not be in position 4 are: 2134, 2143, 3124, 4123, 2341, 2431, 3241, 3421, 4231 and 4321. 
By lettingAibe the set of positions that the elementiis not allowed to be in, and the propertyPito be the property that a permutation puts elementiinto a position inAi, the principle of inclusion–exclusion can be used to count the number of permutations which satisfy all the restrictions.[15] In the given example, there are 12 = 2(3!) permutations with propertyP1, 6 = 3! permutations with propertyP2and no permutations have propertiesP3orP4as there are no restrictions for these two elements. The number of permutations satisfying the restrictions is thus: The final 4 in this computation is the number of permutations having both propertiesP1andP2. There are no other non-zero contributions to the formula. TheStirling numbers of the second kind,S(n,k) count the number ofpartitionsof a set ofnelements intoknon-empty subsets (indistinguishableboxes). An explicit formula for them can be obtained by applying the principle of inclusion–exclusion to a very closely related problem, namely, counting the number of partitions of ann-set intoknon-empty but distinguishable boxes (orderednon-empty subsets). Using the universal set consisting of all partitions of then-set intok(possibly empty) distinguishable boxes,A1,A2, ...,Ak, and the propertiesPimeaning that the partition has boxAiempty, the principle of inclusion–exclusion gives an answer for the related result. Dividing byk! to remove the artificial ordering gives the Stirling number of the second kind:[16] A rook polynomial is thegenerating functionof the number of ways to place non-attackingrookson aboard Bthat looks like a subset of the squares of acheckerboard; that is, no two rooks may be in the same row or column. The boardBis any subset of the squares of a rectangular board withnrows andmcolumns; we think of it as the squares in which one is allowed to put a rook. Thecoefficient,rk(B) ofxkin the rook polynomialRB(x) is the number of wayskrooks, none of which attacks another, can be arranged in the squares ofB. 
For any boardB, there is a complementary boardB′{\displaystyle B'}consisting of the squares of the rectangular board that are not inB. This complementary board also has a rook polynomialRB′(x){\displaystyle R_{B'}(x)}with coefficientsrk(B′).{\displaystyle r_{k}(B').} It is sometimes convenient to be able to calculate the highest coefficient of a rook polynomial in terms of the coefficients of the rook polynomial of the complementary board. Without loss of generality we can assume thatn≤m, so this coefficient isrn(B). The number of ways to placennon-attacking rooks on the completen×m"checkerboard" (without regard as to whether the rooks are placed in the squares of the boardB) is given by thefalling factorial: LettingPibe the property that an assignment ofnnon-attacking rooks on the complete board has a rook in columniwhich is not in a square of the boardB, then by the principle of inclusion–exclusion we have:[17] Euler's totient or phi function,φ(n) is anarithmetic functionthat counts the number of positive integers less than or equal tonthat arerelatively primeton. That is, ifnis apositive integer, then φ(n) is the number of integerskin the range 1 ≤k≤nwhich have no common factor withnother than 1. The principle of inclusion–exclusion is used to obtain a formula for φ(n). 
LetSbe the set {1, ...,n} and define the propertyPito be that a number inSis divisible by the prime numberpi, for 1 ≤i≤r, where theprime factorizationof Then,[18] The Dirichlet hyperbola method re-expresses a sum of amultiplicative functionf(n){\displaystyle f(n)}by selecting a suitableDirichlet convolutionf=g∗h{\displaystyle f=g\ast h}, recognizing that the sum can be recast as a sum over thelattice pointsin a region bounded byx≥1{\displaystyle x\geq 1},y≥1{\displaystyle y\geq 1}, andxy≤n{\displaystyle xy\leq n}, splitting this region into two overlapping subregions, and finally using the inclusion–exclusion principle to conclude that In many cases where the principle could give an exact formula (in particular, countingprime numbersusing thesieve of Eratosthenes), the formula arising does not offer useful content because the number of terms in it is excessive. If each term individually can be estimated accurately, the accumulation of errors may imply that the inclusion–exclusion formula is not directly applicable. Innumber theory, this difficulty was addressed byViggo Brun. After a slow start, his ideas were taken up by others, and a large variety ofsieve methodsdeveloped. These for example may try to find upper bounds for the "sieved" sets, rather than an exact formula. LetA1, ...,Anbe arbitrary sets andp1, ...,pnreal numbers in the closed unit interval[0, 1]. Then, for every even numberkin {0, ...,n}, theindicator functionssatisfy the inequality:[19] Choose an element contained in the union of all sets and letA1,A2,…,At{\displaystyle A_{1},A_{2},\dots ,A_{t}}be the individual sets containing it. (Note thatt> 0.) Since the element is counted precisely once by the left-hand side of equation (1), we need to show that it is counted precisely once by the right-hand side. 
On the right-hand side, the only non-zero contributions occur when all the subsets in a particular term contain the chosen element, that is, all the subsets are selected fromA1,A2,…,At{\displaystyle A_{1},A_{2},\dots ,A_{t}}. The contribution is one for each of these sets (plus or minus depending on the term) and therefore is just the (signed) number of these subsets used in the term. We then have: (t choose 1) − (t choose 2) + ⋯ + (−1)^(t+1) (t choose t). By thebinomial theorem, 0 = (1 − 1)^t = Σ_{k=0}^{t} (t choose k) (−1)^k. Using the fact that(t0)=1{\displaystyle {\binom {t}{0}}=1}and rearranging terms, we have 1 = Σ_{k=1}^{t} (−1)^(k+1) (t choose k), and so, the chosen element is counted only once by the right-hand side of equation (1). An algebraic proof can be obtained usingindicator functions(also known as characteristic functions). The indicator function of a subsetSof a setXis the function 1S(x), equal to 1 ifx∈Sand to 0 otherwise. IfA{\displaystyle A}andB{\displaystyle B}are two subsets ofX{\displaystyle X}, then 1A∩B= 1A· 1B. LetAdenote the union⋃i=1nAi{\textstyle \bigcup _{i=1}^{n}A_{i}}of the setsA1, ...,An. To prove the inclusion–exclusion principle in general, we first verify the identity (4) for indicator functions: 1A= Σ_{k=1}^{n} (−1)^(k+1) Σ_{|I|=k} 1AI, whereAI= ⋂i∈IAi. The following function is identically zero: (1A− 1A1)(1A− 1A2)⋯(1A− 1An) = 0, because: ifxis not inA, then all factors are 0−0 = 0; and otherwise, ifxdoes belong to someAm, then the correspondingmthfactor is 1−1=0. By expanding the product on the left-hand side, equation (4) follows. To prove the inclusion–exclusion principle for the cardinality of sets, sum the equation (4) over allxin the union ofA1, ...,An. To derive the version used in probability, take theexpectationin (4). In general,integratethe equation (4) with respect toμ. Each of these derivations uses linearity. This article incorporates material from principle of inclusion–exclusion onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
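As a concrete check of the applications above, Euler's totient φ(n) can be computed exactly as the inclusion–exclusion derivation prescribes: start from n, exclude the multiples of each distinct prime factor, add back multiples of products of two primes, and so on. The helper names are illustrative.

```python
from itertools import combinations
from math import prod

def prime_factors(n):
    """Distinct prime factors of n by trial division."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def phi(n):
    """φ(n) by inclusion–exclusion over subsets of the prime factors."""
    ps = prime_factors(n)
    total = n
    for k in range(1, len(ps) + 1):
        for J in combinations(ps, k):
            # n // prod(J) counts multiples of the product of primes in J
            total += (-1) ** k * (n // prod(J))
    return total

print(phi(36), phi(97), phi(1))  # 12 96 1
```

For n = 36 with prime factors {2, 3}: 36 − 18 − 12 + 6 = 12, as the formula predicts.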
https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle
Inmathematical analysis, themaximumandminimum[a]of afunctionare, respectively, the greatest and least value taken by the function. Known generically asextremum,[b]they may be defined either within a givenrange(thelocalorrelativeextrema) or on the entiredomain(theglobalorabsoluteextrema) of a function.[1][2][3]Pierre de Fermatwas one of the first mathematicians to propose a general technique,adequality, for finding the maxima and minima of functions. As defined inset theory, the maximum and minimum of asetare thegreatest and least elementsin the set, respectively. Unboundedinfinite sets, such as the set ofreal numbers, have no minimum or maximum. Instatistics, the corresponding concept is thesample maximum and minimum. A real-valuedfunctionfdefined on adomainXhas aglobal(orabsolute)maximum pointatx∗, iff(x∗) ≥f(x)for allxinX. Similarly, the function has aglobal(orabsolute)minimum pointatx∗, iff(x∗) ≤f(x)for allxinX. The value of the function at a maximum point is called themaximum valueof the function, denotedmax(f(x)){\displaystyle \max(f(x))}, and the value of the function at a minimum point is called theminimum valueof the function, (denotedmin(f(x)){\displaystyle \min(f(x))}for clarity). Symbolically, this can be written as follows: The definition of global minimum point also proceeds similarly. If the domainXis ametric space, thenfis said to have alocal(orrelative)maximum pointat the pointx∗, if there exists someε> 0 such thatf(x∗) ≥f(x)for allxinXwithin distanceεofx∗. Similarly, the function has alocal minimum pointatx∗, iff(x∗) ≤f(x) for allxinXwithin distanceεofx∗. A similar definition can be used whenXis atopological space, since the definition just given can be rephrased in terms of neighbourhoods. Mathematically, the given definition is written as follows: The definition of local minimum point can also proceed similarly. In both the global and local cases, the concept of astrict extremumcan be defined. 
For example,x∗is astrict global maximum pointif for allxinXwithx≠x∗, we havef(x∗) >f(x), andx∗is astrict local maximum pointif there exists someε> 0such that, for allxinXwithin distanceεofx∗withx≠x∗, we havef(x∗) >f(x). Note that a point is a strict global maximum point if and only if it is the unique global maximum point, and similarly for minimum points. Acontinuousreal-valued function with acompactdomain always has a maximum point and a minimum point. An important example is a function whose domain is a closed and boundedintervalofreal numbers(see the graph above). Finding global maxima and minima is the goal ofmathematical optimization. If a function is continuous on a closed interval, then by theextreme value theorem, global maxima and minima exist. Furthermore, a global maximum (or minimum) either must be a local maximum (or minimum) in the interior of the domain, or must lie on the boundary of the domain. So a method of finding a global maximum (or minimum) is to look at all the local maxima (or minima) in the interior, and also look at the maxima (or minima) of the points on the boundary, and take the greatest (or least) one. Fordifferentiable functions,Fermat's theoremstates that local extrema in the interior of a domain must occur atcritical points(or points where the derivative equals zero).[4]However, not all critical points are extrema. One can often distinguish whether a critical point is a local maximum, a local minimum, or neither by using thefirst derivative test,second derivative test, orhigher-order derivative test, given sufficient differentiability.[5] For any function that is definedpiecewise, one finds a maximum (or minimum) by finding the maximum (or minimum) of each piece separately, and then seeing which one is greatest (or least). 
For a practical example,[6] assume a situation where someone has 200 feet of fencing and is trying to maximize the square footage of a rectangular enclosure, where x is the length, y is the width, and xy is the area. Since the perimeter is 2x + 2y = 200, we have y = 100 − x and the area is xy = x(100 − x). The derivative with respect to x is 100 − 2x. Setting this equal to 0 reveals that x = 50 is our only critical point. Now retrieve the endpoints by determining the interval to which x is restricted. Since width is positive, x > 0, and since x = 100 − y, x < 100. Plugging the critical point 50, as well as the endpoints 0 and 100, into xy = x(100 − x) gives the results 2500, 0, and 0, respectively. Therefore, the greatest area attainable with 200 feet of fencing is 50 × 50 = 2500 square feet.[6]

For functions of more than one variable, similar conditions apply. For example, in the (enlargeable) figure on the right, the necessary conditions for a local maximum are similar to those of a function with only one variable. The first partial derivatives as to z (the variable to be maximized) are zero at the maximum (the glowing dot on top in the figure). The second partial derivatives are negative. These are only necessary, not sufficient, conditions for a local maximum, because of the possibility of a saddle point. For use of these conditions to solve for a maximum, the function z must also be differentiable throughout. The second partial derivative test can help classify the point as a relative maximum or relative minimum. In contrast, there are substantial differences between functions of one variable and functions of more than one variable in the identification of global extrema.
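The fencing example can be checked directly: evaluate the area at the critical point and the endpoints, and brute-force the integer lengths as a sanity check.

```python
# Numerical check of the fencing example: 200 ft of fence,
# area A(x) = x*(100 - x), with candidates x = 50 (critical point,
# from A'(x) = 100 - 2x = 0) and the endpoints x = 0, 100.

def area(x):
    return x * (100 - x)

candidates = [50, 0, 100]
results = [area(x) for x in candidates]   # [2500, 0, 0]

# brute-force confirmation over all integer lengths 0..100
best = max(range(101), key=area)          # best == 50
```

The brute-force pass is redundant here, but it is a useful habit when the hand derivation is less trivial.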
For example, if a bounded differentiable function f defined on a closed interval in the real line has a single critical point, which is a local minimum, then it is also a global minimum (use the intermediate value theorem and Rolle's theorem to prove this by contradiction). In two and more dimensions, this argument fails, as the function f(x, y) = x² + y²(1 − x)³ illustrates: its only critical point is at (0,0), which is a local minimum with f(0,0) = 0. However, it cannot be a global one, because f(2,3) = −5.

If the domain of a function for which an extremum is to be found consists itself of functions (i.e. if an extremum is to be found of a functional), then the extremum is found using the calculus of variations.

Maxima and minima can also be defined for sets. In general, if an ordered set S has a greatest element m, then m is a maximal element of the set, also denoted as max(S). Furthermore, if S is a subset of an ordered set T and m is the greatest element of S (with respect to the order induced by T), then m is a least upper bound of S in T. Similar results hold for the least element, minimal element, and greatest lower bound. The maximum and minimum function for sets are used in databases, and can be computed rapidly, since the maximum (or minimum) of a set can be computed from the maxima of a partition; formally, they are self-decomposable aggregation functions.

In the case of a general partial order, a least element (i.e., one that is less than all others) should not be confused with a minimal element (one such that nothing is lesser). Likewise, a greatest element of a partially ordered set (poset) is an upper bound of the set which is contained within the set, whereas a maximal element m of a poset A is an element of A such that if m ≤ b (for any b in A), then m = b. Any least element or greatest element of a poset is unique, but a poset can have several minimal or maximal elements. If a poset has more than one maximal element, then these elements will not be mutually comparable.
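The self-decomposability property mentioned above is easy to demonstrate: the maximum (or minimum) of a set equals the maximum (or minimum) of the maxima (or minima) of any partition of it, which is what lets databases aggregate in parallel.

```python
# Self-decomposable aggregation: max/min of a whole collection equals
# max/min over the per-partition results. The data are illustrative.

data = [3, 1, 4, 1, 5, 9, 2, 6]
parts = [data[:4], data[4:]]                  # an arbitrary partition

max_whole = max(data)                          # 9
max_from_parts = max(max(p) for p in parts)    # also 9
min_whole = min(data)                          # 1
min_from_parts = min(min(p) for p in parts)    # also 1
```

The same decomposition fails for aggregates such as the median, which is why max and min are singled out as self-decomposable.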
In a totally ordered set, or chain, all elements are mutually comparable, so such a set can have at most one minimal element and at most one maximal element. Then, due to mutual comparability, the minimal element will also be the least element, and the maximal element will also be the greatest element. Thus in a totally ordered set, we can simply use the terms minimum and maximum. If a chain is finite, then it will always have a maximum and a minimum. If a chain is infinite, then it need not have a maximum or a minimum. For example, the set of natural numbers has no maximum, though it has a minimum. If an infinite chain S is bounded, then the closure Cl(S) of the set occasionally has a minimum and a maximum, in which case they are called the greatest lower bound and the least upper bound of the set S, respectively.
In the physical science of dynamics, rigid-body dynamics studies the movement of systems of interconnected bodies under the action of external forces. The assumption that the bodies are rigid (i.e. they do not deform under the action of applied forces) simplifies analysis, by reducing the parameters that describe the configuration of the system to the translation and rotation of reference frames attached to each body.[1][2] This excludes bodies that display fluid, highly elastic, and plastic behavior.

The dynamics of a rigid body system is described by the laws of kinematics and by the application of Newton's second law (kinetics) or their derivative form, Lagrangian mechanics. The solution of these equations of motion provides a description of the position, the motion and the acceleration of the individual components of the system, and overall the system itself, as a function of time. The formulation and solution of rigid body dynamics is an important tool in the computer simulation of mechanical systems.

If a system of particles moves parallel to a fixed plane, the system is said to be constrained to planar movement. In this case, Newton's laws (kinetics) for a rigid system of N particles, Pi, i = 1, ..., N, simplify because there is no movement in the k direction. Determine the resultant force and torque at a reference point R, to obtain

{\displaystyle \mathbf {F} =\sum _{i=1}^{N}m_{i}\mathbf {A} _{i},\quad \mathbf {T} =\sum _{i=1}^{N}(\mathbf {r} _{i}-\mathbf {R} )\times m_{i}\mathbf {A} _{i},}

where ri denotes the planar trajectory of each particle.
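The resultant force and torque sums can be sketched directly for a toy planar system. The two-particle data below are illustrative numbers, not from the text.

```python
# Resultant force F = sum m_i*A_i and torque T = sum (r_i - R) x m_i*A_i
# for two particles moving in the xy-plane (illustrative values).

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

masses = [1.0, 2.0]
r = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]     # particle positions
A = [(0.0, 3.0, 0.0), (-1.0, 0.0, 0.0)]    # particle accelerations
R = (0.0, 0.0, 0.0)                         # reference point

F = tuple(sum(m * a[k] for m, a in zip(masses, A)) for k in range(3))
T = [0.0, 0.0, 0.0]
for m, ri, ai in zip(masses, r, A):
    d = tuple(ri[k] - R[k] for k in range(3))
    c = cross(d, tuple(m * ai[k] for k in range(3)))
    for k in range(3):
        T[k] += c[k]
# As expected for planar motion, T ends up along k: T = (0, 0, 7).
```

Because all positions and accelerations lie in the plane, the torque has only a k component, which is the simplification the text exploits next.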
The kinematics of a rigid body yields the formula for the acceleration of the particle Pi in terms of the position R and acceleration A of the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as

{\displaystyle \mathbf {A} _{i}={\boldsymbol {\alpha }}\times (\mathbf {r} _{i}-\mathbf {R} )+{\boldsymbol {\omega }}\times ({\boldsymbol {\omega }}\times (\mathbf {r} _{i}-\mathbf {R} ))+\mathbf {A} .}

For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along k perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectors ei from the reference point R to a point ri and the unit vectors ti = k × ei, so

{\displaystyle \mathbf {A} _{i}=\alpha (\Delta r_{i}\mathbf {t} _{i})-\omega ^{2}(\Delta r_{i}\mathbf {e} _{i})+\mathbf {A} .}

This yields the resultant force on the system as

{\displaystyle \mathbf {F} =\alpha \sum _{i=1}^{N}m_{i}\left(\Delta r_{i}\mathbf {t} _{i}\right)-\omega ^{2}\sum _{i=1}^{N}m_{i}\left(\Delta r_{i}\mathbf {e} _{i}\right)+\left(\sum _{i=1}^{N}m_{i}\right)\mathbf {A} ,}

and the torque as

{\displaystyle {\begin{aligned}\mathbf {T} ={}&\sum _{i=1}^{N}(m_{i}\Delta r_{i}\mathbf {e} _{i})\times \left(\alpha (\Delta r_{i}\mathbf {t} _{i})-\omega ^{2}(\Delta r_{i}\mathbf {e} _{i})+\mathbf {A} \right)\\{}={}&\left(\sum _{i=1}^{N}m_{i}\Delta r_{i}^{2}\right)\alpha \mathbf {k} +\left(\sum _{i=1}^{N}m_{i}\Delta r_{i}\mathbf {e} _{i}\right)\times \mathbf {A} ,\end{aligned}}}

where ei × ei = 0 and ei × ti = k is the unit vector perpendicular to the plane for all of the particles Pi.

Use the center of mass C as the reference point, so these equations for Newton's laws simplify to become

{\displaystyle \mathbf {F} =M\mathbf {A} ,\quad \mathbf {T} =I_{\textbf {C}}\alpha \mathbf {k} ,}

where M is the total mass and IC is the moment of inertia about an axis perpendicular to the movement of the rigid system and through the center of mass.

Several methods to describe orientations of a rigid body in three dimensions have been developed. They are summarized in the following sections.

The first attempt to represent an orientation is attributed to Leonhard Euler. He imagined three reference frames that could rotate one around the other, and realized that by starting with a fixed reference frame and performing three rotations, he could get any other reference frame in the space (using two rotations to fix the vertical axis and another to fix the other two axes). The values of these three rotations are called Euler angles. Commonly, ψ is used to denote precession, θ nutation, and φ intrinsic rotation.

These are three angles, also known as yaw, pitch and roll, navigation angles, and Cardan angles. Mathematically they constitute a set of six possibilities inside the twelve possible sets of Euler angles, the ordering being the one best suited for describing the orientation of a vehicle such as an airplane. In aerospace engineering they are usually referred to as Euler angles.

Euler also realized that the composition of two rotations is equivalent to a single rotation about a different fixed axis (Euler's rotation theorem). Therefore, the composition of the former three angles has to be equal to only one rotation, whose axis was complicated to calculate until matrices were developed.
Based on this fact he introduced a vectorial way to describe any rotation, with a vector on the rotation axis and magnitude equal to the value of the angle. Therefore, any orientation can be represented by a rotation vector (also called Euler vector) that leads to it from the reference frame. When used to represent an orientation, the rotation vector is commonly called the orientation vector, or attitude vector. A similar method, called axis-angle representation, describes a rotation or orientation using a unit vector aligned with the rotation axis, and a separate value to indicate the angle (see figure).

With the introduction of matrices the Euler theorems were rewritten. The rotations were described by orthogonal matrices referred to as rotation matrices or direction cosine matrices. When used to represent an orientation, a rotation matrix is commonly called an orientation matrix, or attitude matrix. The above-mentioned Euler vector is the eigenvector of a rotation matrix (a rotation matrix has a unique real eigenvalue). The product of two rotation matrices is the composition of rotations. Therefore, as before, the orientation can be given as the rotation from the initial frame to achieve the frame that we want to describe. The configuration space of a non-symmetrical object in n-dimensional space is SO(n) × Rn. Orientation may be visualized by attaching a basis of tangent vectors to an object. The direction in which each vector points determines its orientation.

Another way to describe rotations is using rotation quaternions, also called versors. They are equivalent to rotation matrices and rotation vectors. With respect to rotation vectors, they can be more easily converted to and from matrices. When used to represent orientations, rotation quaternions are typically called orientation quaternions or attitude quaternions.
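The axis-angle and rotation-matrix descriptions above can be sketched together: Rodrigues' rotation formula builds a rotation matrix from a unit axis and an angle, and composing two such matrices yields a single rotation whose angle can be read off the trace, per Euler's rotation theorem. The two 90° rotations below are an assumed example.

```python
# Axis-angle -> rotation matrix (Rodrigues' formula), then composition.

import math

def rot(axis, angle):
    """Rotation matrix for a unit axis (x, y, z) and an angle in radians."""
    x, y, z = axis
    c, s, C = math.cos(angle), math.sin(angle), 1 - math.cos(angle)
    return [[c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
            [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
            [z*x*C - y*s, z*y*C + x*s, c + z*z*C]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Rz = rot((0, 0, 1), math.pi / 2)    # 90 degrees about z
Rx = rot((1, 0, 0), math.pi / 2)    # 90 degrees about x
R = matmul(Rx, Rz)                   # composition is again one rotation

# Euler's rotation theorem: R is a single rotation about some axis;
# its angle satisfies trace(R) = 1 + 2*cos(theta).
theta = math.acos((R[0][0] + R[1][1] + R[2][2] - 1) / 2)
# here theta = 2*pi/3, i.e. 120 degrees
```

Note how two 90° rotations about perpendicular axes compose into a single 120° rotation, exactly the non-obvious axis/angle that was "complicated to calculate until matrices were developed."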
To consider rigid body dynamics in three-dimensional space, Newton's second law must be extended to define the relationship between the movement of a rigid body and the system of forces and torques that act on it.

Newton formulated his second law for a particle as, "The change of motion of an object is proportional to the force impressed and is made in the direction of the straight line in which the force is impressed."[3] Because Newton generally referred to mass times velocity as the "motion" of a particle, the phrase "change of motion" refers to the mass times acceleration of the particle, and so this law is usually written as

{\displaystyle \mathbf {F} =m\mathbf {a} ,}

where F is understood to be the only external force acting on the particle, m is the mass of the particle, and a is its acceleration vector.

The extension of Newton's second law to rigid bodies is achieved by considering a rigid system of particles. If a system of N particles, Pi, i = 1, ..., N, is assembled into a rigid body, then Newton's second law can be applied to each of the particles in the body. If Fi is the external force applied to particle Pi with mass mi, then

{\displaystyle \mathbf {F} _{i}+\sum _{j=1}^{N}\mathbf {F} _{ij}=m_{i}\mathbf {a} _{i},\quad i=1,\ldots ,N,}

where Fij is the internal force of particle Pj acting on particle Pi that maintains the constant distance between these particles.

An important simplification to these force equations is obtained by introducing the resultant force and torque that acts on the rigid system. This resultant force and torque is obtained by choosing one of the particles in the system as a reference point, R, where each of the external forces is applied with the addition of an associated torque.
The resultant force F and torque T are given by the formulas

{\displaystyle \mathbf {F} =\sum _{i=1}^{N}\mathbf {F} _{i},\quad \mathbf {T} =\sum _{i=1}^{N}(\mathbf {R} _{i}-\mathbf {R} )\times \mathbf {F} _{i},}

where Ri is the vector that defines the position of particle Pi. Newton's second law for a particle combines with these formulas for the resultant force and torque to yield

{\displaystyle \mathbf {F} =\sum _{i=1}^{N}m_{i}\mathbf {a} _{i},\quad \mathbf {T} =\sum _{i=1}^{N}(\mathbf {R} _{i}-\mathbf {R} )\times (m_{i}\mathbf {a} _{i}),}

where the internal forces Fij cancel in pairs.

The kinematics of a rigid body yields the formula for the acceleration of the particle Pi in terms of the position R and acceleration a of the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as

{\displaystyle \mathbf {a} _{i}=\alpha \times (\mathbf {R} _{i}-\mathbf {R} )+\omega \times (\omega \times (\mathbf {R} _{i}-\mathbf {R} ))+\mathbf {a} .}

The mass properties of the rigid body are represented by its center of mass and inertia matrix. Choose the reference point R so that it satisfies the condition

{\displaystyle \sum _{i=1}^{N}m_{i}(\mathbf {R} _{i}-\mathbf {R} )=0,}

then it is known as the center of mass of the system. The inertia matrix [IR] of the system relative to the reference point R is defined by

{\displaystyle [I_{R}]=\sum _{i=1}^{N}m_{i}\left(\mathbf {I} \left(\mathbf {S} _{i}^{\textsf {T}}\mathbf {S} _{i}\right)-\mathbf {S} _{i}\mathbf {S} _{i}^{\textsf {T}}\right),}

where Si is the column vector Ri − R, SiT is its transpose, and I is the 3 by 3 identity matrix.
SiTSi is the scalar product of Si with itself, while SiSiT is the tensor product of Si with itself.

Using the center of mass and inertia matrix, the force and torque equations for a single rigid body take the form

{\displaystyle \mathbf {F} =m\mathbf {a} ,\quad \mathbf {T} =[I_{R}]\alpha +\omega \times [I_{R}]\omega ,}

and are known as Newton's second law of motion for a rigid body.

The dynamics of an interconnected system of rigid bodies, Bj, j = 1, ..., M, is formulated by isolating each rigid body and introducing the interaction forces. The resultant of the external and interaction forces on each body yields the force-torque equations

{\displaystyle \mathbf {F} _{j}=m_{j}\mathbf {a} _{j},\quad \mathbf {T} _{j}=[I_{R}]_{j}\alpha _{j}+\omega _{j}\times [I_{R}]_{j}\omega _{j},\quad j=1,\ldots ,M.}

Newton's formulation yields 6M equations that define the dynamics of a system of M rigid bodies.[4]

A rotating object, whether under the influence of torques or not, may exhibit the behaviours of precession and nutation.
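The inertia-matrix definition [IR] = Σ mi (I (SiTSi) − SiSiT) translates directly into code. The two-particle system below uses illustrative numbers.

```python
# Inertia matrix of a particle system about a reference point R:
# [I_R] = sum_i m_i * ( (S_i . S_i) * Identity - S_i S_i^T ),
# with S_i = R_i - R.

def inertia(masses, positions, R):
    I = [[0.0] * 3 for _ in range(3)]
    for m, p in zip(masses, positions):
        S = [p[k] - R[k] for k in range(3)]
        s2 = sum(c * c for c in S)                 # scalar product S.S
        for a in range(3):
            for b in range(3):
                # (s2 on the diagonal) minus the tensor product term
                I[a][b] += m * ((s2 if a == b else 0.0) - S[a] * S[b])
    return I

masses = [1.0, 2.0]
positions = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
IR = inertia(masses, positions, (0.0, 0.0, 0.0))
# IR[2][2] = 1*1 + 2*1 = 3: the familiar sum of m*r^2 about the z-axis
```

For point masses in the xy-plane the (2,2) entry reduces to the planar moment of inertia Σ m Δr² used in the planar equations earlier.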
The fundamental equation describing the behavior of a rotating solid body is Euler's equation of motion:

{\displaystyle {\boldsymbol {\tau }}={\frac {D\mathbf {L} }{Dt}}={\frac {d\mathbf {L} }{dt}}+{\boldsymbol {\omega }}\times \mathbf {L} ={\frac {d(I{\boldsymbol {\omega }})}{dt}}+{\boldsymbol {\omega }}\times {I{\boldsymbol {\omega }}}=I{\boldsymbol {\alpha }}+{\boldsymbol {\omega }}\times {I{\boldsymbol {\omega }}}}

where the pseudovectors τ and L are, respectively, the torques on the body and its angular momentum, the scalar I is its moment of inertia, the vector ω is its angular velocity, the vector α is its angular acceleration, D is the differential in an inertial reference frame and d is the differential in a relative reference frame fixed with the body.

The solution to this equation when there is no applied torque is discussed in the articles Euler's equation of motion and Poinsot's ellipsoid.

It follows from Euler's equation that a torque τ applied perpendicular to the axis of rotation, and therefore perpendicular to L, results in a rotation about an axis perpendicular to both τ and L. This motion is called precession. The angular velocity of precession ΩP is given by the cross product:[citation needed]

{\displaystyle {\boldsymbol {\tau }}={\boldsymbol {\Omega }}_{\mathrm {P} }\times \mathbf {L} .}

Precession can be demonstrated by placing a spinning top with its axis horizontal and supported loosely (frictionless toward precession) at one end. Instead of falling, as might be expected, the top appears to defy gravity by remaining with its axis horizontal when the other end of the axis is left unsupported, while the free end of the axis slowly describes a circle in a horizontal plane: this turning is the precession. This effect is explained by the above equations. The torque on the top is supplied by a couple of forces: gravity acting downward on the device's centre of mass, and an equal force acting upward to support one end of the device.
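For a body frame aligned with the principal axes, Euler's equation τ = Iα + ω × Iω can be evaluated componentwise. The numbers below are illustrative.

```python
# Euler's equation tau = I*alpha + omega x (I*omega) in a principal-axis
# body frame, where the inertia matrix is diagonal (I1, I2, I3).

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

I = (1.0, 2.0, 3.0)          # principal moments of inertia
omega = (0.5, -1.0, 2.0)     # angular velocity (body frame)
alpha = (0.1, 0.0, -0.3)     # angular acceleration (body frame)

Iw = tuple(I[k] * omega[k] for k in range(3))          # L = I*omega
wxIw = cross(omega, Iw)                                 # gyroscopic term
tau = tuple(I[k] * alpha[k] + wxIw[k] for k in range(3))
```

The ω × Iω term is the gyroscopic coupling: it vanishes only when ω is aligned with a principal axis (so that Iω is parallel to ω), which is why off-axis spins need torque even at constant angular speed.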
The rotation resulting from this torque is not downward, as might be intuitively expected, causing the device to fall, but perpendicular to both the gravitational torque (horizontal and perpendicular to the axis of rotation) and the axis of rotation (horizontal and outwards from the point of support), i.e., about a vertical axis, causing the device to rotate slowly about the supporting point.

Under a constant torque of magnitude τ, the speed of precession ΩP is inversely proportional to L, the magnitude of the angular momentum:

{\displaystyle \tau ={\mathit {\Omega }}_{\mathrm {P} }L\sin \theta ,}

where θ is the angle between the vectors ΩP and L. Thus, if the top's spin slows down (for example, due to friction), its angular momentum decreases and so the rate of precession increases. This continues until the device is unable to rotate fast enough to support its own weight, when it stops precessing and falls off its support, largely because friction against the precession causes another precession that in turn causes the fall. By convention, these three vectors – torque, spin, and precession – are all oriented with respect to each other according to the right-hand rule.

An alternate formulation of rigid body dynamics that has a number of convenient features is obtained by considering the virtual work of forces acting on a rigid body. The virtual work of forces acting at various points on a single rigid body can be calculated using the velocities of their points of application and the resultant force and torque. To see this, let the forces F1, F2, ..., Fn act on the points R1, R2, ..., Rn in a rigid body. The trajectories of Ri, i = 1, ..., n, are defined by the movement of the rigid body. The velocity of the points Ri along their trajectories is

{\displaystyle \mathbf {V} _{i}={\boldsymbol {\omega }}\times (\mathbf {R} _{i}-\mathbf {R} )+\mathbf {V} ,}

where ω is the angular velocity vector of the body.
Work is computed from the dot product of each force with the displacement of its point of contact

{\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot \delta \mathbf {r} _{i}.}

If the trajectory of a rigid body is defined by a set of generalized coordinates qj, j = 1, ..., m, then the virtual displacements δri are given by

{\displaystyle \delta \mathbf {r} _{i}=\sum _{j=1}^{m}{\frac {\partial \mathbf {r} _{i}}{\partial q_{j}}}\delta q_{j}=\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}}\delta q_{j}.}

The virtual work of this system of forces acting on the body in terms of the generalized coordinates becomes

{\displaystyle \delta W=\mathbf {F} _{1}\cdot \left(\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{1}}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)+\dots +\mathbf {F} _{n}\cdot \left(\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{n}}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)}

or, collecting the coefficients of δqj,

{\displaystyle \delta W=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{1}}}\right)\delta q_{1}+\dots +\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{m}}}\right)\delta q_{m}.}

For simplicity, consider a trajectory of a rigid body that is specified by a single generalized coordinate q, such as a rotation angle; then the formula becomes

{\displaystyle \delta W=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}}}\right)\delta q=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial ({\boldsymbol {\omega }}\times (\mathbf {R} _{i}-\mathbf {R} )+\mathbf {V} )}{\partial {\dot {q}}}}\right)\delta q.}

Introduce the resultant force F and torque T, so this equation takes the form

{\displaystyle \delta W=\left(\mathbf {F} \cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}+\mathbf {T} \cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}}\right)\delta q.}

The quantity Q defined by

{\displaystyle Q=\mathbf {F} \cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}+\mathbf {T} \cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}},}

is known as the generalized force associated with the virtual displacement δq. This formula generalizes to the movement of a rigid body defined by more than one generalized coordinate, that is

{\displaystyle \delta W=\sum _{j=1}^{m}Q_{j}\delta q_{j},}

where

{\displaystyle Q_{j}=\mathbf {F} \cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}_{j}}}+\mathbf {T} \cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}_{j}}},\quad j=1,\ldots ,m.}

It is useful to note that conservative forces such as gravity and spring forces are derivable from a potential function V(q1, ..., qn), known as a potential energy. In this case the generalized forces are given by

{\displaystyle Q_{j}=-{\frac {\partial V}{\partial q_{j}}},\quad j=1,\ldots ,m.}

The equations of motion for a mechanical system of rigid bodies can be determined using D'Alembert's form of the principle of virtual work. The principle of virtual work is used to study the static equilibrium of a system of rigid bodies; however, by introducing acceleration terms in Newton's laws, this approach is generalized to define dynamic equilibrium. The static equilibrium of a mechanical system of rigid bodies is defined by the condition that the virtual work of the applied forces is zero for any virtual displacement of the system. This is known as the principle of virtual work.[5] This is equivalent to the requirement that the generalized forces for any virtual displacement are zero, that is Qi = 0.
Let a mechanical system be constructed from n rigid bodies, Bi, i = 1, ..., n, and let the resultant of the applied forces on each body be the force-torque pairs Fi and Ti, i = 1, ..., n. Notice that these applied forces do not include the reaction forces where the bodies are connected. Finally, assume that the velocity Vi and angular velocity ωi, i = 1, ..., n, of each rigid body are defined by a single generalized coordinate q. Such a system of rigid bodies is said to have one degree of freedom.

The virtual work of the forces and torques, Fi and Ti, applied to this one-degree-of-freedom system is given by

{\displaystyle \delta W=\sum _{i=1}^{n}\left(\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}}}+\mathbf {T} _{i}\cdot {\frac {\partial {\boldsymbol {\omega }}_{i}}{\partial {\dot {q}}}}\right)\delta q=Q\delta q,}

where

{\displaystyle Q=\sum _{i=1}^{n}\left(\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}}}+\mathbf {T} _{i}\cdot {\frac {\partial {\boldsymbol {\omega }}_{i}}{\partial {\dot {q}}}}\right),}

is the generalized force acting on this one-degree-of-freedom system.

If the mechanical system is defined by m generalized coordinates, qj, j = 1, ..., m, then the system has m degrees of freedom and the virtual work is given by

{\displaystyle \delta W=\sum _{j=1}^{m}Q_{j}\delta q_{j},}

where

{\displaystyle Q_{j}=\sum _{i=1}^{n}\left(\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}}+\mathbf {T} _{i}\cdot {\frac {\partial {\boldsymbol {\omega }}_{i}}{\partial {\dot {q}}_{j}}}\right),\quad j=1,\ldots ,m,}

is the generalized force associated with the generalized coordinate qj.
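The generalized-force formula Q = F · ∂V/∂q̇ + T · ∂ω/∂q̇ can be made concrete with an assumed one-degree-of-freedom example (not from the text): a planar pendulum of length l with generalized coordinate q = θ measured from the downward vertical. The bob position is r = (l sin q, −l cos q, 0), so ∂V/∂q̇ = ∂r/∂q = (l cos q, l sin q, 0), and gravity supplies F = (0, −mg, 0) with no applied torque about the bob.

```python
import math

# Assumed pendulum setup: mass m, length l, angle theta from vertical.
m, g, l = 2.0, 9.81, 0.5
theta = math.pi / 6

# partial of the bob velocity with respect to qdot (= dr/dq)
dV_dqdot = (l * math.cos(theta), l * math.sin(theta), 0.0)
F = (0.0, -m * g, 0.0)                      # gravity on the bob

# generalized force Q = F . dV/d(qdot); the torque term is zero here
Q = sum(F[k] * dV_dqdot[k] for k in range(3))
# agrees with the closed form Q = -m*g*l*sin(theta)
```

The result is the familiar restoring torque of a pendulum, recovered without ever writing a torque balance, which is the practical appeal of the virtual-work formulation.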
The principle of virtual work states that static equilibrium occurs when these generalized forces acting on the system are zero, that is

{\displaystyle Q_{j}=0,\quad j=1,\ldots ,m.}

These m equations define the static equilibrium of the system of rigid bodies.

Consider a single rigid body which moves under the action of a resultant force F and torque T, with one degree of freedom defined by the generalized coordinate q. Assume the reference point for the resultant force and torque is the center of mass of the body; then the generalized inertia force Q* associated with the generalized coordinate q is given by

{\displaystyle Q^{*}=-(M\mathbf {A} )\cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}-\left([I_{R}]{\boldsymbol {\alpha }}+{\boldsymbol {\omega }}\times [I_{R}]{\boldsymbol {\omega }}\right)\cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}}.}

This inertia force can be computed from the kinetic energy of the rigid body,

{\displaystyle T={\tfrac {1}{2}}M\mathbf {V} \cdot \mathbf {V} +{\tfrac {1}{2}}{\boldsymbol {\omega }}\cdot [I_{R}]{\boldsymbol {\omega }},}

by using the formula

{\displaystyle Q^{*}=-\left({\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}}}-{\frac {\partial T}{\partial q}}\right).}

A system of n rigid bodies with m generalized coordinates has the kinetic energy

{\displaystyle T=\sum _{i=1}^{n}\left({\tfrac {1}{2}}M\mathbf {V} _{i}\cdot \mathbf {V} _{i}+{\tfrac {1}{2}}{\boldsymbol {\omega }}_{i}\cdot [I_{R}]{\boldsymbol {\omega }}_{i}\right),}

which can be used to calculate the m generalized inertia forces[6]

{\displaystyle Q_{j}^{*}=-\left({\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}\right),\quad j=1,\ldots ,m.}

D'Alembert's form of the principle of virtual work states that a system of rigid bodies is in dynamic equilibrium when the virtual work of the sum of the applied forces and the inertial forces is zero for any virtual displacement of the system. Thus, dynamic equilibrium of a system of n rigid bodies with m generalized coordinates requires that

{\displaystyle \delta W=\left(Q_{1}+Q_{1}^{*}\right)\delta q_{1}+\dots +\left(Q_{m}+Q_{m}^{*}\right)\delta q_{m}=0,}

for any set of virtual displacements δqj. This condition yields m equations,

{\displaystyle Q_{j}+Q_{j}^{*}=0,\quad j=1,\ldots ,m,}

which can also be written as

{\displaystyle {\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}=Q_{j},\quad j=1,\ldots ,m.}

The result is a set of m equations of motion that define the dynamics of the rigid body system.

If the generalized forces Qj are derivable from a potential energy V(q1, ..., qm), then these equations of motion take the form

{\displaystyle {\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}=-{\frac {\partial V}{\partial q_{j}}},\quad j=1,\ldots ,m.}

In this case, introduce the Lagrangian, L = T − V, so these equations of motion become

{\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}-{\frac {\partial L}{\partial q_{j}}}=0,\quad j=1,\ldots ,m.}

These are known as Lagrange's equations of motion.

The linear and angular momentum of a rigid system of particles is formulated by measuring the position and velocity of the particles relative to the center of mass. Let the system of particles Pi, i = 1, ..., n, be located at the coordinates ri and velocities vi.
Select a reference point R and compute the relative position and velocity vectors,

{\displaystyle \mathbf {r} _{i}=\left(\mathbf {r} _{i}-\mathbf {R} \right)+\mathbf {R} ,\quad \mathbf {v} _{i}={\frac {d}{dt}}(\mathbf {r} _{i}-\mathbf {R} )+\mathbf {V} .}

The total linear and angular momentum vectors relative to the reference point R are

{\displaystyle \mathbf {p} ={\frac {d}{dt}}\left(\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\right)+\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} ,}

and

{\displaystyle \mathbf {L} =\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\times {\frac {d}{dt}}\left(\mathbf {r} _{i}-\mathbf {R} \right)+\left(\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\right)\times \mathbf {V} .}

If R is chosen as the center of mass these equations simplify to

{\displaystyle \mathbf {p} =M\mathbf {V} ,\quad \mathbf {L} =\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\times {\frac {d}{dt}}\left(\mathbf {r} _{i}-\mathbf {R} \right).}

To specialize these formulas to a rigid body, assume the particles are rigidly connected to each other, so that Pi, i = 1, ..., n, are located by the coordinates ri and velocities vi.
Select a reference point R and compute the relative position and velocity vectors,

{\displaystyle \mathbf {r} _{i}=(\mathbf {r} _{i}-\mathbf {R} )+\mathbf {R} ,\quad \mathbf {v} _{i}=\omega \times (\mathbf {r} _{i}-\mathbf {R} )+\mathbf {V} ,}

where ω is the angular velocity of the system.[7][8][9]

The linear momentum and angular momentum of this rigid system measured relative to the center of mass R are

{\displaystyle \mathbf {p} =\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} ,\quad \mathbf {L} =\sum _{i=1}^{n}m_{i}(\mathbf {r} _{i}-\mathbf {R} )\times \mathbf {v} _{i}=\sum _{i=1}^{n}m_{i}(\mathbf {r} _{i}-\mathbf {R} )\times (\omega \times (\mathbf {r} _{i}-\mathbf {R} )).}

These equations simplify to become

{\displaystyle \mathbf {p} =M\mathbf {V} ,\quad \mathbf {L} =[I_{R}]\omega ,}

where M is the total mass of the system and [IR] is the moment of inertia matrix defined by

{\displaystyle [I_{R}]=-\sum _{i=1}^{n}m_{i}[r_{i}-R][r_{i}-R],}

where [ri − R] is the skew-symmetric matrix constructed from the vector ri − R.
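The simplification L = [IR]ω rests on the identity S × (ω × S) = ((S · S)I − SSᵀ)ω, and can be checked numerically: the direct particle sum and the inertia-matrix product must agree. The particle data below are illustrative.

```python
# Angular momentum of a rigid particle system two ways:
# (1) direct sum  L = sum m_i * S_i x (omega x S_i), with S_i = r_i - R,
# (2) via the inertia matrix, L = [I_R] * omega.

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

masses = [1.0, 2.0, 1.5]
rel = [[1, 0, 0], [0, 1, 1], [1, -1, 0]]   # S_i = r_i - R
omega = [0.3, -0.2, 0.5]

# (1) direct sum over the particles
L_direct = [0.0, 0.0, 0.0]
for m, s in zip(masses, rel):
    c = cross(s, cross(omega, s))
    for k in range(3):
        L_direct[k] += m * c[k]

# (2) assemble [I_R] = sum m_i * ((S.S) * Identity - S S^T), then multiply
IR = [[0.0] * 3 for _ in range(3)]
for m, s in zip(masses, rel):
    s2 = sum(x * x for x in s)
    for a in range(3):
        for b in range(3):
            IR[a][b] += m * ((s2 if a == b else 0.0) - s[a] * s[b])
L_matrix = [sum(IR[a][b] * omega[b] for b in range(3)) for a in range(3)]
```

The agreement of the two computations is exactly the vector triple-product expansion that turns the particle sum into a matrix acting on ω.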
https://en.wikipedia.org/wiki/Dynamic_equilibrium_(mechanics)
Applied mechanics is the branch of science concerned with the motion of any substance that can be experienced or perceived by humans without the help of instruments.[1] In short, when mechanics concepts surpass being theoretical and are applied and executed, general mechanics becomes applied mechanics. It is this stark difference that makes applied mechanics an essential understanding for practical everyday life.[2] It has numerous applications in a wide variety of fields and disciplines, including but not limited to structural engineering, astronomy, oceanography, meteorology, hydraulics, mechanical engineering, aerospace engineering, nanotechnology, structural design, earthquake engineering, fluid dynamics, planetary sciences, and other life sciences.[3][4] Connecting research between numerous disciplines, applied mechanics plays an important role in both science and engineering.[1] Pure mechanics describes the response of bodies (solids and fluids) or systems of bodies to external forces, for bodies in either an initial state of rest or of motion. Applied mechanics bridges the gap between physical theory and its application to technology.
Applied mechanics is composed of two main categories: classical mechanics, the study of the mechanics of macroscopic solids, and fluid mechanics, the study of the mechanics of macroscopic fluids.[4] Each branch of applied mechanics contains its own subcategories.[4] Classical mechanics, divided into statics and dynamics, is further subdivided, with statics split into the study of rigid bodies and rigid structures, and dynamics split into kinematics and kinetics.[4] Like classical mechanics, fluid mechanics is also divided into two sections: statics and dynamics.[4] Within the practical sciences, applied mechanics is useful in formulating new ideas and theories, discovering and interpreting phenomena, and developing experimental and computational tools.[5] In the application of the natural sciences, mechanics was said to be complemented by thermodynamics, the study of heat and more generally energy, and electromechanics, the study of electricity and magnetism.
Engineering problems are generally tackled with applied mechanics through the application of theories of classical mechanics and fluid mechanics.[4] Because applied mechanics can be applied in engineering disciplines like civil engineering, mechanical engineering, aerospace engineering, materials engineering, and biomedical engineering, it is sometimes referred to as engineering mechanics.[4] Science and engineering are interconnected with respect to applied mechanics, as research in science is linked to research processes in the civil, mechanical, aerospace, materials, and biomedical engineering disciplines.[1] In civil engineering, applied mechanics' concepts can be applied to structural design and a variety of engineering sub-topics like structural, coastal, geotechnical, construction, and earthquake engineering.[4] In mechanical engineering, it can be applied in mechatronics and robotics, design and drafting, nanotechnology, machine elements, structural analysis, friction stir welding, and acoustical engineering.[4] In aerospace engineering, applied mechanics is used in aerodynamics, aerospace structural mechanics and propulsion, aircraft design, and flight mechanics.[4] In materials engineering, applied mechanics' concepts are used in thermoelasticity, elasticity theory, fracture and failure mechanisms, structural design optimisation, fracture and fatigue, active materials and composites, and computational mechanics.[6] Research in applied mechanics can be directly linked to biomedical engineering areas of interest like orthopaedics; biomechanics; human body motion analysis; soft tissue modelling of muscles, tendons, ligaments, and cartilage; biofluid mechanics; and dynamic systems, performance enhancement, and optimal control.[7] The first science with a theoretical foundation based in mathematics was mechanics; the underlying principles of mechanics were first delineated by Isaac Newton in his 1687 book Philosophiæ Naturalis Principia Mathematica.[3] One of the earliest works to define
applied mechanics as its own discipline was the three-volume Handbuch der Mechanik written by German physicist and engineer Franz Josef Gerstner.[8] The first seminal work on applied mechanics to be published in English was A Manual of Applied Mechanics in 1858 by English mechanical engineer William Rankine.[8][9] August Föppl, a German mechanical engineer and professor, published Vorlesungen über technische Mechanik in 1898, in which he introduced calculus to the study of applied mechanics.[8] Applied mechanics was established as a discipline separate from classical mechanics in the early 1920s with the publication of the Journal of Applied Mathematics and Mechanics, the creation of the Society of Applied Mathematics and Mechanics, and the first meeting of the International Congress of Applied Mechanics.[1] In 1921 Austrian scientist Richard von Mises started the Journal of Applied Mathematics and Mechanics (Zeitschrift für Angewandte Mathematik und Mechanik), and in 1922, with German scientist Ludwig Prandtl, founded the Society of Applied Mathematics and Mechanics (Gesellschaft für Angewandte Mathematik und Mechanik).[1] During a 1922 conference on hydrodynamics and aerodynamics in Innsbruck, Austria, Theodore von Kármán, a Hungarian engineer, and Tullio Levi-Civita, an Italian mathematician, met and decided to organize a conference on applied mechanics.[1] In 1924 the first meeting of the International Congress of Applied Mechanics was held in Delft, the Netherlands, attended by more than 200 scientists from around the world.[1][3] Since this first meeting the congress has been held every four years, except during World War II; the name of the meeting was changed to International Congress of Theoretical and Applied Mechanics in 1960.[1] Due to the unpredictable political landscape in Europe after the First World War and the upheaval of World War II, many European scientists and engineers emigrated to the United States.[1] Ukrainian engineer Stephan Timoshenko fled the Bolshevik Red Army in 1918 and eventually
emigrated to the U.S. in 1922; over the next twenty-two years he taught applied mechanics at the University of Michigan and Stanford University.[10] Timoshenko authored thirteen textbooks in applied mechanics, many considered the gold standard in their fields; he also founded the Applied Mechanics Division of the American Society of Mechanical Engineers in 1927 and is considered "America's Father of Engineering Mechanics."[10] In 1930 Theodore von Kármán left Germany and became the first director of the Aeronautical Laboratory at the California Institute of Technology; von Kármán would later co-found the Jet Propulsion Laboratory in 1944.[1] With the leadership of Timoshenko and von Kármán, the influx of talent from Europe, and the rapid growth of the aeronautical and defense industries, applied mechanics became a mature discipline in the U.S. by 1950.[1] Dynamics, the study of the motion of objects, can be further divided into two branches, kinematics and kinetics.[4] In classical mechanics, kinematics is the analysis of moving bodies using time, velocities, displacement, and acceleration.[4] Kinetics is the study of moving bodies through the lens of the effects of forces and masses.[4] In the context of fluid mechanics, fluid dynamics describes the flow and motion of fluids.[4] Statics is the study of bodies at rest.[4] Static analysis in classical mechanics can be broken down into two categories, non-deformable bodies and deformable bodies.[4] When studying non-deformable bodies, the forces acting on the rigid structure are analyzed.
When studying deformable bodies, the structure and the strength of its material are examined.[4] In the context of fluid mechanics, the resting state of the fluid and the pressures acting within it are taken into account.[4] Applied mechanics is the result of the practical application of various engineering and mechanical disciplines, as illustrated in the table below.[4] [Table of applications not reproduced.] Being one of the first sciences for which a systematic theoretical framework was developed, mechanics was spearheaded by Sir Isaac Newton's Principia (published in 1687).[3] It is the "divide and rule" strategy developed by Newton that helped to govern motion and split it into dynamics and statics.[3] The type of force, the type of matter, and the external forces acting on that matter dictate the "divide and rule" strategy within dynamic and static studies.[3] Archimedes' principle is a major one that contains many defining propositions pertaining to fluid mechanics. As stated by proposition 7 of Archimedes' principle, a solid that is heavier than the fluid it is placed in will descend to the bottom of the fluid.[11] If the solid is weighed within the fluid, it will be measured as lighter than its true weight by the weight of the fluid it displaced.[11] As stated in proposition 5, a solid lighter than the fluid it is placed in sinks only until it displaces fluid whose weight equals its own.[11] This section is based on the "AMR Subject Classification Scheme" from the journal Applied Mechanics Reviews.[12]
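The two Archimedean propositions above can be sketched with a few lines of arithmetic. All the numbers below (densities, volume) are made up for illustration:

```python
# Archimedes' principle: a solid denser than the fluid sinks (proposition 7),
# and its apparent weight in the fluid is reduced by the weight of the
# displaced fluid. Values here are illustrative, not from the text.
g = 9.81                 # gravitational acceleration, m/s^2
rho_fluid = 1000.0       # water, kg/m^3
rho_solid = 2700.0       # an aluminium-like solid, kg/m^3
volume = 0.002           # volume of the solid, m^3

weight = rho_solid * volume * g           # true weight of the solid
buoyant_force = rho_fluid * volume * g    # weight of the displaced fluid
apparent_weight = weight - buoyant_force  # what a scale reads under water

sinks = rho_solid > rho_fluid             # proposition 7: it descends
assert sinks and apparent_weight < weight
```

For a floating solid (proposition 5) the same bookkeeping applies, except the solid submerges only far enough that the displaced fluid's weight matches its own.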
https://en.wikipedia.org/wiki/Engineering_mechanics
Inchemistryandphysics,metastabilityis an intermediateenergetic statewithin adynamical systemother than the system'sstate of least energy. A ball resting in a hollow on a slope is a simple example of metastability. If the ball is only slightly pushed, it will settle back into its hollow, but a stronger push may start the ball rolling down the slope.Bowling pinsshow similar metastability by either merely wobbling for a moment or tipping over completely. A common example of metastability in science isisomerisation. Higher energy isomers are long lived because they are prevented from rearranging to their preferred ground state by (possibly large) barriers in thepotential energy. During a metastable state of finite lifetime, all state-describing parameters reach and hold stationary values. In isolation: The metastability concept originated in the physics offirst-order phase transitions. It then acquired new meaning in the study of aggregatedsubatomic particles(in atomic nuclei or in atoms) or in molecules, macromolecules or clusters of atoms and molecules. Later, it was borrowed for the study of decision-making and information transmission systems. Metastability is common in physics and chemistry – from anatom(many-body assembly) to statistical ensembles ofmolecules(viscous fluids,amorphous solids,liquid crystals,minerals, etc.) at molecular levels or as a whole (seeMetastable states of matterandgrain pilesbelow). The abundance of states is more prevalent as the systems grow larger and/or if the forces of their mutual interaction are spatially less uniform or more diverse. Indynamic systems(withfeedback) like electronic circuits, signal trafficking, decisional, neural and immune systems, thetime-invarianceof the active or reactive patterns with respect to the external influences defines stability and metastability (seebrain metastabilitybelow). 
In these systems, the equivalent of thermal fluctuations in molecular systems is the "white noise" that affects signal propagation and decision-making. Non-equilibrium thermodynamics is a branch of physics that studies the dynamics of statistical ensembles of molecules via unstable states. Being "stuck" in a thermodynamic trough without being at the lowest energy state is known as having kinetic stability or being kinetically persistent. The particular motion or kinetics of the atoms involved has resulted in getting stuck, despite there being preferable (lower-energy) alternatives. Metastable states of matter (also referred to as metastates) range from melting solids (or freezing liquids), boiling liquids (or condensing gases) and sublimating solids to supercooled liquids or superheated liquid-gas mixtures. Extremely pure, supercooled water stays liquid below 0 °C and remains so until applied vibrations or condensing seed doping initiates crystallization centers. This is a common situation for the droplets of atmospheric clouds. Metastable phases are common in condensed matter and crystallography. This is the case for anatase, a metastable polymorph of titanium dioxide, which, despite commonly being the first phase to form in many synthesis processes due to its lower surface energy, is always metastable, with rutile being the most stable phase at all temperatures and pressures.[1] As another example, diamond is a stable phase only at very high pressures, but is a metastable form of carbon at standard temperature and pressure. It can be converted to graphite (plus leftover kinetic energy), but only after overcoming an activation energy – an intervening hill. Martensite is a metastable phase used to control the hardness of most steel. Metastable polymorphs of silica are commonly observed.
In some cases, such as in the allotropes of solid boron, acquiring a sample of the stable phase is difficult.[2] The bonds between the building blocks of polymers such as DNA, RNA, and proteins are also metastable. Adenosine triphosphate (ATP) is a highly metastable molecule, colloquially described as being "full of energy", which can be used in many ways in biology.[3] Generally speaking, emulsions/colloidal systems and glasses are metastable. The metastability of silica glass, for example, is characterised by lifetimes on the order of 10⁹⁸ years[4] (as compared with the lifetime of the universe, which is thought to be around 1.3787×10¹⁰ years).[5] Sandpiles are one system which can exhibit metastability if a steep slope or tunnel is present. Sand grains form a pile due to friction. It is possible for an entire large sand pile to reach a point where it is stable, but the addition of a single grain causes large parts of it to collapse. The avalanche is a well-known problem with large piles of snow and ice crystals on steep slopes. In dry conditions, snow slopes act similarly to sandpiles. An entire mountainside of snow can suddenly slide due to the presence of a skier, or even a loud noise or vibration. Aggregated systems of subatomic particles described by quantum mechanics (quarks inside nucleons, nucleons inside atomic nuclei, electrons inside atoms, molecules, or atomic clusters) are found to have many distinguishable states. Of these, one (or a small degenerate set) is indefinitely stable: the ground state or global minimum. All other states besides the ground state (or those degenerate with it) have higher energies.[6] Of all these other states, the metastable states are the ones having lifetimes lasting at least 10² to 10³ times longer than the shortest-lived states of the set.[7] A metastable state is then long-lived (locally stable with respect to configurations of 'neighbouring' energies) but not eternal (as the global minimum is).
Being excited – of an energy above the ground state – it will eventually decay to a more stable state, releasing energy. Indeed, above absolute zero, all states of a system have a non-zero probability to decay; that is, to spontaneously fall into another state (usually lower in energy). One mechanism for this to happen is through tunnelling. Some energetic states of an atomic nucleus (having distinct spatial mass, charge, spin, and isospin distributions) are much longer-lived than others (nuclear isomers of the same isotope), e.g. technetium-99m.[8] The isotope tantalum-180m, although being a metastable excited state, is long-lived enough that it has never been observed to decay, with a half-life calculated to be at least 4.5×10¹⁶ years,[9][10] over 3 million times the current age of the universe. Some atomic energy levels are metastable. Rydberg atoms are an example of metastable excited atomic states. Transitions from metastable excited levels are typically those forbidden by electric dipole selection rules. This means that any transitions from this level are relatively unlikely to occur. In a sense, an electron that happens to find itself in a metastable configuration is trapped there. Since transitions from a metastable state are not impossible (merely less likely), the electron will eventually decay to a less energetic state, typically by an electric quadrupole transition, or often by non-radiative de-excitation (e.g., collisional de-excitation). This slow-decay property of a metastable state is apparent in phosphorescence, the kind of photoluminescence seen in glow-in-the-dark toys that can be charged by first being exposed to bright light. Whereas spontaneous emission in atoms has a typical timescale on the order of 10⁻⁸ seconds, the decay of metastable states can typically take milliseconds to minutes, and so light emitted in phosphorescence is usually both weak and long-lasting.
In chemical systems, a system of atoms or molecules involving a change inchemical bondcan be in a metastable state, which lasts for a relatively long period of time. Molecular vibrations andthermal motionmake chemical species at the energetic equivalent of the top of a round hill very short-lived. Metastable states that persist for many seconds (or years) are found in energeticvalleyswhich are not the lowest possible valley (point 1 in illustration). A common type of metastability isisomerism. The stability or metastability of a given chemical system depends on its environment, particularlytemperatureandpressure. The difference between producing a stable vs. metastable entity can have important consequences. For instances, having the wrong crystalpolymorphcan result in failure of a drug while in storage between manufacture and administration.[11]The map of which state is the most stable as a function of pressure, temperature and/or composition is known as aphase diagram. In regions where a particular state is not the most stable, it may still be metastable.Reaction intermediatesare relatively short-lived, and are usually thermodynamically unstable rather than metastable. TheIUPACrecommends referring to these astransientrather than metastable.[12] Metastability is also used to refer to specific situations in mass spectrometry[13]and spectrochemistry.[14] A digital circuit is supposed to be found in a small number of stable digital states within a certain amount of time after an input change. However, if an input changes at the wrong moment a digital circuit which employs feedback (even a simple circuit such as aflip-flop) canenter a metastable stateand take an unbounded length of time to finally settle into a fully stable digital state. Metastability in the brainis a phenomenon studied incomputational neuroscienceto elucidate how the human brain recognizes patterns. Here, the term metastability is used rather loosely. 
There is no lower-energy state, but there are semi-transient signals in the brain that persist for a while and are different from the usual equilibrium state. Gilbert Simondon invokes a notion of metastability for his understanding of a system that, rather than resolving its tensions and potentials for transformation into a single final state, 'conserves the tensions in the equilibrium of metastability instead of nullifying them in the equilibrium of stability', as a critique of cybernetic notions of homeostasis.[15]
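The energy-landscape picture that runs through this article – a ball in a hollow on a slope, a valley that is not the lowest valley, an activation barrier in between – can be sketched numerically. The potential below is an arbitrary asymmetric double well chosen for illustration; its shallow well plays the role of the metastable state and its deep well the ground state:

```python
import numpy as np

# An asymmetric double-well potential (made up for illustration): the
# shallower well is a metastable state, the deeper well is the ground
# state, and the bump between them is the activation barrier.
x = np.linspace(-2.0, 2.0, 4001)
V = x**4 - 2.0 * x**2 + 0.3 * x

# Local minima on the grid: points lower than both neighbours.
interior = (V[1:-1] < V[:-2]) & (V[1:-1] < V[2:])
minima = np.where(interior)[0] + 1

ground = minima[np.argmin(V[minima])]   # global minimum (ground state)
meta = minima[np.argmax(V[minima])]     # higher-energy local minimum
barrier = V[min(ground, meta):max(ground, meta) + 1].max()

assert len(minima) == 2
assert V[meta] > V[ground]              # the metastable state lies higher
assert barrier > V[meta]                # and is separated by an energy hill
```

A system sitting in the shallow well is locally stable: small perturbations return it to the well, but a perturbation larger than the barrier height (or tunnelling, in the quantum case) lets it decay to the ground state.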
https://en.wikipedia.org/wiki/Metastability
Instaticsandstructural mechanics, a structure isstatically indeterminatewhen theequilibriumequations – force and moment equilibrium conditions – are insufficient for determining theinternal forcesandreactionson that structure.[1][2] Based onNewton's laws of motion, the equilibrium equations available for a two-dimensional body are:[2] In thebeamconstruction on the right, the four unknown reactions areVA,VB,VC, andHA. The equilibrium equations are:[2] Since there are four unknown forces (orvariables) (VA,VB,VC, andHA) but only three equilibrium equations, this system ofsimultaneous equationsdoes not have a unique solution. The structure is therefore classified asstatically indeterminate. To solve statically indeterminate systems (determine the various moment and force reactions within it), one considers the material properties and compatibility indeformations. If the support atBis removed, the reactionVBcannot occur, and the system becomesstatically determinate(orisostatic).[3]Note that the system iscompletely constrainedhere. The system becomes anexact constraintkinematic coupling. The solution to the problem is:[2] If, in addition, the support atAis changed to a roller support, the number of reactions are reduced to three (withoutHA), but the beam can now be moved horizontally; the system becomesunstableorpartly constrained—amechanismrather than a structure. In order to distinguish between this and the situation when a system under equilibrium is perturbed and becomes unstable, it is preferable to use the phrasepartly constrainedhere. In this case, the two unknownsVAandVCcan be determined by resolving the vertical force equation and the moment equation simultaneously. The solution yields the same results as previously obtained. 
However, it is not possible to satisfy the horizontal force equation unless Fh = 0.[2] Descriptively, a statically determinate structure can be defined as a structure where, if it is possible to find internal actions in equilibrium with external loads, those internal actions are unique. The structure has no possible states of self-stress, i.e. internal forces in equilibrium with zero external loads are not possible. Statical indeterminacy, however, is the existence of a non-trivial (non-zero) solution to the homogeneous system of equilibrium equations. It indicates the possibility of self-stress (stress in the absence of an external load) that may be induced by mechanical or thermal action. Mathematically, this requires a stiffness matrix to have full rank. A statically indeterminate structure can only be analyzed by including further information, such as material properties and deflections. Numerically, this can be achieved with matrix structural analysis, the finite element method (FEM), or the moment distribution method (Hardy Cross). Practically, a structure is called 'statically overdetermined' when it comprises more mechanical constraints – like walls, columns or bolts – than absolutely necessary for stability.
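The counting argument for the beam – four unknown reactions VA, VB, VC, HA but only three planar equilibrium equations – can be demonstrated as a rank computation. The support positions and the load below are made-up numbers (the article does not give the beam's geometry); only the rank argument matters:

```python
import numpy as np

# Unknown order: [V_A, V_B, V_C, H_A]. Assume (hypothetically) supports at
# x = 0, 4, 10 m and a downward load P = 10 kN at x = 2 m.
P, xP = 10.0, 2.0
A = np.array([[0.0, 0.0, 0.0, 1.0],    # sum of horizontal forces = 0
              [1.0, 1.0, 1.0, 0.0],    # sum of vertical forces = 0
              [0.0, 4.0, 10.0, 0.0]])  # sum of moments about A = 0
b = np.array([0.0, P, P * xP])

# Three equations, four unknowns: rank 3 < 4, so no unique solution —
# the beam is statically indeterminate.
assert np.linalg.matrix_rank(A) == 3

# Remove support B (drop the V_B column): the system becomes determinate
# and the remaining reactions follow from equilibrium alone.
A_det = A[:, [0, 2, 3]]
V_A, V_C, H_A = np.linalg.solve(A_det, b)
assert np.isclose(V_A + V_C, P)        # vertical equilibrium recovered
```

With support B removed the square system is invertible, which is exactly the isostatic case described in the text.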
https://en.wikipedia.org/wiki/Statically_indeterminate
Staticsis the branch ofclassical mechanicsthat is concerned with the analysis offorceandtorqueacting on aphysical systemthat does not experience anacceleration, but rather is inequilibriumwith its environment. IfF{\displaystyle {\textbf {F}}}is the total of the forces acting on the system,m{\displaystyle m}is the mass of the system anda{\displaystyle {\textbf {a}}}is the acceleration of the system,Newton's second lawstates thatF=ma{\displaystyle {\textbf {F}}=m{\textbf {a}}\,}(the bold font indicates avectorquantity, i.e. one with bothmagnitudeanddirection). Ifa=0{\displaystyle {\textbf {a}}=0}, thenF=0{\displaystyle {\textbf {F}}=0}. As for a system in static equilibrium, the acceleration equals zero, the system is either at rest, or itscenter of massmoves at constantvelocity. The application of the assumption of zero acceleration to the summation ofmomentsacting on the system leads toM=Iα=0{\displaystyle {\textbf {M}}=I\alpha =0}, whereM{\displaystyle {\textbf {M}}}is the summation of all moments acting on the system,I{\displaystyle I}is the moment of inertia of the mass andα{\displaystyle \alpha }is the angular acceleration of the system. For a system whereα=0{\displaystyle \alpha =0}, it is also true thatM=0.{\displaystyle {\textbf {M}}=0.} Together, the equationsF=ma=0{\displaystyle {\textbf {F}}=m{\textbf {a}}=0}(the 'first condition for equilibrium') andM=Iα=0{\displaystyle {\textbf {M}}=I\alpha =0}(the 'second condition for equilibrium') can be used to solve for unknown quantities acting on the system. Archimedes(c. 287–c. 212 BC) did pioneering work in statics.[1][2]Later developments in the field of statics are found in works ofThebit.[3] Forceis the action of one body on another. A force is either a push or a pull, and it tends to move a body in the direction of its action. The action of a force is characterized by its magnitude, by the direction of its action, and by itspoint of application(orpoint of contact). 
Thus, force is a vector quantity, because its effect depends on the direction as well as on the magnitude of the action.[4] Forces are classified as either contact or body forces. A contact force is produced by direct physical contact; an example is the force exerted on a body by a supporting surface. A body force is generated by virtue of the position of a body within a force field such as a gravitational, electric, or magnetic field and is independent of contact with any other body; an example of a body force is the weight of a body in the Earth's gravitational field.[5] In addition to the tendency to move a body in the direction of its application, a force can also tend to rotate a body about an axis. The axis may be any line which neither intersects nor is parallel to the line of action of the force. This rotational tendency is known as the moment of force (M). Moment is also referred to as torque. The magnitude of the moment of a force at a point O is equal to the perpendicular distance from O to the line of action of F, multiplied by the magnitude of the force: M = F·d, where F is the magnitude of the force and d is the perpendicular distance (the moment arm). The direction of the moment is given by the right-hand rule, where counterclockwise (CCW) is out of the page, and clockwise (CW) is into the page. The moment direction may be accounted for by using a stated sign convention, such as a plus sign (+) for counterclockwise moments and a minus sign (−) for clockwise moments, or vice versa. Moments can be added together as vectors. In vector format, the moment can be defined as the cross product between the radius vector, r (the vector from point O to the line of action), and the force vector, F: M = r × F.[6] Varignon's theorem states that the moment of a force about any point is equal to the sum of the moments of the components of the force about the same point. The static equilibrium of a particle is an important concept in statics. A particle is in equilibrium only if the resultant of all forces acting on the particle is equal to zero.
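As a sketch of this equilibrium condition: a particle held by three cables supporting a weight must satisfy T₁u₁ + T₂u₂ + T₃u₃ + W = 0, which is three scalar equations in the three unknown tensions. The cable directions and the weight below are hypothetical values chosen so the system has a physically sensible (all-pulling) solution:

```python
import numpy as np

def unit(v):
    """Unit vector in the direction of v."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Hypothetical geometry: unit vectors from the particle toward each
# cable's anchor point, and the supported weight in newtons.
u1 = unit([-1.0, -1.0, 2.0])
u2 = unit([1.0, -1.0, 2.0])
u3 = unit([0.0, 2.0, 2.0])
W = np.array([0.0, 0.0, -500.0])

# Equilibrium: T1*u1 + T2*u2 + T3*u3 + W = 0.
# Columns of A are the cable directions; solve A @ [T1, T2, T3] = -W.
A = np.column_stack([u1, u2, u3])
T = np.linalg.solve(A, -W)

assert np.all(T > 0)                        # cables can only pull
assert np.allclose(A @ T + W, np.zeros(3))  # resultant force is zero
```

The same setup applies to the hoist and guy-wire examples: as long as the three cable directions are linearly independent, the three force-balance equations determine the tensions uniquely.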
In a rectangular coordinate system the equilibrium equations can be represented by three scalar equations, where the sums of forces in all three directions are equal to zero. Anengineeringapplication of this concept is determining the tensions of up to three cables under load, for example the forces exerted on each cable of a hoist lifting an object or ofguy wiresrestraining ahot air balloonto the ground.[7] In classical mechanics,moment of inertia, also called mass moment, rotational inertia, polar moment of inertia of mass, or the angular mass, (SI units kg·m²) is a measure of an object's resistance to changes to its rotation. It is the inertia of a rotating body with respect to its rotation. The moment of inertia plays much the same role in rotational dynamics as mass does in linear dynamics, describing the relationship between angular momentum and angular velocity, torque and angular acceleration, and several other quantities. The symbols I and J are usually used to refer to the moment of inertia or polar moment of inertia. While a simple scalar treatment of the moment of inertia suffices for many situations, a more advanced tensor treatment allows the analysis of such complicated systems as spinning tops and gyroscopic motion. The concept was introduced byLeonhard Eulerin his 1765 bookTheoria motus corporum solidorum seu rigidorum; he discussed the moment of inertia and many related concepts, such as the principal axis of inertia. Statics is used in the analysis of structures, for instance inarchitecturalandstructural engineering.Strength of materialsis a related field of mechanics that relies heavily on the application of static equilibrium. A key concept is thecenter of gravityof a body at rest: it represents an imaginary point at which all themassof a body resides. The position of the point relative to thefoundationson which a body lies determines itsstabilityin response to external forces. 
If the center of gravity exists outside the foundations, then the body is unstable because there is a torque acting: any small disturbance will cause the body to fall or topple. If the center of gravity exists within the foundations, the body is stable since no net torque acts on the body. If the center of gravity coincides with the foundations, then the body is said to bemetastable. Hydrostatics, also known asfluid statics, is the study of fluids at rest (i.e. in static equilibrium). The characteristic of any fluid at rest is that the force exerted on any particle of the fluid is the same at all points at the same depth (or altitude) within the fluid. If the net force is greater than zero the fluid will move in the direction of the resulting force. This concept was first formulated in a slightly extended form byFrenchmathematicianandphilosopherBlaise Pascalin 1647 and became known asPascal's law. It has many important applications inhydraulics.Archimedes,Abū Rayhān al-Bīrūnī,Al-Khazini[8]andGalileo Galileiwere also major figures in the development of hydrostatics. "Using a whole body of mathematical methods (not only those inherited from the antique theory of ratios and infinitesimal techniques, but also the methods of the contemporary algebra and fine calculation techniques), Arabic scientists raised statics to a new, higher level. The classical results of Archimedes in the theory of the centre of gravity were generalized and applied to three-dimensional bodies, the theory of ponderable lever was founded and the 'science of gravity' was created and later further developed in medieval Europe. The phenomena of statics were studied by using the dynamic approach so that two trends - statics and dynamics - turned out to be inter-related within a single science, mechanics. The combination of the dynamic approach with Archimedean hydrostatics gave birth to a direction in science which may be called medieval hydrodynamics. [...] 
Numerous experimental methods were developed for determining the specific weight, which were based, in particular, on the theory of balances and weighing. The classical works of al-Biruni and al-Khazini may be considered the beginning of the application of experimental methods inmedieval science."
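The hydrostatic statement above – that the force on a fluid particle is the same at all points at the same depth – follows from the pressure-depth relation p = p₀ + ρgh. A minimal sketch, with illustrative values for water under atmospheric pressure:

```python
# Hydrostatic pressure: in a fluid at rest, pressure depends only on depth,
# p = p0 + rho * g * h. The values below are illustrative.
rho = 1000.0        # density of water, kg/m^3
g = 9.81            # gravitational acceleration, m/s^2
p0 = 101_325.0      # atmospheric pressure at the free surface, Pa

def pressure(depth_m):
    """Absolute pressure at a given depth below the surface."""
    return p0 + rho * g * depth_m

# Pressure grows linearly with depth, independent of horizontal position.
assert pressure(10.0) > pressure(3.0)
gauge = pressure(10.0) - p0     # gauge pressure at 10 m: rho*g*h = 98100 Pa
```

Pascal's law adds that a pressure change applied to an enclosed fluid is transmitted undiminished to every point of the fluid, which is the principle exploited in hydraulic machinery.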
https://en.wikipedia.org/wiki/Statics