Perceptual Evaluation of Video Quality (PEVQ) is an end-to-end (E2E) measurement algorithm to score the picture quality of a video presentation by means of a 5-point mean opinion score (MOS). It is, therefore, a video quality model. PEVQ was benchmarked by the Video Quality Experts Group (VQEG) in the course of the Multimedia Test Phase 2007–2008. Based on the performance results, in which the accuracy of PEVQ was tested against ratings obtained by human viewers, PEVQ became part of the new International Standard.[1] The measurement algorithm can be applied to analyze visible artifacts caused by a digital video encoding/decoding (or transcoding) process, radio- or IP-based transmission networks, and end-user devices. Application scenarios address next generation networking and mobile services and include IPTV (standard-definition television and HDTV), streaming video, mobile TV, video telephony, video conferencing and video messaging. The measurement paradigm is to assess degradations of a decoded video sequence output from the network (for example as received by a TV set-top box) in comparison to the original reference picture (broadcast from the studio). Consequently, the setup is referred to as end-to-end (E2E) quality testing. The development of the picture quality analysis algorithms available today started with still image models, which were later enhanced to also cover motion pictures. PEVQ is a full-reference algorithm (see the classification of models in video quality) and analyzes the picture pixel-by-pixel after a temporal alignment (also referred to as "temporal registration") of corresponding frames of the reference and test signal. PEVQ MOS results range from 1 (bad) to 5 (excellent) and indicate the perceived quality of the decoded sequence. PEVQ is based on modeling the behavior of the human visual system. In addition to an overall MOS score, PEVQ quantifies abnormalities in the video signal by a variety of KPIs, including PSNR, distortion indicators and lip-sync delay.
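PEVQ itself is proprietary, but the full-reference paradigm described above can be illustrated with a minimal sketch: temporally register the degraded sequence against the reference, compare the aligned frames, pool the per-frame distortion over time, and map the result onto a 1–5 scale. The brute-force offset search, the use of mean absolute error, and the exponential mapping below are purely illustrative assumptions, not part of PEVQ.

```python
import numpy as np

def align_and_score(reference, degraded, max_offset=5):
    """Toy full-reference scoring: temporal alignment + per-frame comparison.

    reference, degraded: arrays of shape (frames, height, width), same size.
    This is NOT PEVQ; it only illustrates the full-reference measurement
    paradigm (alignment, frame-by-frame analysis, pooling to a 1-5 score).
    """
    ref = reference.astype(np.float64)
    deg = degraded.astype(np.float64)
    n = min(len(ref), len(deg)) - 2 * max_offset

    # Temporal registration: pick the frame offset that minimises the
    # average absolute difference between corresponding frames.
    def cost(o):
        r = ref[max_offset:max_offset + n]
        d = deg[max_offset + o:max_offset + o + n]
        return np.mean(np.abs(r - d))
    best = min(range(-max_offset, max_offset + 1), key=cost)

    # Pixel-by-pixel comparison of the aligned frames, pooled over time.
    distortion = cost(best)  # 0 = identical, larger = worse

    # Illustrative (made-up) mapping of pooled distortion onto a 1-5 scale.
    mos_like = 1.0 + 4.0 * np.exp(-distortion / 10.0)
    return best, mos_like
```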
https://en.wikipedia.org/wiki/Perceptual_Evaluation_of_Video_Quality
The structural similarity index measure (SSIM) is a method for predicting the perceived quality of digital television and cinematic pictures, as well as other kinds of digital images and videos. It is also used for measuring the similarity between two images. The SSIM index is a full reference metric; in other words, the measurement or prediction of image quality is based on an initial uncompressed or distortion-free image as reference. SSIM is a perception-based model that considers image degradation as perceived change in structural information, while also incorporating important perceptual phenomena, including both luminance masking and contrast masking terms. This distinguishes it from other techniques such as mean squared error (MSE) or peak signal-to-noise ratio (PSNR), which instead estimate absolute errors. Structural information is the idea that the pixels have strong inter-dependencies, especially when they are spatially close. These dependencies carry important information about the structure of the objects in the visual scene. Luminance masking is a phenomenon whereby image distortions (in this context) tend to be less visible in bright regions, while contrast masking is a phenomenon whereby distortions become less visible where there is significant activity or "texture" in the image. The predecessor of SSIM was called the Universal Quality Index (UQI), or Wang–Bovik index, which was developed by Zhou Wang and Alan Bovik in 2001. This evolved, through their collaboration with Hamid Sheikh and Eero Simoncelli, into the current version of SSIM, which was published in April 2004 in the IEEE Transactions on Image Processing.[1] In addition to defining the SSIM quality index, the paper provides a general context for developing and evaluating perceptual quality measures, including connections to human visual neurobiology and perception, and direct validation of the index against human subject ratings. The basic model was developed in the Laboratory for Image and Video Engineering (LIVE) at The University of Texas at Austin and further developed jointly with the Laboratory for Computational Vision (LCV) at New York University. Further variants of the model have been developed in the Image and Visual Computing Laboratory at the University of Waterloo and have been commercially marketed. SSIM subsequently found strong adoption in the image processing community and in the television and social media industries. The 2004 SSIM paper has been cited over 50,000 times according to Google Scholar,[2] making it one of the most highly cited papers in the image processing and video engineering fields. It was recognized with the IEEE Signal Processing Society Best Paper Award for 2009.[3] It also received the IEEE Signal Processing Society Sustained Impact Award for 2016, indicative of a paper having an unusually high impact for at least 10 years following its publication. Because of its high adoption by the television industry, the authors of the original SSIM paper were each accorded a Primetime Engineering Emmy Award in 2015 by the Television Academy. The SSIM index is calculated between two windows of pixel values $x$ and $y$ of common size, taken from corresponding locations in the two images to be compared. These SSIM values can be aggregated across the full images by averaging or other variations.
In one simple special case, further explained in the next section, the SSIM measure between $x$ and $y$ is:[4]

$$\text{SSIM}(x,y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$

with $\mu_x$ and $\mu_y$ the sample means of $x$ and $y$, $\sigma_x^2$ and $\sigma_y^2$ their sample variances, $\sigma_{xy}$ their sample covariance, and $c_1 = (k_1 L)^2$ and $c_2 = (k_2 L)^2$ two constants that stabilize the division ($L$ being the dynamic range of the pixel values, with $k_1 = 0.01$ and $k_2 = 0.03$ by default). The SSIM formula is based on three comparison measurements between the samples of $x$ and $y$: luminance ($l$), contrast ($c$), and structure ($s$). The individual comparison functions are:[4]

$$l(x,y) = \frac{2\mu_x\mu_y + c_1}{\mu_x^2 + \mu_y^2 + c_1}, \qquad c(x,y) = \frac{2\sigma_x\sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2}, \qquad s(x,y) = \frac{\sigma_{xy} + c_3}{\sigma_x\sigma_y + c_3}.$$

The SSIM for each block is then a weighted combination of those comparative measures:

$$\text{SSIM}(x,y) = l(x,y)^\alpha \cdot c(x,y)^\beta \cdot s(x,y)^\gamma$$

Choosing the third stabilizing constant as $c_3 = c_2/2$ leads to a simplification when combining the $c$ and $s$ components with equal exponents ($\beta = \gamma$), as the numerator of $c$ is then twice the denominator of $s$, leading to a cancellation that leaves just a factor of 2. Setting the weights (exponents) $\alpha, \beta, \gamma$ to 1, the formula can then be reduced to the special case shown above. SSIM satisfies the identity of indiscernibles and symmetry properties, but not the triangle inequality or non-negativity, and thus is not a distance function. However, under certain conditions, SSIM may be converted to a normalized root MSE measure, which is a distance function.[5] The square of such a function is not convex, but is locally convex and quasiconvex,[5] making SSIM a feasible target for optimization. In order to evaluate the image quality, this formula is usually applied only on luma, although it may also be applied on color (e.g., RGB) values or chromatic (e.g., YCbCr) values. The resultant SSIM index is a decimal value between −1 and 1, where 1 indicates perfect similarity, 0 indicates no similarity, and −1 indicates perfect anti-correlation. For an image, it is typically calculated using a sliding Gaussian window of size 11×11 or a block window of size 8×8. The window can be displaced pixel-by-pixel on the image to create an SSIM quality map of the image. In the case of video quality assessment,[6] the authors propose to use only a subgroup of the possible windows to reduce the complexity of the calculation.
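The per-window computation follows directly from the formulas above. The sketch below assumes two equally sized grayscale windows with an 8-bit dynamic range ($L = 255$) and the default constants $k_1 = 0.01$, $k_2 = 0.03$, and evaluates the simplified form with $c_3 = c_2/2$ and unit exponents; it is not a full sliding-window implementation.

```python
import numpy as np

def ssim_window(x, y, L=255, k1=0.01, k2=0.03):
    """SSIM between two equally sized grayscale windows
    (simplified form, c3 = c2/2 and alpha = beta = gamma = 1)."""
    x = np.asarray(x, dtype=np.float64).ravel()
    y = np.asarray(y, dtype=np.float64).ravel()
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2

    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()

    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den

# Averaging ssim_window over an 8x8 or 11x11 sliding window would yield
# the usual mean-SSIM score (or SSIM quality map) for a whole image.
```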
A more advanced form of SSIM, called Multiscale SSIM (MS-SSIM),[4] is conducted over multiple scales through a process of multiple stages of sub-sampling, reminiscent of multiscale processing in the early vision system. It has been shown to perform equally well or better than SSIM on different subjective image and video databases.[4][7][8] Three-component SSIM (3-SSIM) is a form of SSIM that takes into account the fact that the human eye can see differences more precisely on textured or edge regions than on smooth regions.[9] The resulting metric is calculated as a weighted average of SSIM for three categories of regions: edges, textures, and smooth regions. The proposed weighting is 0.5 for edges and 0.25 each for the textured and smooth regions. The authors mention that a 1/0/0 weighting (ignoring anything but edge distortions) leads to results that are closer to subjective ratings. This suggests that edge regions play a dominant role in image quality perception. The authors of 3-SSIM have also extended the model into four-component SSIM (4-SSIM). The edge types are further subdivided into preserved and changed edges by their distortion status. The proposed weighting is 0.25 for all four components.[10] Structural dissimilarity (DSSIM) may be derived from SSIM, though it does not constitute a distance function, as the triangle inequality is not necessarily satisfied:

$$\text{DSSIM}(x,y) = \frac{1 - \text{SSIM}(x,y)}{2}$$

It is worth noting that the original version of SSIM was designed to measure the quality of still images. It does not contain any parameters directly related to temporal effects of human perception and human judgment.[7] A common practice is to calculate the average SSIM value over all frames in the video sequence. However, several temporal variants of SSIM have been developed.[11][6][12] The complex wavelet transform variant of SSIM (CW-SSIM) is designed to deal with issues of image scaling, translation and rotation. Instead of giving low scores to images with such distortions, CW-SSIM takes advantage of the complex wavelet transform and therefore yields higher scores for such images. CW-SSIM is defined as follows:

$$\text{CW-SSIM}(c_x, c_y) = \left(\frac{2\sum_{i=1}^{N}|c_{x,i}||c_{y,i}| + K}{\sum_{i=1}^{N}|c_{x,i}|^2 + \sum_{i=1}^{N}|c_{y,i}|^2 + K}\right)\left(\frac{2\left|\sum_{i=1}^{N}c_{x,i}c_{y,i}^*\right| + K}{2\sum_{i=1}^{N}|c_{x,i}c_{y,i}^*| + K}\right)$$

where $c_x$ is the complex wavelet transform of the signal $x$ and $c_y$ is the complex wavelet transform of the signal $y$. Additionally, $K$ is a small positive number used for the purposes of numerical stability. Ideally, it should be zero. Like SSIM, CW-SSIM has a maximum value of 1. The maximum value of 1 indicates that the two signals are perfectly structurally similar, while a value of 0 indicates no structural similarity.[13]
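Given the complex wavelet coefficients of the two signals, the CW-SSIM expression above is a direct computation. The sketch below assumes the coefficients are already available as complex NumPy arrays (obtaining them, e.g. with a steerable pyramid or dual-tree complex wavelet transform, is outside the scope of this sketch) and uses an arbitrarily chosen small stabilizing constant K.

```python
import numpy as np

def cw_ssim(cx, cy, K=1e-8):
    """CW-SSIM between two sets of complex wavelet coefficients cx, cy
    (complex arrays of equal length), following the formula above."""
    cx = np.asarray(cx, dtype=complex).ravel()
    cy = np.asarray(cy, dtype=complex).ravel()

    # Magnitude term: consistency of coefficient magnitudes.
    mag_term = (2 * np.sum(np.abs(cx) * np.abs(cy)) + K) / \
               (np.sum(np.abs(cx) ** 2) + np.sum(np.abs(cy) ** 2) + K)

    # Phase term: consistency of relative phase between the two signals.
    cross = cx * np.conj(cy)
    phase_term = (2 * np.abs(np.sum(cross)) + K) / \
                 (2 * np.sum(np.abs(cross)) + K)
    return mag_term * phase_term
```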
The SSIMPLUS index is based on SSIM and is a commercially available tool.[14] It extends SSIM's capabilities, mainly to target video applications. It provides scores in the range of 0–100, linearly matched to human subjective ratings. It also allows adapting the scores to the intended viewing device and comparing video across different resolutions and contents. According to its authors, SSIMPLUS achieves higher accuracy and higher speed than other image and video quality metrics. However, no independent evaluation of SSIMPLUS has been performed, as the algorithm itself is not publicly available. In order to further investigate the standard discrete SSIM from a theoretical perspective, the continuous SSIM (cSSIM)[15] has been introduced and studied in the context of radial basis function interpolation. SSIMULACRA and SSIMULACRA2 are variants of SSIM developed by Cloudinary with the goal of fitting subjective opinion data. The variants operate in the XYB color space and combine MS-SSIM with two types of asymmetric error maps for blockiness/ringing and smoothing/blur, which are common compression artifacts. SSIMULACRA2 is part of libjxl, the reference implementation of JPEG XL.[16][17] The r* cross-correlation metric is based on the variance metrics of SSIM. It is defined as $r^*(x,y) = \sigma_{xy}/(\sigma_x\sigma_y)$ when $\sigma_x\sigma_y \neq 0$, 1 when both standard deviations are zero, and 0 when only one of them is zero. It has found use in analyzing human response to contrast-detail phantoms.[18] SSIM has also been used on the gradient of images, making it "G-SSIM". G-SSIM is especially useful for blurred images.[19] The modifications above can be combined. For example, 4-G-r* is a combination of 4-SSIM, G-SSIM, and r*. It is able to reflect radiologist preference for images much better than the other SSIM variants tested.[20] SSIM has applications in a variety of different problems. Due to its popularity, SSIM is often compared to other metrics, including simpler metrics such as MSE and PSNR, and other perceptual image and video quality metrics. SSIM has been repeatedly shown to significantly outperform MSE and its derivatives in accuracy, including research by its own authors and others.[7][22][23][24][25][26] A paper by Dosselmann and Yang claims that the performance of SSIM is "much closer to that of the MSE" than usually assumed. While they do not dispute the advantage of SSIM over MSE, they state an analytical and functional dependency between the two metrics.[8] According to their research, SSIM has been found to correlate as well as MSE-based methods on subjective databases other than the databases from SSIM's creators. As an example, they cite Reibman and Poole, who found that MSE outperformed SSIM on a database containing packet-loss-impaired video.[27] In another paper, an analytical link between PSNR and SSIM was identified.[28]
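The r* cross-correlation metric defined above amounts to a guarded normalized covariance over a window. A minimal sketch (window contents as arrays, population statistics assumed):

```python
import numpy as np

def r_star(x, y):
    """r*(x, y) = sigma_xy / (sigma_x * sigma_y), with the special cases
    1 when both standard deviations are zero and 0 when only one is zero."""
    x = np.asarray(x, dtype=np.float64).ravel()
    y = np.asarray(y, dtype=np.float64).ravel()
    sx, sy = x.std(), y.std()
    if sx == 0 and sy == 0:
        return 1.0          # both windows are flat: treated as identical
    if sx == 0 or sy == 0:
        return 0.0          # only one window is flat: no correlation
    cov_xy = ((x - x.mean()) * (y - y.mean())).mean()
    return cov_xy / (sx * sy)
```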
https://en.wikipedia.org/wiki/Structural_similarity_index_measure
Subjective video quality is video quality as experienced by humans. It is concerned with how video is perceived by a viewer (also called "observer" or "subject") and designates their opinion on a particular video sequence. It is related to the field of Quality of Experience. Measuring subjective video quality is necessary because objective quality assessment algorithms such as PSNR have been shown to correlate poorly with subjective ratings. Subjective ratings may also be used as ground truth to develop new algorithms. Subjective video quality tests are psychophysical experiments in which a number of viewers rate a given set of stimuli. These tests are quite expensive in terms of time (preparation and running) and human resources and must therefore be carefully designed. In subjective video quality tests, typically, SRCs ("Sources", i.e. original video sequences) are treated with various conditions (HRCs, for "Hypothetical Reference Circuits") to generate PVSs ("Processed Video Sequences").[1] The main idea of measuring subjective video quality is similar to the mean opinion score (MOS) evaluation for audio. To evaluate the subjective video quality of a video processing system, a number of steps are typically taken, from selecting source material and processing conditions to collecting and analyzing viewer ratings. Many parameters of the viewing conditions may influence the results, such as room illumination, display type, brightness, contrast, resolution, viewing distance, and the age and educational level of viewers. It is therefore advised to report this information along with the obtained ratings. Typically, a system should be tested with a representative number of different contents and content characteristics. For example, one may select excerpts from contents of different genres, such as action movies, news shows, and cartoons. The length of the source video depends on the purpose of the test, but typically, sequences of no less than 10 seconds are used. The amount of motion and spatial detail should also cover a broad range. This ensures that the test contains sequences of different complexity. Sources should be of pristine quality. There should be no visible coding artifacts or other properties that would lower the quality of the original sequence. The design of the HRCs depends on the system under study. Typically, multiple independent variables are introduced at this stage, and they are varied over a number of levels. For example, to test the quality of a video codec, independent variables may be the video encoding software, a target bitrate, and the target resolution of the processed sequence. It is advised to select settings that result in ratings which cover the full quality range. In other words, assuming an Absolute Category Rating scale, the test should show sequences that viewers would rate from bad to excellent. Viewers are also called "observers" or "subjects". A certain minimum number of viewers should be invited to a study, since a larger number of subjects increases the reliability of the experiment outcome, for example by reducing the standard deviation of averaged ratings. Furthermore, there is a risk of having to exclude subjects for unreliable behavior during rating. The minimum number of subjects required for a subjective video quality study is not strictly defined. According to the ITU-T, any number between 4 and 40 is possible, where 4 is the absolute minimum for statistical reasons, and inviting more than 40 subjects has no added value. In general, at least 15 observers should participate in the experiment.
They should not be directly involved in picture quality evaluation as part of their work and should not be experienced assessors.[2] In other documents, it is also claimed that a minimum of 10 subjects is needed to obtain meaningful averaged ratings.[3] However, most recommendations for the number of subjects have been designed for measuring video quality encountered by a home television or PC user, where the range and diversity of distortions tend to be limited (e.g., to encoding artifacts only). Given the large range and diversity of impairments that may occur on videos captured with mobile devices and/or transmitted over wireless networks, a larger number of human subjects may generally be required. Brunnström and Barkowsky have provided calculations for estimating the minimum number of subjects necessary based on existing subjective tests.[4] They claim that, in order to ensure statistically significant differences when comparing ratings, a larger number of subjects than usually recommended may be needed. Viewers should be non-experts in the sense of not being professionals in the field of video coding or related domains. This requirement is introduced to avoid potential subject bias.[2] Typically, viewers are screened for normal vision or corrected-to-normal vision using Snellen charts. Color blindness is often tested with Ishihara plates.[2] There is an ongoing discussion in the QoE community as to whether a viewer's cultural, social, or economic background has a significant impact on the obtained subjective video quality results. A systematic study involving six laboratories in four countries found no statistically significant impact of a subject's language and culture / country of origin on video quality ratings.[5] Subjective quality tests can be done in any environment. However, due to possible influence factors from heterogeneous contexts, it is typically advised to perform tests in a neutral environment, such as a dedicated laboratory room. Such a room may be sound-proofed, with walls painted in neutral grey, and use properly calibrated light sources. Several recommendations specify these conditions.[6][7] Controlled environments have been shown to result in lower variability of the obtained scores.[5] Crowdsourcing has recently been used for subjective video quality evaluation and, more generally, in the context of Quality of Experience.[8] Here, viewers give ratings using their own computer, at home, rather than taking part in a subjective quality test in a laboratory room. While this method allows for obtaining more results than traditional subjective tests at lower cost, the validity and reliability of the gathered responses must be carefully checked.[9] Opinions of viewers are typically averaged into the mean opinion score (MOS). To this end, the labels of categorical scales may be translated into numbers. For example, the responses "bad" to "excellent" can be mapped to the values 1 to 5 and then averaged. MOS values should always be reported with their statistical confidence intervals so that the general agreement between observers can be evaluated.
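Averaging categorical ratings into a MOS and attaching a confidence interval is straightforward once the labels have been mapped to numbers. The sketch below assumes ratings already mapped to a 1–5 scale and uses a two-sided Student-t interval, a common (but not mandated) choice:

```python
import numpy as np
from scipy import stats

def mos_with_ci(ratings, confidence=0.95):
    """Mean opinion score and a two-sided Student-t confidence interval
    for one stimulus, given its ratings on a 1-5 scale."""
    r = np.asarray(ratings, dtype=np.float64)
    n = r.size
    mos = r.mean()
    sem = r.std(ddof=1) / np.sqrt(n)          # standard error of the mean
    half_width = stats.t.ppf(0.5 + confidence / 2, df=n - 1) * sem
    return mos, (mos - half_width, mos + half_width)

# Example: ratings of one PVS by 15 viewers (hypothetical values)
print(mos_with_ci([4, 5, 3, 4, 4, 5, 4, 3, 4, 4, 5, 4, 3, 4, 4]))
```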
Often, additional measures are taken before evaluating the results. Subject screening is a process in which viewers whose ratings are considered invalid or unreliable are rejected from further analysis. Invalid ratings are hard to detect, as subjects may have rated without looking at a video, or cheated during the test. The overall reliability of a subject can be determined by various procedures, some of which are outlined in ITU-R and ITU-T recommendations.[2][7] For example, the correlation between a person's individual scores and the overall MOS, evaluated over all sequences, is a good indicator of their reliability in comparison with the remaining test participants. While rating stimuli, humans are subject to biases. These may lead to different and inaccurate scoring behavior and consequently result in MOS values that are not representative of the "true quality" of a stimulus. In recent years, advanced models have been proposed that aim at formally describing the rating process and subsequently recovering noise in subjective ratings. According to Janowski et al., subjects may have an opinion bias that generally shifts their scores, as well as a scoring imprecision that depends on the subject and the stimulus to be rated.[10] Li et al. have proposed to differentiate between subject inconsistency and content ambiguity.[11] There are many ways to select proper sequences, system settings, and test methodologies. A few of them have been standardized. They are thoroughly described in several ITU-R and ITU-T recommendations, among them ITU-R BT.500[7] and ITU-T P.910.[2] While there is overlap in certain aspects, the BT.500 recommendation has its roots in broadcasting, whereas P.910 focuses on multimedia content. A standardized testing method usually describes such aspects in detail. Another recommendation, ITU-T P.913,[6] gives researchers more freedom to conduct subjective quality tests in environments different from a typical testing laboratory, while still requiring them to report all details necessary to make such tests reproducible. Which method to choose largely depends on the purpose of the test and possible constraints in time and other resources. Some methods may have fewer context effects (i.e., where the order of stimuli influences the results), which are unwanted test biases.[12] In ITU-T P.910, it is noted that methods such as DCR (Degradation Category Rating) should be used for testing the fidelity of transmission, especially in high-quality systems. ACR (Absolute Category Rating) and ACR-HR are better suited for qualification tests and, due to giving absolute results, for comparison of systems. The PC (Pair Comparison) method has high discriminatory power, but it requires longer test sessions. The results of subjective quality tests, including the stimuli used, are called databases. A number of subjective picture and video quality databases based on such studies have been made publicly available by research institutes. These databases, some of which have become de facto standards, are used by television, cinematic, and video engineers around the world to design and test objective quality models, since the developed models can be trained against the obtained subjective data. An overview of publicly available databases has been compiled by the Video Quality Experts Group, and video assets have been made available in the Consumer Digital Video Library.
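One simple screening procedure along the lines described above is to correlate each subject's ratings with the per-stimulus MOS computed over all subjects and flag subjects whose correlation falls below a threshold. The Pearson correlation and the 0.75 threshold below are illustrative choices, not the exact ITU procedure:

```python
import numpy as np
from scipy.stats import pearsonr

def screen_subjects(scores, threshold=0.75):
    """scores: array of shape (n_subjects, n_stimuli) with individual ratings.
    Returns indices of subjects whose correlation with the overall MOS falls
    below the threshold (candidates for exclusion from further analysis)."""
    scores = np.asarray(scores, dtype=np.float64)
    mos = scores.mean(axis=0)                 # per-stimulus MOS over all subjects
    flagged = []
    for i, subject_scores in enumerate(scores):
        r, _ = pearsonr(subject_scores, mos)  # reliability indicator
        if r < threshold:
            flagged.append(i)
    return flagged
```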
https://en.wikipedia.org/wiki/Subjective_video_quality
Video Multimethod Assessment Fusion (VMAF) is an objective full-reference video quality metric developed by Netflix in cooperation with the University of Southern California, the IPI/LS2N lab of Nantes Université, and the Laboratory for Image and Video Engineering (LIVE) at The University of Texas at Austin. It predicts subjective video quality based on a reference and a distorted video sequence. The metric can be used to evaluate the quality of different video codecs, encoders, encoding settings, or transmission variants. The metric is based on initial work from the group of Professor C.-C. Jay Kuo at the University of Southern California.[1][2][3] Here, the applicability of fusing different video quality metrics using support vector machines (SVM) was investigated, leading to a "FVQA (Fusion-based Video Quality Assessment) Index" that was shown to outperform existing image quality metrics on a subjective video quality database. The method was further developed in cooperation with Netflix, using different subjective video datasets, including a Netflix-owned dataset ("NFLX"). Subsequently renamed "Video Multimethod Assessment Fusion", it was announced on the Netflix TechBlog in June 2016,[4] and version 0.3.1 of the reference implementation was made available under a permissive open-source license.[5] In 2017, the metric was updated to support a custom model that includes an adaptation for cellular phone screen viewing, generating higher quality scores for the same input material. In 2018, a model that predicts the quality of up to 4K resolution content was released. The datasets on which these models were trained have not been made available to the public. In 2021, a Technology and Engineering Emmy Award was awarded to Beamr, Netflix, the University of Southern California, the University of Nantes, The University of Texas at Austin, SSIMWAVE, Disney, Google, Brightcove and ATEME for the Development of Open Perceptual Metrics for Video Encoding Optimization. It was the second time in 20 years that universities received an Emmy Award, and the first time a French university received one.[6][7] VMAF uses existing image quality metrics and other features to predict video quality, including the Visual Information Fidelity (VIF), the Detail Loss Metric (DLM), and a temporal feature based on the pixel difference between consecutive frames (motion). These features are fused using SVM-based regression to provide a single output score in the range of 0–100 per video frame, with 100 corresponding to quality identical to the reference video. These scores are then temporally pooled over the entire video sequence using the arithmetic mean to provide an overall differential mean opinion score (DMOS). Due to the public availability of the training source code ("VMAF Development Kit", VDK), the fusion method can be re-trained and evaluated based on different video datasets and features.
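The fusion step can be illustrated with a generic SVM regression in scikit-learn: per-frame feature vectors (elementary metric scores) are mapped to quality scores by an SVR, and the per-frame predictions are pooled with the arithmetic mean. This is only a schematic re-creation of the idea under assumed placeholder data; it does not use VMAF's actual features, kernel, or trained model.

```python
import numpy as np
from sklearn.svm import SVR

# Training data: per-frame feature vectors (e.g. elementary metric scores)
# paired with quality labels from an annotated dataset.
# Shapes and values here are random placeholders, not real VMAF training data.
train_features = np.random.rand(500, 4)        # 500 frames, 4 features each
train_scores = np.random.uniform(0, 100, 500)  # per-frame quality labels

model = SVR(kernel="rbf")                      # SVM-based regression (fusion)
model.fit(train_features, train_scores)

# Prediction on a new video: one fused score per frame,
# then arithmetic-mean pooling over the whole sequence.
test_features = np.random.rand(300, 4)         # 300 frames of a test video
per_frame_scores = model.predict(test_features)
overall_score = per_frame_scores.mean()
print(f"pooled quality score: {overall_score:.1f}")
```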
Anti-noise signal-to-noise ratio (AN-SNR) was used in earlier versions of VMAF as a quality metric but was subsequently abandoned.[9] An early version of VMAF was shown to outperform other image and video quality metrics such as SSIM, PSNR-HVS and VQM-VFD on three of four datasets in terms of prediction accuracy, when compared to subjective ratings.[4] Its performance has also been analyzed in another paper, which found that VMAF did not perform better than SSIM and MS-SSIM on a video dataset.[10] In 2017, engineers from RealNetworks reported good reproducibility of Netflix's performance findings.[11] In the MSU video quality metrics benchmark, where several of its versions (including VMAF NEG) were tested, VMAF outperformed all other metrics on all compression standards (H.265, VP9, AV1, VVC). VMAF scores can be artificially increased without improving perceived quality by applying various operations before or after distorting the video, sometimes without affecting the popular PSNR metric.[12][13] A reference implementation written in C and Python ("VMAF Development Kit", VDK) is published as free software under the terms of the BSD+Patent license.[14] Its source code and additional material are available on GitHub.[5]
https://en.wikipedia.org/wiki/Video_Multimethod_Assessment_Fusion
Video quality is a characteristic of a video passed through a video transmission or processing system that describes the perceived video degradation (typically compared to the original video). Video processing systems may introduce some amount of distortion or artifacts in the video signal that negatively impact the user's perception of the system. For many stakeholders in video production and distribution, ensuring video quality is an important task. Video quality evaluation is performed to describe the quality of a set of video sequences under study. Video quality can be evaluated objectively (by mathematical models) or subjectively (by asking users for their rating). Also, the quality of a system can be determined offline (i.e., in a laboratory setting for developing new codecs or services) or in-service (to monitor and ensure a certain level of quality). Since the world's first video sequence was recorded and transmitted, many video processing systems have been designed. Such systems encode video streams and transmit them over various kinds of networks or channels. In the age of analog video systems, it was possible to evaluate the quality aspects of a video processing system by calculating the system's frequency response using test signals (for example, a collection of color bars and circles). Digital video systems have almost fully replaced analog ones, and quality evaluation methods have changed. The performance of a digital video processing and transmission system can vary significantly and depends on many factors, including the characteristics of the input video signal (e.g., amount of motion or spatial detail), the settings used for encoding and transmission, and the channel fidelity or network performance. Objective video quality models are mathematical models that approximate results from subjective quality assessment, in which human observers are asked to rate the quality of a video.[1] In this context, the term model may refer to a simple statistical model in which several independent variables (e.g., the packet loss rate on a network and the video coding parameters) are fitted against results obtained in a subjective quality evaluation test using regression techniques. A model may also be a more complicated algorithm implemented in software or hardware. The terms model and metric are often used interchangeably in the field to mean a descriptive statistic that provides an indicator of quality. The term “objective” refers to the fact that, in general, quality models are based on criteria that can be measured objectively, that is, free from human interpretation. They can be automatically evaluated by a computer program. Unlike a panel of human observers, an objective model should always deterministically output the same quality score for a given set of input parameters. Objective quality models are sometimes also referred to as instrumental (quality) models,[2][3] in order to emphasize their application as measurement instruments. Some authors suggest that the term “objective” is misleading, as it “implies that instrumental measurements bear objectivity, which they only do in cases where they can be generalized.”[4] Objective models can be classified by the amount of information available about the original signal and the received signal, or whether there is a signal present at all, into full-reference, reduced-reference, and no-reference models.[5] Some models that are used for video quality assessment (such as PSNR or SSIM) are simply image quality models, whose output is calculated for every frame of a video sequence.
An overview of recent no-reference image quality models has also been given in a journal paper by Shahid et al.[5] The quality measure of every frame in a video (as determined by an image quality model) can then be recorded and pooled over time to assess the quality of an entire video sequence. While this method is easy to implement, it does not factor in certain kinds of degradations that develop over time, such as the moving artifacts caused by packet loss and its concealment. A video quality model that considers the temporal aspects of quality degradations, like VQM or the MOVIE Index, may be able to produce more accurate predictions of human-perceived quality. The estimation of visual artifacts is a well-known technique for estimating overall video quality. The majority of these artifacts are compression artifacts caused by lossy compression. The attributes typically estimated by pixel-based metrics can be grouped into spatial and temporal artifacts. Since objective video quality models are expected to predict results given by human observers, they are developed with the aid of subjective test results. During the development of an objective model, its parameters should be trained so as to achieve the best correlation between the objectively predicted values and the subjective scores, often available as mean opinion scores (MOS). The most widely used subjective test materials are in the public domain and include still pictures, motion pictures, streaming video, high definition, 3-D (stereoscopic), and special-purpose picture quality-related datasets.[18] These so-called databases are created by various research laboratories around the world. Some of them have become de facto standards, including several public-domain subjective picture quality databases created and maintained by the Laboratory for Image and Video Engineering (LIVE) as well as the Tampere Image Database 2008. A collection of databases can be found in the QUALINET Databases repository. The Consumer Digital Video Library (CDVL) hosts freely available video test sequences for model development. Some databases also provide pre-computed metric scores to allow others to benchmark new metrics against existing ones. In theory, a model can be trained on a set of data in such a way that it produces perfectly matching scores on that dataset. However, such a model will be over-trained and will therefore not perform well on new datasets. It is therefore advised to validate models against new data and use the resulting performance as a real indicator of the model's prediction accuracy. To measure the performance of a model, some frequently used measures are the linear correlation coefficient, Spearman's rank correlation coefficient, and the root mean square error (RMSE). Other measures are the kappa coefficient and the outliers ratio. ITU-T Rec. P.1401 gives an overview of statistical procedures to evaluate and compare objective models.
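The three performance measures mentioned above are easy to compute once a model's predictions and the corresponding subjective MOS values are available; a minimal sketch using SciPy:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def model_performance(predicted, mos):
    """Pearson linear correlation, Spearman rank correlation and RMSE
    between objective model predictions and subjective MOS values."""
    predicted = np.asarray(predicted, dtype=np.float64)
    mos = np.asarray(mos, dtype=np.float64)
    plcc, _ = pearsonr(predicted, mos)           # linear correlation
    srcc, _ = spearmanr(predicted, mos)          # rank correlation
    rmse = np.sqrt(np.mean((predicted - mos) ** 2))
    return plcc, srcc, rmse
```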
Objective video quality models can be used in various application areas. In video codec development, the performance of a codec is often evaluated in terms of PSNR or SSIM. For service providers, objective models can be used for monitoring a system. For example, an IPTV provider may choose to monitor their service quality by means of objective models, rather than asking users for their opinion or waiting for customer complaints about bad video quality. Few of these standardized models have found commercial applications, including PEVQ and VQuad-HD. SSIM is also part of a commercially available video quality toolset (SSIMWAVE). VMAF is used by Netflix to tune its encoding and streaming algorithms and to quality-control all streamed content.[19][20] It is also being used by other technology companies like Bitmovin[21] and has been integrated into software such as FFmpeg. An objective model should only be used in the context that it was developed for. For example, a model that was developed using a particular video codec is not guaranteed to be accurate for another video codec. Similarly, a model trained on tests performed on a large TV screen should not be used for evaluating the quality of a video watched on a mobile phone. When estimating the quality of a video codec, all the mentioned objective methods may require repeating post-encoding tests in order to determine the encoding parameters that satisfy a required level of visual quality, making them time-consuming, complex and impractical for implementation in real commercial applications. There is ongoing research into developing novel objective evaluation methods which enable prediction of the perceived quality level of the encoded video before the actual encoding is performed.[22] The main goal of many objective video quality metrics is to automatically estimate the average user's (viewer's) opinion on the quality of a video processed by a system. Procedures for subjective video quality measurements are described in ITU-R recommendation BT.500 and ITU-T recommendation P.910. In such tests, video sequences are shown to a group of viewers. The viewers' opinions are recorded and averaged into the mean opinion score to evaluate the quality of each video sequence. However, the testing procedure may vary depending on what kind of system is tested. MSU has also developed a set of metrics, including a Blurring Metric, Blocking Metric, Brightness Flicking Metric, Drop Frame Metric, and Noise Estimation Metric.
https://en.wikipedia.org/wiki/Video_quality
Bayesian linear regression is a type of conditional modeling in which the mean of one variable is described by a linear combination of other variables, with the goal of obtaining the posterior probability of the regression coefficients (as well as other parameters describing the distribution of the regressand) and ultimately allowing the out-of-sample prediction of the regressand (often labelled $y$) conditional on observed values of the regressors (usually $X$). The simplest and most widely used version of this model is the normal linear model, in which $y$ given $X$ is distributed Gaussian. In this model, and under a particular choice of prior probabilities for the parameters (so-called conjugate priors), the posterior can be found analytically. With more arbitrarily chosen priors, the posteriors generally have to be approximated. Consider a standard linear regression problem, in which for $i = 1, \ldots, n$ we specify the mean of the conditional distribution of $y_i$ given a $k \times 1$ predictor vector $\mathbf{x}_i$:

$$y_i = \mathbf{x}_i^{\mathsf{T}} \boldsymbol{\beta} + \varepsilon_i,$$

where $\boldsymbol{\beta}$ is a $k \times 1$ vector, and the $\varepsilon_i$ are independent and identically normally distributed random variables:

$$\varepsilon_i \sim N(0, \sigma^2).$$

This corresponds to the following likelihood function:

$$\rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\beta}, \sigma^2) \propto (\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2} (\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^{\mathsf{T}} (\mathbf{y} - \mathbf{X}\boldsymbol{\beta})\right).$$

The ordinary least squares solution is used to estimate the coefficient vector using the Moore–Penrose pseudoinverse:

$$\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\mathsf{T}} \mathbf{X})^{-1} \mathbf{X}^{\mathsf{T}} \mathbf{y},$$

where $\mathbf{X}$ is the $n \times k$ design matrix, each row of which is a predictor vector $\mathbf{x}_i^{\mathsf{T}}$, and $\mathbf{y}$ is the column $n$-vector $[y_1 \; \cdots \; y_n]^{\mathsf{T}}$. This is a frequentist approach, and it assumes that there are enough measurements to say something meaningful about $\boldsymbol{\beta}$. In the Bayesian approach,[1] the data are supplemented with additional information in the form of a prior probability distribution. The prior belief about the parameters is combined with the data's likelihood function according to Bayes' theorem to yield the posterior belief about the parameters $\boldsymbol{\beta}$ and $\sigma$. The prior can take different functional forms depending on the domain and the information that is available a priori. Since the data comprise both $\mathbf{y}$ and $\mathbf{X}$, the focus only on the distribution of $\mathbf{y}$ conditional on $\mathbf{X}$ needs justification.
In fact, a "full" Bayesian analysis would require a joint likelihoodρ(y,X∣β,σ2,γ){\displaystyle \rho (\mathbf {y} ,\mathbf {X} \mid {\boldsymbol {\beta }},\sigma ^{2},\gamma )}along with a priorρ(β,σ2,γ){\displaystyle \rho (\beta ,\sigma ^{2},\gamma )}, whereγ{\displaystyle \gamma }symbolizes the parameters of the distribution forX{\displaystyle \mathbf {X} }. Only under the assumption of (weak) exogeneity can the joint likelihood be factored intoρ(y∣X,β,σ2)ρ(X∣γ){\displaystyle \rho (\mathbf {y} \mid {\boldsymbol {\mathbf {X} }},\beta ,\sigma ^{2})\rho (\mathbf {X} \mid \gamma )}.[2]The latter part is usually ignored under the assumption of disjoint parameter sets. More so, under classic assumptionsX{\displaystyle \mathbf {X} }are considered chosen (for example, in a designed experiment) and therefore has a known probability without parameters.[3] For an arbitrary prior distribution, there may be no analytical solution for theposterior distribution. In this section, we will consider a so-calledconjugate priorfor which the posterior distribution can be derived analytically. A priorρ(β,σ2){\displaystyle \rho ({\boldsymbol {\beta }},\sigma ^{2})}isconjugateto this likelihood function if it has the same functional form with respect toβ{\displaystyle {\boldsymbol {\beta }}}andσ{\displaystyle \sigma }. Since the log-likelihood is quadratic inβ{\displaystyle {\boldsymbol {\beta }}}, the log-likelihood is re-written such that the likelihood becomes normal in(β−β^){\displaystyle ({\boldsymbol {\beta }}-{\hat {\boldsymbol {\beta }}})}. Write (y−Xβ)T(y−Xβ)=[(y−Xβ^)+(Xβ^−Xβ)]T[(y−Xβ^)+(Xβ^−Xβ)]=(y−Xβ^)T(y−Xβ^)+(β−β^)T(XTX)(β−β^)+2(Xβ^−Xβ)T(y−Xβ^)⏟=0=(y−Xβ^)T(y−Xβ^)+(β−β^)T(XTX)(β−β^).{\displaystyle {\begin{aligned}(\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }})^{\mathsf {T}}(\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }})&=[(\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}})+(\mathbf {X} {\hat {\boldsymbol {\beta }}}-\mathbf {X} {\boldsymbol {\beta }})]^{\mathsf {T}}[(\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}})+(\mathbf {X} {\hat {\boldsymbol {\beta }}}-\mathbf {X} {\boldsymbol {\beta }})]\\&=(\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}})^{\mathsf {T}}(\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}})+({\boldsymbol {\beta }}-{\hat {\boldsymbol {\beta }}})^{\mathsf {T}}(\mathbf {X} ^{\mathsf {T}}\mathbf {X} )({\boldsymbol {\beta }}-{\hat {\boldsymbol {\beta }}})+\underbrace {2(\mathbf {X} {\hat {\boldsymbol {\beta }}}-\mathbf {X} {\boldsymbol {\beta }})^{\mathsf {T}}(\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}})} _{=\ 0}\\&=(\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}})^{\mathsf {T}}(\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}})+({\boldsymbol {\beta }}-{\hat {\boldsymbol {\beta }}})^{\mathsf {T}}(\mathbf {X} ^{\mathsf {T}}\mathbf {X} )({\boldsymbol {\beta }}-{\hat {\boldsymbol {\beta }}})\,.\end{aligned}}} The likelihood is now re-written asρ(y|X,β,σ2)∝(σ2)−v2exp⁡(−vs22σ2)(σ2)−n−v2exp⁡(−12σ2(β−β^)T(XTX)(β−β^)),{\displaystyle \rho (\mathbf {y} |\mathbf {X} ,{\boldsymbol {\beta }},\sigma ^{2})\propto (\sigma ^{2})^{-{\frac {v}{2}}}\exp \left(-{\frac {vs^{2}}{2{\sigma }^{2}}}\right)(\sigma ^{2})^{-{\frac {n-v}{2}}}\exp \left(-{\frac {1}{2{\sigma }^{2}}}({\boldsymbol {\beta }}-{\hat {\boldsymbol {\beta }}})^{\mathsf {T}}(\mathbf {X} ^{\mathsf {T}}\mathbf {X} )({\boldsymbol {\beta }}-{\hat {\boldsymbol {\beta }}})\right),}wherevs2=(y−Xβ^)T(y−Xβ^)andv=n−k,{\displaystyle vs^{2}=(\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}})^{\mathsf 
{T}}(\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}})\quad {\text{ and }}\quad v=n-k,}wherek{\displaystyle k}is the number of regression coefficients. This suggests a form for the prior:ρ(β,σ2)=ρ(σ2)ρ(β∣σ2),{\displaystyle \rho ({\boldsymbol {\beta }},\sigma ^{2})=\rho (\sigma ^{2})\rho ({\boldsymbol {\beta }}\mid \sigma ^{2}),}whereρ(σ2){\displaystyle \rho (\sigma ^{2})}is aninverse-gamma distributionρ(σ2)∝(σ2)−v02−1exp⁡(−v0s022σ2).{\displaystyle \rho (\sigma ^{2})\propto (\sigma ^{2})^{-{\frac {v_{0}}{2}}-1}\exp \left(-{\frac {v_{0}s_{0}^{2}}{2\sigma ^{2}}}\right).} In the notation introduced in theinverse-gamma distributionarticle, this is the density of anInv-Gamma(a0,b0){\displaystyle {\text{Inv-Gamma}}(a_{0},b_{0})}distribution witha0=v02{\displaystyle a_{0}={\tfrac {v_{0}}{2}}}andb0=12v0s02{\displaystyle b_{0}={\tfrac {1}{2}}v_{0}s_{0}^{2}}withv0{\displaystyle v_{0}}ands02{\displaystyle s_{0}^{2}}as the prior values ofv{\displaystyle v}ands2{\displaystyle s^{2}}, respectively. Equivalently, it can also be described as ascaled inverse chi-squared distribution,Scale-inv-χ2(v0,s02).{\displaystyle {\text{Scale-inv-}}\chi ^{2}(v_{0},s_{0}^{2}).} Further the conditional prior densityρ(β|σ2){\displaystyle \rho ({\boldsymbol {\beta }}|\sigma ^{2})}is anormal distribution, ρ(β∣σ2)∝(σ2)−k/2exp⁡(−12σ2(β−μ0)TΛ0(β−μ0)).{\displaystyle \rho ({\boldsymbol {\beta }}\mid \sigma ^{2})\propto (\sigma ^{2})^{-k/2}\exp \left(-{\frac {1}{2\sigma ^{2}}}({\boldsymbol {\beta }}-{\boldsymbol {\mu }}_{0})^{\mathsf {T}}\mathbf {\Lambda } _{0}({\boldsymbol {\beta }}-{\boldsymbol {\mu }}_{0})\right).} In the notation of thenormal distribution, the conditional prior distribution isN(μ0,σ2Λ0−1).{\displaystyle {\mathcal {N}}\left({\boldsymbol {\mu }}_{0},\sigma ^{2}{\boldsymbol {\Lambda }}_{0}^{-1}\right).} With the prior now specified, the posterior distribution can be expressed as ρ(β,σ2∣y,X)∝ρ(y∣X,β,σ2)ρ(β∣σ2)ρ(σ2)∝(σ2)−n/2exp⁡(−12σ2(y−Xβ)T(y−Xβ))(σ2)−k/2exp⁡(−12σ2(β−μ0)TΛ0(β−μ0))(σ2)−(a0+1)exp⁡(−b0σ2){\displaystyle {\begin{aligned}\rho ({\boldsymbol {\beta }},\sigma ^{2}\mid \mathbf {y} ,\mathbf {X} )&\propto \rho (\mathbf {y} \mid \mathbf {X} ,{\boldsymbol {\beta }},\sigma ^{2})\rho ({\boldsymbol {\beta }}\mid \sigma ^{2})\rho (\sigma ^{2})\\&\propto (\sigma ^{2})^{-n/2}\exp \left(-{\frac {1}{2{\sigma }^{2}}}(\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }})^{\mathsf {T}}(\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }})\right)(\sigma ^{2})^{-k/2}\exp \left(-{\frac {1}{2\sigma ^{2}}}({\boldsymbol {\beta }}-{\boldsymbol {\mu }}_{0})^{\mathsf {T}}{\boldsymbol {\Lambda }}_{0}({\boldsymbol {\beta }}-{\boldsymbol {\mu }}_{0})\right)(\sigma ^{2})^{-(a_{0}+1)}\exp \left(-{\frac {b_{0}}{\sigma ^{2}}}\right)\end{aligned}}} With some re-arrangement,[4]the posterior can be re-written so that the posterior meanμn{\displaystyle {\boldsymbol {\mu }}_{n}}of the parameter vectorβ{\displaystyle {\boldsymbol {\beta }}}can be expressed in terms of the least squares estimatorβ^{\displaystyle {\hat {\boldsymbol {\beta }}}}and the prior meanμ0{\displaystyle {\boldsymbol {\mu }}_{0}}, with the strength of the prior indicated by the prior precision matrixΛ0{\displaystyle {\boldsymbol {\Lambda }}_{0}} μn=(XTX+Λ0)−1(XTXβ^+Λ0μ0).{\displaystyle {\boldsymbol {\mu }}_{n}=(\mathbf {X} ^{\mathsf {T}}\mathbf {X} +{\boldsymbol {\Lambda }}_{0})^{-1}(\mathbf {X} ^{\mathsf {T}}\mathbf {X} {\hat {\boldsymbol {\beta }}}+{\boldsymbol {\Lambda }}_{0}{\boldsymbol {\mu }}_{0}).} To justify thatμn{\displaystyle {\boldsymbol {\mu }}_{n}}is indeed the 
posterior mean, the quadratic terms in the exponential can be re-arranged as aquadratic forminβ−μn{\displaystyle {\boldsymbol {\beta }}-{\boldsymbol {\mu }}_{n}}.[5] (y−Xβ)T(y−Xβ)+(β−μ0)TΛ0(β−μ0)=(β−μn)T(XTX+Λ0)(β−μn)+yTy−μnT(XTX+Λ0)μn+μ0TΛ0μ0.{\displaystyle (\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }})^{\mathsf {T}}(\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }})+({\boldsymbol {\beta }}-{\boldsymbol {\mu }}_{0})^{\mathsf {T}}{\boldsymbol {\Lambda }}_{0}({\boldsymbol {\beta }}-{\boldsymbol {\mu }}_{0})=({\boldsymbol {\beta }}-{\boldsymbol {\mu }}_{n})^{\mathsf {T}}(\mathbf {X} ^{\mathsf {T}}\mathbf {X} +{\boldsymbol {\Lambda }}_{0})({\boldsymbol {\beta }}-{\boldsymbol {\mu }}_{n})+\mathbf {y} ^{\mathsf {T}}\mathbf {y} -{\boldsymbol {\mu }}_{n}^{\mathsf {T}}(\mathbf {X} ^{\mathsf {T}}\mathbf {X} +{\boldsymbol {\Lambda }}_{0}){\boldsymbol {\mu }}_{n}+{\boldsymbol {\mu }}_{0}^{\mathsf {T}}{\boldsymbol {\Lambda }}_{0}{\boldsymbol {\mu }}_{0}.} Now the posterior can be expressed as anormal distributiontimes aninverse-gamma distribution: ρ(β,σ2∣y,X)∝(σ2)−k/2exp⁡(−12σ2(β−μn)T(XTX+Λ0)(β−μn))(σ2)−n+2a02−1exp⁡(−2b0+yTy−μnT(XTX+Λ0)μn+μ0TΛ0μ02σ2).{\displaystyle \rho ({\boldsymbol {\beta }},\sigma ^{2}\mid \mathbf {y} ,\mathbf {X} )\propto (\sigma ^{2})^{-k/2}\exp \left(-{\frac {1}{2{\sigma }^{2}}}({\boldsymbol {\beta }}-{\boldsymbol {\mu }}_{n})^{\mathsf {T}}(\mathbf {X} ^{\mathsf {T}}\mathbf {X} +\mathbf {\Lambda } _{0})({\boldsymbol {\beta }}-{\boldsymbol {\mu }}_{n})\right)(\sigma ^{2})^{-{\frac {n+2a_{0}}{2}}-1}\exp \left(-{\frac {2b_{0}+\mathbf {y} ^{\mathsf {T}}\mathbf {y} -{\boldsymbol {\mu }}_{n}^{\mathsf {T}}(\mathbf {X} ^{\mathsf {T}}\mathbf {X} +{\boldsymbol {\Lambda }}_{0}){\boldsymbol {\mu }}_{n}+{\boldsymbol {\mu }}_{0}^{\mathsf {T}}{\boldsymbol {\Lambda }}_{0}{\boldsymbol {\mu }}_{0}}{2\sigma ^{2}}}\right).} Therefore, the posterior distribution can be parametrized as follows.ρ(β,σ2∣y,X)∝ρ(β∣σ2,y,X)ρ(σ2∣y,X),{\displaystyle \rho ({\boldsymbol {\beta }},\sigma ^{2}\mid \mathbf {y} ,\mathbf {X} )\propto \rho ({\boldsymbol {\beta }}\mid \sigma ^{2},\mathbf {y} ,\mathbf {X} )\rho (\sigma ^{2}\mid \mathbf {y} ,\mathbf {X} ),}where the two factors correspond to the densities ofN(μn,σ2Λn−1){\displaystyle {\mathcal {N}}\left({\boldsymbol {\mu }}_{n},\sigma ^{2}{\boldsymbol {\Lambda }}_{n}^{-1}\right)\,}andInv-Gamma(an,bn){\displaystyle {\text{Inv-Gamma}}\left(a_{n},b_{n}\right)}distributions, with the parameters of these given by Λn=(XTX+Λ0),μn=(Λn)−1(XTXβ^+Λ0μ0),{\displaystyle {\boldsymbol {\Lambda }}_{n}=(\mathbf {X} ^{\mathsf {T}}\mathbf {X} +\mathbf {\Lambda } _{0}),\quad {\boldsymbol {\mu }}_{n}=({\boldsymbol {\Lambda }}_{n})^{-1}(\mathbf {X} ^{\mathsf {T}}\mathbf {X} {\hat {\boldsymbol {\beta }}}+{\boldsymbol {\Lambda }}_{0}{\boldsymbol {\mu }}_{0}),}an=a0+n2,bn=b0+12(yTy+μ0TΛ0μ0−μnTΛnμn).{\displaystyle a_{n}=a_{0}+{\frac {n}{2}},\qquad b_{n}=b_{0}+{\frac {1}{2}}(\mathbf {y} ^{\mathsf {T}}\mathbf {y} +{\boldsymbol {\mu }}_{0}^{\mathsf {T}}{\boldsymbol {\Lambda }}_{0}{\boldsymbol {\mu }}_{0}-{\boldsymbol {\mu }}_{n}^{\mathsf {T}}{\boldsymbol {\Lambda }}_{n}{\boldsymbol {\mu }}_{n}).} which illustrates Bayesian inference being a compromise between the information contained in the prior and the information contained in the sample. Themodel evidencep(y∣m){\displaystyle p(\mathbf {y} \mid m)}is the probability of the data given the modelm{\displaystyle m}. It is also known as themarginal likelihood, and as theprior predictive density. 
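The posterior parameters derived above translate directly into code. The sketch below assumes a design matrix X, responses y, and prior hyperparameters (mu0, Lambda0, a0, b0) are given, and returns the normal-inverse-gamma posterior parameters:

```python
import numpy as np

def conjugate_posterior(X, y, mu0, Lambda0, a0, b0):
    """Normal-inverse-gamma posterior for Bayesian linear regression
    with the conjugate prior described above."""
    X = np.asarray(X, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    n = X.shape[0]

    XtX = X.T @ X
    beta_hat = np.linalg.solve(XtX, X.T @ y)   # ordinary least squares estimate

    Lambda_n = XtX + Lambda0                   # posterior precision matrix
    mu_n = np.linalg.solve(Lambda_n, XtX @ beta_hat + Lambda0 @ mu0)
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * (y @ y + mu0 @ Lambda0 @ mu0 - mu_n @ Lambda_n @ mu_n)
    return mu_n, Lambda_n, a_n, b_n
```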
The model evidence $p(\mathbf{y} \mid m)$ is the probability of the data given the model $m$. It is also known as the marginal likelihood, and as the prior predictive density. Here, the model is defined by the likelihood function $p(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\beta}, \sigma)$ and the prior distribution on the parameters, i.e. $p(\boldsymbol{\beta}, \sigma)$. The model evidence captures in a single number how well such a model explains the observations. The model evidence of the Bayesian linear regression model presented in this section can be used to compare competing linear models by Bayes factors. These models may differ in the number and values of the predictor variables as well as in their priors on the model parameters. Model complexity is already taken into account by the model evidence, because it marginalizes out the parameters by integrating $p(\mathbf{y}, \boldsymbol{\beta}, \sigma \mid \mathbf{X})$ over all possible values of $\boldsymbol{\beta}$ and $\sigma$:

$$p(\mathbf{y} \mid m) = \int p(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\beta}, \sigma)\,p(\boldsymbol{\beta}, \sigma)\,d\boldsymbol{\beta}\,d\sigma.$$

This integral can be computed analytically and the solution is given in the following equation:[6]

$$p(\mathbf{y} \mid m) = \frac{1}{(2\pi)^{n/2}} \sqrt{\frac{\det(\boldsymbol{\Lambda}_0)}{\det(\boldsymbol{\Lambda}_n)}} \cdot \frac{b_0^{a_0}}{b_n^{a_n}} \cdot \frac{\Gamma(a_n)}{\Gamma(a_0)}.$$

Here $\Gamma$ denotes the gamma function. Because we have chosen a conjugate prior, the marginal likelihood can also be easily computed by evaluating the following equality for arbitrary values of $\boldsymbol{\beta}$ and $\sigma$:[7]

$$p(\mathbf{y} \mid m) = \frac{p(\boldsymbol{\beta}, \sigma \mid m)\,p(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\beta}, \sigma, m)}{p(\boldsymbol{\beta}, \sigma \mid \mathbf{y}, \mathbf{X}, m)}.$$

Note that this equation follows from a re-arrangement of Bayes' theorem. Inserting the formulas for the prior, the likelihood, and the posterior and simplifying the resulting expression leads to the analytic expression given above. In general, it may be impossible or impractical to derive the posterior distribution analytically. However, it is possible to approximate the posterior by an approximate Bayesian inference method such as Monte Carlo sampling,[8] INLA or variational Bayes. The special case $\boldsymbol{\mu}_0 = 0$, $\boldsymbol{\Lambda}_0 = c\mathbf{I}$ is called ridge regression. A similar analysis can be performed for the general case of multivariate regression, and part of this provides for Bayesian estimation of covariance matrices: see Bayesian multivariate linear regression.
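As a complement, the closed-form evidence given above is usually evaluated on the log scale to avoid numerical overflow. A sketch using the posterior quantities returned by the posterior-update function shown earlier (function and argument names are illustrative):

```python
import numpy as np
from scipy.special import gammaln

def log_model_evidence(n, Lambda0, Lambda_n, a0, b0, a_n, b_n):
    """log p(y | m) for the conjugate Bayesian linear regression model,
    following the closed-form expression above."""
    _, logdet0 = np.linalg.slogdet(Lambda0)
    _, logdetn = np.linalg.slogdet(Lambda_n)
    return (-0.5 * n * np.log(2.0 * np.pi)
            + 0.5 * (logdet0 - logdetn)
            + a0 * np.log(b0) - a_n * np.log(b_n)
            + gammaln(a_n) - gammaln(a0))
```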
https://en.wikipedia.org/wiki/Bayesian_linear_regression
Ridge regression (also known as Tikhonov regularization, named for Andrey Tikhonov) is a method of estimating the coefficients of multiple-regression models in scenarios where the independent variables are highly correlated.[1] It has been used in many fields including econometrics, chemistry, and engineering.[2] It is a method of regularization of ill-posed problems.[a] It is particularly useful to mitigate the problem of multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters.[3] In general, the method provides improved efficiency in parameter estimation problems in exchange for a tolerable amount of bias (see bias–variance tradeoff).[4] The theory was first introduced by Hoerl and Kennard in 1970 in their Technometrics papers "Ridge regressions: biased estimation of nonorthogonal problems" and "Ridge regressions: applications in nonorthogonal problems".[5][6][1] Ridge regression was developed as a possible solution to the imprecision of least squares estimators when linear regression models have some multicollinear (highly correlated) independent variables, by creating a ridge regression estimator (RR). This provides a more precise estimate of the ridge parameters, as its variance and mean square estimator are often smaller than those of the least squares estimators previously derived.[7][2] In the simplest case, the problem of a near-singular moment matrix $\mathbf{X}^{\mathsf{T}}\mathbf{X}$ is alleviated by adding positive elements to the diagonals, thereby decreasing its condition number. Analogous to the ordinary least squares estimator, the simple ridge estimator is then given by

$$\hat{\boldsymbol{\beta}}_R = \left(\mathbf{X}^{\mathsf{T}}\mathbf{X} + \lambda\mathbf{I}\right)^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y},$$

where $\mathbf{y}$ is the regressand, $\mathbf{X}$ is the design matrix, $\mathbf{I}$ is the identity matrix, and the ridge parameter $\lambda \geq 0$ serves as the constant shifting the diagonals of the moment matrix.[8] It can be shown that this estimator is the solution to the least squares problem subject to the constraint $\boldsymbol{\beta}^{\mathsf{T}}\boldsymbol{\beta} = c$, which can be expressed as a Lagrangian minimization:

$$\hat{\boldsymbol{\beta}}_R = \operatorname{argmin}_{\boldsymbol{\beta}}\,\left(\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\right)^{\mathsf{T}}\left(\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\right) + \lambda\left(\boldsymbol{\beta}^{\mathsf{T}}\boldsymbol{\beta} - c\right),$$

which shows that $\lambda$ is nothing but the Lagrange multiplier of the constraint.[9] In fact, there is a one-to-one relationship between $c$ and $\lambda$, and since, in practice, we do not know $c$, we define $\lambda$ heuristically or find it via additional data-fitting strategies; see Determination of the Tikhonov factor. Note that when $\lambda = 0$, in which case the constraint is non-binding, the ridge estimator reduces to ordinary least squares. A more general approach to Tikhonov regularization is discussed below.
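The simple ridge estimator above is a one-line modification of ordinary least squares; a minimal sketch:

```python
import numpy as np

def ridge_estimator(X, y, lam):
    """beta_hat_R = (X^T X + lambda * I)^{-1} X^T y.
    lam = 0 recovers the ordinary least squares solution."""
    X = np.asarray(X, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    k = X.shape[1]
    # Adding lam to the diagonal of the moment matrix improves its conditioning.
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)
```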
Phillips.[15]Some authors use the termTikhonov–Phillips regularization. The finite-dimensional case was expounded byArthur E. Hoerl, who took a statistical approach,[16]and by Manus Foster, who interpreted this method as aWiener–Kolmogorov (Kriging)filter.[17]Following Hoerl, it is known in the statistical literature as ridge regression,[18]named after ridge analysis ("ridge" refers to the path from the constrained maximum).[19] Suppose that for a knownreal matrixA{\displaystyle A}and vectorb{\displaystyle \mathbf {b} }, we wish to find a vectorx{\displaystyle \mathbf {x} }such thatAx=b,{\displaystyle A\mathbf {x} =\mathbf {b} ,}wherex{\displaystyle \mathbf {x} }andb{\displaystyle \mathbf {b} }may be of different sizes andA{\displaystyle A}may be non-square. The standard approach isordinary least squareslinear regression.[clarification needed]However, if nox{\displaystyle \mathbf {x} }satisfies the equation or more than onex{\displaystyle \mathbf {x} }does—that is, the solution is not unique—the problem is said to beill posed. In such cases, ordinary least squares estimation leads to anoverdetermined, or more often anunderdeterminedsystem of equations. Most real-world phenomena have the effect oflow-pass filters[clarification needed]in the forward direction whereA{\displaystyle A}mapsx{\displaystyle \mathbf {x} }tob{\displaystyle \mathbf {b} }. Therefore, in solving the inverse-problem, the inverse mapping operates as ahigh-pass filterthat has the undesirable tendency of amplifying noise (eigenvalues/ singular values are largest in the reverse mapping where they were smallest in the forward mapping). In addition, ordinary least squares implicitly nullifies every element of the reconstructed version ofx{\displaystyle \mathbf {x} }that is in the null-space ofA{\displaystyle A}, rather than allowing for a model to be used as a prior forx{\displaystyle \mathbf {x} }. Ordinary least squares seeks to minimize the sum of squaredresiduals, which can be compactly written as‖Ax−b‖22,{\displaystyle \left\|A\mathbf {x} -\mathbf {b} \right\|_{2}^{2},}where‖⋅‖2{\displaystyle \|\cdot \|_{2}}is theEuclidean norm. In order to give preference to a particular solution with desirable properties, a regularization term can be included in this minimization:‖Ax−b‖22+‖Γx‖22=‖(AΓ)x−(b0)‖22{\displaystyle \left\|A\mathbf {x} -\mathbf {b} \right\|_{2}^{2}+\left\|\Gamma \mathbf {x} \right\|_{2}^{2}=\left\|{\begin{pmatrix}A\\\Gamma \end{pmatrix}}\mathbf {x} -{\begin{pmatrix}\mathbf {b} \\{\boldsymbol {0}}\end{pmatrix}}\right\|_{2}^{2}}for some suitably chosenTikhonov matrixΓ{\displaystyle \Gamma }. In many cases, this matrix is chosen as a scalar multiple of theidentity matrix(Γ=αI{\displaystyle \Gamma =\alpha I}), giving preference to solutions with smallernorms; this is known asL2regularization.[20]In other cases, high-pass operators (e.g., adifference operatoror a weightedFourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous. This regularization improves the conditioning of the problem, thus enabling a direct numerical solution. 
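The stacked least-squares identity above translates directly into code. A minimal NumPy sketch (function names illustrative); with Γ = √λ·I it reproduces the ridge estimator of the previous section:

```python
import numpy as np

def tikhonov_stacked(A, b, Gamma):
    """Minimize ||A x - b||^2 + ||Gamma x||^2 via the augmented least-squares form."""
    A_aug = np.vstack([A, Gamma])
    b_aug = np.concatenate([b, np.zeros(Gamma.shape[0])])
    x_hat, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x_hat

def ridge(A, b, lam):
    """Ridge regression as the special case Gamma = sqrt(lam) * I."""
    return tikhonov_stacked(A, b, np.sqrt(lam) * np.eye(A.shape[1]))
```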
An explicit solution, denoted byx^{\displaystyle {\hat {\mathbf {x} }}}, is given byx^=(ATA+ΓTΓ)−1ATb=((AΓ)T(AΓ))−1(AΓ)T(b0).{\displaystyle {\hat {\mathbf {x} }}=\left(A^{\mathsf {T}}A+\Gamma ^{\mathsf {T}}\Gamma \right)^{-1}A^{\mathsf {T}}\mathbf {b} =\left({\begin{pmatrix}A\\\Gamma \end{pmatrix}}^{\mathsf {T}}{\begin{pmatrix}A\\\Gamma \end{pmatrix}}\right)^{-1}{\begin{pmatrix}A\\\Gamma \end{pmatrix}}^{\mathsf {T}}{\begin{pmatrix}\mathbf {b} \\{\boldsymbol {0}}\end{pmatrix}}.}The effect of regularization may be varied by the scale of matrixΓ{\displaystyle \Gamma }. ForΓ=0{\displaystyle \Gamma =0}this reduces to the unregularized least-squares solution, provided that (ATA)−1exists. Note that in case of acomplex matrixA{\displaystyle A}, as usual the transposeAT{\displaystyle A^{\mathsf {T}}}has to be replaced by theHermitian transposeAH{\displaystyle A^{\mathsf {H}}}. L2regularization is used in many contexts aside from linear regression, such asclassificationwithlogistic regressionorsupport vector machines,[21]and matrix factorization.[22] Since Tikhonov Regularization simply adds a quadratic term to the objective function in optimization problems, it is possible to do so after the unregularised optimisation has taken place. E.g., if the above problem withΓ=0{\displaystyle \Gamma =0}yields the solutionx^0{\displaystyle {\hat {\mathbf {x} }}_{0}}, the solution in the presence ofΓ≠0{\displaystyle \Gamma \neq 0}can be expressed as:x^=Bx^0,{\displaystyle {\hat {\mathbf {x} }}=B{\hat {\mathbf {x} }}_{0},}with the "regularisation matrix"B=(ATA+ΓTΓ)−1ATA{\displaystyle B=\left(A^{\mathsf {T}}A+\Gamma ^{\mathsf {T}}\Gamma \right)^{-1}A^{\mathsf {T}}A}. If the parameter fit comes with a covariance matrix of the estimated parameter uncertaintiesV0{\displaystyle V_{0}}, then the regularisation matrix will beB=(V0−1+ΓTΓ)−1V0−1,{\displaystyle B=(V_{0}^{-1}+\Gamma ^{\mathsf {T}}\Gamma )^{-1}V_{0}^{-1},}and the regularised result will have a new covarianceV=BV0BT.{\displaystyle V=BV_{0}B^{\mathsf {T}}.} In the context of arbitrary likelihood fits, this is valid, as long as the quadratic approximation of the likelihood function is valid. This means that, as long as the perturbation from the unregularised result is small, one can regularise any result that is presented as a best fit point with a covariance matrix. No detailed knowledge of the underlying likelihood function is needed.[23] For general multivariate normal distributions forx{\displaystyle \mathbf {x} }and the data error, one can apply a transformation of the variables to reduce to the case above. Equivalently, one can seek anx{\displaystyle \mathbf {x} }to minimize‖Ax−b‖P2+‖x−x0‖Q2,{\displaystyle \left\|A\mathbf {x} -\mathbf {b} \right\|_{P}^{2}+\left\|\mathbf {x} -\mathbf {x} _{0}\right\|_{Q}^{2},}where we have used‖x‖Q2{\displaystyle \left\|\mathbf {x} \right\|_{Q}^{2}}to stand for the weighted norm squaredxTQx{\displaystyle \mathbf {x} ^{\mathsf {T}}Q\mathbf {x} }(compare with theMahalanobis distance). In the Bayesian interpretationP{\displaystyle P}is the inversecovariance matrixofb{\displaystyle \mathbf {b} },x0{\displaystyle \mathbf {x} _{0}}is theexpected valueofx{\displaystyle \mathbf {x} }, andQ{\displaystyle Q}is the inverse covariance matrix ofx{\displaystyle \mathbf {x} }. The Tikhonov matrix is then given as a factorization of the matrixQ=ΓTΓ{\displaystyle Q=\Gamma ^{\mathsf {T}}\Gamma }(e.g. theCholesky factorization) and is considered awhitening filter. 
This generalized problem has an optimal solutionx∗{\displaystyle \mathbf {x} ^{*}}which can be written explicitly using the formulax∗=(ATPA+Q)−1(ATPb+Qx0),{\displaystyle \mathbf {x} ^{*}=\left(A^{\mathsf {T}}PA+Q\right)^{-1}\left(A^{\mathsf {T}}P\mathbf {b} +Q\mathbf {x} _{0}\right),}or equivalently, whenQisnota null matrix:x∗=x0+(ATPA+Q)−1(ATP(b−Ax0)).{\displaystyle \mathbf {x} ^{*}=\mathbf {x} _{0}+\left(A^{\mathsf {T}}PA+Q\right)^{-1}\left(A^{\mathsf {T}}P\left(\mathbf {b} -A\mathbf {x} _{0}\right)\right).} In some situations, one can avoid using the transposeAT{\displaystyle A^{\mathsf {T}}}, as proposed byMikhail Lavrentyev.[24]For example, ifA{\displaystyle A}is symmetric positive definite, i.e.A=AT>0{\displaystyle A=A^{\mathsf {T}}>0}, so is its inverseA−1{\displaystyle A^{-1}}, which can thus be used to set up the weighted norm squared‖x‖P2=xTA−1x{\displaystyle \left\|\mathbf {x} \right\|_{P}^{2}=\mathbf {x} ^{\mathsf {T}}A^{-1}\mathbf {x} }in the generalized Tikhonov regularization, leading to minimizing‖Ax−b‖A−12+‖x−x0‖Q2{\displaystyle \left\|A\mathbf {x} -\mathbf {b} \right\|_{A^{-1}}^{2}+\left\|\mathbf {x} -\mathbf {x} _{0}\right\|_{Q}^{2}}or, equivalently up to a constant term,xT(A+Q)x−2xT(b+Qx0).{\displaystyle \mathbf {x} ^{\mathsf {T}}\left(A+Q\right)\mathbf {x} -2\mathbf {x} ^{\mathsf {T}}\left(\mathbf {b} +Q\mathbf {x} _{0}\right).} This minimization problem has an optimal solutionx∗{\displaystyle \mathbf {x} ^{*}}which can be written explicitly using the formulax∗=(A+Q)−1(b+Qx0),{\displaystyle \mathbf {x} ^{*}=\left(A+Q\right)^{-1}\left(\mathbf {b} +Q\mathbf {x} _{0}\right),}which is nothing but the solution of the generalized Tikhonov problem whereA=AT=P−1.{\displaystyle A=A^{\mathsf {T}}=P^{-1}.} The Lavrentyev regularization, if applicable, is advantageous to the original Tikhonov regularization, since the Lavrentyev matrixA+Q{\displaystyle A+Q}can be better conditioned, i.e., have a smallercondition number, compared to the Tikhonov matrixATA+ΓTΓ.{\displaystyle A^{\mathsf {T}}A+\Gamma ^{\mathsf {T}}\Gamma .} Typically discrete linear ill-conditioned problems result from discretization ofintegral equations, and one can formulate a Tikhonov regularization in the original infinite-dimensional context. In the above we can interpretA{\displaystyle A}as acompact operatoronHilbert spaces, andx{\displaystyle x}andb{\displaystyle b}as elements in the domain and range ofA{\displaystyle A}. The operatorA∗A+ΓTΓ{\displaystyle A^{*}A+\Gamma ^{\mathsf {T}}\Gamma }is then aself-adjointbounded invertible operator. WithΓ=αI{\displaystyle \Gamma =\alpha I}, this least-squares solution can be analyzed in a special way using thesingular-value decomposition. Given the singular value decompositionA=UΣVT{\displaystyle A=U\Sigma V^{\mathsf {T}}}with singular valuesσi{\displaystyle \sigma _{i}}, the Tikhonov regularized solution can be expressed asx^=VDUTb,{\displaystyle {\hat {x}}=VDU^{\mathsf {T}}b,}whereD{\displaystyle D}has diagonal valuesDii=σiσi2+α2{\displaystyle D_{ii}={\frac {\sigma _{i}}{\sigma _{i}^{2}+\alpha ^{2}}}}and is zero elsewhere. This demonstrates the effect of the Tikhonov parameter on thecondition numberof the regularized problem. 
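For Γ = αI the filter-factor expression above gives a direct way to compute the regularized solution from a single SVD. A minimal sketch (illustrative interface):

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """Tikhonov solution for Gamma = alpha * I using the filter factors
    sigma_i / (sigma_i^2 + alpha^2) from the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    d = s / (s**2 + alpha**2)          # damped inverse singular values
    return Vt.T @ (d * (U.T @ b))
```

Because the SVD is computed once, solutions for many values of α can be obtained cheaply, which is convenient when the parameter is chosen by the criteria discussed below.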
For the generalized case, a similar representation can be derived using ageneralized singular-value decomposition.[25] Finally, it is related to theWiener filter:x^=∑i=1qfiuiTbσivi,{\displaystyle {\hat {x}}=\sum _{i=1}^{q}f_{i}{\frac {u_{i}^{\mathsf {T}}b}{\sigma _{i}}}v_{i},}where the Wiener weights arefi=σi2σi2+α2{\displaystyle f_{i}={\frac {\sigma _{i}^{2}}{\sigma _{i}^{2}+\alpha ^{2}}}}andq{\displaystyle q}is therankofA{\displaystyle A}. The optimal regularization parameterα{\displaystyle \alpha }is usually unknown and often in practical problems is determined by anad hocmethod. A possible approach relies on the Bayesian interpretation described below. Other approaches include thediscrepancy principle,cross-validation,L-curve method,[26]restricted maximum likelihoodandunbiased predictive risk estimator.Grace Wahbaproved that the optimal parameter, in the sense ofleave-one-out cross-validationminimizes[27][28]G=RSSτ2=‖Xβ^−y‖2[Tr⁡(I−X(XTX+α2I)−1XT)]2,{\displaystyle G={\frac {\operatorname {RSS} }{\tau ^{2}}}={\frac {\left\|X{\hat {\beta }}-y\right\|^{2}}{\left[\operatorname {Tr} \left(I-X\left(X^{\mathsf {T}}X+\alpha ^{2}I\right)^{-1}X^{\mathsf {T}}\right)\right]^{2}}},}whereRSS{\displaystyle \operatorname {RSS} }is theresidual sum of squares, andτ{\displaystyle \tau }is theeffective number of degrees of freedom. Using the previous SVD decomposition, we can simplify the above expression:RSS=‖y−∑i=1q(ui′b)ui‖2+‖∑i=1qα2σi2+α2(ui′b)ui‖2,{\displaystyle \operatorname {RSS} =\left\|y-\sum _{i=1}^{q}(u_{i}'b)u_{i}\right\|^{2}+\left\|\sum _{i=1}^{q}{\frac {\alpha ^{2}}{\sigma _{i}^{2}+\alpha ^{2}}}(u_{i}'b)u_{i}\right\|^{2},}RSS=RSS0+‖∑i=1qα2σi2+α2(ui′b)ui‖2,{\displaystyle \operatorname {RSS} =\operatorname {RSS} _{0}+\left\|\sum _{i=1}^{q}{\frac {\alpha ^{2}}{\sigma _{i}^{2}+\alpha ^{2}}}(u_{i}'b)u_{i}\right\|^{2},}andτ=m−∑i=1qσi2σi2+α2=m−q+∑i=1qα2σi2+α2.{\displaystyle \tau =m-\sum _{i=1}^{q}{\frac {\sigma _{i}^{2}}{\sigma _{i}^{2}+\alpha ^{2}}}=m-q+\sum _{i=1}^{q}{\frac {\alpha ^{2}}{\sigma _{i}^{2}+\alpha ^{2}}}.} The probabilistic formulation of aninverse problemintroduces (when all uncertainties are Gaussian) a covariance matrixCM{\displaystyle C_{M}}representing thea prioriuncertainties on the model parameters, and a covariance matrixCD{\displaystyle C_{D}}representing the uncertainties on the observed parameters.[29]In the special case when these two matrices are diagonal and isotropic,CM=σM2I{\displaystyle C_{M}=\sigma _{M}^{2}I}andCD=σD2I{\displaystyle C_{D}=\sigma _{D}^{2}I}, and, in this case, the equations of inverse theory reduce to the equations above, withα=σD/σM{\displaystyle \alpha ={\sigma _{D}}/{\sigma _{M}}}.[30][31] Although at first the choice of the solution to this regularized problem may look artificial, and indeed the matrixΓ{\displaystyle \Gamma }seems rather arbitrary, the process can be justified from aBayesian point of view.[32]Note that for an ill-posed problem one must necessarily introduce some additional assumptions in order to get a unique solution. Statistically, theprior probabilitydistribution ofx{\displaystyle x}is sometimes taken to be amultivariate normal distribution.[33]For simplicity here, the following assumptions are made: the means are zero; their components are independent; the components have the samestandard deviationσx{\displaystyle \sigma _{x}}. The data are also subject to errors, and the errors inb{\displaystyle b}are also assumed to beindependentwith zero mean and standard deviationσb{\displaystyle \sigma _{b}}. 
Under these assumptions the Tikhonov-regularized solution is the most probable solution given the data and the a priori distribution of x{\displaystyle x}, according to Bayes' theorem.[34] If the assumption of normality is replaced by assumptions of homoscedasticity and uncorrelatedness of errors, and if one still assumes zero mean, then the Gauss–Markov theorem entails that the solution is the minimum-variance unbiased linear estimator.[35]
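Returning to the choice of the regularization parameter, Wahba's leave-one-out criterion quoted above can be evaluated cheaply from the SVD of X, identifying the ridge penalty with α² as in those formulas. A minimal sketch (illustrative interface):

```python
import numpy as np

def gcv_score(X, y, alpha):
    """Generalized cross-validation score G(alpha) via the SVD simplification."""
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    m = X.shape[0]
    uy = U.T @ y                        # projections of y onto the left singular vectors
    f = s**2 / (s**2 + alpha**2)        # filter (Wiener) weights
    rss0 = y @ y - uy @ uy              # residual outside the column space of X
    rss = rss0 + np.sum(((1.0 - f) * uy) ** 2)
    tau = m - np.sum(f)                 # effective residual degrees of freedom
    return rss / tau**2

# Illustrative parameter search:
# alphas = np.logspace(-4, 2, 60)
# alpha_best = min(alphas, key=lambda a: gcv_score(X, y, a))
```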
https://en.wikipedia.org/wiki/Tikhonov_regularization#Bayesian_interpretation
Instatisticsandmachine learning,lasso(least absolute shrinkage and selection operator; alsoLasso,LASSOorL1 regularization)[1]is aregression analysismethod that performs bothvariable selectionandregularizationin order to enhance the prediction accuracy and interpretability of the resultingstatistical model. The lasso method assumes that the coefficients of the linear model are sparse, meaning that few of them are non-zero. It was originally introduced ingeophysics,[2]and later byRobert Tibshirani,[3]who coined the term. Lasso was originally formulated forlinear regressionmodels. This simple case reveals a substantial amount about the estimator. These include its relationship toridge regressionandbest subset selectionand the connections between lasso coefficient estimates and so-called soft thresholding. It also reveals that (like standard linear regression) the coefficient estimates do not need to be unique ifcovariatesarecollinear. Though originally defined for linear regression, lasso regularization is easily extended to other statistical models includinggeneralized linear models,generalized estimating equations,proportional hazards models, andM-estimators.[3][4]Lasso's ability to perform subset selection relies on the form of the constraint and has a variety of interpretations including in terms ofgeometry,Bayesian statisticsandconvex analysis. The LASSO is closely related tobasis pursuit denoising. Lasso was introduced in order to improve the prediction accuracy and interpretability of regression models. It selects a reduced set of the known covariates for use in a model.[3][2] Lasso was developed independently in geophysics literature in 1986, based on prior work that used theℓ1{\displaystyle \ell ^{1}}penaltyfor both fitting and penalization of the coefficients. StatisticianRobert Tibshiraniindependently rediscovered and popularized it in 1996, based onBreiman's nonnegative garrote.[2][5] Prior to lasso, the most widely used method for choosing covariates wasstepwise selection. That approach only improves prediction accuracy in certain cases, such as when only a few covariates have a strong relationship with the outcome. However, in other cases, it can increase prediction error.[6] At the time,ridge regressionwas the most popular technique for improving prediction accuracy. Ridge regression improves prediction error byshrinkingthe sum of the squares of theregression coefficientsto be less than a fixed value in order to reduceoverfitting, but it does not perform covariate selection and therefore does not help to make the model more interpretable. Lasso achieves both of these goals by forcing the sum of the absolute value of the regression coefficients to be less than a fixed value, which forces certain coefficients to zero, excluding them from impacting prediction. This idea is similar to ridge regression, which also shrinks the size of the coefficients; however, ridge regression does not set coefficients to zero (and, thus, does not performvariable selection). Consider a sample consisting ofNcases, each of which consists ofpcovariatesand a single outcome. Letyi{\displaystyle y_{i}}be the outcome andxi:=(x1,x2,…,xp)i⊺{\displaystyle x_{i}:=(x_{1},x_{2},\ldots ,x_{p})_{i}^{\intercal }}be the covariate vector for theithcase. 
Then the objective of lasso is to solve:[3]minβ0,β{∑i=1N(yi−β0−xi⊺β)2}{\displaystyle \min _{\beta _{0},\beta }{\biggl \{}\sum _{i=1}^{N}{\bigl (}y_{i}-\beta _{0}-x_{i}^{\intercal }\beta {\bigr )}^{2}{\biggr \}}}subject to∑j=1p|βj|≤t.{\displaystyle \sum _{j=1}^{p}|\beta _{j}|\leq t.} Hereβ0{\displaystyle \beta _{0}}is the constant coefficient,β:=(β1,β2,…,βp){\displaystyle \beta :=(\beta _{1},\beta _{2},\ldots ,\beta _{p})}is the coefficient vector, andt{\displaystyle t}is a prespecified free parameter that determines the degree of regularization. LettingX{\displaystyle X}be the covariate matrix, so thatXij=(xi)j{\displaystyle X_{ij}=(x_{i})_{j}}andxi⊺{\displaystyle x_{i}^{\intercal }}is theithrow ofX{\displaystyle X}, the expression can be written more compactly asminβ0,β{‖y−β0−Xβ‖22}subject to‖β‖1≤t,{\displaystyle \min _{\beta _{0},\beta }\left\{\left\|y-\beta _{0}-X\beta \right\|_{2}^{2}\right\}{\text{ subject to }}\|\beta \|_{1}\leq t,}where‖u‖p=(∑i=1N|ui|p)1/p{\displaystyle \|u\|_{p}={\biggl (}\sum _{i=1}^{N}|u_{i}|^{p}{\biggr )}^{1/p}}is the standardℓp{\displaystyle \ell ^{p}}norm. Denoting the scalar mean of the data pointsxi{\displaystyle x_{i}}byx¯{\displaystyle {\bar {x}}}and the mean of the response variablesyi{\displaystyle y_{i}}byy¯{\displaystyle {\bar {y}}}, the resulting estimate forβ0{\displaystyle \beta _{0}}isβ^0=y¯−x¯⊺β{\displaystyle {\hat {\beta }}_{0}={\bar {y}}-{\bar {x}}^{\intercal }\beta }, so thatyi−β^0−xi⊺β=yi−(y¯−x¯⊺β)−xi⊺β=(yi−y¯)−(xi−x¯)⊺β,{\displaystyle y_{i}-{\hat {\beta }}_{0}-x_{i}^{\intercal }\beta =y_{i}-({\bar {y}}-{\bar {x}}^{\intercal }\beta )-x_{i}^{\intercal }\beta =(y_{i}-{\bar {y}})-(x_{i}-{\bar {x}})^{\intercal }\beta ,}and therefore it is standard to work with variables that have been made zero-mean. Additionally, the covariates are typicallystandardized(∑i=1Nxi2=1){\textstyle {\bigl (}\sum _{i=1}^{N}x_{i}^{2}=1{\bigr )}}so that the solution does not depend on the measurement scale. It can be helpful to rewriteminβ∈Rp{1N‖y−Xβ‖22}subject to‖β‖1≤t.{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{{\frac {1}{N}}\left\|y-X\beta \right\|_{2}^{2}\right\}{\text{ subject to }}\|\beta \|_{1}\leq t.}in the so-calledLagrangianformminβ∈Rp{1N‖y−Xβ‖22+λ‖β‖1}{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{{\frac {1}{N}}\left\|y-X\beta \right\|_{2}^{2}+\lambda \|\beta \|_{1}\right\}}where the exact relationship betweent{\displaystyle t}andλ{\displaystyle \lambda }is data dependent. Some basic properties of the lasso estimator can now be considered. 
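Before turning to those properties, note that the Lagrangian form above can be minimized numerically with a proximal gradient method, whose proximal step is the soft-thresholding operator discussed next. A minimal ISTA sketch (illustrative, not a production solver):

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=1000):
    """Minimize (1/N) * ||y - X b||_2^2 + lam * ||b||_1 by proximal gradient (ISTA)."""
    N, p = X.shape
    beta = np.zeros(p)
    step = N / (2.0 * np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant of the smooth part
    for _ in range(n_iter):
        grad = -2.0 / N * X.T @ (y - X @ beta)
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft thresholding
    return beta
```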
Assuming first that the covariates areorthonormalso thatxi⊺xj=δij,{\displaystyle \ x_{i}^{\intercal }x_{j}=\delta _{ij}\ ,}whereδij{\displaystyle \ \delta _{ij}\ }is theKronecker delta, or, equivalently,X⊺X=I,{\displaystyle \ X^{\intercal }X=I\ ,}then usingsubgradient methodsit can be shown that[3]β^j=SN,λ⁡(β^jOLS)=β^jOLS⋅max{0,1−Nλ|β^jOLS|}{\displaystyle \,{\begin{aligned}{\hat {\beta }}_{j}\ =\ {}&\operatorname {S} _{N,\lambda }\left({\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}\right)\ =\ {\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}\cdot \max \!\left\{\ 0,\ 1-{\frac {\ N\ \lambda \ }{\ {\bigl |}{\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}{\bigr |}\ }}\ \right\}\end{aligned}}\,} whereβ^jOLS=(X⊺X)−1X⊺y=X⊺y.{\displaystyle \quad {\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}\ =\ (X^{\intercal }X)^{-1}X^{\intercal }y\ =\ X^{\intercal }y~.} Sα{\displaystyle \ S_{\alpha }\ }is referred to as thesoft thresholding operator, since it translates values towards zero (making them exactly zero in the limit as they themselves approach zero) instead of setting smaller values to zero and leaving larger ones untouched as thehard thresholding operator, often denotedHα,{\displaystyle \ H_{\alpha }\ ,}would. In ridge regression the objective is to minimizeminβ∈Rp{1N‖y−Xβ‖22+λ‖β‖22}{\displaystyle \ \min _{\beta \in \mathbb {R} ^{p}}\left\{~{\tfrac {\ 1\ }{N}}{\Bigl \|}\ y-X\ \beta \ {\Bigr \|}_{2}^{2}\ +\ \lambda \ {\Bigl \|}\ \beta \ {\Bigr \|}_{2}^{2}~\right\}\ } UsingX⊺X=I{\displaystyle \ X^{\intercal }X=I\ }and the ridge regression formula:β^=(X⊺X+NλI)−1X⊺y,{\displaystyle \ {\hat {\beta }}={\Bigl (}\ X^{\intercal }X\ +\ N\ \lambda \ I\ {\Bigr )}^{-1}X^{\intercal }y\ ,}[7]yields:β^j=(1+Nλ)−1β^jOLS.{\displaystyle \ {\hat {\beta }}_{j}=\left(1+N\ \lambda \right)^{-1}\ {\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}~.} Ridge regression shrinks all coefficients by a uniform factor of(1+Nλ)−1{\displaystyle \ (1+N\lambda )^{-1}\ }and does not set any coefficients to zero.[8] It can also be compared to regression withbest subset selection, in which the goal is to minimizeminβ∈Rp{1N‖y−Xβ‖22+λ‖β‖0}{\displaystyle \ \min _{\beta \in \mathbb {R} ^{p}}\left\{~{\tfrac {1}{N}}{\Bigl \|}\ y-X\beta \ {\Bigr \|}_{2}^{2}\ +\ \lambda \ {\Bigl \|}\ \beta \ {\Bigr \|}_{0}~\right\}\ }where‖⋅‖0{\displaystyle \ \|\cdot \|_{0}\ }is the "ℓ0{\displaystyle \ \ell ^{0}\ }norm", which is defined as‖z‖=m{\displaystyle \ \|z\|=m\ }if exactlymcomponents ofzare nonzero. In this case, it can be shown thatβ^j=HNλ(β^jOLS)=β^jOLS⋅I⁡[|β^jOLS|≥Nλ]{\displaystyle \ {\hat {\beta }}_{j}\ =\ H_{\sqrt {N\lambda \ }}\ \left(\ {\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}\ \right)\ =\ {\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}\cdot \operatorname {\mathbb {I} } \left[~{\bigl |}{\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}{\bigr |}\geq {\sqrt {N\ \lambda \ }}~\right]\ }whereHα{\displaystyle \ H_{\alpha }\ }is again the hard thresholding operator andI{\displaystyle \ \mathbb {I} \ }is anindicator function(it is1if its argument is true and0otherwise). Therefore, the lasso estimates share features of both ridge and best subset selection regression since they both shrink the magnitude of all the coefficients, like ridge regression and set some of them to zero, as in the best subset selection case. Additionally, while ridge regression scales all of the coefficients by a constant factor, lasso instead translates the coefficients towards zero by a constant value and sets them to zero if they reach it. 
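The three estimators in the orthonormal setting differ only in how they transform the OLS coefficients, which can be made concrete in a few lines (thresholds follow the formulas above: Nλ for the lasso and √(Nλ) for best subset selection):

```python
import numpy as np

def soft_threshold(b_ols, thr):
    """Lasso under an orthonormal design: translate towards zero by thr (= N * lam)."""
    return np.sign(b_ols) * np.maximum(np.abs(b_ols) - thr, 0.0)

def hard_threshold(b_ols, thr):
    """Best subset selection under an orthonormal design: keep or kill at thr (= sqrt(N * lam))."""
    return b_ols * (np.abs(b_ols) >= thr)

def ridge_shrink(b_ols, N, lam):
    """Ridge under an orthonormal design: uniform shrinkage by 1 / (1 + N * lam)."""
    return b_ols / (1.0 + N * lam)
```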
In one special case two covariates, sayjandk, are identical for each observation, so thatx(j)=x(k){\displaystyle x_{(j)}=x_{(k)}}, wherex(j),i=x(k),i{\displaystyle x_{(j),i}=x_{(k),i}}. Then the values ofβj{\displaystyle \beta _{j}}andβk{\displaystyle \beta _{k}}that minimize the lasso objective function are not uniquely determined. In fact, if someβ^{\displaystyle {\hat {\beta }}}in whichβ^jβ^k≥0{\displaystyle {\hat {\beta }}_{j}{\hat {\beta }}_{k}\geq 0}, then ifs∈[0,1]{\displaystyle s\in [0,1]}replacingβ^j{\displaystyle {\hat {\beta }}_{j}}bys(β^j+β^k){\displaystyle s({\hat {\beta }}_{j}+{\hat {\beta }}_{k})}andβ^k{\displaystyle {\hat {\beta }}_{k}}by(1−s)(β^j+β^k){\displaystyle (1-s)({\hat {\beta }}_{j}+{\hat {\beta }}_{k})}, while keeping all the otherβ^i{\displaystyle {\hat {\beta }}_{i}}fixed, gives a new solution, so the lasso objective function then has a continuum of valid minimizers.[9]Several variants of the lasso, including theElastic net regularization, have been designed to address this shortcoming. Lasso regularization can be extended to other objective functions such as those forgeneralized linear models,generalized estimating equations,proportional hazards models, andM-estimators.[3][4]Given the objective function1N∑i=1Nf(xi,yi,α,β){\displaystyle {\frac {1}{N}}\sum _{i=1}^{N}f(x_{i},y_{i},\alpha ,\beta )}the lasso regularized version of the estimatorsthe solution tominα,β1N∑i=1Nf(xi,yi,α,β)subject to‖β‖1≤t{\displaystyle \min _{\alpha ,\beta }{\frac {1}{N}}\sum _{i=1}^{N}f(x_{i},y_{i},\alpha ,\beta ){\text{ subject to }}\|\beta \|_{1}\leq t}where onlyβ{\displaystyle \beta }is penalized whileα{\displaystyle \alpha }is free to take any allowed value, just asβ0{\displaystyle \beta _{0}}was not penalized in the basic case. Lasso can set coefficients to zero, while the superficially similar ridge regression cannot. This is due to the difference in the shape of their constraint boundaries. Both lasso and ridge regression can be interpreted as minimizing the same objective functionminβ0,β{1N‖y−β0−Xβ‖22}{\displaystyle \min _{\beta _{0},\beta }\left\{{\frac {1}{N}}\left\|y-\beta _{0}-X\beta \right\|_{2}^{2}\right\}}but with respect to different constraints:‖β‖1≤t{\displaystyle \|\beta \|_{1}\leq t}for lasso and‖β‖22≤t{\displaystyle \|\beta \|_{2}^{2}\leq t}for ridge. The figure shows that the constraint region defined by theℓ1{\displaystyle \ell ^{1}}norm is a square rotated so that its corners lie on the axes (in general across-polytope), while the region defined by theℓ2{\displaystyle \ell ^{2}}norm is a circle (in general ann-sphere), which isrotationallyinvariantand, therefore, has no corners. As seen in the figure, a convex object that lies tangent to the boundary, such as the line shown, is likely to encounter a corner (or a higher-dimensional equivalent) of a hypercube, for which some components ofβ{\displaystyle \beta }are identically zero, while in the case of ann-sphere, the points on the boundary for which some of the components ofβ{\displaystyle \beta }are zero are not distinguished from the others and the convex object is no more likely to contact a point at which some components ofβ{\displaystyle \beta }are zero than one for which none of them are. The lasso can be rescaled so that it becomes easy to anticipate and influence the degree of shrinkage associated with a given value ofλ{\displaystyle \lambda }.[10]It is assumed thatX{\displaystyle X}is standardized with z-scores and thaty{\displaystyle y}is centered (zero mean). 
Letβ0{\displaystyle \beta _{0}}represent the hypothesized regression coefficients and letbOLS{\displaystyle b_{\text{OLS}}}refer to the data-optimized ordinary least squares solutions. We can then define theLagrangianas a tradeoff between the in-sample accuracy of the data-optimized solutions and the simplicity of sticking to the hypothesized values.[11]This results inminβ∈Rp{(y−Xβ)′(y−Xβ)(y−Xβ0)′(y−Xβ0)+2λ∑i=1p|βi−β0,i|qi}{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{{\frac {(y-X\beta )'(y-X\beta )}{(y-X\beta _{0})'(y-X\beta _{0})}}+2\lambda \sum _{i=1}^{p}{\frac {|\beta _{i}-\beta _{0,i}|}{q_{i}}}\right\}}whereqi{\displaystyle q_{i}}is specified below and the "prime" symbol stands for transpose. The first fraction represents relative accuracy, the second fraction relative simplicity, andλ{\displaystyle \lambda }balances between the two. Given a single regressor, relative simplicity can be defined by specifyingqi{\displaystyle q_{i}}as|bOLS−β0|{\displaystyle |b_{\text{OLS}}-\beta _{0}|}, which is the maximum amount of deviation fromβ0{\displaystyle \beta _{0}}whenλ=0{\displaystyle \lambda =0}. Assuming thatβ0=0{\displaystyle \beta _{0}=0}, the solution path can be defined in terms ofR2{\displaystyle R^{2}}:bℓ1={(1−λ/R2)bOLSifλ≤R2,0ifλ>R2.{\displaystyle b_{\ell _{1}}={\begin{cases}(1-\lambda /R^{2})b_{\text{OLS}}&{\mbox{if }}\lambda \leq R^{2},\\0&{\mbox{if }}\lambda >R^{2}.\end{cases}}}Ifλ=0{\displaystyle \lambda =0}, the ordinary least squares solution (OLS) is used. The hypothesized value ofβ0=0{\displaystyle \beta _{0}=0}is selected ifλ{\displaystyle \lambda }is bigger thanR2{\displaystyle R^{2}}. Furthermore, ifR2=1{\displaystyle R^{2}=1}, thenλ{\displaystyle \lambda }represents the proportional influence ofβ0=0{\displaystyle \beta _{0}=0}. In other words,λ×100%{\displaystyle \lambda \times 100\%}measures in percentage terms the minimal amount of influence of the hypothesized value relative to the data-optimized OLS solution. If anℓ2{\displaystyle \ell _{2}}-norm is used to penalize deviations from zero given a single regressor, the solution path is given bybℓ2=(1+λR2(1−λ))−1bOLS.{\displaystyle b_{\ell _{2}}=\left(1+{\frac {\lambda }{R^{2}(1-\lambda )}}\right)^{-1}b_{\text{OLS}}.}Likebℓ1{\displaystyle b_{\ell _{1}}},bℓ2{\displaystyle b_{\ell _{2}}}moves in the direction of the point(λ=R2,b=0){\displaystyle (\lambda =R^{2},b=0)}whenλ{\displaystyle \lambda }is close to zero; but unlikebℓ1{\displaystyle b_{\ell _{1}}}, the influence ofR2{\displaystyle R^{2}}diminishes inbℓ2{\displaystyle b_{\ell _{2}}}ifλ{\displaystyle \lambda }increases (see figure).Given multiple regressors, the moment that a parameter is activated (i.e. allowed to deviate fromβ0{\displaystyle \beta _{0}}) is also determined by a regressor's contribution toR2{\displaystyle R^{2}}accuracy. First,R2=1−(y−Xb)′(y−Xb)(y−Xβ0)′(y−Xβ0).{\displaystyle R^{2}=1-{\frac {(y-Xb)'(y-Xb)}{(y-X\beta _{0})'(y-X\beta _{0})}}.}AnR2{\displaystyle R^{2}}of 75% means that in-sample accuracy improves by 75% if the unrestricted OLS solutions are used instead of the hypothesizedβ0{\displaystyle \beta _{0}}values. The individual contribution of deviating from each hypothesis can be computed with thep{\displaystyle p}xp{\displaystyle p}matrixR⊗=(X′y~0)(X′y~0)′(X′X)−1(y~0′y~0)−1,{\displaystyle R^{\otimes }=(X'{\tilde {y}}_{0})(X'{\tilde {y}}_{0})'(X'X)^{-1}({\tilde {y}}_{0}'{\tilde {y}}_{0})^{-1},}wherey~0=y−Xβ0{\displaystyle {\tilde {y}}_{0}=y-X\beta _{0}}. 
Ifb=bOLS{\displaystyle b=b_{\text{OLS}}}whenR2{\displaystyle R^{2}}is computed, then the diagonal elements ofR⊗{\displaystyle R^{\otimes }}sum toR2{\displaystyle R^{2}}. The diagonalR⊗{\displaystyle R^{\otimes }}values may be smaller than 0 or, less often, larger than 1. If regressors are uncorrelated, then theith{\displaystyle i^{th}}diagonal element ofR⊗{\displaystyle R^{\otimes }}simply corresponds to ther2{\displaystyle r^{2}}value betweenxi{\displaystyle x_{i}}andy{\displaystyle y}. A rescaled version of the adaptive lasso of can be obtained by settingqadaptive lasso,i=|bOLS,i−β0,i|{\displaystyle q_{{\mbox{adaptive lasso}},i}=|b_{{\text{OLS}},i}-\beta _{0,i}|}.[12]If regressors are uncorrelated, the moment that theith{\displaystyle i^{th}}parameter is activated is given by theith{\displaystyle i^{th}}diagonal element ofR⊗{\displaystyle R^{\otimes }}. Assuming for convenience thatβ0{\displaystyle \beta _{0}}is a vector of zeros,bi={(1−λ/Rii⊗)bOLS,iifλ≤Rii⊗,0ifλ>Rii⊗.{\displaystyle b_{i}={\begin{cases}(1-\lambda /R_{ii}^{\otimes })b_{{\text{OLS}},i}&{\text{if }}\lambda \leq R_{ii}^{\otimes },\\0&{\text{if }}\lambda >R_{ii}^{\otimes }.\end{cases}}}That is, if regressors are uncorrelated,λ{\displaystyle \lambda }again specifies the minimal influence ofβ0{\displaystyle \beta _{0}}. Even when regressors are correlated, the first time that a regression parameter is activated occurs whenλ{\displaystyle \lambda }is equal to the highest diagonal element ofR⊗{\displaystyle R^{\otimes }}. These results can be compared to a rescaled version of the lasso by definingqlasso,i=1p∑l|bOLS,l−β0,l|{\displaystyle q_{{\mbox{lasso}},i}={\frac {1}{p}}\sum _{l}|b_{{\text{OLS}},l}-\beta _{0,l}|}, which is the average absolute deviation ofbOLS{\displaystyle b_{\text{OLS}}}fromβ0{\displaystyle \beta _{0}}. Assuming that regressors are uncorrelated, then the moment of activation of theith{\displaystyle i^{th}}regressor is given byλ~lasso,i=1pRi⊗∑l=1pRl⊗.{\displaystyle {\tilde {\lambda }}_{{\text{lasso}},i}={\frac {1}{p}}{\sqrt {R_{i}^{\otimes }}}\sum _{l=1}^{p}{\sqrt {R_{l}^{\otimes }}}.} Forp=1{\displaystyle p=1}, the moment of activation is again given byλ~lasso,i=R2{\displaystyle {\tilde {\lambda }}_{{\text{lasso}},i}=R^{2}}. Ifβ0{\displaystyle \beta _{0}}is a vector of zeros and a subset ofpB{\displaystyle p_{B}}relevant parameters are equally responsible for a perfect fit ofR2=1{\displaystyle R^{2}=1}, then this subset is activated at aλ{\displaystyle \lambda }value of1p{\displaystyle {\frac {1}{p}}}. The moment of activation of a relevant regressor then equals1p1pBpB1pB=1p{\displaystyle {\frac {1}{p}}{\frac {1}{\sqrt {p_{B}}}}p_{B}{\frac {1}{\sqrt {p_{B}}}}={\frac {1}{p}}}. In other words, the inclusion of irrelevant regressors delays the moment that relevant regressors are activated by this rescaled lasso. The adaptive lasso and the lasso are special cases of a '1ASTc' estimator. The latter only groups parameters together if the absolute correlation among regressors is larger than a user-specified value.[10] Just as ridge regression can be interpreted as linear regression for which the coefficients have been assigned normalprior distributions, lasso can be interpreted as linear regression for which the coefficients haveLaplace prior distributions.[13]The Laplace distribution is sharplypeakedat zero (its first derivative is discontinuous at zero) and it concentrates its probability mass closer to zero than does the normal distribution. 
This provides an alternative explanation of why lasso tends to set some coefficients to zero, while ridge regression does not.[3] Lasso can also be viewed as a convex relaxation of the best subset selection regression problem, which is to find the subset of≤k{\displaystyle \leq k}covariates that results in the smallest value of the objective function for some fixedk≤n{\displaystyle k\leq n}, where n is the total number of covariates. The "ℓ0{\displaystyle \ell ^{0}}norm",‖⋅‖0{\displaystyle \|\cdot \|_{0}}, (the number of nonzero entries of a vector), is the limiting case of "ℓp{\displaystyle \ell ^{p}}norms", of the form‖x‖p=(∑i=1n|xj|p)1/p{\displaystyle \textstyle \|x\|_{p}=\left(\sum _{i=1}^{n}|x_{j}|^{p}\right)^{1/p}}(where the quotation marks signify that these are not really norms forp<1{\displaystyle p<1}since‖⋅‖p{\displaystyle \|\cdot \|_{p}}is not convex forp<1{\displaystyle p<1}, so the triangle inequality does not hold). Therefore, since p = 1 is the smallest value for which the "ℓp{\displaystyle \ell ^{p}}norm" is convex (and therefore actually a norm), lasso is, in some sense, the best convex approximation to the best subset selection problem, since the region defined by‖x‖1≤t{\displaystyle \|x\|_{1}\leq t}is theconvex hullof the region defined by‖x‖p≤t{\displaystyle \|x\|_{p}\leq t}forp<1{\displaystyle p<1}. Lasso variants have been created in order to remedy limitations of the original technique and to make the method more useful for particular problems. Almost all of these focus on respecting or exploiting dependencies among the covariates. Elastic net regularizationadds an additional ridge regression-like penalty that improves performance when the number of predictors is larger than the sample size, allows the method to select strongly correlated variables together, and improves overall prediction accuracy.[9] Group lasso allows groups of related covariates to be selected as a single unit, which can be useful in settings where it does not make sense to include some covariates without others.[14]Further extensions of group lasso perform variable selection within individual groups (sparse group lasso) and allow overlap between groups (overlap group lasso).[15][16] Fused lasso can account for the spatial or temporal characteristics of a problem, resulting in estimates that better match system structure.[17]Lasso-regularized models can be fit using techniques includingsubgradient methods,least-angle regression(LARS), andproximal gradient methods. Determining the optimal value for the regularization parameter is an important part of ensuring that the model performs well; it is typically chosen usingcross-validation. In 2005, Zou and Hastie introduced theelastic net.[9]Whenp>n(the number of covariates is greater than the sample size) lasso can select onlyncovariates (even when more are associated with the outcome) and it tends to select one covariate from any set of highly correlated covariates. Additionally, even whenn>p, ridge regression tends to perform better given strongly correlated covariates. 
The elastic net extends lasso by adding an additionalℓ2{\displaystyle \ell ^{2}}penalty term givingminβ∈Rp{‖y−Xβ‖22+λ1‖β‖1+λ2‖β‖22},{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{\left\|y-X\beta \right\|_{2}^{2}+\lambda _{1}\|\beta \|_{1}+\lambda _{2}\|\beta \|_{2}^{2}\right\},}which is equivalent to solvingminβ0,β{‖y−β0−Xβ‖22}subject to(1−α)‖β‖1+α‖β‖22≤t,whereα=λ2λ1+λ2.{\displaystyle {\begin{aligned}\min _{\beta _{0},\beta }\left\{\left\|y-\beta _{0}-X\beta \right\|_{2}^{2}\right\}&{\text{ subject to }}(1-\alpha )\|\beta \|_{1}+\alpha \|\beta \|_{2}^{2}\leq t,\\&{\text{ where }}\alpha ={\frac {\lambda _{2}}{\lambda _{1}+\lambda _{2}}}.\end{aligned}}} This problem can be written in a simple lasso formminβ∗∈Rp{‖y∗−X∗β∗‖22+λ∗‖β∗‖1}{\displaystyle \min _{\beta ^{*}\in \mathbb {R} ^{p}}\left\{\left\|y^{*}-X^{*}\beta ^{*}\right\|_{2}^{2}+\lambda ^{*}\|\beta ^{*}\|_{1}\right\}}lettingX(n+p)×p∗=(1+λ2)−1/2(Xλ21/2Ip×p),{\displaystyle X_{(n+p)\times p}^{*}=(1+\lambda _{2})^{-1/2}{\binom {X}{\lambda _{2}^{1/2}I_{p\times p}}},}y(n+p)∗=(y0p),λ∗=λ11+λ2,{\displaystyle y_{(n+p)}^{*}={\binom {y}{0^{p}}},\qquad \lambda ^{*}={\frac {\lambda _{1}}{\sqrt {1+\lambda _{2}}}},}β∗=1+λ2β.{\displaystyle \beta ^{*}={\sqrt {1+\lambda _{2}}}\beta .} Thenβ^=β^∗1+λ2{\displaystyle {\hat {\beta }}={\frac {{\hat {\beta }}^{*}}{\sqrt {1+\lambda _{2}}}}}, which, when the covariates are orthogonal to each other, givesβ^j=β^j∗,OLS1+λ2max(0,1−λ∗|β^j∗,OLS|)=β^jOLS1+λ2max(0,1−λ1|β^jOLS|)=(1+λ2)−1β^jlasso.{\displaystyle {\hat {\beta }}_{j}={\frac {{\hat {\beta }}{}_{j}^{\!\;*,{\text{OLS}}}}{\sqrt {1+\lambda _{2}}}}\max {\Biggl (}0,1-{\frac {\lambda ^{*}}{{\bigl |}{\hat {\beta }}{}_{j}^{\!\;*,{\text{OLS}}}{\bigr |}}}{\Biggr )}={\frac {{\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}}{1+\lambda _{2}}}\max {\Biggl (}0,1-{\frac {\lambda _{1}}{{\bigl |}{\hat {\beta }}{}_{j}^{\!\;{\text{OLS}}}{\bigr |}}}{\Biggr )}=(1+\lambda _{2})^{-1}{\hat {\beta }}{}_{j}^{\text{lasso}}.} So the result of the elastic net penalty is a combination of the effects of the lasso and ridge penalties. Returning to the general case, the fact that the penalty function is now strictly convex means that ifx(j)=x(k){\displaystyle x_{(j)}=x_{(k)}},β^j=β^k{\displaystyle {\hat {\beta }}_{j}={\hat {\beta }}_{k}}, which is a change from lasso.[9]In general, ifβ^jβk^>0{\displaystyle {\hat {\beta }}_{j}{\hat {\beta _{k}}}>0}|β^j−βk^|‖y‖≤λ2−12(1−ρjk),whereρ=X⊺X,{\displaystyle {\frac {|{\hat {\beta }}_{j}-{\hat {\beta _{k}}}|}{\|y\|}}\leq \lambda _{2}^{-1}{\sqrt {2(1-\rho _{jk})}},{\text{ where }}\rho =X^{\intercal }X,}is the sample correlation matrix because thex{\displaystyle x}'s are normalized. Therefore, highly correlated covariates tend to have similar regression coefficients, with the degree of similarity depending on both‖y‖1{\displaystyle \|y\|_{1}}andλ2{\displaystyle \lambda _{2}}, which is different from lasso. This phenomenon, in which strongly correlated covariates have similar regression coefficients, is referred to as the grouping effect. Grouping is desirable since, in applications such as tying genes to a disease, finding all the associated covariates is preferable, rather than selecting one from each set of correlated covariates, as lasso often does.[9]In addition, selecting only one from each group typically results in increased prediction error, since the model is less robust (which is why ridge regression often outperforms lasso). 
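The augmented-data identities above reduce the elastic net to a single lasso problem, so any lasso routine can be reused. A minimal sketch; `lasso_solver` is an assumed callable that minimizes ||y − Xb||² + λ||b||₁ in the un-normalized form used in this subsection:

```python
import numpy as np

def elastic_net_via_lasso(X, y, lam1, lam2, lasso_solver):
    """Solve the elastic net through its reduction to a lasso problem."""
    p = X.shape[1]
    X_star = np.vstack([X, np.sqrt(lam2) * np.eye(p)]) / np.sqrt(1.0 + lam2)
    y_star = np.concatenate([y, np.zeros(p)])
    lam_star = lam1 / np.sqrt(1.0 + lam2)
    beta_star = lasso_solver(X_star, y_star, lam_star)
    return beta_star / np.sqrt(1.0 + lam2)   # undo the beta* = sqrt(1 + lam2) * beta rescaling
```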
In 2006, Yuan and Lin introduced the group lasso to allow predefined groups of covariates to jointly be selected into or out of a model.[14]This is useful in many settings, perhaps most obviously when a categorical variable is coded as a collection of binary covariates. In this case, group lasso can ensure that all the variables encoding the categorical covariate are included or excluded together. Another setting in which grouping is natural is in biological studies. Since genes and proteins often lie in known pathways, which pathways are related to an outcome may be more significant than whether individual genes are. The objective function for the group lasso is a natural generalization of the standard lasso objectiveminβ∈Rp{‖y−∑j=1JXjβj‖22+λ∑j=1J‖βj‖Kj},‖z‖Kj=(z⊺Kjz)1/2{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}{\biggl \{}{\biggl \|}y-\sum _{j=1}^{J}X_{j}\beta _{j}{\biggr \|}_{2}^{2}+\lambda \sum _{j=1}^{J}\|\beta _{j}\|_{K_{j}}{\biggr \}},\qquad \|z\|_{K_{j}}=(z^{\intercal }K_{j}z)^{1/2}}where thedesign matrixX{\displaystyle X}and covariate vectorβ{\displaystyle \beta }have been replaced by a collection of design matricesXj{\displaystyle X_{j}}and covariate vectorsβj{\displaystyle \beta _{j}}, one for each of the J groups. Additionally, the penalty term is now a sum overℓ2{\displaystyle \ell ^{2}}norms defined by the positive definite matricesKj{\displaystyle K_{j}}. If each covariate is in its own group andKj=I{\displaystyle K_{j}=I}, then this reduces to the standard lasso, while if there is only a single group andK1=I{\displaystyle K_{1}=I}, it reduces to ridge regression. Since the penalty reduces to anℓ2{\displaystyle \ell ^{2}}norm on the subspaces defined by each group, it cannot select out only some of the covariates from a group, just as ridge regression cannot. However, because the penalty is the sum over the different subspace norms, as in the standard lasso, the constraint has some non-differential points, which correspond to some subspaces being identically zero. Therefore, it can set the coefficient vectors corresponding to some subspaces to zero, while only shrinking others. However, it is possible to extend the group lasso to the so-called sparse group lasso, which can select individual covariates within a group, by adding an additionalℓ1{\displaystyle \ell ^{1}}penalty to each group subspace.[15]Another extension, group lasso with overlap allows covariates to be shared across groups, e.g., if a gene were to occur in two pathways.[16] The "gglasso" package by in R, allows for fast and efficient implementation of Group LASSO.[18] In some cases, the phenomenon under study may have important spatial or temporal structure that must be considered during analysis, such as time series or image-based data. In 2005, Tibshirani and colleagues introduced the fused lasso to extend the use of lasso to this type of data.[17]The fused lasso objective function isminβ{1N∑i=1N(yi−xi⊺β)2}subject to∑j=1p|βj|≤t1and∑j=2p|βj−βj−1|≤t2.{\displaystyle {\begin{aligned}&\min _{\beta }{\biggl \{}{\frac {1}{N}}\sum _{i=1}^{N}\left(y_{i}-x_{i}^{\intercal }\beta \right)^{2}{\biggr \}}\\[4pt]&{\text{ subject to }}\sum _{j=1}^{p}|\beta _{j}|\leq t_{1}{\text{ and }}\sum _{j=2}^{p}|\beta _{j}-\beta _{j-1}|\leq t_{2}.\end{aligned}}} The first constraint is the lasso constraint, while the second directly penalizes large changes with respect to the temporal or spatial structure, which forces the coefficients to vary smoothly to reflect the system's underlying logic. 
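For the group lasso with K_j = I described above, the penalty's proximal operator shrinks each group vector as a block, so a proximal gradient iteration closely parallels the lasso case. A minimal sketch, assuming `groups` is a list of index arrays that partitions the coefficients (an illustrative convention):

```python
import numpy as np

def group_soft_threshold(v, thr):
    """Block soft-thresholding: shrink the whole group vector v towards zero."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= thr else (1.0 - thr / norm) * v

def group_lasso_pgd(X, y, groups, lam, n_iter=1000):
    """Proximal gradient sketch for min ||y - X b||_2^2 + lam * sum_j ||b_j||_2."""
    beta = np.zeros(X.shape[1])
    step = 1.0 / (2.0 * np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant of the smooth part
    for _ in range(n_iter):
        z = beta + 2.0 * step * (X.T @ (y - X @ beta))   # gradient step
        for g in groups:                                 # blockwise proximal step
            beta[g] = group_soft_threshold(z[g], step * lam)
    return beta
```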
Clustered lasso[19]is a generalization of fused lasso that identifies and groups relevant covariates based on their effects (coefficients). The basic idea is to penalize the differences between the coefficients so that nonzero ones cluster. This can be modeled using the following regularization:∑i<jp|βi−βj|≤t2.{\displaystyle \sum _{i<j}^{p}|\beta _{i}-\beta _{j}|\leq t_{2}.} In contrast, variables can be clustered into highly correlated groups, and then a single representative covariate can be extracted from each cluster.[20] Algorithms exist that solve the fused lasso problem, and some generalizations of it. Algorithms can solve it exactly in a finite number of operations.[21] Lasso, elastic net, group and fused lasso construct the penalty functions from theℓ1{\displaystyle \ell ^{1}}andℓ2{\displaystyle \ell ^{2}}norms (with weights, if necessary). The bridge regression utilises generalℓp{\displaystyle \ell ^{p}}norms (p≥1{\displaystyle p\geq 1}) and quasinorms (0<p<1{\displaystyle 0<p<1}).[23]For example, forp=1/2 the analogue of lasso objective in the Lagrangian form is to solveminβ∈Rp{1N‖y−Xβ‖22+λ‖β‖1/2},{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{{\frac {1}{N}}\left\|y-X\beta \right\|_{2}^{2}+\lambda {\sqrt {\|\beta \|_{1/2}}}\right\},}where‖β‖1/2=(∑j=1p|βj|)2{\displaystyle \|\beta \|_{1/2}={\biggl (}\sum _{j=1}^{p}{\sqrt {|\beta _{j}|}}{\biggr )}^{2}} It is claimed that the fractional quasi-normsℓp{\displaystyle \ell ^{p}}(0<p<1{\displaystyle 0<p<1}) provide more meaningful results in data analysis both theoretically and empirically.[24]The non-convexity of these quasi-norms complicates the optimization problem. To solve this problem, an expectation-minimization procedure is developed[25]and implemented[22]for minimization of functionminβ∈Rp{1N‖y−Xβ‖22+λ∑j=1pϑ(βj2)},{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{{\frac {1}{N}}\left\|y-X\beta \right\|_{2}^{2}+\lambda \sum _{j=1}^{p}\vartheta (\beta _{j}^{2})\right\},}whereϑ(γ){\displaystyle \vartheta (\gamma )}is an arbitrary concave monotonically increasing function (for example,ϑ(γ)=γ{\displaystyle \vartheta (\gamma )={\sqrt {\gamma }}}gives the lasso penalty andϑ(γ)=γ1/4{\displaystyle \vartheta (\gamma )=\gamma ^{1/4}}gives theℓ1/2{\displaystyle \ell ^{1/2}}penalty). The efficient algorithm for minimization is based on piece-wisequadratic approximationof subquadratic growth (PQSQ).[25] The adaptive lasso was introduced by Zou in 2006 for linear regression[12]and by Zhang and Lu in 2007 for proportional hazards regression.[26] The prior lasso was introduced for generalized linear models by Jiang et al. in 2016 to incorporate prior information, such as the importance of certain covariates.[27]In prior lasso, such information is summarized into pseudo responses (called prior responses)y^p{\displaystyle {\hat {y}}^{\mathrm {p} }}and then an additional criterion function is added to the usual objective function with a lasso penalty. 
Without loss of generality, in linear regression, the new objective function can be written asminβ∈Rp{1N‖y−Xβ‖22+1Nη‖y^p−Xβ‖22+λ‖β‖1},{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{{\frac {1}{N}}\left\|y-X\beta \right\|_{2}^{2}+{\frac {1}{N}}\eta \left\|{\hat {y}}^{\mathrm {p} }-X\beta \right\|_{2}^{2}+\lambda \|\beta \|_{1}\right\},}which is equivalent tominβ∈Rp{1N‖y~−Xβ‖22+λ1+η‖β‖1},{\displaystyle \min _{\beta \in \mathbb {R} ^{p}}\left\{{\frac {1}{N}}\left\|{\tilde {y}}-X\beta \right\|_{2}^{2}+{\frac {\lambda }{1+\eta }}\|\beta \|_{1}\right\},} the usual lasso objective function with the responsesy{\displaystyle y}being replaced by a weighted average of the observed responses and the prior responsesy~=(y+ηy^p)/(1+η){\displaystyle {\tilde {y}}=(y+\eta {\hat {y}}^{\mathrm {p} })/(1+\eta )}(called the adjusted response values by the prior information). In prior lasso, the parameterη{\displaystyle \eta }is called a balancing parameter, in that it balances the relative importance of the data and the prior information. In the extreme case ofη=0{\displaystyle \eta =0}, prior lasso is reduced to lasso. Ifη=∞{\displaystyle \eta =\infty }, prior lasso will solely rely on the prior information to fit the model. Furthermore, the balancing parameterη{\displaystyle \eta }has another appealing interpretation: it controls the variance ofβ{\displaystyle \beta }in its prior distribution from a Bayesian viewpoint. Prior lasso is more efficient in parameter estimation and prediction (with a smaller estimation error and prediction error) when the prior information is of high quality, and is robust to the low quality prior information with a good choice of the balancing parameterη{\displaystyle \eta }. Lasso can be run in anensemble. This can be especially useful when the data is high-dimensional. The procedure involves running lasso on each of several random subsets of the data and collating the results.[28][29][30] The loss function of the lasso is not differentiable, but a wide variety of techniques from convex analysis and optimization theory have been developed to compute the solutions path of the lasso. These include coordinate descent,[31]subgradient methods,least-angle regression(LARS),[32]and proximal gradient methods.Subgradientmethods are the natural generalization of traditional methods such asgradient descentandstochastic gradient descentto the case in which the objective function is not differentiable at all points. LARS is a method that is closely tied to lasso models, and in many cases allows them to be fit efficiently, though it may not perform well in all circumstances. LARS generates complete solution paths.[32]Proximal methods have become popular because of their flexibility and performance and are an area of active research. The choice of method will depend on the particular lasso variant, the data and the available resources. However, proximal methods generally perform well. The "glmnet" package in R, where "glm" is a reference to "generalized linear models" and "net" refers to the "net" from "elastic net" provides an extremely efficient way to implement LASSO and some of its variants.[33][34][35] The "celer" package in Python provides a highly efficient solver for the Lasso problem, often outperforming traditional solvers like scikit-learn by up to 100 times in certain scenarios, particularly with high-dimensional datasets. This package leverages dual extrapolation techniques to achieve its performance gains.[36][37]The celer package is available atGitHub. 
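The equivalence above means prior lasso needs no special solver: one runs an ordinary lasso on the adjusted responses with a rescaled penalty. A minimal sketch; `lasso_solver` is an assumed callable minimizing (1/N)||y − Xb||² + λ||b||₁:

```python
def prior_lasso(X, y, y_prior, eta, lam, lasso_solver):
    """Prior lasso via the adjusted-response equivalence given above."""
    y_tilde = (y + eta * y_prior) / (1.0 + eta)      # blend data and prior responses
    return lasso_solver(X, y_tilde, lam / (1.0 + eta))
```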
Choosing the regularization parameter (λ{\displaystyle \lambda }) is a fundamental part of lasso. A good value is essential to the performance of lasso since it controls the strength of shrinkage and variable selection, which, in moderation can improve both prediction accuracy and interpretability. However, if the regularization becomes too strong, important variables may be omitted and coefficients may be shrunk excessively, which can harm both predictive capacity and inferencing.Cross-validationis often used to find the regularization parameter. Information criteria such as theBayesian information criterion(BIC) and theAkaike information criterion(AIC) might be preferable to cross-validation, because they are faster to compute and their performance is less volatile in small samples.[38]An information criterion selects the estimator's regularization parameter by maximizing a model's in-sample accuracy while penalizing its effective number of parameters/degrees of freedom. Zou et al. proposed to measure the effective degrees of freedom by counting the number of parameters that deviate from zero.[39]The degrees of freedom approach was considered flawed by Kaufman and Rosset[40]and Janson et al.,[41]because a model's degrees of freedom might increase even when it is penalized harder by the regularization parameter. As an alternative, the relative simplicity measure defined above can be used to count the effective number of parameters.[38]For the lasso, this measure is given byP^=∑i=1p|βi−β0,i|1p∑l|bOLS,l−β0,l|,{\displaystyle {\hat {\mathcal {P}}}=\sum _{i=1}^{p}{\frac {|\beta _{i}-\beta _{0,i}|}{{\frac {1}{p}}\sum _{l}|b_{{\text{OLS}},l}-\beta _{0,l}|}},}which monotonically increases from zero top{\displaystyle p}as the regularization parameter decreases from∞{\displaystyle \infty }to zero. LASSO has been applied in economics and finance, and was found to improve prediction and to select sometimes neglected variables, for example in corporate bankruptcy prediction literature,[42]or high growth firms prediction.[43]
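In practice, cross-validated selection of the regularization parameter is available off the shelf, for example with scikit-learn (illustrative data; the package's `alpha` plays the role of λ up to its own scaling convention):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))                   # standardized design (illustrative)
y = X[:, 0] - 2.0 * X[:, 1] + rng.standard_normal(100)

model = LassoCV(cv=5).fit(X, y)                      # 5-fold cross-validation over a grid of alphas
print(model.alpha_)                                  # selected regularization parameter
print(np.flatnonzero(model.coef_))                   # covariates retained by the fitted lasso
```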
https://en.wikipedia.org/wiki/Lasso_regression
Instatistics,least-angle regression (LARS)is an algorithm for fittinglinear regressionmodels to high-dimensional data, developed byBradley Efron,Trevor Hastie,Iain JohnstoneandRobert Tibshirani.[1] Suppose we expect a response variable to be determined by a linear combination of a subset of potential covariates. Then the LARS algorithm provides a means of producing an estimate of which variables to include, as well as their coefficients. Instead of giving a vector result, the LARS solution consists of a curve denoting the solution for each value of theL1 normof the parameter vector. The algorithm is similar to forwardstepwise regression, but instead of including variables at each step, the estimated parameters are increased in a direction equiangular to each one's correlations with the residual. The advantages of the LARS method are: The disadvantages of the LARS method include: The basic steps of the Least-angle regression algorithm are: Least-angle regression is implemented inRvia thelarspackage, inPythonwith thescikit-learnpackage, and inSASvia theGLMSELECTprocedure.
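For example, scikit-learn's `lars_path` returns the piecewise-linear coefficient path described above (illustrative data):

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
y = X[:, 2] + 0.5 * X[:, 5] + 0.1 * rng.standard_normal(50)

# method="lar" gives plain least-angle regression; method="lasso" gives the lasso path
alphas, active, coefs = lars_path(X, y, method="lar")
print(active)          # indices of the selected (active) covariates
print(coefs.shape)     # coefficient path: one column per step of the algorithm
```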
https://en.wikipedia.org/wiki/Least-angle_regression
Least-squares adjustmentis a model for the solution of anoverdetermined systemof equations based on the principle ofleast squaresofobservation residuals. It is used extensively in the disciplines ofsurveying,geodesy, andphotogrammetry—the field ofgeomatics, collectively. There are three forms of least squares adjustment:parametric,conditional, andcombined: Clearly, parametric and conditional adjustments correspond to the more general combined case whenf(X,Y) =h(X) -Yandf(X,Y) =g(Y), respectively. Yet the special cases warrant simpler solutions, as detailed below. Often in the literature,Ymay be denotedL. The equalities above only hold for the estimated parametersX^{\displaystyle {\hat {X}}}and observationsY^{\displaystyle {\hat {Y}}}, thusf(X^,Y^)=0{\displaystyle f\left({\hat {X}},{\hat {Y}}\right)=0}. In contrast, measured observationsY~{\displaystyle {\tilde {Y}}}and approximate parametersX~{\displaystyle {\tilde {X}}}produce a nonzeromisclosure:w~=f(X~,Y~).{\displaystyle {\tilde {w}}=f\left({\tilde {X}},{\tilde {Y}}\right).}One can proceed toTaylor series expansionof the equations, which results in theJacobiansordesign matrices: the first one,A=∂f/∂X;{\displaystyle A=\partial {f}/\partial {X};}and the second one,B=∂f/∂Y.{\displaystyle B=\partial {f}/\partial {Y}.}The linearized model then reads:w~+Ax^+By^=0,{\displaystyle {\tilde {w}}+A{\hat {x}}+B{\hat {y}}=0,}wherex^=X^−X~{\displaystyle {\hat {x}}={\hat {X}}-{\tilde {X}}}are estimatedparameter correctionsto thea priorivalues, andy^=Y^−Y~{\displaystyle {\hat {y}}={\hat {Y}}-{\tilde {Y}}}are post-fitobservationresiduals. In the parametric adjustment, the second design matrix is an identity,B=-I, and the misclosure vector can be interpreted as the pre-fit residuals,y~=w~=h(X~)−Y~{\displaystyle {\tilde {y}}={\tilde {w}}=h({\tilde {X}})-{\tilde {Y}}}, so the system simplifies to:Ax^=y^−y~,{\displaystyle A{\hat {x}}={\hat {y}}-{\tilde {y}},}which is in the form ofordinary least squares. In the conditional adjustment, the first design matrix is null,A= 0. For the more general cases,Lagrange multipliersare introduced to relate the two Jacobian matrices, and transform theconstrainedleast squares problem into an unconstrained one (albeit a larger one). In any case, their manipulation leads to theX^{\displaystyle {\hat {X}}}andY^{\displaystyle {\hat {Y}}}vectors as well as the respective parameters and observationsa posterioricovariance matrices. Given the matrices and vectors above, their solution is found via standard least-squares methods; e.g., forming thenormal matrixand applyingCholesky decomposition, applying theQR factorizationdirectly to the Jacobian matrix,iterative methodsfor very large systems, etc. Ifrank deficiencyis encountered, it can often be rectified by the inclusion of additional equations imposing constraints on the parameters and/or observations, leading toconstrained least squares.
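A minimal sketch of one linearized parametric adjustment step via the normal equations; the weight matrix P and the sign conventions are illustrative additions, not taken from the text above:

```python
import numpy as np

def parametric_adjustment(A, w, P=None):
    """One parametric least-squares adjustment step.

    A is the Jacobian of h with respect to X at the approximate parameters,
    w = h(X_approx) - Y_measured is the misclosure (pre-fit residuals), and
    P is an optional observation weight matrix (identity if omitted).
    """
    if P is None:
        P = np.eye(len(w))
    N = A.T @ P @ A                                  # normal matrix
    x_hat = np.linalg.solve(N, -A.T @ P @ w)         # estimated parameter corrections
    v_hat = A @ x_hat + w                            # post-fit observation residuals
    cov_x = np.linalg.inv(N)                         # a posteriori covariance, up to the variance factor
    return x_hat, v_hat, cov_x
```

For a nonlinear model h, the step is iterated: the approximate parameters are updated with the corrections and the Jacobian and misclosure are re-evaluated until the corrections become negligible.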
https://en.wikipedia.org/wiki/Adjustment_of_observations
TheGittins indexis a measure of the reward that can be achieved through a givenstochastic processwith certain properties, namely: the process has an ultimate termination state and evolves with an option, at each intermediate state, of terminating. Upon terminating at a given state, the reward achieved is the sum of the probabilistic expected rewards associated with every state from the actual terminating state to the ultimate terminal state, inclusive. The index is arealscalar. To illustrate the theory we can take two examples from a developing sector, such as from electricity generating technologies: wind power and wave power. If we are presented with the two technologies when they are both proposed as ideas we cannot say which will be better in the long run as we have no data, as yet, to base our judgments on.[1]It would be easy to say that wave power would be too problematic to develop as it seems easier to put up many wind turbines than to make the long floating generators, tow them out to sea and lay the cables necessary. If we were to make a judgment call at that early time in development we could be condemning one technology to being put on the shelf and the other would be developed and put into operation. If we develop both technologies we would be able to make a judgment call on each by comparing the progress of each technology at a set time interval such as every three months. The decisions we make about investment in the next stage would be based on those results.[1] In a paper in 1979 calledBandit Processes and Dynamic Allocation IndicesJohn C. Gittinssuggests a solution for problems such as this. He takes the two basic functions of a "schedulingProblem" and a "multi-armed bandit" problem[2]and shows how these problems can be solved usingDynamic allocation indices. He first takes the "Scheduling Problem" and reduces it to a machine which has to perform jobs and has a set time period, every hour or day for example, to finish each job in. The machine is given a reward value, based on finishing or not within the time period, and a probability value of whether it will finish or not for each job is calculated. The problem is "to decide which job to process next at each stage so as to maximize the total expected reward."[1]He then moves on to the "Multi–armed bandit problem" where each pull on a "one armed bandit" lever is allocated a reward function for a successful pull, and a zero reward for an unsuccessful pull. The sequence of successes forms aBernoulli processand has an unknown probability of success. There are multiple "bandits" and the distribution of successful pulls is calculated and different for each machine. Gittins states that the problem here is "to decide which arm to pull next at each stage so as to maximize the total expected reward from an infinite sequence of pulls."[1] Gittins says that "Both the problems described above involve a sequence of decisions, each of which is based on more information than its predecessors, and these both problems may be tackled by dynamic allocation indices."[2] In applied mathematics, the "Gittins index" is arealscalarvalue associated to the state of astochastic processwith a reward function and with a probability of termination. It is a measure of the reward that can be achieved by the process evolving from that state on, under the probability that it will be terminated in future. 
The "index policy" induced by the Gittins index, consisting of choosing at any time the stochastic process with the currently highest Gittins index, is the solution of somestopping problemssuch as the one of dynamic allocation, where a decision-maker has to maximize the total reward by distributing a limited amount of effort to a number of competing projects, each returning a stochastic reward. If the projects are independent from each other and only one project at a time may evolve, the problem is calledmulti-armed bandit(one type ofStochastic schedulingproblems) and the Gittins index policy is optimal. If multiple projects can evolve, the problem is calledRestless banditand the Gittins index policy is a known good heuristic but no optimal solution exists in general. In fact, in general this problem isNP-completeand it is generally accepted that no feasible solution can be found. Questions about the optimal stopping policies in the context of clinical trials have been open from the 1940s and in the 1960s a few authors analyzed simple models leading to optimal index policies,[3]but it was only in the 1970s thatGittinsand his collaborators demonstrated in a Markovian framework that the optimal solution of the general case is an index policy whose "dynamic allocation index" is computable in principle for every state of each project as a function of the single project's dynamics.[2][4]In parallel to Gittins,Martin Weitzmanestablished the same result in the economics literature.[5] Soon after the seminal paper of Gittins,Peter Whittle[6]demonstrated that the index emerges as aLagrange multiplierfrom adynamic programmingformulation of the problem calledretirement processand conjectured that the same index would be a good heuristic in a more general setup namedRestless bandit. The question of how to actually calculate the index forMarkov chainswas first addressed by Varaiya and his collaborators[7]with an algorithm that computes the indexes from the largest first down to the smallest and by Chen and Katehakis[8]who showed that standardLPcould be used to calculate the index of a state without requiring its calculation for all states with higher index values. LCM Kallenberg[9]provided a parametric LP implementation to compute the indices for all states of a Markov chain. Further, Katehakis and Veinott[10]demonstrated that the index is the expected reward of aMarkov decision processconstructed over the Markov chain and known asRestart in Stateand can be calculated exactly by solving that problem with thepolicy iterationalgorithm, or approximately with thevalue iterationalgorithm. This approach also has the advantage of calculating the index for one specific state without having to calculate all the greater indexes and it is valid under more general state space conditions. A faster algorithm for the calculation of all indices was obtained in 2004 by Sonin[11]as a consequence of hiselimination algorithmfor the optimal stopping of a Markov chain. In this algorithm the termination probability of the process may depend on the current state rather than being a fixed factor. A faster algorithm was proposed in 2007 by Niño-Mora[12]by exploiting the structure of a parametric simplex to reduce the computational effort of the pivot steps and thereby achieving the same complexity as theGaussian eliminationalgorithm. Cowan, W. 
and Katehakis (2014),[13]provide a solution to the problem, with potentially non-Markovian, uncountable state space reward processes, under frameworks in which, either the discount factors may be non-uniform and vary over time, or the periods of activation of each bandit may be not be fixed or uniform, subject instead to a possibly stochastic duration of activation before a change to a different bandit is allowed. The solution is based on generalized restart-in-state indices. The classical definition by Gittins et al. is: whereZ(⋅){\displaystyle Z(\cdot )}is a stochastic process,R(i){\displaystyle R(i)}is the utility (also called reward) associated to the discrete statei{\displaystyle i},β<1{\displaystyle \beta <1}is the probability that the stochastic process does not terminate, and⟨⋅⟩c{\displaystyle \langle \cdot \rangle _{c}}is the conditional expectation operator givenc: withχ{\displaystyle \chi }being thedomainofX. The dynamic programming formulation in terms of retirement process, given by Whittle, is: wherev(i,k){\displaystyle v(i,k)}is thevalue function with the same notation as above. It holds that IfZ(⋅){\displaystyle Z(\cdot )}is a Markov chain with rewards, the interpretation ofKatehakisand Veinott (1987) associates to every state the action of restarting from one arbitrary statei{\displaystyle i}, thereby constructing a Markov decision processMi{\displaystyle M_{i}}. The Gittins Index of that statei{\displaystyle i}is the highest total reward which can be achieved onMi{\displaystyle M_{i}}if one can always choose to continue or restart from that statei{\displaystyle i}. whereπ{\displaystyle \pi }indicates a policy overMi{\displaystyle M_{i}}. It holds that If the probability of survivalβ(i){\displaystyle \beta (i)}depends on the statei{\displaystyle i}, a generalization introduced by Sonin[11](2008) defines the Gittins indexα(i){\displaystyle \alpha (i)}as the maximum discounted total reward per chance of termination. where Ifβt{\displaystyle \beta ^{t}}is replaced by∏j=0t−1β[Z(j)]{\displaystyle \prod _{j=0}^{t-1}\beta [Z(j)]}in the definitions ofν(i){\displaystyle \nu (i)},w(i){\displaystyle w(i)}andh(i){\displaystyle h(i)}, then it holds that this observation leads Sonin[11]to conclude thatα(i){\displaystyle \alpha (i)}and notν(i){\displaystyle \nu (i)}is the "true meaning" of the Gittins index. In queueing theory, Gittins index is used to determine the optimal scheduling of jobs, e.g., in an M/G/1 queue. The mean completion time of jobs under a Gittins index schedule can be determined using the SOAP approach.[14]Note that the dynamics of the queue are intrinsically Markovian, and stochasticity is due to the arrival and service processes. This is in contrast to most of the works in the learning literature, where stochasticity is explicitly accounted through a noise term. While conventional Gittins indices induce a policy to optimize the accrual of a reward, a common problem setting consists of optimizing the ratio of accrued rewards. For example, this is a case for systems to maximize bandwidth, consisting of data over time, or minimize power consumption, consisting of energy over time. This class of problems is different from the optimization of a semi-Markov reward process, because the latter one might select states with a disproportionate sojourn time just for accruing a higher reward. Instead, it corresponds to the class of linear-fractional markov reward optimization problem. 
However, a detrimental aspect of such ratio optimizations is that, once the achieved ratio in some state is high, the optimization might select states leading to a low ratio because they bear a high probability of termination, so that the process is likely to terminate before the ratio drops significantly. A problem setting that prevents such early terminations consists of defining the optimization as maximization of the future ratio seen by each state. An indexation is conjectured to exist for this problem; it is computable as a simple variation on existing restart-in-state or state elimination algorithms and has been found to work well in practice.[15]
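As a rough illustration of the restart-in-state interpretation of Katehakis and Veinott described above, the sketch below computes indices for a small discounted Markov reward chain by value iteration. The chain (P, R), the discount β = 0.9, and the final normalization by (1 − β) are illustrative assumptions rather than details taken from the text; the normalization is one common convention that makes the index of a state with constant reward r equal to r, matching the classical ratio definition.

import numpy as np

# Sketch: Gittins indices of a discounted Markov reward chain via the
# restart-in-state formulation, solved by value iteration.  The chain
# (P, R) and the discount beta below are illustrative assumptions.
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.2, 0.8]])
R = np.array([1.0, 2.0, 0.5])
beta = 0.9

def gittins_index(i, n_iter=5000):
    """Value of the MDP in which, in every state, one may either continue
    the chain or restart from state i; (1 - beta) times the value at i is
    used here as the (normalized) Gittins index of state i."""
    V = np.zeros(len(R))
    for _ in range(n_iter):
        continue_val = R + beta * P @ V          # keep running the chain
        restart_val = R[i] + beta * P[i] @ V     # restart from state i
        V = np.maximum(continue_val, restart_val)
    return (1 - beta) * V[i]

indices = [gittins_index(i) for i in range(len(R))]
print("Gittins indices:", np.round(indices, 4))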
https://en.wikipedia.org/wiki/Gittins_index
In the field ofcalculus of variationsinmathematics, the method ofLagrange multipliers on Banach spacescan be used to solve certain infinite-dimensionalconstrainedoptimization problems. The method is a generalization of the classical method ofLagrange multipliersas used to findextremaof afunctionof finitely many variables. LetXandYberealBanach spaces. LetUbe anopen subsetofXand letf:U→Rbe a continuouslydifferentiable function. Letg:U→Ybe another continuously differentiable function, theconstraint: the objective is to find the extremal points (maxima or minima) offsubject to the constraint thatgis zero. Suppose thatu0is aconstrained extremumoff, i.e. an extremum offon Suppose also that theFréchet derivativeDg(u0) :X→Yofgatu0is asurjectivelinear map. Then there exists aLagrange multiplierλ:Y→RinY∗, thedual spacetoY, such that Since Df(u0) is an element of the dual spaceX∗, equation (L) can also be written as where (Dg(u0))∗(λ) is thepullbackofλby Dg(u0), i.e. the action of theadjointmap (Dg(u0))∗onλ, as defined by In the case thatXandYare both finite-dimensional (i.e.linearly isomorphictoRmandRnfor somenatural numbersmandn) then writing out equation (L) inmatrixform shows thatλis the usual Lagrange multiplier vector; in the casen= 1,λis the usual Lagrange multiplier, a real number. In many optimization problems, one seeks to minimize a functional defined on an infinite-dimensional space such as a Banach space. Consider, for example, theSobolev spaceX=H01([−1,+1];R){\textstyle X=H_{0}^{1}([-1,+1];\mathbb {R} )}and the functionalf:X→R{\textstyle f:X\rightarrow \mathbb {R} }given by Without any constraint, the minimum value offwould be 0, attained byu0(x) = 0 for allxbetween −1 and +1. One could also consider the constrained optimization problem, to minimizefamong all thoseu∈Xsuch that the mean value ofuis +1. In terms of the above theorem, the constraintgwould be given by However this problem can be solved as in the finite dimensional case since the Lagrange multiplierλ{\displaystyle \lambda }is only a scalar. This article incorporates material fromLagrange multipliers on Banach spacesonPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
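The theorem can be checked numerically in the finite-dimensional case mentioned above. The sketch below is an assumed, illustrative setup (a random affine constraint map A and vector b): it minimizes f(x) = ||x||² subject to g(x) = Ax − b = 0 by solving the stationarity-plus-feasibility system, and then verifies that Df(u0) = (Dg(u0))*λ holds for the computed multiplier vector.

import numpy as np

# Finite-dimensional illustration of equation (L): minimize f(x) = ||x||^2
# subject to g(x) = A x - b = 0.  Here Dg(u0) = A is surjective, and the
# theorem guarantees a multiplier vector lambda with Df(u0) = (Dg(u0))* lambda,
# i.e. 2*u0 = A^T lambda.  A and b are arbitrary test data.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))      # constraint map, Y = R^2, X = R^5
b = rng.standard_normal(2)

# Stationarity and feasibility give the linear system
#   [ 2I  -A^T ] [ u      ]   [ 0 ]
#   [ A     0  ] [ lambda ] = [ b ]
n, m = A.shape[1], A.shape[0]
K = np.block([[2 * np.eye(n), -A.T],
              [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(n), b]))
u0, lam = sol[:n], sol[n:]

print("constraint residual g(u0):", A @ u0 - b)          # ~ 0
print("Df(u0) - (Dg(u0))* lambda:", 2 * u0 - A.T @ lam)   # ~ 0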
https://en.wikipedia.org/wiki/Lagrange_multipliers_on_Banach_spaces
Instatistics, thescore testassessesconstraintsonstatistical parametersbased on thegradientof thelikelihood function—known as thescore—evaluated at the hypothesized parameter value under thenull hypothesis. Intuitively, if the restricted estimator is near themaximumof the likelihood function, the score should not differ from zero by more thansampling error. While thefinite sample distributionsof score tests are generally unknown, they have an asymptoticχ2-distributionunder the null hypothesis as first proved byC. R. Raoin 1948,[1]a fact that can be used to determinestatistical significance. Since function maximization subject to equality constraints is most conveniently done using a Lagrangean expression of the problem, the score test can be equivalently understood as a test of themagnitudeof theLagrange multipliersassociated with the constraints where, again, if the constraints are non-binding at the maximum likelihood, the vector of Lagrange multipliers should not differ from zero by more than sampling error. The equivalence of these two approaches was first shown byS. D. Silveyin 1959,[2]which led to the nameLagrange Multiplier (LM) testthat has become more commonly used, particularly in econometrics, sinceBreuschandPagan's much-cited 1980 paper.[3] The main advantage of the score test over theWald testandlikelihood-ratio testis that the score test only requires the computation of the restricted estimator.[4]This makes testing feasible when the unconstrained maximum likelihood estimate is aboundary pointin theparameter space.[citation needed]Further, because the score test only requires the estimation of the likelihood function under the null hypothesis, it is less specific than the likelihood ratio test about the alternative hypothesis.[5] LetL{\displaystyle L}be thelikelihood functionwhich depends on a univariate parameterθ{\displaystyle \theta }and letx{\displaystyle x}be the data. The scoreU(θ){\displaystyle U(\theta )}is defined as TheFisher informationis[6] where ƒ is the probability density. The statistic to testH0:θ=θ0{\displaystyle {\mathcal {H}}_{0}:\theta =\theta _{0}}isS(θ0)=U(θ0)2I(θ0){\displaystyle S(\theta _{0})={\frac {U(\theta _{0})^{2}}{I(\theta _{0})}}} which has anasymptotic distributionofχ12{\displaystyle \chi _{1}^{2}}, whenH0{\displaystyle {\mathcal {H}}_{0}}is true. While asymptotically identical, calculating the LM statistic using theouter-gradient-product estimatorof the Fisher information matrix can lead to bias in small samples.[7] Note that some texts use an alternative notation, in which the statisticS∗(θ)=S(θ){\displaystyle S^{*}(\theta )={\sqrt {S(\theta )}}}is tested against a normal distribution. This approach is equivalent and gives identical results. whereL{\displaystyle L}is thelikelihood function,θ0{\displaystyle \theta _{0}}is the value of the parameter of interest under the null hypothesis, andC{\displaystyle C}is a constant set depending on the size of the test desired (i.e. the probability of rejectingH0{\displaystyle H_{0}}ifH0{\displaystyle H_{0}}is true; seeType I error). The score test is the most powerful test for small deviations fromH0{\displaystyle H_{0}}. To see this, consider testingθ=θ0{\displaystyle \theta =\theta _{0}}versusθ=θ0+h{\displaystyle \theta =\theta _{0}+h}. By theNeyman–Pearson lemma, the most powerful test has the form Taking the log of both sides yields The score test follows making the substitution (byTaylor seriesexpansion) and identifying theC{\displaystyle C}above withlog⁡(K){\displaystyle \log(K)}. 
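A minimal sketch of the univariate score test, assuming a Poisson sample and a null value θ0 for its rate; the data are simulated, and the closed forms used for U and I are the standard ones for this particular model rather than formulas taken from the source.

import numpy as np
from scipy import stats

# Sketch of a univariate score test: H0: theta = theta0 for the rate of a
# Poisson sample.  For this model U(theta) = (sum(x) - n*theta)/theta and
# the Fisher information is I(theta) = n/theta, so
#   S(theta0) = U(theta0)^2 / I(theta0) = (sum(x) - n*theta0)^2 / (n*theta0),
# which is asymptotically chi-squared with 1 degree of freedom under H0.
rng = np.random.default_rng(1)
x = rng.poisson(lam=2.3, size=200)
n, theta0 = len(x), 2.0

U = (x.sum() - n * theta0) / theta0          # score evaluated at theta0
I = n / theta0                               # Fisher information at theta0
S = U**2 / I                                 # score (Lagrange multiplier) statistic
p_value = stats.chi2.sf(S, df=1)
print(f"S = {S:.3f}, p-value = {p_value:.4f}")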
If the null hypothesis is true, thelikelihood ratio test, theWald test, and the Score test are asymptotically equivalent tests of hypotheses.[8][9]When testingnested models, the statistics for each test then converge to a Chi-squared distribution with degrees of freedom equal to the difference in degrees of freedom in the two models. If the null hypothesis is not true, however, the statistics converge to a noncentral chi-squared distribution with possibly different noncentrality parameters. A more general score test can be derived when there is more than one parameter. Suppose thatθ^0{\displaystyle {\widehat {\theta }}_{0}}is themaximum likelihoodestimate ofθ{\displaystyle \theta }under the null hypothesisH0{\displaystyle H_{0}}whileU{\displaystyle U}andI{\displaystyle I}are respectively, the score vector and the Fisher information matrix. Then asymptotically underH0{\displaystyle H_{0}}, wherek{\displaystyle k}is the number of constraints imposed by the null hypothesis and and This can be used to testH0{\displaystyle H_{0}}. The actual formula for the test statistic depends on which estimator of the Fisher information matrix is being used.[10] In many situations, the score statistic reduces to another commonly used statistic.[11] Inlinear regression, the Lagrange multiplier test can be expressed as a function of theF-test.[12] When the data follows a normal distribution, the score statistic is the same as thet statistic.[clarification needed] When the data consists of binary observations, the score statistic is the same as the chi-squared statistic in thePearson's chi-squared test.
https://en.wikipedia.org/wiki/Lagrange_multiplier_test
In the field ofmathematical optimization,Lagrangian relaxationis arelaxation methodwhichapproximatesa difficult problem ofconstrained optimizationby a simpler problem. A solution to the relaxed problem is an approximate solution to the original problem, and provides useful information. The method penalizes violations of inequality constraints using aLagrange multiplier, which imposes a cost on violations. These added costs are used instead of the strict inequality constraints in the optimization. In practice, this relaxed problem can often be solved more easily than the original problem. The problem of maximizing the Lagrangian function of the dual variables (the Lagrangian multipliers) is the Lagrangiandual problem. Suppose we are given alinear programming problem, withx∈Rn{\displaystyle x\in \mathbb {R} ^{n}}andA∈Rm,n{\displaystyle A\in \mathbb {R} ^{m,n}}, of the following form: If we split the constraints inA{\displaystyle A}such thatA1∈Rm1,n{\displaystyle A_{1}\in \mathbb {R} ^{m_{1},n}},A2∈Rm2,n{\displaystyle A_{2}\in \mathbb {R} ^{m_{2},n}}andm1+m2=m{\displaystyle m_{1}+m_{2}=m}we may write the system: We may introduce the constraint (2) into the objective: If we letλ=(λ1,…,λm2){\displaystyle \lambda =(\lambda _{1},\ldots ,\lambda _{m_{2}})}be nonnegative weights, we get penalized if we violate the constraint (2), and we are also rewarded if we satisfy the constraint strictly. The above system is called the Lagrangian relaxation of our original problem. Of particular use is the property that for any fixed set ofλ~⪰0{\displaystyle {\tilde {\lambda }}\succeq 0}values, the optimal result to the Lagrangian relaxation problem will be no smaller than the optimal result to the original problem. To see this, letx^{\displaystyle {\hat {x}}}be the optimal solution to the original problem, and letx¯{\displaystyle {\bar {x}}}be the optimal solution to the Lagrangian relaxation. We can then see that The first inequality is true becausex^{\displaystyle {\hat {x}}}is feasible in the original problem and the second inequality is true becausex¯{\displaystyle {\bar {x}}}is the optimal solution to the Lagrangian relaxation. The above inequality tells us that if we minimize the maximum value we obtain from the relaxed problem, we obtain a tighter limit on the objective value of our original problem. Thus we can address the original problem by instead exploring the partially dualized problem where we defineP(λ){\displaystyle P(\lambda )}as A Lagrangian relaxation algorithm thus proceeds to explore the range of feasibleλ{\displaystyle \lambda }values while seeking to minimize the result returned by the innerP{\displaystyle P}problem. Each value returned byP{\displaystyle P}is a candidate upper bound to the problem, the smallest of which is kept as the best upper bound. If we additionally employ a heuristic, probably seeded by thex¯{\displaystyle {\bar {x}}}values returned byP{\displaystyle P}, to find feasible solutions to the original problem, then we can iterate until the best upper bound and the cost of the best feasible solution converge to a desired tolerance. Theaugmented Lagrangian methodis quite similar in spirit to the Lagrangian relaxation method, but adds an extra term, and updates the dual parametersλ{\displaystyle \lambda }in a more principled manner. It was introduced in the 1970s and has been used extensively. Thepenalty methoddoes not use dual variables but rather removes the constraints and instead penalizes deviations from the constraint. 
The penalty method is conceptually simple, but augmented Lagrangian methods are usually preferred in practice because the penalty method suffers from ill-conditioning issues.
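A minimal sketch of Lagrangian relaxation with a projected subgradient update, for a small made-up linear problem in which a single "hard" constraint is relaxed and simple box constraints are kept; the problem data, the step sizes and the iteration count are illustrative choices, not anything prescribed by the source.

import numpy as np

# Sketch of Lagrangian relaxation with a subgradient update.  Problem
# (made up for illustration): maximize c^T x subject to the "hard"
# constraint a^T x <= b and the easy box constraints 0 <= x <= 1.
# Relaxing a^T x <= b with a multiplier lam >= 0 gives the inner problem
#   P(lam) = max_{0 <= x <= 1} c^T x + lam * (b - a^T x),
# which splits by coordinate, and P(lam) upper-bounds the true optimum.
c = np.array([4.0, 3.0, 5.0, 6.0])
a = np.array([2.0, 1.0, 3.0, 4.0])
b = 5.0

def inner(lam):
    x = (c - lam * a > 0).astype(float)        # coordinate-wise maximizer
    return c @ x + lam * (b - a @ x), x

lam, best_bound = 0.0, np.inf
for k in range(1, 200):
    bound, x_bar = inner(lam)
    best_bound = min(best_bound, bound)        # best (smallest) upper bound so far
    subgrad = b - a @ x_bar                    # subgradient of P at lam
    lam = max(0.0, lam - (1.0 / k) * subgrad)  # projected subgradient step

print("best Lagrangian upper bound:", round(best_bound, 3))
print("final multiplier:", round(lam, 3))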
https://en.wikipedia.org/wiki/Lagrangian_relaxation
Instatistics,explained variationmeasures the proportion to which a mathematical model accounts for the variation (dispersion) of a given data set. Often, variation is quantified asvariance; then, the more specific termexplained variancecan be used. The complementary part of the total variation is calledunexplainedorresidualvariation; likewise, when discussing variance as such, this is referred to asunexplainedorresidual variance. Following Kent (1983),[1]we use the Fraser information (Fraser 1965)[2] whereg(r){\displaystyle g(r)}is the probability density of a random variableR{\displaystyle R\,}, andf(r;θ){\displaystyle f(r;\theta )\,}withθ∈Θi{\displaystyle \theta \in \Theta _{i}}(i=0,1{\displaystyle i=0,1\,}) are two families of parametric models. Model family 0 is the simpler one, with a restricted parameter spaceΘ0⊂Θ1{\displaystyle \Theta _{0}\subset \Theta _{1}}. Parameters are determined bymaximum likelihood estimation, The information gain of model 1 over model 0 is written as where a factor of 2 is included for convenience. Γ is always nonnegative; it measures the extent to which the best model of family 1 is better than the best model of family 0 in explainingg(r). Assume a two-dimensional random variableR=(X,Y){\displaystyle R=(X,Y)}whereXshall be considered as an explanatory variable, andYas a dependent variable. Models of family 1 "explain"Yin terms ofX, whereas in family 0,XandYare assumed to be independent. We define the randomness ofYbyD(Y)=exp⁡[−2F(θ0)]{\displaystyle D(Y)=\exp[-2F(\theta _{0})]}, and the randomness ofY, givenX, byD(Y∣X)=exp⁡[−2F(θ1)]{\displaystyle D(Y\mid X)=\exp[-2F(\theta _{1})]}. Then, can be interpreted as proportion of the data dispersion which is "explained" byX. The fraction of variance unexplained is an established concept in the context oflinear regression. The usual definition of thecoefficient of determinationis based on the fundamental concept of explained variance. LetXbe a random vector, andYa random variable that is modeled by a normal distribution with centreμ=ΨTX{\displaystyle \mu =\Psi ^{\textrm {T}}X}. In this case, the above-derived proportion of explained variationρC2{\displaystyle \rho _{C}^{2}}equals the squaredcorrelation coefficientR2{\displaystyle R^{2}}. Note the strong model assumptions: the centre of theYdistribution must be a linear function ofX, and for any givenx, theYdistribution must be normal. In other situations, it is generally not justified to interpretR2{\displaystyle R^{2}}as proportion of explained variance. Explained variance is routinely used inprincipal component analysis. The relation to the Fraser–Kent information gain remains to be clarified. As the fraction of "explained variance" equals the squared correlation coefficientR2{\displaystyle R^{2}}, it shares all the disadvantages of the latter: it reflects not only the quality of the regression, but also the distribution of the independent (conditioning) variables. In the words of one critic: "ThusR2{\displaystyle R^{2}}gives the 'percentage of variance explained' by the regression, an expression that, for most social scientists, is of doubtful meaning but great rhetorical value. If this number is large, the regression gives a good fit, and there is little point in searching for additional variables. Other regression equations on different data sets are said to be less satisfactory or less powerful if theirR2{\displaystyle R^{2}}is lower. 
Nothing about R2{\displaystyle R^{2}} supports these claims".[3]: 58 And, after constructing an example where R2{\displaystyle R^{2}} is enhanced just by jointly considering data from two different populations: "'Explained variance' explains nothing."[3][page needed][4]: 183
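For the linear-regression case discussed above, a short sketch of the fraction of explained variance computed as the coefficient of determination; the simulated data and the comparison with the squared correlation coefficient are purely illustrative.

import numpy as np

# Sketch: "explained variance" as the coefficient of determination R^2 for
# a simple linear regression y ~ a + b*x.  Data are simulated; under the
# linear-normal model assumptions described above, R^2 equals the squared
# correlation between x and y.
rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=200)
y = 1.5 + 0.8 * x + rng.normal(scale=1.0, size=200)

b, a = np.polyfit(x, y, 1)                   # least-squares slope and intercept
y_hat = a + b * x
ss_res = np.sum((y - y_hat) ** 2)            # unexplained (residual) variation
ss_tot = np.sum((y - y.mean()) ** 2)         # total variation
r2 = 1.0 - ss_res / ss_tot
print("R^2 =", round(r2, 3))
print("squared correlation =", round(np.corrcoef(x, y)[0, 1] ** 2, 3))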
https://en.wikipedia.org/wiki/Explained_variance
In themathematicalsubfield ofnumerical analysis,numerical stabilityis a generally desirable property ofnumerical algorithms. The precise definition of stability depends on the context: one important context isnumerical linear algebra, and another is algorithms for solving ordinary and partial differential equations by discrete approximation. In numerical linear algebra, the principal concern is instabilities caused by proximity to singularities of various kinds, such as very small or nearly collidingeigenvalues. On the other hand, in numerical algorithms for differential equations the concern is the growth of round-off errors and/or small fluctuations in initial data which might cause a large deviation of final answer from the exact solution.[citation needed] Some numerical algorithms may damp out the small fluctuations (errors) in the input data; others might magnify such errors. Calculations that can be proven not to magnify approximation errors are callednumerically stable. One of the common tasks of numerical analysis is to try to select algorithms which arerobust– that is to say, do not produce a wildly different result for a very small change in the input data. Anoppositephenomenon isinstability. Typically, an algorithm involves an approximative method, and in some cases one could prove that the algorithm would approach the right solution in some limit (when using actual real numbers, not floating point numbers). Even in this case, there is no guarantee that it would converge to the correct solution, because the floating-point round-off or truncation errors can be magnified, instead of damped, causing the deviation from the exact solution to grow exponentially.[1] There are different ways to formalize the concept of stability. The following definitions of forward, backward, and mixed stability are often used innumerical linear algebra. Consider the problem to be solved by the numerical algorithm as afunctionfmapping the dataxto the solutiony. The result of the algorithm, sayy*, will usually deviate from the "true" solutiony. The main causes of error areround-off errorandtruncation error. Theforward errorof the algorithm is the difference between the result and the solution; in this case,Δy=y* −y. Thebackward erroris the smallestΔxsuch thatf(x+ Δx) =y*; in other words, the backward error tells us what problem the algorithm actually solved. The forward and backward error are related by thecondition number: the forward error is at most as big in magnitude as the condition number multiplied by the magnitude of the backward error. In many cases, it is more natural to consider therelative error|Δx||x|{\displaystyle {\frac {|\Delta x|}{|x|}}}instead of the absolute errorΔx. The algorithm is said to bebackward stableif the backward error is small for all inputsx. Of course, "small" is a relative term and its definition will depend on the context. Often, we want the error to be of the same order as, or perhaps only a feworders of magnitudebigger than, theunit round-off. The usual definition of numerical stability uses a more general concept, calledmixed stability, which combines the forward error and the backward error. An algorithm is stable in this sense if it solves a nearby problem approximately, i.e., if there exists aΔxsuch that bothΔxis small andf(x+ Δx) −y*is small. Hence, a backward stable algorithm is always stable. An algorithm isforward stableif its forward error divided by the condition number of the problem is small. 
This means that an algorithm is forward stable if it has a forward error of magnitude similar to some backward stable algorithm. The above definitions are particularly relevant in situations where truncation errors are not important. In other contexts, for instance when solvingdifferential equations, a different definition of numerical stability is used. Innumerical ordinary differential equations, various concepts of numerical stability exist, for instanceA-stability. They are related to some concept of stability in thedynamical systemssense, oftenLyapunov stability. It is important to use a stable method when solving astiff equation. Yet another definition is used innumerical partial differential equations. An algorithm for solving a linear evolutionarypartial differential equationis stable if thetotal variationof the numerical solution at a fixed time remains bounded as the step size goes to zero. TheLax equivalence theoremstates that an algorithmconvergesif it isconsistentandstable(in this sense). Stability is sometimes achieved by includingnumerical diffusion. Numerical diffusion is a mathematical term which ensures that roundoff and other errors in the calculation get spread out and do not add up to cause the calculation to "blow up".Von Neumann stability analysisis a commonly used procedure for the stability analysis offinite difference schemesas applied to linear partial differential equations. These results do not hold for nonlinear PDEs, where a general, consistent definition of stability is complicated by many properties absent in linear equations. Computing the square root of 2 (which is roughly 1.41421) is awell-posed problem. Many algorithms solve this problem by starting with an initial approximationx0to2{\displaystyle {\sqrt {2}}}, for instancex0= 1.4, and then computing improved guessesx1,x2, etc. One such method is the famousBabylonian method, which is given byxk+1= (xk+ 2/xk)/2. Another method, called "method X", is given byxk+1= (xk2− 2)2+xk.[note 1]A few iterations of each scheme are calculated in table form below, with initial guessesx0= 1.4 andx0= 1.42. Observe that the Babylonian method converges quickly regardless of the initial guess, whereas Method X converges extremely slowly with initial guessx0= 1.4 and diverges for initial guessx0= 1.42. Hence, the Babylonian method is numerically stable, while Method X is numerically unstable. Numerical stability is affected by the number of the significant digits the machine keeps. If a machine is used that keeps only the four most significant decimal digits, a good example on loss of significance can be given by the two equivalent functions by comparing the two results above, it is clear thatloss of significance(caused here bycatastrophic cancellationfrom subtracting approximations to the nearby numbers501{\displaystyle {\sqrt {501}}}and500{\displaystyle {\sqrt {500}}}, despite the subtraction being computed exactly) has a huge effect on the results, even though both functions are equivalent, as shown below The desired value, computed using infinite precision, is 11.174755...[note 2]
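The square-root example above can be reproduced directly in double precision. The sketch below re-runs the Babylonian iteration and "Method X" from the two starting values mentioned in the text; the particular iteration counts printed are arbitrary choices, and the printed numbers are a re-computation rather than the table from the source.

import math

# Sketch of the stability example above: the Babylonian iteration for sqrt(2)
# versus "Method X", x_{k+1} = (x_k^2 - 2)^2 + x_k, run in ordinary double
# precision.  The Babylonian method converges quickly from both starting
# values; Method X converges only very slowly from 1.4 and diverges from 1.42.
def babylonian(x, n):
    for _ in range(n):
        x = (x + 2.0 / x) / 2.0
    return x

def method_x(x, n):
    for _ in range(n):
        d = x * x - 2.0
        x = d * d + x
    return x

print("sqrt(2) =", math.sqrt(2.0))
for x0 in (1.4, 1.42):
    print(f"x0 = {x0}:")
    print("  Babylonian after  5 steps:", babylonian(x0, 5))
    print("  Method X   after  5 steps:", method_x(x0, 5))
    print("  Method X   after 50 steps:", method_x(x0, 50))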
https://en.wikipedia.org/wiki/Numerical_stability
Inlinear algebra, aHilbert matrix, introduced byHilbert(1894), is asquare matrixwith entries being theunit fractions For example, this is the 5 × 5 Hilbert matrix: The entries can also be defined by the integral that is, as aGramian matrixfor powers ofx. It arises in theleast squaresapproximation of arbitrary functions bypolynomials. The Hilbert matrices are canonical examples ofill-conditionedmatrices, being notoriously difficult to use innumerical computation. For example, the 2-normcondition numberof the matrix above is about 4.8×105. Hilbert (1894)introduced the Hilbert matrix to study the following question inapproximation theory: "Assume thatI= [a,b], is a real interval. Is it then possible to find a non-zero polynomialPwith integer coefficients, such that the integral is smaller than any given boundε> 0, taken arbitrarily small?" To answer this question, Hilbert derives an exact formula for thedeterminantof the Hilbert matrices and investigates their asymptotics. He concludes that the answer to his question is positive if the lengthb−aof the interval is smaller than 4. The Hilbert matrix issymmetricandpositive definite. The Hilbert matrix is alsototally positive(meaning that the determinant of everysubmatrixis positive). The Hilbert matrix is an example of aHankel matrix. It is also a specific example of aCauchy matrix. The determinant can be expressed inclosed form, as a special case of theCauchy determinant. The determinant of then×nHilbert matrix is where Hilbert already mentioned the curious fact that the determinant of the Hilbert matrix is the reciprocal of an integer (see sequenceOEIS:A005249in theOEIS), which also follows from the identity UsingStirling's approximationof thefactorial, one can establish the following asymptotic result: whereanconverges to the constante1/421/12A−3≈0.6450{\displaystyle e^{1/4}\,2^{1/12}\,A^{-3}\approx 0.6450}asn→∞{\displaystyle n\to \infty }, whereAis theGlaisher–Kinkelin constant. Theinverseof the Hilbert matrix can be expressed in closed form usingbinomial coefficients; its entries are wherenis the order of the matrix.[1]It follows that the entries of the inverse matrix are all integers, and that the signs form a checkerboard pattern, being positive on theprincipal diagonal. For example, The condition number of then×nHilbert matrix grows asO((1+2)4n/n){\displaystyle O\left(\left(1+{\sqrt {2}}\right)^{4n}/{\sqrt {n}}\right)}. Themethod of momentsapplied to polynomial distributions results in aHankel matrix, which in the special case of approximating a probability distribution on the interval [0, 1] results in a Hilbert matrix. This matrix needs to be inverted to obtain the weight parameters of the polynomial distribution approximation.[2]
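A short sketch constructing the Hilbert matrix, checking the quoted 2-norm condition number of the 5 × 5 case, and evaluating an integer inverse. The binomial-coefficient expression used for the inverse is the commonly cited closed form, rewritten here in 0-based indexing, and should be read as an assumption since the formula itself is not reproduced in the text.

import numpy as np
from math import comb

# Sketch: the n x n Hilbert matrix, its 2-norm condition number, and an
# exact integer inverse from a binomial-coefficient closed form.
def hilbert(n):
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

def hilbert_inverse(n):
    """Assumed closed form (0-based indices):
    (H^-1)_{ij} = (-1)^{i+j} (i+j+1) C(n+i, n-j-1) C(n+j, n-i-1) C(i+j, i)^2."""
    Hinv = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            Hinv[i, j] = ((-1) ** (i + j) * (i + j + 1)
                          * comb(n + i, n - j - 1) * comb(n + j, n - i - 1)
                          * comb(i + j, i) ** 2)
    return Hinv

n = 5
H = hilbert(n)
print("cond_2(H_5) ~", f"{np.linalg.cond(H):.3e}")          # about 4.8e5
print("max |H @ Hinv - I| =", np.abs(H @ hilbert_inverse(n) - np.eye(n)).max())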
https://en.wikipedia.org/wiki/Hilbert_matrix
Inmathematics, awell-posed problemis one for which the following properties hold:[a] Examples ofarchetypalwell-posed problems include theDirichlet problem for Laplace's equation, and theheat equationwith specified initial conditions. These might be regarded as 'natural' problems in that there are physical processes modelled by these problems. Problems that are not well-posed in the sense above are termedill-posed. A simple example is aglobal optimizationproblem, because the location of the optima is generally not a continuous function of the parameters specifying the objective, even when the objective itself is a smooth function of those parameters.Inverse problemsare often ill-posed; for example, the inverse heat equation, deducing a previous distribution of temperature from final data, is not well-posed in that the solution is highly sensitive to changes in the final data. Continuum models must often bediscretizedin order to obtain a numerical solution. While solutions may be continuous with respect to the initial conditions, they may suffer fromnumerical instabilitywhen solved with finiteprecision, or witherrorsin the data. Even if a problem is well-posed, it may still beill-conditioned, meaning that a small error in the initial data can result in much larger errors in the answers. Problems in nonlinearcomplex systems(so-calledchaoticsystems) provide well-known examples of instability. An ill-conditioned problem is indicated by a largecondition number. If the problem is well-posed, then it stands a good chance of solution on a computer using astable algorithm. If it is not well-posed, it needs to be re-formulated for numerical treatment. Typically this involves including additional assumptions, such as smoothness of solution. This process is known asregularization.[1]Tikhonov regularizationis one of the most commonly used for regularization of linear ill-posed problems. The existence of local solutions is often an important part of the well-posedness problem, and it is the foundation of many estimate methods, for example the energy method below. There are many results on this topic. For example, theCauchy–Kowalevski theoremfor Cauchy initial value problems essentially states that if the terms in a partialdifferential equationare all made up ofanalytic functionsand a certain transversality condition is satisfied (thehyperplaneor more generally hypersurface where the initial data are posed must be non-characteristic with respect to the partial differential operator), then on certain regions, there necessarily exist solutions which are as well analytic functions. This is a fundamental result in the study of analytic partial differential equations. Surprisingly, the theorem does not hold in the setting of smooth functions; anexamplediscovered byHans Lewyin 1957 consists of a linear partial differential equation whose coefficients are smooth (i.e., have derivatives of all orders) but not analytic for which no solution exists. So the Cauchy-Kowalevski theorem is necessarily limited in its scope to analytic functions. The energy method is useful for establishing both uniqueness and continuity with respect to initial conditions (i.e. it does not establish existence). The method is based upon deriving an upper bound of an energy-like functional for a given problem. Example: Consider the diffusion equation on the unit interval with homogeneousDirichlet boundary conditionsand suitable initial dataf(x){\displaystyle f(x)}(e.g. for whichf(0)=f(1)=0{\displaystyle f(0)=f(1)=0}). 
ut=Duxx,0<x<1,t>0,D>0,u(x,0)=f(x),u(0,t)=0,u(1,t)=0,{\displaystyle {\begin{aligned}u_{t}&=Du_{xx},&&0<x<1,\,t>0,\,D>0,\\u(x,0)&=f(x),\\u(0,t)&=0,\\u(1,t)&=0,\\\end{aligned}}} Multiply the equationut=Duxx{\displaystyle u_{t}=Du_{xx}}byu{\displaystyle u}and integrate in space over the unit interval to obtain ∫01uutdx=D∫01uuxxdx⟹∫0112∂tu2dx=Duux|01−D∫01(ux)2dx⟹12∂t‖u‖22=0−D∫01(ux)2dx≤0{\displaystyle {\begin{aligned}&&\int _{0}^{1}uu_{t}dx&=D\int _{0}^{1}uu_{xx}dx\\\Longrightarrow &&\int _{0}^{1}{\frac {1}{2}}\partial _{t}u^{2}dx&=Duu_{x}{\Big |}_{0}^{1}-D\int _{0}^{1}(u_{x})^{2}dx\\\Longrightarrow &&{\frac {1}{2}}\partial _{t}\|u\|_{2}^{2}&=0-D\int _{0}^{1}(u_{x})^{2}dx\leq 0\end{aligned}}} This tells us that‖u‖2{\displaystyle \|u\|_{2}}(p-norm) cannot grow in time. By multiplying by two and integrating in time, from0{\displaystyle 0}up tot{\displaystyle t}, one finds ‖u(⋅,t)‖22≤‖f(⋅)‖22{\displaystyle \|u(\cdot ,t)\|_{2}^{2}\leq \|f(\cdot )\|_{2}^{2}} This result is theenergy estimatefor this problem. To show uniqueness of solutions, assume there are two distinct solutions to the problem, call themu{\displaystyle u}andv{\displaystyle v}, each satisfying the same initial data. Upon definingw=u−v{\displaystyle w=u-v}then, via the linearity of the equations, one finds thatw{\displaystyle w}satisfies wt=Dwxx,0<x<1,t>0,D>0,w(x,0)=0,w(0,t)=0,w(1,t)=0,{\displaystyle {\begin{aligned}w_{t}&=Dw_{xx},&&0<x<1,\,t>0,\,D>0,\\w(x,0)&=0,\\w(0,t)&=0,\\w(1,t)&=0,\\\end{aligned}}} Applying the energy estimate tells us‖w(⋅,t)‖22≤0{\displaystyle \|w(\cdot ,t)\|_{2}^{2}\leq 0}which impliesu=v{\displaystyle u=v}(almost everywhere). Similarly, to show continuity with respect to initial conditions, assume thatu{\displaystyle u}andv{\displaystyle v}are solutions corresponding to different initial datau(x,0)=f(x){\displaystyle u(x,0)=f(x)}andv(x,0)=g(x){\displaystyle v(x,0)=g(x)}. Consideringw=u−v{\displaystyle w=u-v}once more, one finds thatw{\displaystyle w}satisfies the same equations as above but withw(x,0)=f(x)−g(x){\displaystyle w(x,0)=f(x)-g(x)}. This leads to the energy estimate‖w(⋅,t)‖22≤D‖f(⋅)−g(⋅)‖22{\displaystyle \|w(\cdot ,t)\|_{2}^{2}\leq D\|f(\cdot )-g(\cdot )\|_{2}^{2}}which establishes continuity (i.e. asf{\displaystyle f}andg{\displaystyle g}become closer, as measured by theL2{\displaystyle L^{2}}norm of their difference, then‖w(⋅,t)‖2→0{\displaystyle \|w(\cdot ,t)\|_{2}\to 0}). Themaximum principleis an alternative approach to establish uniqueness and continuity of solutions with respect to initial conditions for this example. The existence of solutions to this problem can be established usingFourier series. If it is possible to denote the solution to a Cauchy problem∂u∂t=Au,u(0)=u0(1){\displaystyle {\frac {\partial u}{\partial t}}=Au,u(0)=u_{0}{\text{ (1)}}}, whereAis a linear operator mapping a dense linear subspaceD(A)ofXintoX,withu(t)=S(t)u0{\displaystyle u(t)=S(t)u_{0}}, where{S(t);t≥0}{\displaystyle \{S(t);t\geq 0\}}is a family of linear operators onX, satisfying then (1) is well-posed. Hille-Yosida theoremstates the criteria onAfor such a{S(t);t≥0}{\displaystyle \{S(t);t\geq 0\}}to exist.
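The energy estimate above can be illustrated numerically. The sketch below solves the same initial-boundary value problem with an explicit finite-difference scheme and checks that the discrete L² norm of the solution never increases; the grid, the time step and the initial data f(x) = sin(πx) are illustrative choices.

import numpy as np

# Numerical illustration of the energy estimate: solve u_t = D u_xx on (0,1)
# with homogeneous Dirichlet boundary conditions by an explicit finite
# difference scheme and check that the discrete L2 norm of u never grows.
D, nx = 1.0, 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D                     # satisfies the explicit stability limit

u = np.sin(np.pi * x)                    # initial data f(x), with f(0) = f(1) = 0
norms = []
for _ in range(2000):
    norms.append(np.sqrt(np.sum(u**2) * dx))
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0                   # Dirichlet boundary conditions

print("||u(.,0)||_2 =", round(norms[0], 6))
print("||u(.,T)||_2 =", round(norms[-1], 6))
print("norm ever increased?", bool(np.any(np.diff(norms) > 1e-12)))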
https://en.wikipedia.org/wiki/Ill-posed_problem
Wilson matrixis the following4×4{\displaystyle 4\times 4}matrix having integers as elements:[1][2][3][4][5] This is thecoefficient matrixof the followingsystem of linear equationsconsidered in a paper by J. Morris published in 1946:[6] Morris ascribes the source of the set of equations to one T. S. Wilson but no details about Wilson have been provided. The particular system of equations was used by Morris to illustrate the concept of ill-conditioned system of equations. The matrixW{\displaystyle W}has been used as an example and for test purposes in many research papers and books over the years. John Todd has referred toW{\displaystyle W}as “the notorious matrix W of T. S. Wilson”.[1] A consideration of the condition number of the Wilson matrix has spawned several interesting research problems relating to condition numbers of matrices in certain special classes of matrices having some or all the special features of the Wilson matrix. In particular, the following special classes of matrices have been studied:[1] An exhaustive computation of the condition numbers of the matrices in the above sets has yielded the following results:
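The detailed condition-number results are not reproduced here. As a basic illustration, the sketch below uses the entries usually quoted for Wilson's matrix (an assumption, since the matrix itself is not shown in the text) and demonstrates the ill-conditioning of the associated linear system.

import numpy as np

# Sketch: the symmetric positive definite 4x4 integer matrix usually quoted
# as Wilson's matrix (entries assumed, see lead-in), its 2-norm condition
# number, and the sensitivity of the linear system W x = b to perturbations.
W = np.array([[ 5,  7,  6,  5],
              [ 7, 10,  8,  7],
              [ 6,  8, 10,  9],
              [ 5,  7,  9, 10]], dtype=float)

b = W @ np.ones(4)                      # right-hand side with exact solution (1, 1, 1, 1)
x = np.linalg.solve(W, b)

print("condition number cond_2(W) ~", round(np.linalg.cond(W), 1))   # roughly 3e3
print("solution of W x = b:", x)

# A small perturbation of b produces a disproportionately large change in x.
b_pert = b + np.array([0.01, -0.01, 0.01, -0.01])
print("perturbed solution:", np.linalg.solve(W, b_pert))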
https://en.wikipedia.org/wiki/Wilson_matrix
Importance samplingis aMonte Carlo methodfor evaluating properties of a particulardistribution, while only having samples generated from a different distribution than the distribution of interest. Its introduction in statistics is generally attributed to a paper byTeun KloekandHerman K. van Dijkin 1978,[1]but its precursors can be found instatistical physicsas early as 1949.[2][3]Importance sampling is also related toumbrella samplingincomputational physics. Depending on the application, the term may refer to the process of sampling from this alternative distribution, the process of inference, or both. LetX:Ω→R{\displaystyle X\colon \Omega \to \mathbb {R} }be arandom variablein someprobability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},\mathbb {P} )}. We wish to estimate theexpected valueofX{\displaystyle X}underP{\displaystyle \mathbb {P} }, denotedEP[X]{\displaystyle \mathbb {E} _{\mathbb {P} }[X]}. If we have statistically independent random samplesX1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}}, generated according toP{\displaystyle \mathbb {P} }, then an empirical estimate ofEP[X]{\displaystyle \mathbb {E} _{\mathbb {P} }[X]}is just and the precision of this estimate depends on the variance ofX{\displaystyle X}: The basic idea of importance sampling is to sample from a different distribution to lower the variance of the estimation ofEP[X]{\displaystyle \mathbb {E} _{\mathbb {P} }[X]}, or when sampling directly fromP{\displaystyle \mathbb {P} }is difficult. This is accomplished by first choosing a random variableY≥0{\displaystyle Y\geq 0}such thatEP[Y]=1{\displaystyle \mathbb {E} _{\mathbb {P} }[Y]=1}and thatP{\displaystyle \mathbb {P} }-almost everywhereY(ω)≠0{\displaystyle Y(\omega )\neq 0}. With the variableY{\displaystyle Y}we define a probabilityQ{\displaystyle \mathbb {Q} }that satisfies The variableX/Y{\displaystyle X/Y}will thus be sampled underQ{\displaystyle \mathbb {Q} }to estimateEP[X]{\displaystyle \mathbb {E} _{\mathbb {P} }[X]}as above and this estimation is improved when WhenX{\displaystyle X}is of constant sign overΩ{\displaystyle \Omega }, the best variableY{\displaystyle Y}would clearly beY∗=XEP[X]≥0{\displaystyle Y^{*}={\frac {X}{\mathbb {E} _{\mathbb {P} }[X]}}\geq 0}, so thatX/Y∗{\displaystyle X/Y^{*}}is the searched constantEP[X]{\displaystyle \mathbb {E} _{\mathbb {P} }[X]}and a single sample underQ∗{\displaystyle \mathbb {Q} ^{*}}suffices to give its value. Unfortunately we cannot take that choice, becauseEP[X]{\displaystyle \mathbb {E} _{\mathbb {P} }[X]}is precisely the value we are looking for! However this theoretical best caseY∗{\displaystyle Y^{*}}gives us an insight into what importance sampling does: for allx∈R{\displaystyle x\in \mathbb {R} }, the density ofQ∗{\displaystyle \mathbb {Q} ^{*}}atX=x{\displaystyle X=x}can be written as To the right,xP(X∈[x;x+dx]){\displaystyle x\,\mathbb {P} (X\in [x;x+dx])}is one of the infinitesimal elements that sum up toEP[X]{\displaystyle \mathbb {E} _{\mathbb {P} }[X]}: therefore, a good probability changeQ{\displaystyle \mathbb {Q} }in importance sampling will redistribute the law ofX{\displaystyle X}so that its samples' frequencies are sorted directly according to their contributions inEP[X]{\displaystyle \mathbb {E} _{\mathbb {P} }[X]}as opposed toEP[1]{\displaystyle \mathbb {E} _{\mathbb {P} }[1]}. Hence the name "importance sampling." Importance sampling is often used as aMonte Carlo integrator. 
WhenP{\displaystyle \mathbb {P} }is the uniform distribution overΩ=R{\displaystyle \Omega =\mathbb {R} }, the expectationEP[X]{\displaystyle \mathbb {E} _{\mathbb {P} }[X]}corresponds to the integral of the real functionX:R→R{\displaystyle X\colon \mathbb {R} \to \mathbb {R} }. Such methods are frequently used to estimate posterior densities or expectations in state and/or parameter estimation problems in probabilistic models that are too hard to treat analytically. Examples includeBayesian networksand importance weightedvariational autoencoders.[4] Importance samplingis avariance reductiontechnique that can be used in theMonte Carlo method. The idea behind importance sampling is that certain values of the inputrandom variablesin asimulationhave more impact on the parameter being estimated than others. If these "important" values are emphasized by sampling more frequently, then theestimatorvariance can be reduced. Hence, the basic methodology in importance sampling is to choose a distribution which "encourages" the important values. This use of "biased" distributions will result in a biased estimator if it is applied directly in the simulation. However, the simulation outputs are weighted to correct for the use of the biased distribution, and this ensures that the new importance sampling estimator is unbiased. The weight is given by thelikelihood ratio, that is, theRadon–Nikodym derivativeof the true underlying distribution with respect to the biased simulation distribution. The fundamental issue in implementing importance sampling simulation is the choice of the biased distribution which encourages the important regions of the input variables. Choosing or designing a good biased distribution is the "art" of importance sampling. The rewards for a good distribution can be huge run-time savings; the penalty for a bad distribution can be longer run times than for a general Monte Carlo simulation without importance sampling. ConsiderX{\displaystyle X}to be the sample andf(X)g(X){\displaystyle {\frac {f(X)}{g(X)}}}to be the likelihood ratio, wheref{\displaystyle f}is the probability density (mass) function of the desired distribution andg{\displaystyle g}is the probability density (mass) function of the biased/proposal/sample distribution. Then the problem can be characterized by choosing the sample distributiong{\displaystyle g}that minimizes the variance of the scaled sample: It can be shown that the following distribution minimizes the above variance:[5] Notice that whenX≥0{\displaystyle X\geq 0}, this variance becomes 0. Consider estimating by simulation the probabilitypt{\displaystyle p_{t}\,}of an eventX≥t{\displaystyle X\geq t}, whereX{\displaystyle X}is a random variable withcumulative distribution functionF(x){\displaystyle F(x)}andprobability density functionf(x)=F′(x){\displaystyle f(x)=F'(x)\,}, where prime denotesderivative. AK{\displaystyle K}-lengthindependent and identically distributed(i.i.d.) sequenceXi{\displaystyle X_{i}\,}is generated from the distributionF{\displaystyle F}, and the numberkt{\displaystyle k_{t}}of random variables that lie above the thresholdt{\displaystyle t}are counted. The random variablekt{\displaystyle k_{t}}is characterized by theBinomial distribution One can show thatE[kt/K]=pt{\displaystyle \mathbb {E} [k_{t}/K]=p_{t}}, andvar⁡[kt/K]=pt(1−pt)/K{\displaystyle \operatorname {var} [k_{t}/K]=p_{t}(1-p_{t})/K}, so in the limitK→∞{\displaystyle K\to \infty }we are able to obtainpt{\displaystyle p_{t}}. 
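A minimal sketch of importance sampling used as a Monte Carlo integrator, in the spirit of the uniform-distribution case just described: a sharply peaked integrand on [0, 1] is integrated either by plain uniform sampling or by sampling from a Beta proposal and reweighting by the likelihood ratio. The integrand, the Beta(8, 8) proposal and the sample size are illustrative assumptions.

import numpy as np
from scipy import stats

# Sketch of importance sampling as a Monte Carlo integrator: estimate
# I = integral_0^1 h(x) dx for a peaked integrand h, either by plain uniform
# sampling or by sampling from a Beta(8, 8) proposal g concentrated where h
# is large and weighting each sample by 1/g(x).
rng = np.random.default_rng(3)
h = lambda x: np.exp(-200.0 * (x - 0.5) ** 2)        # peaked around x = 0.5
N = 10_000

# Plain Monte Carlo under the uniform distribution on [0, 1].
x_mc = rng.uniform(size=N)
est_mc = h(x_mc).mean()

# Importance sampling: draw from the proposal g and weight by 1/g(x).
g = stats.beta(8, 8)
x_is = g.rvs(size=N, random_state=rng)
weights = 1.0 / g.pdf(x_is)
est_is = (h(x_is) * weights).mean()

exact = np.sqrt(np.pi / 200.0)                       # integral over R; tails outside [0,1] are negligible
print(f"exact    ~ {exact:.5f}")
print(f"plain MC = {est_mc:.5f}, importance sampling = {est_is:.5f}")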
Note that the variance is low ifpt≈1{\displaystyle p_{t}\approx 1}. Importance sampling is concerned with the determination and use of an alternate density functionf∗{\displaystyle f_{*}\,}(forX{\displaystyle X}), usually referred to as a biasing density, for the simulation experiment. This density allows the eventX≥t{\displaystyle {X\geq t\ }}to occur more frequently, so the sequence lengthsK{\displaystyle K}gets smaller for a givenestimatorvariance. Alternatively, for a givenK{\displaystyle K}, use of the biasing density results in a variance smaller than that of the conventional Monte Carlo estimate. From the definition ofpt{\displaystyle p_{t}\,}, we can introducef∗{\displaystyle f_{*}\,}as below. where is a likelihood ratio and is referred to as the weighting function. The last equality in the above equation motivates the estimator This is the importance sampling estimator ofpt{\displaystyle p_{t}\,}and is unbiased. That is, the estimation procedure is to generate i.i.d. samples fromf∗{\displaystyle f_{*}\,}and for each sample which exceedst{\displaystyle t\,}, the estimate is incremented by the weightW{\displaystyle W\,}evaluated at the sample value. The results are averaged overK{\displaystyle K\,}trials. The variance of the importance sampling estimator is easily shown to be Now, the importance sampling problem then focuses on finding a biasing densityf∗{\displaystyle f_{*}\,}such that the variance of the importance sampling estimator is less than the variance of the general Monte Carlo estimate. For some biasing density function, which minimizes the variance, and under certain conditions reduces it to zero, it is called an optimal biasing density function. Although there are many kinds of biasing methods, the following two methods are most widely used in the applications of importance sampling. Shifting probability mass into the event regionX≥t{\displaystyle {X\geq t\ }}by positive scaling of the random variableX{\displaystyle X\,}with a number greater than unity has the effect of increasing the variance (mean also) of the density function. This results in a heavier tail of the density, leading to an increase in the event probability. Scaling is probably one of the earliest biasing methods known and has been extensively used in practice. It is simple to implement and usually provides conservative simulation gains as compared to other methods. In importance sampling by scaling, the simulation density is chosen as the density function of the scaled random variableaX{\displaystyle aX\,}, where usuallya>1{\displaystyle a>1}for tail probability estimation. By transformation, and the weighting function is While scaling shifts probability mass into the desired event region, it also pushes mass into the complementary regionX<t{\displaystyle X<t\,}which is undesirable. IfX{\displaystyle X\,}is a sum ofn{\displaystyle n\,}random variables, the spreading of mass takes place in ann{\displaystyle n\,}dimensional space. The consequence of this is a decreasing importance sampling gain for increasingn{\displaystyle n\,}, and is called the dimensionality effect. A modern version of importance sampling by scaling is e.g. so-called sigma-scaled sampling (SSS) which is running multiple Monte Carlo (MC) analysis with different scaling factors. In opposite to many other high yield estimation methods (like worst-case distances WCD) SSS does not suffer much from the dimensionality problem. Also addressing multiple MC outputs causes no degradation in efficiency. 
On the other hand, like WCD, SSS is only designed for Gaussian statistical variables, and in contrast to WCD, the SSS method is not designed to provide accurate statistical corners. Another SSS disadvantage is that MC runs with large scale factors may become difficult, e.g. due to model and simulator convergence problems. In addition, in SSS we face a strong bias-variance trade-off: using large scale factors, we obtain quite stable yield results, but the larger the scale factors, the larger the bias error. If the advantages of SSS do not matter much in the application of interest, then other methods are often more efficient. Another simple and effective biasing technique employs translation of the density function (and hence random variable) to place much of its probability mass in the rare event region. Translation does not suffer from a dimensionality effect and has been successfully used in several applications relating to simulation of digital communication systems. It often provides better simulation gains than scaling. In biasing by translation, the simulation density is given by where c{\displaystyle c\,} is the amount of shift and is to be chosen to minimize the variance of the importance sampling estimator. The fundamental problem with importance sampling is that designing good biased distributions becomes more complicated as the system complexity increases. Here, complex systems means systems with long memory, since even elaborate processing of a small number of inputs is much easier to handle. This dimensionality or memory can cause problems in three ways: In principle, the importance sampling ideas remain the same in these situations, but the design becomes much harder. A successful approach to combat this problem is essentially breaking down a simulation into several smaller, more sharply defined subproblems. Then importance sampling strategies are used to target each of the simpler subproblems. Examples of techniques to break the simulation down are conditioning and error-event simulation (EES) and regenerative simulation. In order to identify successful importance sampling techniques, it is useful to be able to quantify the run-time savings due to the use of the importance sampling approach. The performance measure commonly used is σMC2/σIS2{\displaystyle \sigma _{MC}^{2}/\sigma _{IS}^{2}\,}, and this can be interpreted as the speed-up factor by which the importance sampling estimator achieves the same precision as the MC estimator. This has to be computed empirically, since the estimator variances are unlikely to be available analytically when their mean is intractable. Other useful concepts in quantifying an importance sampling estimator are the variance bounds and the notion of asymptotic efficiency. One related measure is the so-called Effective Sample Size (ESS).[6] Variance is not the only possible cost function for a simulation, and other cost functions, such as the mean absolute deviation, are used in various statistical applications. Nevertheless, the variance is the primary cost function addressed in the literature, probably due to the use of variances in confidence intervals and in the performance measure σMC2/σIS2{\displaystyle \sigma _{MC}^{2}/\sigma _{IS}^{2}\,}. An associated issue is the fact that the ratio σMC2/σIS2{\displaystyle \sigma _{MC}^{2}/\sigma _{IS}^{2}\,} overestimates the run-time savings due to importance sampling since it does not include the extra computing time required to compute the weight function. Hence, some people evaluate the net run-time improvement by various means.
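A minimal sketch of biasing by translation for the tail-probability setting discussed above, assuming a standard normal X, a threshold t = 4, and the simple (not optimized) shift c = t; it compares the plain Monte Carlo estimate of p_t with the translated importance sampling estimate.

import numpy as np
from scipy import stats

# Sketch of biasing by translation: estimate the tail probability
# p_t = P(X >= t) for X ~ N(0, 1) with t = 4 by sampling from the shifted
# density f_*(x) = phi(x - t) and weighting by W(x) = f(x)/f_*(x).
rng = np.random.default_rng(4)
t, K = 4.0, 100_000

# Plain Monte Carlo: almost no samples land in the rare-event region.
x_mc = rng.standard_normal(K)
p_mc = np.mean(x_mc >= t)

# Importance sampling with the translated proposal N(t, 1).
x_is = rng.standard_normal(K) + t
W = stats.norm.pdf(x_is) / stats.norm.pdf(x_is - t)   # likelihood ratio f/f_*
p_is = np.mean(W * (x_is >= t))

print(f"exact           = {stats.norm.sf(t):.3e}")
print(f"plain MC        = {p_mc:.3e}")
print(f"importance samp = {p_is:.3e}")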
Perhaps a more serious overhead to importance sampling is the time taken to devise and program the technique and analytically derive the desired weight function. When different proposal distributions,gi(x){\displaystyle g_{i}(x)},i=1,…,n,{\displaystyle i=1,\ldots ,n,}are jointly used for drawing the samplesx1,…,xn,{\displaystyle x_{1},\ldots ,x_{n},}different proper weighting functions can be employed (e.g., see[7][8][9][10]). In an adaptive setting, the proposal distributions,gi,t(x){\displaystyle g_{i,t}(x)},i=1,…,n,{\displaystyle i=1,\ldots ,n,}andt=1,…,T,{\displaystyle t=1,\ldots ,T,}are updated each iterationt{\displaystyle t}of the adaptive importance sampling algorithm. Hence, since a population of proposal densities is used, several suitable combinations of sampling and weighting schemes can be employed.[11][12][13][14][15][16][17]
https://en.wikipedia.org/wiki/Importance_sampling
Local regressionorlocal polynomial regression,[1]also known asmoving regression,[2]is a generalization of themoving averageandpolynomial regression.[3]Its most common methods, initially developed forscatterplot smoothing, areLOESS(locally estimated scatterplot smoothing) andLOWESS(locally weighted scatterplot smoothing), both pronounced/ˈloʊɛs/LOH-ess. They are two strongly relatednon-parametric regressionmethods that combine multiple regression models in ak-nearest-neighbor-based meta-model. In some fields, LOESS is known and commonly referred to asSavitzky–Golay filter[4][5](proposed 15 years before LOESS). LOESS and LOWESS thus build on"classical" methods, such as linear and nonlinearleast squares regression. They address situations in which the classical procedures do not perform well or cannot be effectively applied without undue labor. LOESS combines much of the simplicity of linear least squares regression with the flexibility ofnonlinear regression. It does this by fitting simple models to localized subsets of the data to build up a function that describes the deterministic part of the variation in the data, point by point. In fact, one of the chief attractions of this method is that the data analyst is not required to specify a global function of any form to fit a model to the data, only to fit segments of the data. The trade-off for these features is increased computation. Because it is so computationally intensive, LOESS would have been practically impossible to use in the era when least squares regression was being developed. Most other modern methods for process modeling are similar to LOESS in this respect. These methods have been consciously designed to use our current computational ability to the fullest possible advantage to achieve goals not easily achieved by traditional approaches. A smooth curve through a set of data points obtained with this statistical technique is called aloess curve, particularly when each smoothed value is given by a weighted quadratic least squares regression over the span of values of they-axisscattergramcriterion variable. When each smoothed value is given by a weighted linear least squares regression over the span, this is known as alowess curve; however, some authorities treatlowessand loess as synonyms.[6][7] Local regression and closely related procedures have a long and rich history, having been discovered and rediscovered in different fields on multiple occasions. An early work byRobert Henderson[8]studying the problem of graduation (a term for smoothing used in Actuarial literature) introduced local regression using cubic polynomials. Specifically, letYj{\displaystyle Y_{j}}denote an ungraduated sequence of observations. Following Henderson, suppose that only the terms fromY−h{\displaystyle Y_{-h}}toYh{\displaystyle Y_{h}}are to be taken into account when computing the graduated value ofY0{\displaystyle Y_{0}}, andWj{\displaystyle W_{j}}is the weight to be assigned toYj{\displaystyle Y_{j}}. Henderson then uses a local polynomial approximationa+bj+cj2+dj3{\displaystyle a+bj+cj^{2}+dj^{3}}, and sets up the following four equations for the coefficients: Solving these equations for the polynomial coefficients yields the graduated value,Y^0=a{\displaystyle {\hat {Y}}_{0}=a}. Henderson went further. In preceding years, many 'summation formula' methods of graduation had been developed, which derived graduation rules based on summation formulae (convolution of the series of obeservations with a chosen set of weights). 
Two such rules are the 15-point and 21-point rules ofSpencer(1904).[9]These graduation rules were carefully designed to have a quadratic-reproducing property: If the ungraduated values exactly follow a quadratic formula, then the graduated values equal the ungraduated values. This is an important property: a simple moving average, by contrast, cannot adequately model peaks and troughs in the data. Henderson's insight was to show thatanysuch graduation rule can be represented as a local cubic (or quadratic) fit for an appropriate choice of weights. Further discussions of the historical work on graduation and local polynomial fitting can be found inMacaulay(1931),[10]ClevelandandLoader(1995),[11]andMurrayandBellhouse(2019).[12] TheSavitzky-Golay filter, introduced byAbraham SavitzkyandMarcel J. E. Golay(1964)[13]significantly expanded the method. Like the earlier graduation work, their focus was data with an equally-spaced predictor variable, where (excluding boundary effects) local regression can be represented as aconvolution. Savitzky and Golay published extensive sets of convolution coefficients for different orders of polynomial and smoothing window widths. Local regression methods started to appear extensively in statistics literature in the 1970s; for example,Charles J. Stone(1977),[14]Vladimir Katkovnik(1979)[15]andWilliam S. Cleveland(1979).[16]Katkovnik (1985)[17]is the earliest book devoted primarily to local regression methods. Theoretical work continued to appear throughout the 1990s. Important contributions includeJianqing FanandIrène Gijbels(1992)[18]studying efficiency properties, andDavid RuppertandMatthew P. Wand(1994)[19]developing an asymptotic distribution theory for multivariate local regression. An important extension of local regression is Local Likelihood Estimation, formulated byRobert TibshiraniandTrevor Hastie(1987).[20]This replaces the local least-squares criterion with a likelihood-based criterion, thereby extending the local regression method to theGeneralized linear modelsetting; for example binary data, count data or censored data. Practical implementations of local regression began appearing in statistical software in the 1980s. Cleveland (1981)[21]introduces the LOWESS routines, intended for smoothing scatterplots. This implements local linear fitting with a single predictor variable, and also introduces robustness downweighting to make the procedure resistant to outliers. An entirely new implementation, LOESS, is described in Cleveland andSusan J. Devlin(1988).[22]LOESS is a multivariate smoother, able to handle spatial data with two (or more) predictor variables, and uses (by default) local quadratic fitting. Both LOWESS and LOESS are implemented in theSandRprogramming languages. See also Cleveland's Local Fitting Software.[23] While Local Regression, LOWESS and LOESS are sometimes used interchangeably, this usage should be considered incorrect. Local Regression is a general term for the fitting procedure; LOWESS and LOESS are two distinct implementations. Local regression uses adata setconsisting of observations of one or more ‘independent’ or ‘predictor’ variables, and a ‘dependent’ or ‘response’ variable. The dataset will consist of a numbern{\displaystyle n}of observations. The observations of the predictor variable can be denotedx1,…,xn{\displaystyle x_{1},\ldots ,x_{n}}, and corresponding observations of the response variable byY1,…,Yn{\displaystyle Y_{1},\ldots ,Y_{n}}.
For ease of presentation, the development below assumes a single predictor variable; the extension to multiple predictors (when thexi{\displaystyle x_{i}}are vectors) is conceptually straightforward. A functional relationship between the predictor and response variables is assumed:Yi=μ(xi)+ϵi{\displaystyle Y_{i}=\mu (x_{i})+\epsilon _{i}}whereμ(x){\displaystyle \mu (x)}is the unknown ‘smooth’ regression function to be estimated, and represents the conditional expectation of the response, given a value of the predictor variables. In theoretical work, the ‘smoothness’ of this function can be formally characterized by placing bounds on higher order derivatives. Theϵi{\displaystyle \epsilon _{i}}represents random error; for estimation purposes these are assumed to havemeanzero. Stronger assumptions (e.g.,independenceand equalvariance) may be made when assessing properties of the estimates. Local regression then estimates the functionμ(x){\displaystyle \mu (x)}, for one value ofx{\displaystyle x}at a time. Since the function is assumed to be smooth, the most informative data points are those whosexi{\displaystyle x_{i}}values are close tox{\displaystyle x}. This is formalized with a bandwidthh{\displaystyle h}and akernelor weight functionW(⋅){\displaystyle W(\cdot )}, with observations assigned weightswi(x)=W(xi−xh).{\displaystyle w_{i}(x)=W\left({\frac {x_{i}-x}{h}}\right).}A typical choice ofW{\displaystyle W}, used by Cleveland in LOWESS, isW(u)=(1−|u|3)3{\displaystyle W(u)=(1-|u|^{3})^{3}}for|u|<1{\displaystyle |u|<1}, although any similar function (peaked atu=0{\displaystyle u=0}and small or 0 for large values ofu{\displaystyle u}) can be used. Questions of bandwidth selection and specification (how large shouldh{\displaystyle h}be, and should it vary depending upon the fitting pointx{\displaystyle x}?) are deferred for now. A local model (usually a low-order polynomial with degreep≤3{\displaystyle p\leq 3}), expressed asμ(xi)≈β0+β1(xi−x)+…+βp(xi−x)p{\displaystyle \mu (x_{i})\approx \beta _{0}+\beta _{1}(x_{i}-x)+\ldots +\beta _{p}(x_{i}-x)^{p}}is then fitted byweighted least squares: choose regression coefficients(β^0,…,β^p){\displaystyle ({\hat {\beta }}_{0},\ldots ,{\hat {\beta }}_{p})}to minimize∑i=1nwi(x)(Yi−β0−β1(xi−x)−…−βp(xi−x)p)2.{\displaystyle \sum _{i=1}^{n}w_{i}(x)\left(Y_{i}-\beta _{0}-\beta _{1}(x_{i}-x)-\ldots -\beta _{p}(x_{i}-x)^{p}\right)^{2}.}The local regression estimate ofμ(x){\displaystyle \mu (x)}is then simply the intercept estimate:μ^(x)=β^0{\displaystyle {\hat {\mu }}(x)={\hat {\beta }}_{0}}while the remaining coefficients can be interpreted (up to a factor ofp!{\displaystyle p!}) as derivative estimates. It is to be emphasized that the above procedure produces the estimateμ^(x){\displaystyle {\hat {\mu }}(x)}for one value ofx{\displaystyle x}. When considering a new value ofx{\displaystyle x}, a new set of weightswi(x){\displaystyle w_{i}(x)}must be computed, and the regression coefficient estimated afresh. 
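The fitting procedure just described translates almost directly into code. The following Python sketch is illustrative only (the names `tricube`, `local_fit` and the simulated data `x`, `y` are mine, not part of any standard library): it computes tricube weights, builds the local polynomial design matrix in powers of (x_i − x), solves the weighted least-squares problem, and returns the fitted intercept as the estimate of μ(x).

```python
import numpy as np

def tricube(u):
    """Tricube weight W(u) = (1 - |u|^3)^3 for |u| < 1, and 0 otherwise."""
    u = np.abs(np.asarray(u, dtype=float))
    w = (1.0 - u**3) ** 3
    w[u >= 1.0] = 0.0
    return w

def local_fit(x0, x, y, h, degree=1):
    """Estimate mu(x0) by locally weighted polynomial regression.

    Weights w_i = W((x_i - x0) / h); the fitted intercept beta_0 is the
    local regression estimate of mu(x0), as in the criterion above.
    """
    w = tricube((x - x0) / h)
    # Design matrix with columns (x_i - x0)^j for j = 0, ..., degree
    X = np.vander(x - x0, N=degree + 1, increasing=True)
    WX = X * w[:, None]
    # Solve the weighted normal equations (X^T W X) beta = X^T W y
    beta = np.linalg.solve(WX.T @ X, WX.T @ y)
    return beta[0]  # intercept = estimate of mu(x0)

# Toy usage: smooth noisy observations of a sine curve over a grid of x values
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)
grid = np.linspace(0, 10, 101)
mu_hat = np.array([local_fit(x0, x, y, h=1.0, degree=1) for x0 in grid])
```

Repeating the call over a grid of fitting points traces out the smoothed curve; the bandwidth h and the polynomial degree are the tuning choices discussed in the sections that follow.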
As with all least squares estimates, the estimated regression coefficients can be expressed in closed form (seeWeighted least squaresfor details):β^=(XTWX)−1XTWy{\displaystyle {\hat {\boldsymbol {\beta }}}=(\mathbf {X^{\textsf {T}}WX} )^{-1}\mathbf {X^{\textsf {T}}W} \mathbf {y} }whereβ^{\displaystyle {\hat {\boldsymbol {\beta }}}}is a vector of the local regression coefficients;X{\displaystyle \mathbf {X} }is then×(p+1){\displaystyle n\times (p+1)}design matrixwith entries(xi−x)j{\displaystyle (x_{i}-x)^{j}};W{\displaystyle \mathbf {W} }is a diagonal matrix of the smoothing weightswi(x){\displaystyle w_{i}(x)}; andy{\displaystyle \mathbf {y} }is a vector of the responsesYi{\displaystyle Y_{i}}. This matrix representation is crucial for studying the theoretical properties of local regression estimates. With appropriate definitions of the design and weight matrices, it immediately generalizes to the multiple-predictor setting. Implementation of local regression requires specification and selection of several components: Each of these components has been the subject of extensive study; a summary is provided below. The bandwidthh{\displaystyle h}controls the resolution of the local regression estimate. Ifhis too small, the estimate may show high-resolution features that represent noise in the data, rather than any real structure in the mean function. Conversely, ifhis too large, the estimate will only show low-resolution features, and important structure may be lost. This is thebias-variance tradeoff; ifhis too small, the estimate exhibits large variation; while at largeh, the estimate exhibits large bias. Careful choice of bandwidth is therefore crucial when applying local regression. Mathematical methods for bandwidth selection require, firstly, formal criteria to assess the performance of an estimate. One such criterion is prediction error: if a new observation is made atx~{\displaystyle {\tilde {x}}}, how well does the estimateμ^(x~){\displaystyle {\hat {\mu }}({\tilde {x}})}predict the new responseY~{\displaystyle {\tilde {Y}}}? Performance is often assessed using a squared-error loss function. The mean squared prediction error isE(Y~−μ^(x~))2=E(Y~−μ(x)+μ(x)−μ^(x~))2=E(Y~−μ(x))2+E(μ(x)−μ^(x~))2.{\displaystyle {\begin{aligned}E\left({\tilde {Y}}-{\hat {\mu }}({\tilde {x}})\right)^{2}&=E\left({\tilde {Y}}-\mu (x)+\mu (x)-{\hat {\mu }}({\tilde {x}})\right)^{2}\\&=E\left({\tilde {Y}}-\mu (x)\right)^{2}+E\left(\mu (x)-{\hat {\mu }}({\tilde {x}})\right)^{2}.\end{aligned}}}The first termE(Y~−μ(x))2{\displaystyle E\left({\tilde {Y}}-\mu (x)\right)^{2}}is the random variation of the observation; this is entirely independent of the local regression estimate. The second term,E(μ(x)−μ^(x~))2{\displaystyle E\left(\mu (x)-{\hat {\mu }}({\tilde {x}})\right)^{2}}is the mean squared estimation error. This relation shows that, for squared error loss, minimizing prediction error and estimation error are equivalent problems. In global bandwidth selection, these measures can be integrated over thex{\displaystyle x}space ("mean integrated squared error", often used in theoretical work), or averaged over the actualxi{\displaystyle x_{i}}(more useful for practical implementations). Some standard techniques from model selection can be readily adapted to local regression: Any of these criteria can be minimized to produce an automatic bandwidth selector. Cleveland and Devlin[22]prefer a graphical method (theM-plot) to visually display the bias-variance trade-off and guide bandwidth choice. 
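As a hedged illustration of how a cross-validation criterion can be turned into an automatic global bandwidth selector, the sketch below reuses the hypothetical `local_fit` helper and the simulated `x`, `y` from the previous sketch, and scores each candidate bandwidth by leave-one-out squared prediction error.

```python
import numpy as np

def loo_cv_score(h, x, y, degree=1):
    """Leave-one-out cross-validation score for a global bandwidth h.

    For each i, the local fit at x_i is computed without observation i,
    and the squared prediction errors are averaged; this estimates the
    mean squared prediction error discussed above, up to the irreducible
    noise term that does not depend on h.
    """
    n = len(x)
    errors = []
    for i in range(n):
        mask = np.arange(n) != i
        try:
            pred = local_fit(x[i], x[mask], y[mask], h, degree)
        except np.linalg.LinAlgError:
            return np.inf  # window too narrow: singular local design matrix
        errors.append((y[i] - pred) ** 2)
    return float(np.mean(errors))

# Choose the bandwidth minimizing the LOO-CV score over a coarse grid
bandwidths = np.linspace(0.3, 3.0, 10)
scores = [loo_cv_score(h, x, y) for h in bandwidths]
h_best = bandwidths[int(np.argmin(scores))]
```

Other criteria mentioned above (generalized cross-validation, plug-in rules, or Cleveland and Devlin's M-plot) can be substituted for the scoring function without changing the overall structure.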
One question not addressed above is, how should the bandwidth depend upon the fitting pointx{\displaystyle x}? Often a constant bandwidth is used, while LOWESS and LOESS prefer a nearest-neighbor bandwidth, meaninghis smaller in regions with many data points. Formally, the smoothing parameter,α{\displaystyle \alpha }, is the fraction of the total numbernof data points that are used in each local fit. The subset of data used in each weighted least squares fit thus comprises thenα{\displaystyle n\alpha }points (rounded to the next largest integer) whose explanatory variables' values are closest to the point at which the response is being estimated.[7] More sophisticated methods attempt to choose the bandwidthadaptively; that is, choose a bandwidth at each fitting pointx{\displaystyle x}by applying criteria such as cross-validation locally within the smoothing window. An early example of this isJerome H. Friedman's[24]"supersmoother", which uses cross-validation to choose among local linear fits at different bandwidths. Most sources, in both theoretical and computational work, use low-order polynomials as the local model, with polynomial degree ranging from 0 to 3. The degree 0 (local constant) model is equivalent to akernel smoother; usually credited toÈlizbar Nadaraya(1964)[25]andG. S. Watson(1964).[26]This is the simplest model to use, but can suffer from bias when fitting near boundaries of the dataset. Local linear (degree 1) fitting can substantially reduce the boundary bias. Local quadratic (degree 2) and local cubic (degree 3) can result in improved fits, particularly when the underlying mean functionμ(x){\displaystyle \mu (x)}has substantial curvature, or equivalently a large second derivative. In theory, higher orders of polynomial can lead to faster convergence of the estimateμ^(x){\displaystyle {\hat {\mu }}(x)}to the true meanμ(x){\displaystyle \mu (x)},provided thatμ(x){\displaystyle \mu (x)}has a sufficient number of derivatives. See C. J. Stone (1980).[27]Generally, it takes a large sample size for this faster convergence to be realized. There are also computational and stability issues that arise, particularly for multivariate smoothing. It is generally not recommended to use local polynomials with degree greater than 3. As with bandwidth selection, methods such as cross-validation can be used to compare the fits obtained with different degrees of polynomial. As mentioned above, the weight function gives the most weight to the data points nearest the point of estimation and the least weight to the data points that are furthest away. The use of the weights is based on the idea that points near each other in the explanatory variable space are more likely to be related to each other in a simple way than points that are further apart. Following this logic, points that are likely to follow the local model best influence the local model parameter estimates the most. Points that are less likely to actually conform to the local model have less influence on the local modelparameterestimates. Cleveland (1979)[16]sets out four requirements for the weight function: Asymptotic efficiency of weight functions has been considered byV. A. Epanechnikov(1969)[28]in the context of kernel density estimation; J. Fan (1993)[29]has derived similar results for local regression. They conclude that the quadratic kernel,W(x)=1−x2{\displaystyle W(x)=1-x^{2}}for|x|≤1{\displaystyle |x|\leq 1}has greatest efficiency under a mean-squared-error loss function. 
See"kernel functions in common use"for more discussion of different kernels and their efficiencies. Considerations other than MSE are also relevant to the choice of weight function. Smoothness properties ofW(x){\displaystyle W(x)}directly affect smoothness of the estimateμ^(x){\displaystyle {\hat {\mu }}(x)}. In particular, the quadaratic kernel is not differentiable atx=±1{\displaystyle x=\pm 1}, andμ^(x){\displaystyle {\hat {\mu }}(x)}is not differentiable as a result. Thetri-cube weight function,W(x)=(1−|x|3)3;|x|<1{\displaystyle W(x)=(1-|x|^{3})^{3};|x|<1}has been used in LOWESS and other local regression software; this combines higher-order differentiability with a high MSE efficiency. One criticism of weight functions with bounded support is that they can lead to numerical problems (i.e. an unstable or singular design matrix) when fitting in regions with sparse data. For this reason, some authors[who?]choose to use the Gaussian kernel, or others with unbounded support. As described above, local regression uses a locally weighted least squares criterion to estimate the regression parameters. This inherits many of the advantages (ease of implementation and interpretation; good properties when errors are normally distributed) and disadvantages (sensitivity to extreme values and outliers; inefficiency when errors have unequal variance or are not normally distributed) usually associated with least squares regression. These disadvantages can be addressed by replacing the local least-squares estimation by something else. Two such ideas are presented here: local likelihood estimation, which applies local estimation to thegeneralized linear model, and robust local regression, which localizes methods fromrobust regression. In local likelihood estimation, developed in Tibshirani and Hastie (1987),[20]the observationsYi{\displaystyle Y_{i}}are assumed to come from a parametric family of distributions, with a known probability density function (or mass function, for discrete data),Yi∼f(y,θ(xi)),{\displaystyle Y_{i}\sim f(y,\theta (x_{i})),}where the parameter functionθ(x){\displaystyle \theta (x)}is the unknown quantity to be estimated. To estimateθ(x){\displaystyle \theta (x)}at a particular pointx{\displaystyle x}, the local likelihood criterion is∑i=1nwi(x)log⁡(f(Yi,β0+β1(xi−x)+…+βp(xi−x)p).{\displaystyle \sum _{i=1}^{n}w_{i}(x)\log \left(f(Y_{i},\beta _{0}+\beta _{1}(x_{i}-x)+\ldots +\beta _{p}(x_{i}-x)^{p}\right).}Estimates of the regression coefficients (in, particular,β^0{\displaystyle {\hat {\beta }}_{0}}) are obtained by maximizing the local likelihood criterion, and the local likelihood estimate isθ^(x)=β^0.{\displaystyle {\hat {\theta }}(x)={\hat {\beta }}_{0}.} Whenf(y,θ(x)){\displaystyle f(y,\theta (x))}is the normal distribution andθ(x){\displaystyle \theta (x)}is the mean function, the local likelihood method reduces to the standard local least-squares regression. For other likelihood families, there is (usually) no closed-form solution for the local likelihood estimate, and iterative procedures such asiteratively reweighted least squaresmust be used to compute the estimate. Example(local logistic regression). All response observations are 0 or 1, and the mean function is the "success" probability,μ(xi)=Pr(Yi=1|xi){\displaystyle \mu (x_{i})=\Pr(Y_{i}=1|x_{i})}. Sinceμ(xi){\displaystyle \mu (x_{i})}must be between 0 and 1, a local polynomial model should not be used forμ(x){\displaystyle \mu (x)}directly. 
Instead, the logistic transformationθ(x)=log⁡(μ(x)1−μ(x)){\displaystyle \theta (x)=\log \left({\frac {\mu (x)}{1-\mu (x)}}\right)}can be used; equivalently,1−μ(x)=11+eθ(x);μ(x)=eθ(x)1+eθ(x){\displaystyle {\begin{aligned}1-\mu (x)&={\frac {1}{1+e^{\theta (x)}}};\\\mu (x)&={\frac {e^{\theta (x)}}{1+e^{\theta (x)}}}\end{aligned}}}and the mass function isf(Yi,θ(xi))=eYiθ(xi)1+eθ(xi).{\displaystyle f(Y_{i},\theta (x_{i}))={\frac {e^{Y_{i}\theta (x_{i})}}{1+e^{\theta (x_{i})}}}.} An asymptotic theory for local likelihood estimation is developed in J. Fan,Nancy E. Heckmanand M.P.Wand (1995);[30]the book Loader (1999)[31]discusses many more applications of local likelihood. To address the sensitivity to outliers, techniques fromrobust regressioncan be employed. In localM-estimation, the local least-squares criterion is replaced by a criterion of the form∑i=1nwi(x)ρ(Yi−β0−…−βp(xi−x)ps){\displaystyle \sum _{i=1}^{n}w_{i}(x)\rho \left({\frac {Y_{i}-\beta _{0}-\ldots -\beta _{p}(x_{i}-x)^{p}}{s}}\right)}whereρ(⋅){\displaystyle \rho (\cdot )}is a robustness function ands{\displaystyle s}is a scale parameter. Discussion of the merits of different choices of robustness function is best left to therobust regressionliterature. The scale parameters{\displaystyle s}must also be estimated. References for local M-estimation include Katkovnik (1985)[17]andAlexandre Tsybakov(1986).[32] The robustness iterations in LOWESS and LOESS correspond to the robustness function defined byρ′(u)=u(1−u2/6)2;|u|<1{\displaystyle \rho '(u)=u(1-u^{2}/6)^{2};|u|<1}and a robust global estimate of the scale parameter. Ifρ(u)=|u|{\displaystyle \rho (u)=|u|}, the localL1{\displaystyle L_{1}}criterion∑i=1nwi(x)|Yi−β0−…−βp(xi−x)p|{\displaystyle \sum _{i=1}^{n}w_{i}(x)\left|Y_{i}-\beta _{0}-\ldots -\beta _{p}(x_{i}-x)^{p}\right|}results; this does not require a scale parameter. Whenp=0{\displaystyle p=0}, this criterion is minimized by a locally weighted median; localL1{\displaystyle L_{1}}regression can be interpreted as estimating themedian, rather thanmean, response. If the loss function is skewed, this becomes local quantile regression. SeeKeming YuandM.C. Jones(1998).[33] As discussed above, the biggest advantage LOESS has over many other methods is that the process of fitting a model to the sample data does not begin with the specification of a function. Instead, the analyst only has to provide a smoothing parameter value and the degree of the local polynomial. In addition, LOESS is very flexible, making it ideal for modeling complex processes for which no theoretical models exist. These two advantages, combined with the simplicity of the method, make LOESS one of the most attractive of the modern regression methods for applications that fit the general framework of least squares regression but which have a complex deterministic structure. Although it is less obvious than for some of the other methods related to linear least squares regression, LOESS also accrues most of the benefits typically shared by those procedures. The most important of those is the theory for computing uncertainties for prediction and calibration. Many other tests and procedures used for validation of least squares models can also be extended to LOESS models[citation needed]. LOESS makes less efficient use of data than other least squares methods. It requires fairly large, densely sampled data sets in order to produce good models. This is because LOESS relies on the local data structure when performing the local fitting.
Thus, LOESS provides less complex data analysis in exchange for greater experimental costs.[7] Another disadvantage of LOESS is the fact that it does not produce a regression function that is easily represented by a mathematical formula. This can make it difficult to transfer the results of an analysis to other people. In order to transfer the regression function to another person, they would need the data set and software for LOESS calculations. Innonlinear regression, on the other hand, it is only necessary to write down a functional form in order to provide estimates of the unknown parameters and the estimated uncertainty. Depending on the application, this could be either a major or a minor drawback to using LOESS. In particular, the simple form of LOESS can not be used for mechanistic modelling where fitted parameters specify particular physical properties of a system. Finally, as discussed above, LOESS is a computationally intensive method (with the exception of evenly spaced data, where the regression can then be phrased as a non-causalfinite impulse responsefilter). LOESS is also prone to the effects of outliers in the data set, like other least squares methods. There is an iterative,robustversion of LOESS [Cleveland (1979)] that can be used to reduce LOESS' sensitivity tooutliers, but too many extreme outliers can still overcome even the robust method. Books substantially covering local regression and extensions: Book chapters, Reviews: This article incorporatespublic domain materialfrom theNational Institute of Standards and Technology
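As a usage-level illustration of the robust, iterative version of LOWESS just mentioned, the following Python sketch assumes the statsmodels package, whose lowess routine exposes a nearest-neighbour span (`frac`) and a number of robustness iterations (`it`); the data are made up, and argument names and behaviour should be checked against the installed version's documentation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)
y[::25] += 5.0  # inject a few gross outliers

# frac is the nearest-neighbour span alpha; it is the number of
# robustness (downweighting) iterations applied to resist outliers.
smoothed = sm.nonparametric.lowess(y, x, frac=0.3, it=3)
x_fit, y_fit = smoothed[:, 0], smoothed[:, 1]
```

Setting `it=0` would reproduce the non-robust fit, which the injected outliers can pull visibly off the underlying curve.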
https://en.wikipedia.org/wiki/Local_regression
Instatistical modeling(especiallyprocess modeling), polynomial functions and rational functions are sometimes used as an empirical technique forcurve fitting. Apolynomial functionis one that has the form wherenis a non-negativeintegerthat defines the degree of the polynomial. A polynomial with a degree of 0 is simply aconstant function; with a degree of 1 is aline; with a degree of 2 is aquadratic; with a degree of 3 is acubic, and so on. Historically, polynomial models are among the most frequently used empirical models forcurve fitting. These models are popular for the following reasons. However, polynomial models also have the following limitations. When modeling via polynomial functions is inadequate due to any of the limitations above, the use of rational functions for modeling may give a better fit. Arational functionis simply the ratio of two polynomial functions. withndenoting a non-negative integer that defines the degree of the numerator andmdenoting a non-negative integer that defines the degree of the denominator. For fitting rational function models, the constant term in the denominator is usually set to 1. Rational functions are typically identified by the degrees of the numerator and denominator. For example, a quadratic for the numerator and a cubic for the denominator is identified as a quadratic/cubic rational function. The rational function model is a generalization of the polynomial model: rational function models contain polynomial models as a subset (i.e., the case when the denominator is a constant). Rational function models have the following advantages: Rational function models have the following disadvantages: This article incorporatespublic domain materialfrom theNational Institute of Standards and Technology
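As an illustrative sketch (not part of the material above), a rational function model with the denominator's constant term fixed at 1 can be fitted by ordinary nonlinear least squares. Here Python's SciPy `curve_fit` is assumed; the model name, the synthetic data, and the starting values are made up for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def rational_2_1(x, a0, a1, a2, b1):
    """Quadratic/linear rational function, with the denominator's
    constant term set to 1 as is conventional for these models."""
    return (a0 + a1 * x + a2 * x**2) / (1.0 + b1 * x)

# Hypothetical noisy data roughly following a quadratic/linear shape
rng = np.random.default_rng(2)
x = np.linspace(0.0, 5.0, 60)
y = (1.0 + 2.0 * x + 0.5 * x**2) / (1.0 + 0.3 * x) \
    + rng.normal(scale=0.1, size=x.size)

# Nonlinear least squares; a starting guess matters because the model
# is nonlinear in the denominator coefficient b1.
popt, pcov = curve_fit(rational_2_1, x, y, p0=[1.0, 1.0, 0.1, 0.1])
```

Starting values are often obtained from a preliminary linear fit, and care is needed near any zeros of the fitted denominator, where the model has poles.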
https://en.wikipedia.org/wiki/Rational_function_modeling
Smoothing splinesare function estimates,f^(x){\displaystyle {\hat {f}}(x)}, obtained from a set of noisy observationsyi{\displaystyle y_{i}}of the targetf(xi){\displaystyle f(x_{i})}, in order to balance a measure ofgoodness of fitoff^(xi){\displaystyle {\hat {f}}(x_{i})}toyi{\displaystyle y_{i}}with a derivative based measure of the smoothness off^(x){\displaystyle {\hat {f}}(x)}. They provide a means for smoothing noisyxi,yi{\displaystyle x_{i},y_{i}}data. The most familiar example is the cubic smoothing spline, but there are many other possibilities, including for the case wherex{\displaystyle x}is a vector quantity. Let{xi,Yi:i=1,…,n}{\displaystyle \{x_{i},Y_{i}:i=1,\dots ,n\}}be a set of observations, modeled by the relationYi=f(xi)+ϵi{\displaystyle Y_{i}=f(x_{i})+\epsilon _{i}}where theϵi{\displaystyle \epsilon _{i}}are independent, zero mean random variables. The cubic smoothing spline estimatef^{\displaystyle {\hat {f}}}of the functionf{\displaystyle f}is defined to be the unique minimizer, in theSobolev spaceW22{\displaystyle W_{2}^{2}}on a compact interval, of[1][2] Remarks: It is useful to think of fitting a smoothing spline in two steps: Now, treat the second step first. Given the vectorm^=(f^(x1),…,f^(xn))T{\displaystyle {\hat {m}}=({\hat {f}}(x_{1}),\ldots ,{\hat {f}}(x_{n}))^{T}}of fitted values, the sum-of-squares part of the spline criterion is fixed. It remains only to minimize∫f^″(x)2dx{\displaystyle \int {\hat {f}}''(x)^{2}\,dx}, and the minimizer is a natural cubicsplinethat interpolates the points(xi,f^(xi)){\displaystyle (x_{i},{\hat {f}}(x_{i}))}. This interpolating spline is a linear operator, and can be written in the form wherefi(x){\displaystyle f_{i}(x)}are a set of spline basis functions. As a result, the roughness penalty has the form where the elements ofAare∫fi″(x)fj″(x)dx{\displaystyle \int f_{i}''(x)f_{j}''(x)dx}. The basis functions, and hence the matrixA, depend on the configuration of the predictor variablesxi{\displaystyle x_{i}}, but not on the responsesYi{\displaystyle Y_{i}}orm^{\displaystyle {\hat {m}}}. Ais ann×nmatrix given byA=ΔTW−1Δ{\displaystyle A=\Delta ^{T}W^{-1}\Delta }. Δis an(n-2)×nmatrix of second differences with elements: Δii=1/hi{\displaystyle \Delta _{ii}=1/h_{i}},Δi,i+1=−1/hi−1/hi+1{\displaystyle \Delta _{i,i+1}=-1/h_{i}-1/h_{i+1}},Δi,i+2=1/hi+1{\displaystyle \Delta _{i,i+2}=1/h_{i+1}} Wis an(n-2)×(n-2)symmetric tri-diagonal matrix with elements: Wi−1,i=Wi,i−1=hi/6{\displaystyle W_{i-1,i}=W_{i,i-1}=h_{i}/6},Wii=(hi+hi+1)/3{\displaystyle W_{ii}=(h_{i}+h_{i+1})/3}andhi=ξi+1−ξi{\displaystyle h_{i}=\xi _{i+1}-\xi _{i}}, the distances between successive knots (or x values). Now back to the first step. The penalized sum-of-squares can be written as whereY=(Y1,…,Yn)T{\displaystyle Y=(Y_{1},\ldots ,Y_{n})^{T}}. Minimizing overm^{\displaystyle {\hat {m}}}by differentiating againstm^{\displaystyle {\hat {m}}}. This results in:−2{Y−m^}+2λAm^=0{\displaystyle -2\{Y-{\hat {m}}\}+2\lambda A{\hat {m}}=0}[6]andm^=(I+λA)−1Y.{\displaystyle {\hat {m}}=(I+\lambda A)^{-1}Y.} De Boor's approach exploits the same idea, of finding a balance between having a smooth curve and being close to the given data.[7] wherep{\displaystyle p}is a parameter called smooth factor and belongs to the interval[0,1]{\displaystyle [0,1]}, andδi;i=1,…,n{\displaystyle \delta _{i};i=1,\dots ,n}are the quantities controlling the extent of smoothing (they represent the weightδi−2{\displaystyle \delta _{i}^{-2}}of each pointYi{\displaystyle Y_{i}}). 
In practice, sincecubic splinesare mostly used,m{\displaystyle m}is usually2{\displaystyle 2}. The solution form=2{\displaystyle m=2}was proposed byChristian Reinschin 1967.[8]Form=2{\displaystyle m=2}, whenp{\displaystyle p}approaches1{\displaystyle 1},f^{\displaystyle {\hat {f}}}converges to the "natural" spline interpolant to the given data.[7]Asp{\displaystyle p}approaches0{\displaystyle 0},f^{\displaystyle {\hat {f}}}converges to a straight line (the smoothest curve). Since finding a suitable value ofp{\displaystyle p}is a task of trial and error, a redundant constantS{\displaystyle S}was introduced for convenience.[8]S{\displaystyle S}is used to numerically determine the value ofp{\displaystyle p}so that the functionf^{\displaystyle {\hat {f}}}meets the following condition: The algorithm described by de Boor starts withp=0{\displaystyle p=0}and increasesp{\displaystyle p}until the condition is met.[7]Ifδi{\displaystyle \delta _{i}}is an estimation of the standard deviation forYi{\displaystyle Y_{i}}, the constantS{\displaystyle S}is recommended to be chosen in the interval[n−2n,n+2n]{\displaystyle \left[n-{\sqrt {2n}},n+{\sqrt {2n}}\right]}. HavingS=0{\displaystyle S=0}means the solution is the "natural" spline interpolant.[8]IncreasingS{\displaystyle S}means we obtain a smoother curve by getting farther from the given data. There are two main classes of method for generalizing from smoothing with respect to a scalarx{\displaystyle x}to smoothing with respect to a vectorx{\displaystyle x}. The first approach simply generalizes the spline smoothing penalty to the multidimensional setting. For example, if trying to estimatef(x,z){\displaystyle f(x,z)}we might use theThin plate splinepenalty and find thef^(x,z){\displaystyle {\hat {f}}(x,z)}minimizing The thin plate spline approach can be generalized to smoothing with respect to more than two dimensions and to other orders of differentiation in the penalty.[1]As the dimension increases there are some restrictions on the smallest order of differential that can be used,[1]but actually Duchon's original paper[9]gives slightly more complicated penalties that can avoid this restriction. The thin plate splines are isotropic, meaning that if we rotate thex,z{\displaystyle x,z}co-ordinate system the estimate will not change, but also that we are assuming that the same level of smoothing is appropriate in all directions. This is often considered reasonable when smoothing with respect to spatial location, but in many other cases isotropy is not an appropriate assumption and can lead to sensitivity to apparently arbitrary choices of measurement units. For example, if smoothing with respect to distance and time, an isotropic smoother will give different results if distance is measured in metres and time in seconds than if the units are changed to centimetres and hours. The second class of generalizations to multi-dimensional smoothing deals directly with this scale invariance issue using tensor product spline constructions.[10][11][12]Such splines have smoothing penalties with multiple smoothing parameters, which is the price that must be paid for not assuming that the same degree of smoothness is appropriate in all directions. Smoothing splines are related to, but distinct from: Source code forsplinesmoothing can be found in the examples fromCarl de Boor'sbookA Practical Guide to Splines. The examples are in theFortranprogramming language. The updated sources are available also on Carl de Boor's official site[1].
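The matrix construction described above (the second-difference matrix Δ, the tridiagonal matrix W, A = ΔᵀW⁻¹Δ, and the fitted values m̂ = (I + λA)⁻¹Y) can be prototyped in a few lines. The following Python sketch is a naive dense-matrix implementation for illustration only; the function name and the toy data are mine, and a practical implementation would exploit the banded structure of these matrices.

```python
import numpy as np

def smoothing_spline_fit(x, y, lam):
    """Fitted values m_hat = (I + lam*A)^{-1} y of a cubic smoothing
    spline at the design points, with A = Delta^T W^{-1} Delta built
    from the Delta and W matrices defined in the text.

    x must be strictly increasing; lam >= 0 is the roughness penalty.
    """
    n = len(x)
    h = np.diff(x)  # h[j] = x[j+1] - x[j], length n-1

    Delta = np.zeros((n - 2, n))
    W = np.zeros((n - 2, n - 2))
    for j in range(n - 2):
        Delta[j, j] = 1.0 / h[j]
        Delta[j, j + 1] = -1.0 / h[j] - 1.0 / h[j + 1]
        Delta[j, j + 2] = 1.0 / h[j + 1]
        W[j, j] = (h[j] + h[j + 1]) / 3.0
        if j + 1 < n - 2:
            W[j, j + 1] = W[j + 1, j] = h[j + 1] / 6.0

    A = Delta.T @ np.linalg.solve(W, Delta)  # A = Delta^T W^{-1} Delta
    return np.linalg.solve(np.eye(n) + lam * A, y)

# Toy usage on noisy data; larger lam gives a smoother (flatter) fit
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)
m_hat = smoothing_spline_fit(x, y, lam=1e-4)
```

For routine use, library smoothers such as SciPy's `UnivariateSpline(x, y, s=S)` appear to expose a smoothing condition of the same flavour as the constant S discussed above, though the exact convention should be checked against that library's documentation.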
https://en.wikipedia.org/wiki/Spline_regression
Incomputer programming,array slicingis an operation that extracts a subset of elements from anarrayand packages them as another array, possibly in a differentdimensionfrom the original. Common examples of array slicing are extracting a substring from astringof characters, the "ell" in "hello", extracting a row or column from a two-dimensional array, or extracting avectorfrom amatrix. Depending on theprogramming language, an array slice can be made out of non-consecutive elements. Also depending on the language, the elements of the new array may bealiased to(i.e., share memory with) those of the original array. For "one-dimensional" (single-indexed) arrays – vectors, sequences, strings etc. – the most common slicing operation is extraction of zero or more consecutive elements. If we have a vector containing elements (2, 5, 7, 3, 8, 6, 4, 1), and want to create an array slice from the 3rd to the 6th elements, we get (7, 3, 8, 6). Inprogramming languagesthat use a 0-based indexing scheme, the slice would be from index2to5. Reducing the range of any index to a single value effectively removes the need for that index. This feature can be used, for example, to extract one-dimensional slices (vectors in 3D, including rows, columns, and tubes[1]) or two-dimensional slices (rectangular matrices) from a three-dimensional array. However, since the range can be specified at run-time, type-checked languages may require an explicit (compile-time) notation to actually eliminate the trivial indices. General array slicing can be implemented (whether or not built into the language) by referencing every array through adope vectorordescriptor– a record that contains the address of the first array element, and then the range of each index and the corresponding coefficient in the indexing formula. This technique also allows immediate arraytransposition, index reversal, subsampling, etc. For languages likeC, where the indices always start at zero, the dope vector of an array withdindices has at least 1 + 2dparameters. For languages that allow arbitrary lower bounds for indices, likePascal, the dope vector needs 1 + 3dentries. If the array abstraction does not support true negative indices (as the arrays ofAdaandPascaldo), then negative indices for the bounds of the slice for a given dimension are sometimes used to specify an offset from the end of the array in that dimension. In 1-based schemes, -1 generally indicates the second-to-last item, while in a 0-based system, it refers to the very last item. The concept of slicing was surely known even before the invention ofcompilers. Slicing as a language feature probably started withFORTRAN(1957), more as a consequence of non-existent type and range checking than by design. The concept was also alluded to in the preliminary report for theIAL(ALGOL 58) in that the syntax allowed one or more indices of an array element (or, for that matter, of a procedure call) to be omitted when used as an actual parameter. Kenneth Iverson'sAPL(1957) had very flexible multi-dimensional array slicing, which contributed much to the language's expressive power and popularity. ALGOL 68(1968) introduced comprehensive multi-dimension array slicing and trimming features. Array slicing facilities have been incorporated in several modern languages, such asAda,Cobra,D,Fortran 90,Go,Rust,Julia,MATLAB,Perl,Python,S-Lang,Windows PowerShelland the mathematical/statistical languagesGNU Octave,SandR. PL/I provides two facilities for array slicing. 
A reference toY(2)is a reference toX(2,2), and so on. The Fortran 66 programmers were only able to take advantage of slicing matrices by row, and then only when passing that row to asubroutine: Result: Note that there is nodope vectorin FORTRAN 66 hence the length of the slice must also be passed as an argument - or some other means - to theSUBROUTINE. 1970sPascalandChad similar restrictions. Algol68 final report contains an early example of slicing, slices are specified in the form: or: Both bounds are inclusive and can be omitted, in which case they default to the declared array bounds. Neither the stride facility, nor diagonal slice aliases are part of the revised report. Examples: HP'sHP 2000systems, introduced in November 1968, usedHP Time-Shared BASICas their primary interface and programming language. This version of BASIC used slicing for most string manipulation operations. One oddity of the language was that it allowed round or square braces interchangeably, and which was used in practice was typically a function of thecomputer terminalbeing used. Example: Will produce: The HP systems were widely used in the early 1970s, especially in technicalhigh schoolsand many small industrial and scientific settings.[3]As the firstmicrocomputersemerged in the mid-1970s, HP was often used as the pattern for their BASIC dialects as well. Notable examples include 1977'sApple BASIC, 1978'sAtari BASIC, and 1979'sSinclair BASIC. This style of manipulation generally offers advantages in terms of memory use, and was often chosen on systems that shipped with small amounts of memory. Only Sinclair's dialect differed in any meaningful way, using theTOkeyword instead of a comma-separated list: Slicing was also selected as the basis for theANSIFull BASICstandard, using the colon as the separator and thus differentiating between slicing and array access: While this style of access offered a number of advantages, especially for the small machines of the era, sometime after 1970Digital Equipment Corporationintroduced their own variation of BASIC that used theLEFT$,RIGHT$andMID$string functions.Microsoft BASICwas written on thePDP-10and its BASIC was used as the pattern. Through the late 1970s the two styles were both widely used, but by the early 1980s the DEC-style functions were thede factostandard. The:operator implements the stride syntax (lower_bound:upper_bound[:stride]) by generating a vector.1:5evaluates as[1, 2, 3, 4, 5].1:9:2evaluates as[1, 3, 5, 7, 9]. A bare:evaluates the same as1:end, withenddetermined by context. Arrays inSandGNU Rare always one-based, thus the indices of a new slice will begin withonefor each dimension, regardless of the previous indices. Dimensions with length ofonewill be dropped (unless drop = FALSE). Dimension names (where present) will be preserved. The Fortran 77 standard introduced the ability to slice andconcatenatestrings: Produces: Such strings could be passed byreferenceto another subroutine, the length would also be passed transparently to the subroutine as a kind ofshortdope vector. Again produces: Ada 83 supports slices for all array types. LikeFortran 77such arrays could be passed byreferenceto another subroutine, the length would also be passed transparently to the subroutine as a kind ofshortdope vector. Produces: Note:Since in Ada indices are n-based the termText (2 .. 4)will result in an Array with the base index of 2. 
The definition forText_IO.Put_Lineis: The definition forStringis: As Ada supports true negative indices as intypeHistory_Data_Arrayisarray(-6000..2010)ofHistory_Data;it places no special meaning on negative indices. In the example above the termSome_History_Data (-30 .. 30)would slice theHistory_Datafrom 31BCto 30AD(since there was no year zero, the year number 0 actually refers to 1BC). If we have as above, then the first 3 elements, middle 3 elements and last 3 elements would be: Perl supports negative list indices. The -1 index is the last element, -2 the penultimate element, etc. In addition, Perl supports slicing based on expressions, for example: If you have the following list: Then it is possible to slice by using a notation similar to element retrieval: Note that Python allows negative list indices. The index -1 represents the last element, -2 the penultimate element, etc. Python also allows a step property by appending an extra colon and a value. For example: The stride syntax (nums[1:5:2]) was introduced in the second half of the 1990s, as a result of requests put forward by scientific users in the Python "matrix-SIG" (special interest group).[4] Slice semantics potentially differ per object; new semantics can be introduced whenoperator overloadingthe indexing operator. With Python standard lists (which aredynamic arrays), every slice is a copy. Slices ofNumPyarrays, by contrast, are views onto the same underlying buffer. In Fortran 90, slices are specified in the form Both bounds are inclusive and can be omitted, in which case they default to the declared array bounds. Stride defaults to 1. Example: Each dimension of an array value in Analytica is identified by an Index variable. When slicing or subscripting, the syntax identifies the dimension(s) over which you are slicing or subscripting by naming the dimension. Such as: Naming indexes in slicing and subscripting is similar to naming parameters in function calls instead of relying on a fixed sequence of parameters. One advantage of naming indexes in slicing is that the programmer does not have to remember the sequence of Indexes, in a multidimensional array. A deeper advantage is that expressions generalize automatically and safely without requiring a rewrite when the number of dimensions of X changes. Array slicing was introduced in version 1.0. Earlier versions did not support this feature. Suppose that A is a 1-d array such as Then an array B of first 5 elements of A may be created using Similarly, B may be assigned to an array of the last 5 elements of A via: Other examples of 1-d slicing include: Slicing of higher-dimensional arrays works similarly: Array indices can also be arrays of integers. For example, suppose thatI = [0:9]is an array of 10 integers. ThenA[I]is equivalent to an array of the first 10 elements ofA. A practical example of this is a sorting operation such as: Consider the array: Take a slice out of it: and the contents ofbwill be[7, 3, 8]. The first index of the slice is inclusive, the second is exclusive. means that the dynamic arraycnow contains[8, 6]because inside the [] the$symbol refers to the length of the array. D array slices are aliased to the original array, so: means thatanow has the contents[2, 5, 7, 3, 10, 6, 4, 1]. To create a copy of the array data, instead of only an alias, do: Unlike Python, D slice bounds don't saturate, so code equivalent to this Python code is an error in D: The programming languageSuperColliderimplements some concepts fromJ/APL. 
Slicing looks as follows: Arrays infishare always one-based, thus the indices of a new slice will begin withone, regardless of the previous indices. Cobra supports Python-style slicing. If you have a list then the first 3 elements, middle 3 elements, and last 3 elements would be: Cobra also supports slicing-style syntax for 'numeric for loops': Arrays are zero-based in PowerShell and can be defined using the comma operator: Go supports Python-style syntax for slicing (except negative indices are not supported). Arrays and slices can be sliced. If you have a slice then the first 3 elements, middle 3 elements, last 3 elements, and a copy of the entire slice would be: Slices in Go are reference types, which means that different slices may refer to the same underlying array. Cilk Plus supports syntax for array slicing as an extension to C and C++. Cilk Plus slicing looks as follows: Cilk Plus's array slicing differs from Fortran's in two ways: Julia array slicingis like that ofMATLAB, but uses square brackets. Example:
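As a compact, hedged illustration (in Python, one of the languages surveyed above, rather than a transcription of any single example) of several concepts from this article, the sketch below shows zero-based half-open ranges, negative indices counted from the end, stride, and the difference between slices that copy and slices that alias, using the (2, 5, 7, 3, 8, 6, 4, 1) vector from the introduction.

```python
import numpy as np

v = [2, 5, 7, 3, 8, 6, 4, 1]

# Zero-based, half-open slice: indices 2..5, i.e. the "3rd to 6th"
# elements described in the introduction.
print(v[2:6])      # [7, 3, 8, 6]

# Negative indices count from the end; -1 is the last element.
print(v[-3:])      # [6, 4, 1]

# Stride: every second element among indices 1..4.
print(v[1:5:2])    # [5, 3]

# Python list slices are copies of the selected elements ...
w = v[2:6]
w[0] = 99
print(v[2])        # still 7

# ... whereas NumPy array slices are views (aliases) onto the same buffer.
a = np.array(v)
b = a[2:6]
b[0] = 99
print(a[2])        # now 99
```

The copy-versus-view distinction mirrors the aliasing behaviour noted earlier for D and Go slices as opposed to plain Python lists.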
https://en.wikipedia.org/wiki/Array_slicing
This is a list of notableprogramming languages, grouped by type. The groupings are overlapping; not mutually exclusive. A language can be listed in multiple groupings. Agent-oriented programming allows the developer to build, extend and usesoftware agents, which are abstractions of objects that can message other agents. Array programming(also termedvectorormultidimensional) languages generalize operations on scalars to apply transparently tovectors,matrices, andhigher-dimensional arrays. Aspect-oriented programming enables developers to add new functionality to code, known as "advice", without modifying that code itself; rather, it uses apointcutto implement the advice into code blocks. Assembly languagesdirectly correspond to amachine language(seebelow), so machine code instructions appear in a form understandable by humans, although there may not be a one-to-one mapping between an individual statement and an individual instruction. Assembly languages let programmers use symbolic addresses, which theassemblerconverts to absolute orrelocatableaddresses. Most assemblers also supportmacrosandsymbolic constants. Anauthoring languageis a programming language designed for use by a non-computer expert to easily create tutorials, websites, and other interactive computer programs. Command-line interface (CLI) languages are also called batch languages or job control languages. Examples: These are languages typically processed bycompilers, though theoretically any language can be compiled or interpreted. Aconcatenative programming languageis apoint-freecomputerprogramming languagein which all expressions denotefunctions, and thejuxtapositionofexpressionsdenotesfunction composition. Message passinglanguages provide language constructs forconcurrency. The predominant paradigm for concurrency in mainstream languages such asJavaisshared memoryconcurrency. Concurrent languages that make use of message passing have generally been inspired by process calculi such ascommunicating sequential processes(CSP) or theπ-calculus. Aconstraint programminglanguage is adeclarative programminglanguage where relationships between variables are expressed asconstraints. Execution proceeds by attempting to find values for the variables which satisfy all declared constraints. Acurly bracketorcurly bracelanguage has syntax that defines a block as the statements betweencurly brackets, a.k.a. braces,{}. This syntax originated withBCPL(1966), and was popularized byC. Many curly bracket languagesdescend from or are strongly influenced by C. Examples: Dataflow programminglanguages rely on a (usually visual) representation of the flow of data to specify the program. Frequently used for reacting to discrete events or for processing streams of data. 
Examples of dataflow languages include: Data-oriented languages provide powerful ways of searching and manipulating the relations that have been described as entity relationship tables which map one set of things into other sets.[citation needed]Examples of data-oriented languages include: Decision tablescan be used as an aid to clarifying the logic before writing a program in any language, but in the 1960s a number of languages were developed where the main logic is expressed directly in the form of a decision table, including: Declarative languagesexpress the logic of a computation without describing its control flow in detail.Declarative programmingstands in contrast toimperative programmingvia imperative programming languages, where control flow is specified by serial orders (imperatives). (Pure)functionalandlogic-basedprogramming languages are also declarative, and constitute the major subcategories of the declarative category. This section lists additional examples not in those subcategories. Source embeddable languages embed small pieces of executable code inside a piece of free-form text, often a web page. Client-side embedded languages are limited by the abilities of the browser or intended client. They aim to provide dynamism to web pages without the need to recontact the server. Server-side embedded languages are much more flexible, since almost any language can be built into a server. The aim of having fragments of server-side code embedded in a web page is to generate additional markup dynamically; the code itself disappears when the page is served, to be replaced by its output. The above examples are particularly dedicated to this purpose. A large number of other languages, such asErlang,Scala,Perl,RingandRubycan be adapted (for instance, by being made intoApachemodules). A wide variety of dynamic or scripting languages can be embedded in compiled executable code. Basically, object code for the language'sinterpreterneeds to be linked into the executable. Source code fragments for the embedded language can then be passed to an evaluation function as strings. Application control languages can be implemented this way, if the source code is input by the user. Languages with small interpreters are preferred. Languages developed primarily for the purpose of teaching and learning of programming. Anesoteric programming languageis a programming language designed as a test of the boundaries of computer programming language design, as a proof of concept, or as a joke. Extension programming languagesare languages embedded into another program and used to harness its features in extension scripts. Fourth-generation programming languagesarehigh-level programming languagesbuilt arounddatabasesystems. They are generally used in commercial environments. Functional programminglanguages define programs and subroutines as mathematical functions and treat them as first-class. Many so-called functional languages are "impure", containing imperative features. Many functional languages are tied to mathematical calculation tools. Functional languages include: In electronics, ahardware description language(HDL) is a specialized computer language used to describe the structure, design, and operation of electronic circuits, and most commonly, digital logic circuits. The two most widely used and well-supported HDL varieties used in industry areVerilogandVHDL. Hardware description languages include: Imperative programming languages may be multi-paradigm and appear in other classifications. 
Here is a list of programming languages that follow theimperative paradigm: Known asREPL- Interactive mode languages act as a kind of shell: expressions or statements can be entered one at a time, and the result of their evaluation seen immediately. Interpreted languagesare programming languages in which programs may be executed from source code form, by an interpreter. Theoretically, any language can be compiled or interpreted, so the terminterpreted languagegenerally refers to languages that are usually interpreted rather than compiled. Iterative languages are built around or offeringgenerators. Garbage Collection (GC) is a form of automatic memory management. The garbage collector attempts to reclaim memory that was allocated by the program but is no longer used. Some programming languages without the inherent ability to manually manage memory, likeCython,[25]Swift,[c]andScala[26](Scala Native only), are able to import or call functions likemallocandfreefromCthrough aforeign function interface. List-based languages are a type ofdata-structured languagethat are based on thelistdata structure. Little languages[29]serve a specialized problem domain. Logic-basedlanguages specify a set of attributes that a solution must-have, rather than a set of steps to obtain a solution. Notable languages following thisprogramming paradigminclude: Machine languagesare directly executable by a computer's CPU. They are typically formulated as bit patterns, usually represented inoctalorhexadecimal. Each bit pattern causes the circuits in the CPU to execute one of the fundamental operations of the hardware. The activation of specific electrical inputs (e.g., CPU package pins for microprocessors), and logical settings for CPU state values, control the processor's computation. Individual machine languages are specific to a family of processors; machine-language code for one family of processors cannot run directly on processors in another family unless the processors in question have additional hardware to support it (for example, DEC VAX processors included a PDP-11 compatibility mode). They are (essentially) always defined by the CPU developer, not by 3rd parties.[e]The symbolic version, the processor'sassembly language, is also defined by the developer, in most cases. Some commonly used machine codeinstruction setsare: Macrolanguages transform one source code file into another. A "macro" is essentially a short piece of text that expands into a longer one (not to be confused withhygienic macros), possibly with parameter substitution. They are often used topreprocesssource code. Preprocessors can also supply facilities likefile inclusion. Macro languages may be restricted to acting on specially labeled code regions (pre-fixed with a#in the case of the C preprocessor). Alternatively, they may not, but in this case it is still often undesirable to (for instance) expand a macro embedded in astring literal, so they still need a rudimentary awareness of syntax. That being the case, they are often still applicable to more than one language. Contrast with source-embeddable languages likePHP, which are fully featured. Scripting languagessuch asTclandECMAScript(ActionScript,ECMAScript for XML,JavaScript,JScript) have been embedded into applications. These are sometimes called "macro languages", although in a somewhat different sense to textual-substitution macros likem4. 
Metaprogrammingis the writing of programs that write or manipulate other programs, including themselves, as their data or that do part of the work that is otherwise done atrun timeduringcompile time. In many cases, this allows programmers to get more done in the same amount of time as they would take to write all the code manually. Multiparadigm languagessupport more than oneprogramming paradigm. They allow aprogramto use more than oneprogrammingstyle. The goal is to allow programmers to use the best tool for a job, admitting that no one paradigm solves all problems in the easiest or most efficient way. Several general-purpose programming languages, such asCandPython, are also used for technical computing; this list focuses on languages almost exclusively used for technical computing. Class-basedobject-oriented programminglanguages supportobjectsdefined by their class. Class definitions include member data.Message passingis a key concept, if not the main concept, in object-oriented languages. Polymorphic functions parameterized by the class of some of their arguments are typically calledmethods. In languages withsingle dispatch, classes typically also include method definitions. In languages withmultiple dispatch, methods are defined bygeneric functions. There are exceptions wheresingle dispatchmethods aregeneric functions(e.g.Bigloo's object system). Prototype-based languagesare object-oriented languages where the distinction between classes and instances has been removed: Off-side rulelanguages denote blocks of code by theirindentation. Procedural programminglanguages are based on the concept of the unit and scope (the data viewing range) of an executable code statement. A procedural program is composed of one or more units or modules, either user coded or provided in a code library; each module is composed of one or more procedures, also called a function, routine, subroutine, or method, depending on the language. Examples of procedural languages include: Reflective programminglanguages let programs examine and possibly modify their high-level structure at runtime or compile-time. This is most common in high-level virtual machine programming languages likeSmalltalk, and less common in lower-level programming languages likeC. Languages and platforms supporting reflection: Rule-based languages instantiate rules when activated by conditions in a set of data. Of all possible activations, some set is selected and the statements belonging to those rules execute. Rule-based languages include:[citation needed] Stack-based languages are a type ofdata-structured languagethat are based on thestackdata structure. Synchronous programming languagesare optimized for programming reactive systems, systems that are often interrupted and must respond quickly. Many such systems are also calledrealtime systems, and are used often inembedded systems. Examples: Ashading languageis a graphics programming language adapted to programming shader effects. Such language forms usually consist of special data types, like "color" and "normal". Due to the variety of target markets for 3D computer graphics, different shading languages have been developed. They provide both higher hardware abstraction and a more flexible programming model than previous paradigms which hardcoded transformation and shading equations. This gives the programmer greater control over the rendering process and delivers richer content at lower overhead. Shading languages used in offline rendering produce maximum image quality. Processing such shaders is time-consuming.
The computational power required can be expensive because of their ability to produce photorealistic results. These languages assist with generatinglexical analyzersandparsersforcontext-free grammars. Thesystem programming languagesare for low-level tasks like memory management or task management. A system programming language usually refers to a programming language used for system programming; such languages are designed for writing system software, which usually requires different development approaches when compared with application software. System software is computer software designed to operate and control the computer hardware, and to provide a platform for running application software. System software includes software categories such as operating systems, utility software, device drivers, compilers, and linkers. Examples of system languages include: Transformation languagesserve the purpose of transforming (translating) source code specified in a certain formal language into a defined destination format code. It is most commonly used in intermediate components of more complex super-systems in order to adopt internal results for input into a succeeding processing routine. Visual programming languageslet users specify programs in a two-(or more)-dimensional way, instead of as one-dimensional text strings, via graphic layouts of various types. Somedataflow programminglanguages are also visual languages. Computer scientistNiklaus Wirthdesigned and implemented several influential languages. These are languages based on or that operate onXML.
https://en.wikipedia.org/wiki/List_of_programming_languages_by_type#Array_languages
This is a list of notableprogramming languages, grouped by type. The groupings are overlapping; not mutually exclusive. A language can be listed in multiple groupings. Agent-oriented programming allows the developer to build, extend and usesoftware agents, which are abstractions of objects that can message other agents. Array programming(also termedvectorormultidimensional) languages generalize operations on scalars to apply transparently tovectors,matrices, andhigher-dimensional arrays. Aspect-oriented programming enables developers to add new functionality to code, known as "advice", without modifying that code itself; rather, it uses apointcutto implement the advice into code blocks. Assembly languagesdirectly correspond to amachine language(seebelow), so machine code instructions appear in a form understandable by humans, although there may not be a one-to-one mapping between an individual statement and an individual instruction. Assembly languages let programmers use symbolic addresses, which theassemblerconverts to absolute orrelocatableaddresses. Most assemblers also supportmacrosandsymbolic constants. Anauthoring languageis a programming language designed for use by a non-computer expert to easily create tutorials, websites, and other interactive computer programs. Command-line interface (CLI) languages are also called batch languages or job control languages. Examples: These are languages typically processed bycompilers, though theoretically any language can be compiled or interpreted. Aconcatenative programming languageis apoint-freecomputerprogramming languagein which all expressions denotefunctions, and thejuxtapositionofexpressionsdenotesfunction composition. Message passinglanguages provide language constructs forconcurrency. The predominant paradigm for concurrency in mainstream languages such asJavaisshared memoryconcurrency. Concurrent languages that make use of message passing have generally been inspired by process calculi such ascommunicating sequential processes(CSP) or theπ-calculus. Aconstraint programminglanguage is adeclarative programminglanguage where relationships between variables are expressed asconstraints. Execution proceeds by attempting to find values for the variables which satisfy all declared constraints. Acurly bracketorcurly bracelanguage has syntax that defines a block as the statements betweencurly brackets, a.k.a. braces,{}. This syntax originated withBCPL(1966), and was popularized byC. Many curly bracket languagesdescend from or are strongly influenced by C. Examples: Dataflow programminglanguages rely on a (usually visual) representation of the flow of data to specify the program. Frequently used for reacting to discrete events or for processing streams of data. 
Examples of dataflow languages include: Data-oriented languages provide powerful ways of searching and manipulating the relations that have been described as entity relationship tables which map one set of things into other sets.[citation needed]Examples of data-oriented languages include: Decision tablescan be used as an aid to clarifying the logic before writing a program in any language, but in the 1960s a number of languages were developed where the main logic is expressed directly in the form of a decision table, including: Declarative languagesexpress the logic of a computation without describing its control flow in detail.Declarative programmingstands in contrast toimperative programmingvia imperative programming languages, where control flow is specified by serial orders (imperatives). (Pure)functionalandlogic-basedprogramming languages are also declarative, and constitute the major subcategories of the declarative category. This section lists additional examples not in those subcategories. Source embeddable languages embed small pieces of executable code inside a piece of free-form text, often a web page. Client-side embedded languages are limited by the abilities of the browser or intended client. They aim to provide dynamism to web pages without the need to recontact the server. Server-side embedded languages are much more flexible, since almost any language can be built into a server. The aim of having fragments of server-side code embedded in a web page is to generate additional markup dynamically; the code itself disappears when the page is served, to be replaced by its output. The above examples are particularly dedicated to this purpose. A large number of other languages, such asErlang,Scala,Perl,RingandRubycan be adapted (for instance, by being made intoApachemodules). A wide variety of dynamic or scripting languages can be embedded in compiled executable code. Basically, object code for the language'sinterpreterneeds to be linked into the executable. Source code fragments for the embedded language can then be passed to an evaluation function as strings. Application control languages can be implemented this way, if the source code is input by the user. Languages with small interpreters are preferred. Languages developed primarily for the purpose of teaching and learning of programming. Anesoteric programming languageis a programming language designed as a test of the boundaries of computer programming language design, as a proof of concept, or as a joke. Extension programming languagesare languages embedded into another program and used to harness its features in extension scripts. Fourth-generation programming languagesarehigh-level programming languagesbuilt arounddatabasesystems. They are generally used in commercial environments. Functional programminglanguages define programs and subroutines as mathematical functions and treat them as first-class. Many so-called functional languages are "impure", containing imperative features. Many functional languages are tied to mathematical calculation tools. Functional languages include: In electronics, ahardware description language(HDL) is a specialized computer language used to describe the structure, design, and operation of electronic circuits, and most commonly, digital logic circuits. The two most widely used and well-supported HDL varieties used in industry areVerilogandVHDL. Hardware description languages include: Imperative programming languages may be multi-paradigm and appear in other classifications. 
Here is a list of programming languages that follow theimperative paradigm: Interactive mode languages, also known asREPLs (read–eval–print loops), act as a kind of shell: expressions or statements can be entered one at a time, and the result of their evaluation seen immediately. Interpreted languagesare programming languages in which programs may be executed from source code form, by an interpreter. Theoretically, any language can be compiled or interpreted, so the terminterpreted languagegenerally refers to languages that are usually interpreted rather than compiled. Iterative languages are built around or offergenerators. Garbage Collection (GC) is a form of automatic memory management. The garbage collector attempts to reclaim memory that was allocated by the program but is no longer used. Some programming languages without the inherent ability to manually manage memory, likeCython,[25]Swift,[c]andScala[26](Scala Native only), are able to import or call functions likemallocandfreefromCthrough aforeign function interface. List-based languages are a type ofdata-structured languagethat are based on thelistdata structure. Little languages[29]serve a specialized problem domain. Logic-basedlanguages specify a set of attributes that a solution must have, rather than a set of steps to obtain a solution. Notable languages following thisprogramming paradigminclude: Machine languagesare directly executable by a computer's CPU. They are typically formulated as bit patterns, usually represented inoctalorhexadecimal. Each bit pattern causes the circuits in the CPU to execute one of the fundamental operations of the hardware. The activation of specific electrical inputs (e.g., CPU package pins for microprocessors), and logical settings for CPU state values, control the processor's computation. Individual machine languages are specific to a family of processors; machine-language code for one family of processors cannot run directly on processors in another family unless the processors in question have additional hardware to support it (for example, DEC VAX processors included a PDP-11 compatibility mode). They are (essentially) always defined by the CPU developer, not by third parties.[e]The symbolic version, the processor'sassembly language, is also defined by the developer, in most cases. Some commonly used machine codeinstruction setsare: Macrolanguages transform one source code file into another. A "macro" is essentially a short piece of text that expands into a longer one (not to be confused withhygienic macros), possibly with parameter substitution. They are often used topreprocesssource code. Preprocessors can also supply facilities likefile inclusion. Macro languages may be restricted to acting on specially labeled code regions (prefixed with a#in the case of the C preprocessor). Alternatively, they may not, but in this case it is still often undesirable to (for instance) expand a macro embedded in astring literal, so they still need a rudimentary awareness of syntax. That being the case, they are often still applicable to more than one language. Contrast with source-embeddable languages likePHP, which are fully featured. Scripting languagessuch asTclandECMAScript(ActionScript,ECMAScript for XML,JavaScript,JScript) have been embedded into applications. These are sometimes called "macro languages", although in a somewhat different sense to textual-substitution macros likem4.
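The foreign function interface route mentioned above (calling C'smallocandfreefrom a managed language) can be sketched in Python with the standard ctypes module; this is only an illustration of the idea, not one of the languages named in the text, and the library lookup assumes a Unix-like system.

import ctypes, ctypes.util

# Load the C standard library (find_library may return None on some platforms).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C signatures so ctypes converts arguments and results correctly.
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.restype = None
libc.free.argtypes = [ctypes.c_void_p]

buf = libc.malloc(64)      # raw, manually managed 64-byte allocation
ctypes.memset(buf, 0, 64)  # use it like any C buffer
libc.free(buf)             # must be released explicitly; the GC does not track it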
Metaprogrammingis the writing of programs that write or manipulate other programs, including themselves, as their data or that do part of the work that is otherwise done atrun timeduringcompile time. In many cases, this allows programmers to get more done in the same amount of time as they would take to write all the code manually. Multiparadigm languagessupport more than oneprogramming paradigm. They allow aprogramto use more than oneprogrammingstyle. The goal is to allow programmers to use the best tool for a job, admitting that no one paradigm solves all problems in the easiest or most efficient way. Several general-purpose programming languages, such asCandPython, are also used for technical computing; this list focuses on languages almost exclusively used for technical computing. Class-basedobject-oriented programminglanguages supportobjectsdefined by their class. Class definitions include member data.Message passingis a key concept, if not the main concept, in object-oriented languages. Polymorphic functions parameterized by the class of some of their arguments are typically calledmethods. In languages withsingle dispatch, classes typically also include method definitions. In languages withmultiple dispatch, methods are defined bygeneric functions. There are exceptions wheresingle dispatchmethods aregeneric functions(e.g.Bigloo's object system). Prototype-based languagesare object-oriented languages where the distinction between classes and instances has been removed: Off-side rulelanguages denote blocks of code by theirindentation. Procedural programminglanguages are based on the concept of the unit and scope (the data viewing range) of an executable code statement. A procedural program is composed of one or more units or modules, either user coded or provided in a code library; each module is composed of one or more procedures, also called a function, routine, subroutine, or method, depending on the language. Examples of procedural languages include: Reflective programminglanguages let programs examine and possibly modify their high-level structure at runtime or compile-time. This is most common in high-level virtual machine programming languages likeSmalltalk, and less common in lower-level programming languages likeC. Languages and platforms supporting reflection: Rule-based languages instantiate rules when activated by conditions in a set of data. Of all possible activations, some set is selected and the statements belonging to those rules execute. Rule-based languages include: Stack-based languages are a type ofdata-structured languagethat are based on thestackdata structure. Synchronous programming languagesare optimized for programming reactive systems, systems that are often interrupted and must respond quickly. Many such systems are also calledrealtime systems, and are used often inembedded systems. Examples: Ashading languageis a graphics programming language adapted to programming shader effects. Such language forms usually consist of special data types, like "color" and "normal". Due to the variety of target markets for 3D computer graphics, several distinct shading languages have been developed. They provide both higher hardware abstraction and a more flexible programming model than previous paradigms which hardcoded transformation and shading equations. This gives the programmer greater control over the rendering process and delivers richer content at lower overhead. Shading languages used in offline rendering produce maximum image quality. Processing such shaders is time-consuming.
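Of the categories above, reflective programming lends itself to a compact illustration. The following Python sketch (an assumed example, not drawn from the text) examines and then modifies an object's high-level structure at run time:

class Greeter:
    def hello(self):
        return "hello"

g = Greeter()

# Examine the object's structure at run time.
print(type(g).__name__)                               # 'Greeter'
print([n for n in dir(g) if not n.startswith("_")])   # ['hello']

# Modify the structure at run time: attach a new method to the class.
setattr(Greeter, "shout", lambda self: self.hello().upper())
print(g.shout())                                      # 'HELLO'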
The computational power required can be expensive because of their ability to produce photorealistic results. These languages assist with generatinglexical analyzersandparsersforcontext-free grammars. Thesystem programming languagesare for low-level tasks like memory management or task management. A system programming language usually refers to a programming language used for system programming; such languages are designed for writing system software, which usually requires different development approaches when compared with application software. System software is computer software designed to operate and control the computer hardware, and to provide a platform for running application software. System software includes software categories such as operating systems, utility software, device drivers, compilers, and linkers. Examples of system languages include: Transformation languagesserve the purpose of transforming (translating) source code specified in a certain formal language into a defined destination format code. They are most commonly used in intermediate components of more complex super-systems in order to adapt internal results for input into a succeeding processing routine. Visual programming languageslet users specify programs in a two-(or more)-dimensional way, instead of as one-dimensional text strings, via graphic layouts of various types. Somedataflow programminglanguages are also visual languages. Computer scientistNiklaus Wirthdesigned and implemented several influential languages. These are languages based on or that operate onXML.
https://en.wikipedia.org/wiki/List_of_programming_languages_by_type#Numerical_analysis
The followingoutlineis provided as an overview of and topical guide to software: Software– collection ofcomputer programsand relateddatathat provides the information for the functioning of acomputer. It is held in various forms ofmemoryof the computer. It comprises procedures, algorithms, and documentation concerned with the operation of a data processing system. The term was coined to contrast with the term hardware, meaning physical devices. In contrast to hardware, software "cannot be touched".[1]Software is also sometimes used in a narrower sense, meaningapplication softwareonly. Sometimes the term includes data that has not traditionally been associated with computers, such as film, tapes, and records.[2] Software development entails the establishment of asystems development life cycleof a software product. It encompasses a planned and structured process from the conception of the desired software to its final manifestation,[4]which constitutescomputer programming, the process of writing and maintaining thesource code. Software development includes research, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products.[5] Software distribution–
https://en.wikipedia.org/wiki/Outline_of_software
Incomputer science, anarrayis adata structureconsisting of a collection ofelements(valuesorvariables), of same memory size, each identified by at least onearray indexorkey, a collection of which may be a tuple, known as an index tuple. An array is stored such that the position (memory address) of each element can be computed from its indextupleby a mathematical formula.[1][2][3]The simplest type of data structure is a linear array, also called a one-dimensional array. For example, an array of ten32-bit(4-byte) integer variables, with indices 0 through 9, may be stored as tenwordsat memory addresses 2000, 2004, 2008, ..., 2036, (inhexadecimal:0x7D0,0x7D4,0x7D8, ...,0x7F4) so that the element with indexihas the address 2000 + (i× 4).[4]The memory address of the first element of an array is called first address, foundation address, or base address. Because the mathematical concept of amatrixcan be represented as a two-dimensional grid, two-dimensional arrays are also sometimes called "matrices". In some cases the term "vector" is used in computing to refer to an array, althoughtuplesrather thanvectorsare the more mathematically correct equivalent.Tablesare often implemented in the form of arrays, especiallylookup tables; the word "table" is sometimes used as a synonym of array. Arrays are among the oldest and most important data structures, and are used by almost every program. They are also used to implement many other data structures, such aslistsandstrings. They effectively exploit the addressing logic of computers. In most modern computers and manyexternal storagedevices, the memory is a one-dimensional array of words, whose indices are their addresses.Processors, especiallyvector processors, are often optimized for array operations. Arrays are useful mostly because the element indices can be computed atrun time. Among other things, this feature allows a single iterativestatementto process arbitrarily many elements of an array. For that reason, the elements of an array data structure are required to have the same size and should use the same data representation. The set of valid index tuples and the addresses of the elements (and hence the element addressing formula) are usually,[3][5]but not always,[2]fixed while the array is in use. The term "array" may also refer to anarray data type, a kind ofdata typeprovided by mosthigh-level programming languagesthat consists of a collection of values or variables that can be selected by one or more indices computed at run-time. Array types are often implemented by array structures; however, in some languages they may be implemented byhash tables,linked lists,search trees, or other data structures. The term is also used, especially in the description ofalgorithms, to meanassociative arrayor "abstract array", atheoretical computer sciencemodel (anabstract data typeor ADT) intended to capture the essential properties of arrays. The first digital computers used machine-language programming to set up and access array structures for data tables, vector and matrix computations, and for many other purposes.John von Neumannwrote the first array-sorting program (merge sort) in 1945, during the building of thefirst stored-program computer.[6]Array indexing was originally done byself-modifying code, and later usingindex registersandindirect addressing. 
Some mainframes designed in the 1960s, such as theBurroughs B5000and its successors, usedmemory segmentationto perform index-bounds checking in hardware.[7] Assembly languages generally have no special support for arrays, other than what the machine itself provides. The earliest high-level programming languages, includingFORTRAN(1957),Lisp(1958),COBOL(1960), andALGOL 60(1960), had support for multi-dimensional arrays, and so doesC(1972). InC++(1983), class templates exist for multi-dimensional arrays whose dimension is fixed at runtime[3][5]as well as for runtime-flexible arrays.[2] Arrays are used to implement mathematicalvectorsandmatrices, as well as other kinds of rectangular tables. Manydatabases, small and large, consist of (or include) one-dimensional arrays whose elements arerecords. Arrays are used to implement other data structures, such as lists,heaps,hash tables,deques,queues,stacks,strings, and VLists. Array-based implementations of other data structures are frequently simple and space-efficient (implicit data structures), requiring little spaceoverhead, but may have poor space complexity, particularly when modified, compared to tree-based data structures (compare asorted arrayto asearch tree). One or more large arrays are sometimes used to emulate in-programdynamic memory allocation, particularlymemory poolallocation. Historically, this has sometimes been the only way to allocate "dynamic memory" portably. Arrays can be used to determine partial or completecontrol flowin programs, as a compact alternative to (otherwise repetitive) multipleIFstatements. They are known in this context ascontrol tablesand are used in conjunction with a purpose-built interpreter whosecontrol flowis altered according to values contained in the array. The array may containsubroutinepointers(or relative subroutine numbers that can be acted upon bySWITCHstatements) that direct the path of the execution. When data objects are stored in an array, individual objects are selected by an index that is usually a non-negativescalarinteger. Indexes are also called subscripts. An indexmapsthe array value to a stored object. There are three ways in which the elements of an array can be indexed: Using zero-based indexing is the design choice of many influential programming languages, includingC,JavaandLisp. This leads to simpler implementation where the subscript refers to an offset from the starting position of an array, so the first element has an offset of zero. Arrays can have multiple dimensions; thus, it is not uncommon to access an array using multiple indices. For example, a two-dimensional arrayAwith three rows and four columns might provide access to the element at the 2nd row and 4th column by the expressionA[1][3]in the case of a zero-based indexing system. Thus two indices are used for a two-dimensional array, three for a three-dimensional array, andnfor ann-dimensional array. The number of indices needed to specify an element is called the dimension, dimensionality, orrankof the array. In standard arrays, each index is restricted to a certain range of consecutive integers (or consecutive values of someenumerated type), and the address of an element is computed by a "linear" formula on the indices. A one-dimensional array (or single dimension array) is a type of linear array. Accessing its elements involves a single subscript which can either represent a row or column index. As an example, consider the C declarationint anArrayName[10];which declares a one-dimensional array of ten integers.
Here, the array can store ten elements of typeint. This array has indices starting from zero through nine. For example, the expressionsanArrayName[0]andanArrayName[9]are the first and last elements respectively. For a vector with linear addressing, the element with indexiis located at the addressB+c·i, whereBis a fixedbase addressandca fixed constant, sometimes called theaddress incrementorstride. If the valid element indices begin at 0, the constantBis simply the address of the first element of the array. For this reason, theC programming languagespecifies that array indices always begin at 0; and many programmers will call that element "zeroth" rather than "first". However, one can choose the index of the first element by an appropriate choice of the base addressB. For example, if the array has five elements, indexed 1 through 5, and the base addressBis replaced byB+ 30c, then the indices of those same elements will be 31 to 35. If the numbering does not start at 0, the constantBmay not be the address of any element. For a multidimensional array, the element with indicesi,jwould have addressB+c·i+d·j, where the coefficientscanddare therowandcolumn address increments, respectively. More generally, in ak-dimensional array, the address of an element with indicesi1,i2, ...,ikisB+c1·i1+c2·i2+ ... +ck·ik. For example: int a[2][3]; This means that array a has 2 rows and 3 columns, and the array is of integer type. Here we can store 6 elements; they are stored linearly in memory, starting with the first row and continuing with the second row. The above array will be stored as a11, a12, a13, a21, a22, a23. This formula requires onlykmultiplications andkadditions, for any array that can fit in memory. Moreover, if any coefficient is a fixed power of 2, the multiplication can be replaced bybit shifting. The coefficientsckmust be chosen so that every valid index tuple maps to the address of a distinct element. If the minimum legal value for every index is 0, thenBis the address of the element whose indices are all zero. As in the one-dimensional case, the element indices may be changed by changing the base addressB. Thus, if a two-dimensional array has rows and columns indexed from 1 to 10 and 1 to 20, respectively, then replacingBbyB+c1− 3c2will cause them to be renumbered from 0 through 9 and 4 through 23, respectively. Taking advantage of this feature, some languages (like FORTRAN 77) specify that array indices begin at 1, as in mathematical tradition, while other languages (like Fortran 90, Pascal and Algol) let the user choose the minimum value for each index. The addressing formula is completely defined by the dimensiond, the base addressB, and the incrementsc1,c2, ...,ck. It is often useful to pack these parameters into a record called the array's descriptor, stride vector, ordope vector.[2][3]The size of each element, and the minimum and maximum values allowed for each index may also be included in the dope vector. The dope vector is a completehandlefor the array, and is a convenient way to pass arrays as arguments toprocedures. Many usefularray slicingoperations (such as selecting a sub-array, swapping indices, or reversing the direction of the indices) can be performed very efficiently by manipulating the dope vector.[2] Often the coefficients are chosen so that the elements occupy a contiguous area of memory. However, that is not necessary. Even if arrays are always created with contiguous elements, some array slicing operations may create non-contiguous sub-arrays from them.
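As a minimal sketch of the addressing formula above, the following Python function (the names base, strides, and indices are illustrative stand-ins for B and c1, ..., ck) computes the address of an element from its index tuple:

def element_address(base, strides, indices):
    # B + c1*i1 + c2*i2 + ... + ck*ik
    assert len(strides) == len(indices)
    return base + sum(c * i for c, i in zip(strides, indices))

# Hypothetical 2 x 3 array of 4-byte integers stored row by row at address 2000:
# the row increment is 3 elements * 4 bytes = 12, the column increment is 4.
B, strides = 2000, (12, 4)
print(element_address(B, strides, (0, 0)))   # 2000 (element a[0][0])
print(element_address(B, strides, (1, 2)))   # 2020 (element a[1][2])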
There are two systematic compact layouts for a two-dimensional array. In the row-major order layout (adopted by C for statically declared arrays), the elements in each row are stored in consecutive positions, and all of the elements of a row have a lower address than any of the elements of a consecutive row. In column-major order (traditionally used by Fortran), the elements in each column are consecutive in memory, and all of the elements of a column have a lower address than any of the elements of a consecutive column; a worked example is sketched below. For arrays with three or more indices, "row major order" puts in consecutive positions any two elements whose index tuples differ only by one in thelastindex. "Column major order" is analogous with respect to thefirstindex. In systems which useprocessor cacheorvirtual memory, scanning an array is much faster if successive elements are stored in consecutive positions in memory, rather than sparsely scattered. This is known as spatial locality, which is a type oflocality of reference. Many algorithms that use multidimensional arrays will scan them in a predictable order. A programmer (or a sophisticated compiler) may use this information to choose between row- or column-major layout for each array. For example, when computing the productA·Bof two matrices, it would be best to haveAstored in row-major order, andBin column-major order. Static arrays have a size that is fixed when they are created and consequently do not allow elements to be inserted or removed. However, by allocating a new array and copying the contents of the old array to it, it is possible to effectively implement adynamicversion of an array; seedynamic array. If this operation is done infrequently, insertions at the end of the array require only amortized constant time. Some array data structures do not reallocate storage, but do store a count of the number of elements of the array in use, called the count or size. This effectively makes the array adynamic arraywith a fixed maximum size or capacity;Pascal stringsare examples of this. More complicated (non-linear) formulas are occasionally used. For a compact two-dimensionaltriangular array, for instance, the addressing formula is a polynomial of degree 2. Bothstoreandselecttake (deterministic worst case)constant time. Arrays take linear (O(n)) space in the number of elementsnthat they hold. In an array with element sizekand on a machine with a cache line size of B bytes, iterating through an array ofnelements requires the minimum possible number of cache misses, ceiling(nk/B), because its elements occupy contiguous memory locations. This is roughly a factor of B/kbetter than the number of cache misses needed to accessnelements at random memory locations. As a consequence, sequential iteration over an array is noticeably faster in practice than iteration over many other data structures, a property calledlocality of reference(this doesnotmean, however, that using aperfect hashortrivial hashwithin the same (local) array will not be even faster, and achievable inconstant time). Libraries provide low-level optimized facilities for copying ranges of memory (such asmemcpy) which can be used to movecontiguousblocks of array elements significantly faster than can be achieved through individual element access. The speedup of such optimized routines varies by array element size, architecture, and implementation. Memory-wise, arrays are compact data structures with no per-elementoverhead.
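The worked example referred to above, using a small hypothetical 2 x 3 array (the values are made up for the sketch), can be written in Python as:

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 rows, 3 columns

rows, cols = len(A), len(A[0])

# Row-major order (C layout): the rows are contiguous.
row_major = [A[i][j] for i in range(rows) for j in range(cols)]
print(row_major)         # [1, 2, 3, 4, 5, 6]

# Column-major order (Fortran layout): the columns are contiguous.
col_major = [A[i][j] for j in range(cols) for i in range(rows)]
print(col_major)         # [1, 4, 2, 5, 3, 6]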
There may be a per-array overhead (e.g., to store index bounds) but this is language-dependent. It can also happen that elements stored in an array requirelessmemory than the same elements stored in individual variables, because several array elements can be stored in a singleword; such arrays are often calledpackedarrays. An extreme (but commonly used) case is thebit array, where every bit represents a single element. A singleoctetcan thus hold up to 256 different combinations of up to 8 different conditions, in the most compact form. Array accesses with statically predictable access patterns are a major source ofdata parallelism. Dynamic arraysor growable arrays are similar to arrays but add the ability to insert and delete elements; adding and deleting at the end is particularly efficient. However, they reserve linear (Θ(n)) additional storage, whereas arrays do not reserve additional storage. Associative arraysprovide a mechanism for array-like functionality without huge storage overheads when the index values are sparse. For example, an array that contains values only at indexes 1 and 2 billion may benefit from using such a structure. Specialized associative arrays with integer keys includePatricia tries,Judy arrays, andvan Emde Boas trees. Balanced treesrequire O(logn) time for indexed access, but also permit inserting or deleting elements in O(logn) time,[11]whereas growable arrays require linear (Θ(n)) time to insert or delete elements at an arbitrary position. Linked listsallow constant time removal and insertion in the middle but take linear time for indexed access. Their memory use is typically worse than arrays, but is still linear. AnIliffe vectoris an alternative to a multidimensional array structure. It uses a one-dimensional array ofreferencesto arrays of one dimension less. For two dimensions, in particular, this alternative structure would be a vector of pointers to vectors, one for each row(pointer on c or c++). Thus an element in rowiand columnjof an arrayAwould be accessed by double indexing (A[i][j] in typical notation). This alternative structure allowsjagged arrays, where each row may have a different size—or, in general, where the valid range of each index depends on the values of all preceding indices. It also saves one multiplication (by the column address increment) replacing it by a bit shift (to index the vector of row pointers) and one extra memory access (fetching the row address), which may be worthwhile in some architectures. Thedimensionof an array is the number of indices needed to select an element. Thus, if the array is seen as a function on a set of possible index combinations, it is the dimension of the space of which its domain is a discrete subset. Thus a one-dimensional array is a list of data, a two-dimensional array is a rectangle of data,[12]a three-dimensional array a block of data, etc. This should not be confused with the dimension of the set of all matrices with a given domain, that is, the number of elements in the array. For example, an array with 5 rows and 4 columns is two-dimensional, but such matrices form a 20-dimensional space. Similarly, a three-dimensional vector can be represented by a one-dimensional array of size three.
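The Iliffe vector idea described above maps naturally onto nested Python lists, since each element of the outer list is just a reference to a row; the data here are invented for the illustration:

# An "Iliffe vector": a one-dimensional array of references to row arrays.
jagged = [
    [10, 20, 30, 40],    # row 0 has 4 elements
    [50],                # row 1 has 1 element
    [60, 70],            # row 2 has 2 elements
]

# Double indexing A[i][j]: fetch the row reference, then index into it.
print(jagged[0][3])                    # 40
print([len(row) for row in jagged])    # [4, 1, 2] -- rows may differ in size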
https://en.wikipedia.org/wiki/Array_(data_structure)
Thiscomparison of programming languages (array)compares the features ofarray data structuresormatrixprocessing for various computerprogramming languages. The following list containssyntaxexamples of how to determine the dimensions (index of the first element, the last element or the size in elements). Some languages index from zero. Some index from one. Some carry no such restriction, or even allow indexing by any enumerated type, not only integers. The following list contains syntax examples of how to access a single element of an array. The following list contains syntax examples of how a range of elements of an array can be accessed. Some compiled languages such asAdaandFortran, and some scripting languages such asIDL,MATLAB, andS-Lang, have native support for vectorized operations on arrays. For example, to perform an element by element sum of two arrays,aandb, to produce a thirdc, it is only necessary to write an expression such asc = a + b. In addition to support for vectorized arithmetic and relational operations, these languages also vectorize common mathematical functions such as sine. For example, ifxis an array, then an expression such asy = sin(x)will result in an arrayywhose elements are the sine of the corresponding elements of the arrayx. Vectorized index operations are also supported; array-section indexing is how one would useFortranto create arrays from the even and odd entries of an array. Another common use of vectorized indices is a filtering operation. Consider a clipping operation of a sine wave where amplitudes larger than 0.5 are to be set to 0.5. UsingS-Lang, this can be done by assigning to the elements selected by a vectorized comparison; a sketch of the same operations in another array-oriented setting follows below.
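The vectorized operations described above are not shown here with their original language-specific snippets; the following Python/NumPy sketch (array contents are arbitrary) illustrates the same ideas: element-by-element arithmetic, an elementwise mathematical function, strided index selection, and clipping by a vectorized comparison.

import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([10.0, 20.0, 30.0, 40.0])

c = a + b            # element-by-element sum
y = np.sin(a)        # elementwise mathematical function

first_third = a[0::2]    # every other entry starting with the first
second_fourth = a[1::2]  # every other entry starting with the second

# Clipping: set amplitudes larger than 0.5 to 0.5.
wave = np.sin(np.linspace(0.0, 2.0 * np.pi, 8))
wave[wave > 0.5] = 0.5
print(c, y, first_third, second_fourth, wave, sep="\n")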
https://en.wikipedia.org/wiki/Comparison_of_programming_languages_(array)
Incomputer science,arrayis adata typethat represents a collection ofelements(valuesorvariables), each selected by one or more indices (identifying keys) that can be computed atrun timeduring program execution. Such a collection is usually called anarray variableorarray value.[1]By analogy with the mathematical conceptsvectorandmatrix, array types with one and two indices are often calledvector typeandmatrix type, respectively. More generally, a multidimensional array type can be called atensor type, by analogy with the mathematical concept,tensor.[2] Language support for array types may include certainbuilt-inarray data types, some syntactic constructions (array type constructors) that theprogrammermay use to define such types and declare array variables, and special notation for indexing array elements.[1]For example, in thePascal programming language, the declarationtypeMyTable=array[1..4,1..2]ofintegerdefines a new array data type calledMyTable. The declarationvar A: MyTablethen defines a variableAof that type, which is an aggregate of eight elements, each being an integer variable identified by two indices. In the Pascal program, those elements are denotedA[1,1],A[1,2],A[2,1], …,A[4,2].[3]Special array types are often defined by the language's standardlibraries. Dynamic listsare also more common and easier to implement thandynamic arrays. Array types are distinguished fromrecordtypes mainly because they allow the element indices to be computed atrun time, as in the PascalassignmentA[I,J] := A[N-I,2*J]. Among other things, this feature allows a single iterativestatementto process arbitrarily many elements of an array variable. In more theoretical contexts, especially intype theoryand in the description of abstractalgorithms, the terms "array" and "array type" sometimes refer to anabstract data type(ADT) also calledabstract arrayor may refer to anassociative array, amathematicalmodel with the basic operations and behavior of a typical array type in most languages – basically, a collection of elements that are selected by indices computed at run-time. Depending on the language, array types may overlap (or be identified with) other data types that describe aggregates of values, such aslistsandstrings. Array types are often implemented byarray data structures, but sometimes by other means, such ashash tables,linked lists, orsearch trees. Heinz Rutishauser's programming language Superplan (1949–1951) included multi-dimensional arrays. However, although Rutishauser described how a compiler for his language should be built, he did not implement one. Assembly languages and low-level languages like BCPL[4]generally have no syntactic support for arrays. Because of the importance of array structures for efficient computation, the earliest high-level programming languages, includingFORTRAN(1957),COBOL(1960), andAlgol 60(1960), provided support for multi-dimensional arrays. An array data structure can be mathematically modeled as anabstract data structure(anabstract array) with two operations: get(A,I), which yields the value of the element ofAselected by the index tupleI, and set(A,I,V), which yields a new array state in which that element holds the valueV. These operations are required to satisfy theaxioms[5]get(set(A,I,V),I) =Vand get(set(A,I,V),J) = get(A,J) forI≠J, for any array stateA, any valueV, and any tuplesI,Jfor which the operations are defined. The first axiom means that each element behaves like a variable. The second axiom means that elements with distinct indices behave asdisjointvariables, so that storing a value in one element does not affect the value of any other element.
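A minimal Python sketch of this abstract array, modeled with a dictionary keyed by index tuples purely for clarity (it is a formal model, not an efficient array structure), together with a check of the two axioms:

def get(A, I):
    return A[I]

def set_(A, I, V):          # trailing underscore only to avoid Python's built-in set
    B = dict(A)             # return a new array state; the old one is untouched
    B[I] = V
    return B

A = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 4}
I, J, V = (0, 1), (1, 0), 99

assert get(set_(A, I, V), I) == V           # axiom 1: the element behaves like a variable
assert get(set_(A, I, V), J) == get(A, J)   # axiom 2: distinct indices are disjoint (I != J)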
These axioms do not place any constraints on the set of valid index tuplesI; therefore, this abstract model can be used fortriangular matricesand other oddly-shaped arrays. In order to effectively implement variables of such types asarray structures(with indexing done bypointer arithmetic), many languages restrict the indices tointegerdata types[6][7](or other types that can be interpreted as integers, such asbytesandenumerated types), and require that all elements have the same data type and storage size. Most of those languages also restrict each index to a finiteintervalof integers that remains fixed throughout the lifetime of the array variable. In somecompiledlanguages, in fact, the index ranges may have to be known atcompile time. On the other hand, some programming languages provide more liberal array types that allow indexing by arbitrary values, such asfloating-point numbers,strings,objects,references, etc. Such index values cannot be restricted to an interval, much less a fixed interval. So, these languages usually allow arbitrary new elements to be created at any time. This choice precludes the implementation of array types as array data structures. That is, those languages use array-like syntax to implement a more generalassociative arraysemantics, and must therefore be implemented by ahash tableor some othersearch data structure. The number of indices needed to specify an element is called thedimension,dimensionality, orrankof the array type. (This nomenclature conflicts with the concept of dimension in linear algebra, which expresses theshape of a matrix. Thus, an array of numbers with 5 rows and 4 columns, hence 20 elements, is said to have dimension 2 in computing contexts, but represents a matrix that is said to be 4×5-dimensional. Also, the computer science meaning of "rank" conflicts with the notion oftensor rank, which is a generalization of the linear algebra concept ofrank of a matrix.) Many languages support only one-dimensional arrays. In those languages, a multi-dimensional array is typically represented by anIliffe vector, a one-dimensional array ofreferencesto arrays of one dimension less. A two-dimensional array, in particular, would be implemented as a vector of pointers to its rows. Thus an element in rowiand columnjof an arrayAwould be accessed by double indexing (A[i][j]in typical notation). This way of emulating multi-dimensional arrays allows the creation ofjagged arrays, where each row may have a different size – or, in general, where the valid range of each index depends on the values of all preceding indices. This representation for multi-dimensional arrays is quite prevalent in C and C++ software. However, C and C++ will use a linear indexing formula for multi-dimensional arrays that are declared with compile-time constant size, e.g. byintA[10][20]orintA[m][n], instead of the traditionalint**A.[8] The C99 standard introduced Variable Length Array types that let the programmer define array types with dimensions computed at run time. The dynamic 4D array can be constructed using a pointer to 4d array, e.g.int(*arr)[t][u][v][w]=malloc(sizeof*arr);. The individual elements are accessed by first de-referencing an array pointer followed by indexing, e.g.(*arr)[i][j][k][l]. Alternatively, an n-d array can be declared as a pointer to its first element, which is an (n-1)-dimensional array, e.g.int(*arr)[u][v][w]=malloc(t*sizeof*arr);and accessed using more idiomatic syntax, e.g.arr[i][j][k][l].
Most programming languages that support arrays support thestoreandselectoperations, and have special syntax for indexing. Early languages used parentheses, e.g.A(i,j), as in FORTRAN; others choose square brackets, e.g.A[i,j]orA[i][j], as in Algol 60 and Pascal (to distinguish from the use of parentheses forfunction calls). Array data types are most often implemented as array structures: with the indices restricted to integer (or totally ordered) values, index ranges fixed at array creation time, and multilinear element addressing. This was the case in most"third generation"languages, and is still the case of mostsystems programming languagessuch asAda,C, andC++. In some languages, however, array data types have the semantics of associative arrays, with indices of arbitrary type and dynamic element creation. This is the case in somescripting languagessuch asAwkandLua, and of some array types provided by standardC++libraries. Some languages (like Pascal and Modula) performbounds checkingon every access, raising anexceptionor aborting the program when any index is out of its valid range. Compilers may allow these checks to be turned off to trade safety for speed. Other languages (like FORTRAN and C) trust the programmer and perform no checks. Good compilers may also analyze the program to determine the range of possible values that the index may have, and this analysis may lead tobounds-checking elimination. Some languages, such as C, provide onlyzero-basedarray types, for which the minimum valid value for any index is 0.[9]This choice is convenient for array implementation and address computations. With a language such as C, a pointer to the interior of any array can be defined that will symbolically act as a pseudo-array that accommodates negative indices. This works only because C does not check an index against bounds when used. Other languages provide onlyone-basedarray types, where each index starts at 1; this is the traditional convention in mathematics for matrices and mathematicalsequences. A few languages, such as Pascal and Lua, supportn-basedarray types, whose minimum legal indices are chosen by the programmer. The relative merits of each choice have been the subject of heated debate. Zero-based indexing can avoidoff-by-oneorfencepost errors.[10] The relation between numbers appearing in an array declaration and the index of that array's last element also varies by language. In many languages (such as C), one should specify the number of elements contained in the array; whereas in others (such as Pascal andVisual Basic .NET) one should specify the numeric value of the index of the last element. Needless to say, this distinction is immaterial in languages where the indices start at 1, such asLua. Some programming languages supportarray programming, where operations and functions defined for certain data types are implicitly extended to arrays of elements of those types. Thus one can writeA+Bto add corresponding elements of two arraysAandB. Usually these languages provide both theelement-by-element multiplicationand the standardmatrix productoflinear algebra, and which of these is represented by the*operator varies by language. Languages providing array programming capabilities have proliferated since the innovations in this area ofAPL. These are core capabilities ofdomain-specific languagessuch asGAUSS,IDL,Matlab, andMathematica. They are a core facility in newer languages, such asJuliaand recent versions ofFortran. 
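To make the n-based indexing and bounds checking discussion concrete, here is a small hypothetical Python wrapper (the class name and behaviour are assumptions for the sketch, not a standard facility of any of the languages discussed):

class NBasedArray:
    # A one-dimensional array whose minimum legal index is chosen by the programmer.
    def __init__(self, low, high, fill=0):
        self.low, self.high = low, high
        self._data = [fill] * (high - low + 1)

    def _offset(self, i):
        if not (self.low <= i <= self.high):
            raise IndexError(f"index {i} out of bounds [{self.low}, {self.high}]")
        return i - self.low            # translate to a zero-based offset

    def __getitem__(self, i):
        return self._data[self._offset(i)]

    def __setitem__(self, i, value):
        self._data[self._offset(i)] = value

a = NBasedArray(1, 5)      # indices 1..5, as in one-based languages
a[1], a[5] = 10, 50
# a[0] or a[6] would raise IndexError rather than silently misbehave.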
These capabilities are also provided via standard extension libraries for other general purpose programming languages (such as the widely usedNumPylibrary forPython). Many languages provide a built-instringdata type, with specialized notation ("string literals") to build values of that type. In some languages (such as C), a string is just an array of characters, or is handled in much the same way. Other languages, likePascal, may provide vastly different operations for strings and arrays. Some programming languages provide operations that return the size (number of elements) of a vector, or, more generally, range of each index of an array. InCandC++arrays do not support thesizefunction, so programmers often have to declare separate variable to hold the size, and pass it to procedures as a separate parameter. Elements of a newly created array may have undefined values (as in C), or may be defined to have a specific "default" value such as 0 or anull pointer(as in Java). InC++astd::vectorobject supports thestore,select, andappendoperations with the performance characteristics discussed above. Vectors can be queried for their size and can be resized. Slower operations like inserting an element in the middle are also supported. Anarray slicingoperation takes a subset of the elements of an array-typed entity (value or variable) and then assembles them as another array-typed entity, possibly with other indices. If array types are implemented as array structures, many useful slicing operations (such as selecting a sub-array, swapping indices, or reversing the direction of the indices) can be performed very efficiently by manipulating thedope vectorof the structure. The possible slicings depend on the implementation details: for example,Fortranallows slicing off one column of a matrix variable, but not a row, and treat it as a vector. On the other hand, other slicing operations are possible when array types are implemented in other ways. Some languages allowdynamic arrays(also called resizable, growable, or extensible): array variables whose index ranges may be expanded at any time after creation, without changing the values of its current elements. For one-dimensional arrays, this facility may be provided as an operationappend(A,x)that increases the size of the arrayAby one and then sets the value of the last element tox. Other array types (such as Pascal strings) provide a concatenation operator, which can be used together with slicing to achieve that effect and more. In some languages, assigning a value to an element of an array automatically extends the array, if necessary, to include that element. In other array types, a slice can be replaced by an array of different size, with subsequent elements being renumbered accordingly – as in Python's list assignmentA[5:5] = [10,20,30], that inserts three new elements (10, 20, and 30) before element "A[5]". Resizable arrays are conceptually similar tolists, and the two concepts are synonymous in some languages. An extensible array can be implemented as a fixed-size array, with a counter that records how many elements are actually in use. Theappendoperation merely increments the counter; until the whole array is used, when theappendoperation may be defined to fail. This is an implementation of adynamic arraywith a fixed capacity, as in thestringtype of Pascal. Alternatively, theappendoperation may re-allocate the underlying array with a larger size, and copy the old elements to the new area.
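The two strategies just described (a fixed-capacity array with a count of elements in use, and reallocation with copying) can be sketched together in Python; the class and method names are invented for the example, and a Python list is used only as the fixed-size backing store.

class ExtensibleArray:
    # A dynamic array: a fixed-size backing store plus a count of elements in use.
    def __init__(self, capacity=4):
        self._store = [None] * capacity
        self._count = 0

    def append(self, x):
        if self._count == len(self._store):
            # Re-allocate a larger backing array and copy the old elements over.
            bigger = [None] * (2 * len(self._store))
            bigger[:self._count] = self._store
            self._store = bigger
        self._store[self._count] = x
        self._count += 1

    def __getitem__(self, i):
        if not 0 <= i < self._count:
            raise IndexError(i)
        return self._store[i]

arr = ExtensibleArray()
for v in range(10):
    arr.append(v)
print(arr[9], arr._count, len(arr._store))   # 9 10 16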
https://en.wikipedia.org/wiki/Index_origin
Matrix representationis a method used by acomputer languageto store column-vectormatricesof more than one dimension inmemory.FortranandCuse different schemes for their native arrays.Fortranuses "Column Major" (AoS), in which all the elements for a given column are stored contiguously in memory.Cuses "Row Major" (SoA), which stores all the elements for a given row contiguously in memory.LAPACKdefines various matrix representations in memory. There are alsoSparse matrix representationandMorton-order matrix representation. According to the documentation, inLAPACKtheunitary matrixrepresentation is optimized.[1][2]Some languages such asJavastore matrices usingIliffe vectors. These are particularly useful for storingirregular matrices. Matrices are of primary importance inlinear algebra. An m × n (read as m by n) ordermatrixis a set of numbers arranged in m rows and n columns. Matrices of the same order can be added by adding the corresponding elements. Two matrices can be multiplied, the condition being that the number of columns of the first matrix is equal to the number of rows of the second matrix. Hence, if an m × n matrix is multiplied with an n × r matrix, then the resultant matrix will be of the order m × r.[3] Operations like row operations or column operations can be performed on a matrix, by means of which the inverse of a matrix can be obtained. The inverse may be obtained by determining the adjoint as well.[3] The choice of representation for 4×4 matrices commonly used in3D graphicsaffects the implementation of matrix/vector operations in systems with packedSIMD instructions: With row-major matrix order, it is easy to transform vectors usingdot productoperations, since the coefficients of each component are sequential in memory. Consequently, this layout may be desirable if a processor supports dot product operations natively. It is also possible to efficiently use a '3×4' affine transformation matrix without padding or awkward permutes. With column-major order, a "matrix × vector" multiply can be implemented with vectorizedmultiply-addoperations, if the vector's components are broadcast to eachSIMD lane. It is also easy to access thebasis vectorsrepresented by atransformation matrixas individual column vectors, as these are contiguous in memory.
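A short NumPy sketch of the two native schemes mentioned above; the strides that NumPy reports show directly which dimension is contiguous in memory (the matrix contents are arbitrary).

import numpy as np

M = np.arange(12, dtype=np.int32).reshape(3, 4)   # a 3 x 4 matrix of 4-byte integers

row_major = np.ascontiguousarray(M)   # C order: each row is contiguous
col_major = np.asfortranarray(M)      # Fortran order: each column is contiguous

print(row_major.strides)   # (16, 4): elements of a row are adjacent in memory
print(col_major.strides)   # (4, 12): elements of a column are adjacent in memory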
https://en.wikipedia.org/wiki/Matrix_representation
Inmathematical analysisandcomputer science,functionswhich areZ-order,Lebesgue curve,Mortonspace-filling curve,[1]Morton orderorMorton codemapmultidimensional data to one dimensionwhile preserving locality of the data points (two points close together in multidimensions with high probability lie also close together in Morton order). It is named in France afterHenri Lebesgue, who studied it in 1904,[2]and named in the United States afterGuy Macdonald Morton, who first applied the order to file sequencing in 1966.[3]The z-value of a point in multidimensions is simply calculated by bit interleaving thebinaryrepresentations of its coordinate values. However, when querying a multidimensional search range in these data, using binary search is not really efficient: It is necessary for calculating, from a point encountered in the data structure, the next possible Z-value which is in the multidimensional search range, called BIGMIN. The BIGMIN problem has first been stated and its solution shown by Tropf and Herzog in 1981.[4]Once the data are sorted by bit interleaving, any one-dimensional data structure can be used, such as simple one dimensionalarrays,binary search trees,B-trees,skip listsor (with low significant bits truncated)hash tables. The resulting ordering can equivalently be described as the order one would get from adepth-firsttraversal of aquadtreeoroctree. The figure below shows the Z-values for the two dimensional case with integer coordinates 0 ≤x≤ 7, 0 ≤y≤ 7 (shown both in decimal and binary).Interleavingthe binary coordinate values (starting to the right with thex-bit (in blue) and alternating to the left with they-bit (in red)) yields the binaryz-values (tilted by 45° as shown). Connecting thez-values in their numerical order produces the recursively Z-shaped curve. Two-dimensional Z-values are also known as quadkey values. The Z-values of thexcoordinates are described as binary numbers from theMoser–de Bruijn sequence, having nonzero bits only in their even positions: The sum and difference of twoxvalues are calculated by usingbitwise operations: This property can be used to offset a Z-value, for example in two dimensions the coordinates to the top (decreasing y), bottom (increasing y), left (decreasing x) and right (increasing x) from the current Z-valuezare: And in general to add two two-dimensional Z-valueswandz: The Z-ordering can be used to efficiently build aquadtree(2D) oroctree(3D) for a set of points.[5][6]The basic idea is to sort the input set according to Z-order. Once sorted, the points can either be stored in a binary search tree and used directly, which is called a linear quadtree,[7]or they can be used to build a pointer based quadtree. The input points are usually scaled in each dimension to be positive integers, either as a fixed point representation over the unit range[0, 1]or corresponding to the machine word size. Both representations are equivalent and allow for the highest order non-zero bit to be found in constant time. Each square in the quadtree has a side length which is a power of two, and corner coordinates which are multiples of the side length. Given any two points, thederived squarefor the two points is the smallest square covering both points. The interleaving of bits from thexandycomponents of each point is called theshuffleofxandy, and can be extended to higher dimensions.[5] Points can be sorted according to their shuffle without explicitly interleaving the bits. 
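A simple Python sketch of the bit interleaving just described, following the same convention (the x-bit occupies the lower position of each pair); the helper name and the bit width are chosen for the example:

def interleave2(x, y, bits=3):
    # Interleave the low 'bits' bits of x and y into a single z-value.
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)        # x-bit goes to position 2i
        z |= ((y >> i) & 1) << (2 * i + 1)    # y-bit goes to position 2i + 1
    return z

# A few z-values for 0 <= x, y <= 7:
print(interleave2(0, 0))   # 0
print(interleave2(7, 7))   # 63
print(interleave2(3, 5))   # 39  (x = 011, y = 101 -> z = 100111)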
To do this, for each dimension, the most significant bit of theexclusive orof the coordinates of the two points for that dimension is examined. The dimension for which the most significant bit is largest is then used to compare the two points to determine their shuffle order. The exclusive or operation masks off the higher order bits for which the two coordinates are identical. Since the shuffle interleaves bits from higher order to lower order, identifying the coordinate with the largest most significant bit, identifies the first bit in the shuffle order which differs, and that coordinate can be used to compare the two points.[8]This is shown in the following Python code: One way to determine whether the most significant bit is smaller is to compare the floor of the base-2 logarithm of each point. It turns out the following operation is equivalent, and only requires exclusive or operations:[8] It is also possible to compare floating point numbers using the same technique. Theless_msbfunction is modified to first compare the exponents. Only when they are equal is the standardless_msbfunction used on the mantissas.[9] Once the points are in sorted order, two properties make it easy to build a quadtree: The first is that the points contained in a square of the quadtree form a contiguous interval in the sorted order. The second is that if more than one child of a square contains an input point, the square is thederived squarefor two adjacent points in the sorted order. For each adjacent pair of points, the derived square is computed and its side length determined. For each derived square, the interval containing it is bounded by the first larger square to the right and to the left in sorted order.[5]Each such interval corresponds to a square in the quadtree. The result of this is a compressed quadtree, where only nodes containing input points or two or more children are present. A non-compressed quadtree can be built by restoring the missing nodes, if desired. Rather than building a pointer based quadtree, the points can be maintained in sorted order in a data structure such as a binary search tree. This allows points to be added and deleted inO(logn)time. Two quadtrees can be merged by merging the two sorted sets of points, and removing duplicates. Point location can be done by searching for the points preceding and following the query point in the sorted order. If the quadtree is compressed, the predecessor node found may be an arbitrary leaf inside the compressed node of interest. In this case, it is necessary to find the predecessor of the least common ancestor of the query point and the leaf found.[10] By bit interleaving, the database records are converted to a (possibly very long) sequence of bits. The bit sequences are interpreted as binary numbers and the data are sorted or indexed by the binary values, using any one dimensional data structure, as mentioned in the introduction. However, when querying a multidimensional search range in these data, using binary search is not really efficient. Although Z-order is preserving locality well, for efficient range searches an algorithm is necessary for calculating, from a point encountered in the data structure, the next possible Z-value which is in the multidimensional search range: In this example, the range being queried (x= 2, ..., 3,y= 2, ..., 6) is indicated by the dotted rectangle. Its highest Z-value (MAX) is 45. 
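The Python routine referred to above ("the following Python code") does not appear in this text; the sketch below reconstructs it from the description, comparing two points by Z-order without explicitly interleaving their bits:

def less_msb(x: int, y: int) -> bool:
    # True if the most significant set bit of x is lower than that of y.
    return x < y and x < (x ^ y)

def cmp_zorder(lhs, rhs) -> bool:
    # True if point lhs precedes point rhs in Z-order.
    # Find the dimension whose coordinates differ in the highest bit position.
    msd = 0
    for dim in range(1, len(lhs)):
        if less_msb(lhs[msd] ^ rhs[msd], lhs[dim] ^ rhs[dim]):
            msd = dim
    return lhs[msd] < rhs[msd]

print(cmp_zorder((1, 1), (2, 2)))   # True: (1, 1) precedes (2, 2) in Z-order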
In this example, the valueF= 19 is encountered when searching a data structure in increasing Z-value direction, so we would have to search in the interval betweenFand MAX (hatched area). To speed up the search, one would calculate the next Z-value which is in the search range, called BIGMIN (36 in the example) and only search in the interval between BIGMIN and MAX (bold values), thus skipping most of the hatched area. Searching in decreasing direction is analogous with LITMAX, which is the highest Z-value in the query range lower thanF. The BIGMIN problem was first stated and its solution shown by Tropf and Herzog.[4]For the history after the publication see.[11] An extensive explanation of the LITMAX/BIGMIN calculation algorithm, together with Pascal source code (3D, easy to adapt to nD) and hints on how to handle floating point data and possibly negative data, was provided in 2021 by Tropf: Here, bit interleaving is not done explicitly; the data structure just holds pointers to the original (unsorted) database records. With a general record comparison function (greater-less-equal, in the sense of the z-value), complications with bit-sequence lengths exceeding the computer word length are avoided, and the code can easily be adapted to any number of dimensions and any record key word length. As the approach does not depend on the one-dimensional data structure chosen, there is still free choice of structuring the data, so well-known methods such as balanced trees can be used to cope with dynamic data, and keeping the tree balanced when inserting or deleting takes O(log n) time. The method is also used inUB-trees(balanced).[12] The free choice makes it easier to incorporate the method into existing databases. This is in contrast, for example, toR-trees, where special considerations are necessary. Applying the method hierarchically (according to the data structure at hand), optionally in both increasing and decreasing direction, yields highly efficient multidimensional range search, which is important in both commercial and technical applications, e.g. as a procedure underlying nearest-neighbour searches. Z-order is one of the few multidimensional access methods that has found its way into commercial database systems.[13]The method is used in various technical applications of different fields[14]and in commercial database systems.[15] As long ago as 1966, G. M. Morton proposed Z-order for file sequencing of a static two-dimensional geographical database. Areal data units are contained in one or a few quadratic frames represented by their sizes and lower right corner Z-values, the sizes complying with the Z-order hierarchy at the corner position. With high probability, changing to an adjacent frame is done with one or a few relatively small scanning steps.[3] As an alternative, theHilbert curvehas been suggested as it has a better order-preserving behaviour,[6]and, in fact, was used in an optimized index, the S2-geometry.[16] TheStrassen algorithmfor matrix multiplication is based on splitting the matrices into four blocks, and then recursively splitting each of these blocks into four smaller blocks, until the blocks are single elements (or, more practically, until reaching matrices so small that the trivial algorithm is faster). 
Arranging the matrix elements in Z-order then improves locality, and has the additional advantage (compared to row- or column-major ordering) that the subroutine for multiplying two blocks does not need to know the total size of the matrix, but only the size of the blocks and their location in memory. Effective use of Strassen multiplication with Z-order has been demonstrated, see Valsalam and Skjellum's 2002 paper.[17] Buluçet al.present asparse matrixdata structure that Z-orders its non-zero elements to enableparallelmatrix-vector multiplication.[18] Matrices in linear algebra can also be traversed using a space-filling curve.[19]Conventional loops traverse a matrix row by row. Traversing with the Z-curve allows efficient access to thememory hierarchy.[20] SomeGPUsstoretexture mapsin Z-order to increase spatiallocality of referenceduringtexture mapped rasterization. This allowscache linesto represent rectangular tiles, increasing the probability that nearby accesses are in the cache. At a larger scale, it also decreases the probability of costly, so called, "page breaks" (i.e.,the cost of changing rows) in SDRAM/DDRAM. This is important because 3D rendering involves arbitrary transformations (rotations, scaling, perspective, and distortion by animated surfaces). These formats are often referred to asswizzled texturesortwiddled textures. Other tiled formats may also be used. TheBarnes–Hutalgorithm requires construction of an octree. Storing the data as apointer-based tree requires many sequential pointer dereferences to iterate over the octree indepth-firstorder (expensive on a distributed-memory machine). Instead, if one stores the data in ahashtable, using octree hashing, the Z-order curve naturally iterates the octree in depth-first order.[6]
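The two basic operations discussed above can be sketched in a few lines of Python. The first sketch interleaves the bits of two non-negative integer coordinates into a Z-value, following the convention of the figure (x-bit in the least significant position); interleave2 is an illustrative name, not a library routine, and a plain bit-by-bit loop is used rather than the usual magic-number tricks.

def interleave2(x, y, bits=32):
    """Z-value of (x, y): bit i of x goes to bit 2*i, bit i of y to bit 2*i + 1."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

# (x, y) = (5, 3) = (101, 011 in binary) interleaves to 011011 in binary = 27.
assert interleave2(5, 3) == 0b011011

The Python comparison code referred to in the shuffle-order discussion above is not reproduced in this extract; the following is a reconstruction of the idea described there (per dimension, take the exclusive or of the two coordinates, keep the dimension whose XOR has the highest most significant bit, and compare that coordinate). The name less_msb appears in the text; cmp_zorder is an illustrative name. On ties the earlier dimension wins, so the first element of each point is treated as the most significant coordinate of the interleaving.

import functools

def less_msb(x, y):
    """True if the most significant bit of x is below that of y (x, y non-negative).

    Equivalent to comparing floor(log2(x)) < floor(log2(y)), but needs only < and XOR.
    """
    return x < y and x < (x ^ y)

def cmp_zorder(lhs, rhs):
    """True if point lhs precedes point rhs in Z-order (equal-length integer tuples)."""
    msd = 0  # dimension holding the most significant differing bit seen so far
    for dim in range(1, len(lhs)):
        if less_msb(lhs[msd] ^ rhs[msd], lhs[dim] ^ rhs[dim]):
            msd = dim
    return lhs[msd] < rhs[msd]

pts = [(5, 3), (0, 7), (2, 2), (6, 1)]
pts.sort(key=functools.cmp_to_key(lambda a, b: -1 if cmp_zorder(a, b) else 1))
# pts is now [(2, 2), (0, 7), (5, 3), (6, 1)].
# With interleave2 above, a point (p, q) here has Z-value interleave2(q, p),
# since cmp_zorder treats the first element as the more significant coordinate.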
https://en.wikipedia.org/wiki/Morton_order
Innumerical analysisandscientific computing, asparse matrixorsparse arrayis amatrixin which most of the elements are zero.[1]There is no strict definition regarding the proportion of zero-value elements for a matrix to qualify assparsebut a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considereddense.[1]The number of zero-valued elements divided by the total number of elements (e.g.,m×nfor anm×nmatrix) is sometimes referred to as thesparsityof the matrix. Conceptually, sparsity corresponds to systems with few pairwise interactions. For example, consider a line of balls connected by springs from one to the next: this is a sparse system, as only adjacent balls are coupled. By contrast, if the same line of balls were to have springs connecting each ball to all other balls, the system would correspond to a dense matrix. The concept of sparsity is useful incombinatoricsand application areas such asnetwork theoryandnumerical analysis, which typically have a low density of significant data or connections. Large sparse matrices often appear inscientificorengineeringapplications when solvingpartial differential equations. When storing and manipulating sparse matrices on acomputer, it is beneficial and often necessary to use specializedalgorithmsanddata structuresthat take advantage of the sparse structure of the matrix. Specialized computers have been made for sparse matrices,[2]as they are common in the machine learning field.[3]Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices as processing andmemoryare wasted on the zeros. Sparse data is by nature more easilycompressedand thus requires significantly lessstorage. Some very large sparse matrices are infeasible to manipulate using standard dense-matrix algorithms. An important special type of sparse matrices isband matrix, defined as follows. Thelower bandwidth of a matrixAis the smallest numberpsuch that the entryai,jvanishes wheneveri>j+p. Similarly, theupper bandwidthis the smallest numberpsuch thatai,j= 0wheneveri<j−p(Golub & Van Loan 1996, §1.2.1). For example, atridiagonal matrixhas lower bandwidth1and upper bandwidth1. As another example, the following sparse matrix has lower and upper bandwidth both equal to 3. Notice that zeros are represented with dots for clarity.[XXX⋅⋅⋅⋅XX⋅XX⋅⋅X⋅X⋅X⋅⋅⋅X⋅X⋅X⋅⋅XX⋅XXX⋅⋅⋅XXX⋅⋅⋅⋅⋅X⋅X]{\displaystyle {\begin{bmatrix}X&X&X&\cdot &\cdot &\cdot &\cdot &\\X&X&\cdot &X&X&\cdot &\cdot &\\X&\cdot &X&\cdot &X&\cdot &\cdot &\\\cdot &X&\cdot &X&\cdot &X&\cdot &\\\cdot &X&X&\cdot &X&X&X&\\\cdot &\cdot &\cdot &X&X&X&\cdot &\\\cdot &\cdot &\cdot &\cdot &X&\cdot &X&\\\end{bmatrix}}} Matrices with reasonably small upper and lower bandwidth are known as band matrices and often lend themselves to simpler algorithms than general sparse matrices; or one can sometimes apply dense matrix algorithms and gain efficiency simply by looping over a reduced number of indices. By rearranging the rows and columns of a matrixAit may be possible to obtain a matrixA′with a lower bandwidth. A number of algorithms are designed forbandwidth minimization. A very efficient structure for an extreme case of band matrices, thediagonal matrix, is to store just the entries in themain diagonalas aone-dimensional array, so a diagonaln×nmatrix requires onlynentries. 
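As a small illustration of these bandwidth definitions, the following sketch (the function name bandwidths is illustrative) scans the nonzero entries of a dense NumPy array and returns the lower and upper bandwidth.

import numpy as np

def bandwidths(A):
    """Return (lower, upper) bandwidth of a matrix.

    lower = max(i - j) and upper = max(j - i) over nonzero entries a[i, j],
    i.e. the smallest p such that a[i, j] = 0 whenever i > j + p (lower)
    or i < j - p (upper).
    """
    rows, cols = np.nonzero(A)
    if rows.size == 0:
        return 0, 0
    lower = int(max(0, (rows - cols).max()))
    upper = int(max(0, (cols - rows).max()))
    return lower, upper

# A tridiagonal matrix has lower and upper bandwidth 1.
T = np.diag([1, 2, 3]) + np.diag([4, 5], k=1) + np.diag([6, 7], k=-1)
assert bandwidths(T) == (1, 1)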
A symmetric sparse matrix arises as theadjacency matrixof anundirected graph; it can be stored efficiently as anadjacency list. Ablock-diagonal matrixconsists of sub-matrices along its diagonal blocks. A block-diagonal matrixAhas the formA=[A10⋯00A2⋯0⋮⋮⋱⋮00⋯An],{\displaystyle \mathbf {A} ={\begin{bmatrix}\mathbf {A} _{1}&0&\cdots &0\\0&\mathbf {A} _{2}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &\mathbf {A} _{n}\end{bmatrix}},} whereAkis a square matrix for allk= 1, ...,n. Thefill-inof a matrix are those entries that change from an initial zero to a non-zero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations used during an algorithm, it is useful to minimize the fill-in by switching rows and columns in the matrix. Thesymbolic Cholesky decompositioncan be used to calculate the worst possible fill-in before doing the actualCholesky decomposition. There are other methods than theCholesky decompositionin use. Orthogonalization methods (such as QR factorization) are common, for example, when solving problems by least squares methods. While the theoretical fill-in is still the same, in practical terms the "false non-zeros" can be different for different methods. And symbolic versions of those algorithms can be used in the same manner as the symbolic Cholesky to compute worst case fill-in. Bothiterativeand direct methods exist for sparse matrix solving. Iterative methods, such asconjugate gradientmethod andGMRESutilize fast computations of matrix-vector productsAxi{\displaystyle Ax_{i}}, where matrixA{\displaystyle A}is sparse. The use ofpreconditionerscan significantly accelerate convergence of such iterative methods. A matrix is typically stored as a two-dimensional array. Each entry in the array represents an elementai,jof the matrix and is accessed by the twoindicesiandj. Conventionally,iis the row index, numbered from top to bottom, andjis the column index, numbered from left to right. For anm×nmatrix, the amount of memory required to store the matrix in this format is proportional tom×n(disregarding the fact that the dimensions of the matrix also need to be stored). In the case of a sparse matrix, substantial memory requirement reductions can be realized by storing only the non-zero entries. Depending on the number and distribution of the non-zero entries, different data structures can be used and yield huge savings in memory when compared to the basic approach. The trade-off is that accessing the individual elements becomes more complex and additional structures are needed to be able to recover the original matrix unambiguously. Formats can be divided into two groups: DOK consists of adictionarythat maps(row, column)-pairsto the value of the elements. Elements that are missing from the dictionary are taken to be zero. The format is good for incrementally constructing a sparse matrix in random order, but poor for iterating over non-zero values in lexicographical order. One typically constructs a matrix in this format and then converts to another more efficient format for processing.[4] LIL stores one list per row, with each entry containing the column index and the value. Typically, these entries are kept sorted by column index for faster lookup. This is another format good for incremental matrix construction.[5] COO stores a list of(row, column, value)tuples. Ideally, the entries are sorted first by row index and then by column index, to improve random access times. 
This is another format that is good for incremental matrix construction.[6] Thecompressed sparse row(CSR) orcompressed row storage(CRS) or Yale format represents a matrixMby three (one-dimensional) arrays, that respectively contain nonzero values, the extents of rows, and column indices. It is similar to COO, but compresses the row indices, hence the name. This format allows fast row access and matrix-vector multiplications (Mx). The CSR format has been in use since at least the mid-1960s, with the first complete description appearing in 1967.[7] The CSR format stores a sparsem×nmatrixMin row form using three (one-dimensional) arrays(V, COL_INDEX, ROW_INDEX). LetNNZdenote the number of nonzero entries inM. (Note thatzero-based indicesshall be used here.) For example, the matrix(5000080000300600){\displaystyle {\begin{pmatrix}5&0&0&0\\0&8&0&0\\0&0&3&0\\0&6&0&0\\\end{pmatrix}}}is a4 × 4matrix with 4 nonzero elements, hence assuming a zero-indexed language. To extract a row, we first define: Then we take slices from V and COL_INDEX starting at row_start and ending at row_end. To extract the row 1 (the second row) of this matrix we setrow_start=1androw_end=2. Then we make the slicesV[1:2] = [8]andCOL_INDEX[1:2] = [1]. We now know that in row 1 we have one element at column 1 with value 8. In this case the CSR representation contains 13 entries, compared to 16 in the original matrix. The CSR format saves on memory only whenNNZ < (m(n− 1) − 1) / 2. Another example, the matrix(10200000030040000050607000000080){\displaystyle {\begin{pmatrix}10&20&0&0&0&0\\0&30&0&40&0&0\\0&0&50&60&70&0\\0&0&0&0&0&80\\\end{pmatrix}}}is a4 × 6matrix (24 entries) with 8 nonzero elements, so The whole is stored as 21 entries: 8 inV, 8 inCOL_INDEX, and 5 inROW_INDEX. Note that in this format, the first value ofROW_INDEXis always zero and the last is alwaysNNZ, so they are in some sense redundant (although in programming languages where the array length needs to be explicitly stored,NNZwould not be redundant). Nonetheless, this does avoid the need to handle an exceptional case when computing the length of each row, as it guarantees the formulaROW_INDEX[i+ 1] − ROW_INDEX[i]works for any rowi. Moreover, the memory cost of this redundant storage is likely insignificant for a sufficiently large matrix. The (old and new) Yale sparse matrix formats are instances of the CSR scheme. The old Yale format works exactly as described above, with three arrays; the new format combinesROW_INDEXandCOL_INDEXinto a single array and handles the diagonal of the matrix separately.[9] Forlogicaladjacency matrices, the data array can be omitted, as the existence of an entry in the row array is sufficient to model a binary adjacency relation. It is likely known as the Yale format because it was proposed in the 1977 Yale Sparse Matrix Package report from Department of Computer Science at Yale University.[10] CSC is similar to CSR except that values are read first by column, a row index is stored for each value, and column pointers are stored. For example, CSC is(val, row_ind, col_ptr), wherevalis an array of the (top-to-bottom, then left-to-right) non-zero values of the matrix;row_indis the row indices corresponding to the values; and,col_ptris the list ofvalindexes where each column starts. The name is based on the fact that column index information is compressed relative to the COO format. One typically uses another format (LIL, DOK, COO) for construction. 
This format is efficient for arithmetic operations, column slicing, and matrix-vector products. This is the traditional format for specifying a sparse matrix in MATLAB (via thesparsefunction). Many software libraries support sparse matrices, and provide solvers for sparse matrix equations. The following are open-source: The termsparse matrixwas possibly coined byHarry Markowitzwho initiated some pioneering work but then left the field.[11]
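Returning to the CSR format described above, the 4 × 6 example can be made concrete with a short sketch (dense_to_csr is an illustrative name) that builds the three arrays from a dense list-of-lists and extracts a row with the slicing rule given earlier.

def dense_to_csr(M):
    """Return (V, COL_INDEX, ROW_INDEX) for a dense matrix M given as a list of lists."""
    V, COL_INDEX, ROW_INDEX = [], [], [0]
    for row in M:
        for j, value in enumerate(row):
            if value != 0:
                V.append(value)
                COL_INDEX.append(j)
        ROW_INDEX.append(len(V))   # running count of nonzeros seen so far
    return V, COL_INDEX, ROW_INDEX

M = [[10, 20, 0, 0, 0, 0],
     [0, 30, 0, 40, 0, 0],
     [0, 0, 50, 60, 70, 0],
     [0, 0, 0, 0, 0, 80]]
V, COL_INDEX, ROW_INDEX = dense_to_csr(M)
# V         = [10, 20, 30, 40, 50, 60, 70, 80]
# COL_INDEX = [0, 1, 1, 3, 2, 3, 4, 5]
# ROW_INDEX = [0, 2, 4, 7, 8]   (first entry 0, last entry NNZ = 8)

# Extracting row i uses the slice [ROW_INDEX[i] : ROW_INDEX[i + 1]]:
i = 1
row_start, row_end = ROW_INDEX[i], ROW_INDEX[i + 1]
assert V[row_start:row_end] == [30, 40]
assert COL_INDEX[row_start:row_end] == [1, 3]

In practice one would typically rely on a library such as SciPy, whose scipy.sparse.csr_matrix stores the same three arrays as the attributes data, indices and indptr.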
https://en.wikipedia.org/wiki/Sparse_matrix#Storing_a_sparse_matrix
Inmathematics, especially inlinear algebraandmatrix theory, thevectorizationof amatrixis alinear transformationwhich converts the matrix into avector. Specifically, the vectorization of am×nmatrixA, denoted vec(A), is themn× 1column vector obtained by stacking the columns of the matrixAon top of one another:vec⁡(A)=[a1,1,…,am,1,a1,2,…,am,2,…,a1,n,…,am,n]T{\displaystyle \operatorname {vec} (A)=[a_{1,1},\ldots ,a_{m,1},a_{1,2},\ldots ,a_{m,2},\ldots ,a_{1,n},\ldots ,a_{m,n}]^{\mathrm {T} }}Here,ai,j{\displaystyle a_{i,j}}represents the element in thei-th row andj-th column ofA, and the superscriptT{\displaystyle {}^{\mathrm {T} }}denotes thetranspose. Vectorization expresses, through coordinates, theisomorphismRm×n:=Rm⊗Rn≅Rmn{\displaystyle \mathbf {R} ^{m\times n}:=\mathbf {R} ^{m}\otimes \mathbf {R} ^{n}\cong \mathbf {R} ^{mn}}between these (i.e., of matrices and vectors) as vector spaces. For example, for the 2×2 matrixA=[abcd]{\displaystyle A={\begin{bmatrix}a&b\\c&d\end{bmatrix}}}, the vectorization isvec⁡(A)=[acbd]{\displaystyle \operatorname {vec} (A)={\begin{bmatrix}a\\c\\b\\d\end{bmatrix}}}. The connection between the vectorization ofAand the vectorization of its transpose is given by thecommutation matrix. The vectorization is frequently used together with theKronecker productto expressmatrix multiplicationas a linear transformation on matrices. In particular,vec⁡(ABC)=(CT⊗A)vec⁡(B){\displaystyle \operatorname {vec} (ABC)=(C^{\mathrm {T} }\otimes A)\operatorname {vec} (B)}for matricesA,B, andCof dimensionsk×l,l×m, andm×n.[note 1]For example, ifadA⁡(X)=AX−XA{\displaystyle \operatorname {ad} _{A}(X)=AX-XA}(theadjoint endomorphismof theLie algebragl(n,C)of alln×nmatrices withcomplexentries), thenvec⁡(adA⁡(X))=(A⊗In−In⊗AT)vec(X){\displaystyle \operatorname {vec} (\operatorname {ad} _{A}(X))=(A\otimes I_{n}-I_{n}\otimes A^{\mathrm {T} }){\text{vec}}(X)}, whereIn{\displaystyle I_{n}}is then×nidentity matrix. There are two other useful formulations:vec⁡(ABC)=(In⊗AB)vec⁡(C)=(CTBT⊗Ik)vec⁡(A)vec⁡(AB)=(Im⊗A)vec⁡(B)=(BT⊗Ik)vec⁡(A){\displaystyle {\begin{aligned}\operatorname {vec} (ABC)&=(I_{n}\otimes AB)\operatorname {vec} (C)=(C^{\mathrm {T} }B^{\mathrm {T} }\otimes I_{k})\operatorname {vec} (A)\\\operatorname {vec} (AB)&=(I_{m}\otimes A)\operatorname {vec} (B)=(B^{\mathrm {T} }\otimes I_{k})\operatorname {vec} (A)\end{aligned}}} More generally, it has been shown that vectorization is aself-adjunctionin the monoidal closed structure of any category of matrices.[1] Vectorization is analgebra homomorphismfrom the space ofn×nmatrices with theHadamard(entrywise) product toCn2with its Hadamard product:vec⁡(A∘B)=vec⁡(A)∘vec⁡(B).{\displaystyle \operatorname {vec} (A\circ B)=\operatorname {vec} (A)\circ \operatorname {vec} (B).} Vectorization is aunitary transformationfrom the space ofn×nmatrices with theFrobenius(orHilbert–Schmidt)inner producttoCn2:tr⁡(A†B)=vec⁡(A)†vec⁡(B),{\displaystyle \operatorname {tr} (A^{\dagger }B)=\operatorname {vec} (A)^{\dagger }\operatorname {vec} (B),}where the superscript†denotes theconjugate transpose. The matrix vectorization operation can be written in terms of a linear sum. LetXbe anm×nmatrix that we want to vectorize, and leteibe thei-th canonical basis vector for then-dimensional space, that isei=[0,…,0,1,0,…,0]T{\textstyle \mathbf {e} _{i}=\left[0,\dots ,0,1,0,\dots ,0\right]^{\mathrm {T} }}. 
LetBibe a(mn) ×mblock matrix defined as follows:Bi=[0⋮0Im0⋮0]=ei⊗Im{\displaystyle \mathbf {B} _{i}={\begin{bmatrix}\mathbf {0} \\\vdots \\\mathbf {0} \\\mathbf {I} _{m}\\\mathbf {0} \\\vdots \\\mathbf {0} \end{bmatrix}}=\mathbf {e} _{i}\otimes \mathbf {I} _{m}} Biconsists ofnblock matrices of sizem×m, stacked column-wise, and all these matrices are all-zero except for thei-th one, which is am×midentity matrixIm. Then the vectorized version ofXcan be expressed as follows:vec⁡(X)=∑i=1nBiXei{\displaystyle \operatorname {vec} (\mathbf {X} )=\sum _{i=1}^{n}\mathbf {B} _{i}\mathbf {X} \mathbf {e} _{i}} Multiplication ofXbyeiextracts thei-th column, while multiplication byBiputs it into the desired position in the final vector. Alternatively, the linear sum can be expressed using theKronecker product:vec⁡(X)=∑i=1nei⊗Xei{\displaystyle \operatorname {vec} (\mathbf {X} )=\sum _{i=1}^{n}\mathbf {e} _{i}\otimes \mathbf {X} \mathbf {e} _{i}} For asymmetric matrixA, the vector vec(A) contains more information than is strictly necessary, since the matrix is completely determined by the symmetry together with thelower triangularportion, that is, then(n+ 1)/2entries on and below themain diagonal. For such matrices, thehalf-vectorizationis sometimes more useful than the vectorization. The half-vectorization, vech(A), of a symmetricn×nmatrixAis then(n+ 1)/2 × 1column vector obtained by vectorizing only the lower triangular part ofA:vech⁡(A)=[A1,1,…,An,1,A2,2,…,An,2,…,An−1,n−1,An,n−1,An,n]T.{\displaystyle \operatorname {vech} (A)=[A_{1,1},\ldots ,A_{n,1},A_{2,2},\ldots ,A_{n,2},\ldots ,A_{n-1,n-1},A_{n,n-1},A_{n,n}]^{\mathrm {T} }.} For example, for the 2×2 matrixA=[abbd]{\displaystyle A={\begin{bmatrix}a&b\\b&d\end{bmatrix}}}, the half-vectorization isvech⁡(A)=[abd]{\displaystyle \operatorname {vech} (A)={\begin{bmatrix}a\\b\\d\end{bmatrix}}}. There exist unique matrices transforming the half-vectorization of a matrix to its vectorization and vice versa called, respectively, theduplication matrixand theelimination matrix. Programming languages that implement matrices may have easy means for vectorization. InMatlab/GNU Octavea matrixAcan be vectorized byA(:).GNU Octavealso allows vectorization and half-vectorization withvec(A)andvech(A)respectively.Juliahas thevec(A)function as well. InPythonNumPyarrays implement theflattenmethod,[note 1]while inRthe desired effect can be achieved via thec()oras.vector()functions or, more efficiently, by removing the dimensions attribute of a matrixAwithdim(A) <- NULL. InR, functionvec()of package 'ks' allows vectorization and functionvech()implemented in both packages 'ks' and 'sn' allows half-vectorization.[2][3][4] Vectorization is used inmatrix calculusand its applications in establishing e.g., moments of random vectors and matrices, asymptotics, as well as Jacobian and Hessian matrices.[5]It is also used in local sensitivity and statistical diagnostics.[6]
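The identities above are easy to check numerically. The sketch below uses NumPy; note that vec stacks columns, which corresponds to flattening in column-major ('F') order, and the helper names vec and vech are illustrative rather than standard library functions.

import numpy as np

def vec(A):
    """Stack the columns of A into one vector (column-major flatten)."""
    return A.flatten(order="F")

def vech(A):
    """Half-vectorization: stack the lower-triangular part of A column by column."""
    n = A.shape[0]
    return np.concatenate([A[j:, j] for j in range(n)])

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))
C = rng.standard_normal((5, 2))

# vec(ABC) = (C^T kron A) vec(B)
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])                      # symmetric 2 x 2 matrix
assert np.array_equal(vech(S), np.array([1.0, 2.0, 4.0]))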
https://en.wikipedia.org/wiki/Vectorization_(mathematics)
Instatistics, thegeneralizedcanonical correlationanalysis(gCCA), is a way of making sense ofcross-correlationmatrices between the sets of random variables when there are more than two sets. While a conventional CCA generalizesprincipal component analysis(PCA) to two sets of random variables, a gCCA generalizes PCA to more than two sets of random variables. Thecanonical variablesrepresent thosecommon factorsthat can be found by a large PCA of all of the transformed random variables after each set underwent its own PCA. TheHelmert-Wolf blocking(HWB) method of estimatinglinear regressionparameters can find an optimal solution only if all cross-correlations between the data blocks are zero. They can always be made to vanish by introducing a new regression parameter for each common factor. The gCCA method can be used for finding those harmful common factors that create cross-correlation between the blocks. However, no optimal HWB solution exists if the random variables do not contain enough information on all of the new regression parameters.
https://en.wikipedia.org/wiki/Generalized_canonical_correlation
The concept ofanglesbetweenlines(in theplaneor inspace), between two planes (dihedral angle) or between a line and a plane can be generalized to arbitrarydimensions. This generalization was first discussed byCamille Jordan.[1]For any pair offlatsin aEuclidean spaceof arbitrary dimension one can define a set of mutual angles which areinvariantunderisometrictransformation of the Euclidean space. If the flats do not intersect, their shortestdistanceis one more invariant.[1]These angles are calledcanonical[2]orprincipal.[3]The concept of angles can be generalized to pairs of flats in a finite-dimensionalinner product spaceover thecomplex numbers. LetF{\displaystyle F}andG{\displaystyle G}be flats of dimensionsk{\displaystyle k}andl{\displaystyle l}in then{\displaystyle n}-dimensional Euclidean spaceEn{\displaystyle E^{n}}. By definition, atranslationofF{\displaystyle F}orG{\displaystyle G}does not alter their mutual angles. IfF{\displaystyle F}andG{\displaystyle G}do not intersect, they will do so upon any translation ofG{\displaystyle G}which maps some point inG{\displaystyle G}to some point inF{\displaystyle F}. It can therefore be assumed without loss of generality thatF{\displaystyle F}andG{\displaystyle G}intersect. Jordan shows thatCartesian coordinatesx1,…,xρ,{\displaystyle x_{1},\dots ,x_{\rho },}y1,…,yσ,{\displaystyle y_{1},\dots ,y_{\sigma },}z1,…,zτ,{\displaystyle z_{1},\dots ,z_{\tau },}u1,…,uυ,{\displaystyle u_{1},\dots ,u_{\upsilon },}v1,…,vα,{\displaystyle v_{1},\dots ,v_{\alpha },}w1,…,wα{\displaystyle w_{1},\dots ,w_{\alpha }}inEn{\displaystyle E^{n}}can then be defined such thatF{\displaystyle F}andG{\displaystyle G}are described, respectively, by the sets of equations and with0<θi<π/2,i=1,…,α{\displaystyle 0<\theta _{i}<\pi /2,i=1,\dots ,\alpha }. Jordan calls these coordinatescanonical. By definition, the anglesθi{\displaystyle \theta _{i}}are theanglesbetweenF{\displaystyle F}andG{\displaystyle G}. The non-negative integersρ,σ,τ,υ,α{\displaystyle \rho ,\sigma ,\tau ,\upsilon ,\alpha }are constrained by For these equations to determine the five non-negative integers completely, besides the dimensionsn,k{\displaystyle n,k}andℓ{\displaystyle \ell }and the numberα{\displaystyle \alpha }of anglesθi{\displaystyle \theta _{i}}, the non-negative integerσ{\displaystyle \sigma }must be given. This is the number of coordinatesyi{\displaystyle y_{i}}, whose corresponding axes are those lying entirely within bothF{\displaystyle F}andG{\displaystyle G}. The integerσ{\displaystyle \sigma }is thus the dimension ofF∩G{\displaystyle F\cap G}. The set of anglesθi{\displaystyle \theta _{i}}may be supplemented withσ{\displaystyle \sigma }angles0{\displaystyle 0}to indicate thatF∩G{\displaystyle F\cap G}has that dimension. Jordan's proof applies essentially unaltered whenEn{\displaystyle E^{n}}is replaced with then{\displaystyle n}-dimensional inner product spaceCn{\displaystyle \mathbb {C} ^{n}}over the complex numbers. (Forangles between subspaces, the generalization toCn{\displaystyle \mathbb {C} ^{n}}is discussed by Galántai and Hegedũs in terms of the belowvariational characterization.[4])[1] Now letF{\displaystyle F}andG{\displaystyle G}besubspacesof then{\displaystyle n}-dimensional inner product space over therealor complex numbers. Geometrically,F{\displaystyle F}andG{\displaystyle G}are flats, so Jordan's definition of mutual angles applies. 
When for any canonical coordinateξ{\displaystyle \xi }the symbolξ^{\displaystyle {\hat {\xi }}}denotes theunit vectorof theξ{\displaystyle \xi }axis, the vectorsy^1,…,y^σ,{\displaystyle {\hat {y}}_{1},\dots ,{\hat {y}}_{\sigma },}w^1,…,w^α,{\displaystyle {\hat {w}}_{1},\dots ,{\hat {w}}_{\alpha },}z^1,…,z^τ{\displaystyle {\hat {z}}_{1},\dots ,{\hat {z}}_{\tau }}form anorthonormalbasisforF{\displaystyle F}and the vectorsy^1,…,y^σ,{\displaystyle {\hat {y}}_{1},\dots ,{\hat {y}}_{\sigma },}w^1′,…,w^α′,{\displaystyle {\hat {w}}'_{1},\dots ,{\hat {w}}'_{\alpha },}u^1,…,u^υ{\displaystyle {\hat {u}}_{1},\dots ,{\hat {u}}_{\upsilon }}form an orthonormal basis forG{\displaystyle G}, where Being related to canonical coordinates, these basic vectors may be calledcanonical. Whenai,i=1,…,k{\displaystyle a_{i},i=1,\dots ,k}denote the canonical basic vectors forF{\displaystyle F}andbi,i=1,…,l{\displaystyle b_{i},i=1,\dots ,l}the canonical basic vectors forG{\displaystyle G}then the inner product⟨ai,bj⟩{\displaystyle \langle a_{i},b_{j}\rangle }vanishes for any pair ofi{\displaystyle i}andj{\displaystyle j}except the following ones. With the above ordering of the basic vectors, thematrixof the inner products⟨ai,bj⟩{\displaystyle \langle a_{i},b_{j}\rangle }is thusdiagonal. In other words, if(ai′,i=1,…,k){\displaystyle (a'_{i},i=1,\dots ,k)}and(bi′,i=1,…,ℓ){\displaystyle (b'_{i},i=1,\dots ,\ell )}are arbitrary orthonormal bases inF{\displaystyle F}andG{\displaystyle G}then thereal, orthogonalorunitarytransformations from the basis(ai′){\displaystyle (a'_{i})}to the basis(ai){\displaystyle (a_{i})}and from the basis(bi′){\displaystyle (b'_{i})}to the basis(bi){\displaystyle (b_{i})}realize asingular value decompositionof the matrix of inner products⟨ai′,bj′⟩{\displaystyle \langle a'_{i},b'_{j}\rangle }. The diagonal matrix elements⟨ai,bi⟩{\displaystyle \langle a_{i},b_{i}\rangle }are the singular values of the latter matrix. By the uniqueness of the singular value decomposition, the vectorsy^i{\displaystyle {\hat {y}}_{i}}are then unique up to a real, orthogonal or unitary transformation among them, and the vectorsw^i{\displaystyle {\hat {w}}_{i}}andw^i′{\displaystyle {\hat {w}}'_{i}}(and hencev^i{\displaystyle {\hat {v}}_{i}}) are unique up to equal real, orthogonal or unitary transformations applied simultaneously to the sets of the vectorsw^i{\displaystyle {\hat {w}}_{i}}associated with a common value ofθi{\displaystyle \theta _{i}}and to the corresponding sets of vectorsw^i′{\displaystyle {\hat {w}}'_{i}}(and hence to the corresponding sets ofv^i{\displaystyle {\hat {v}}_{i}}). A singular value1{\displaystyle 1}can be interpreted ascos0{\displaystyle \cos \,0}corresponding to the angles0{\displaystyle 0}introduced above and associated withF∩G{\displaystyle F\cap G}and a singular value0{\displaystyle 0}can be interpreted ascos⁡π/2{\displaystyle \cos \pi /2}corresponding to right angles between theorthogonalspacesF∩G⊥{\displaystyle F\cap G^{\bot }}andF⊥∩G{\displaystyle F^{\bot }\cap G}, where superscript⊥{\displaystyle \bot }denotes theorthogonal complement. Thevariational characterizationof singular values and vectors implies as a special case a variational characterization of the angles between subspaces and their associated canonical vectors. This characterization includes the angles0{\displaystyle 0}andπ/2{\displaystyle \pi /2}introduced above and orders the angles by increasing value. It can be given the form of the below alternative definition. 
In this context, it is customary to talk ofprincipalangles and vectors.[3] LetV{\displaystyle V}be an inner product space. Given two subspacesU,W{\displaystyle {\mathcal {U}},{\mathcal {W}}}withdim⁡(U)=k≤dim⁡(W):=ℓ{\displaystyle \dim({\mathcal {U}})=k\leq \dim({\mathcal {W}}):=\ell }, there exists then a sequence ofk{\displaystyle k}angles0≤θ1≤θ2≤⋯≤θk≤π/2{\displaystyle 0\leq \theta _{1}\leq \theta _{2}\leq \cdots \leq \theta _{k}\leq \pi /2}called the principal angles, the first one defined as where⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }is theinner productand‖⋅‖{\displaystyle \|\cdot \|}the inducednorm. The vectorsu1{\displaystyle u_{1}}andw1{\displaystyle w_{1}}are the correspondingprincipal vectors. The other principal angles and vectors are then defined recursively via This means that the principal angles(θ1,…,θk){\displaystyle (\theta _{1},\ldots ,\theta _{k})}form a set of minimized angles between the two subspaces, and the principal vectors in each subspace are orthogonal to each other. Geometrically, subspaces areflats(points, lines, planes etc.) that include the origin, thus any two subspaces intersect at least in the origin. Two two-dimensional subspacesU{\displaystyle {\mathcal {U}}}andW{\displaystyle {\mathcal {W}}}generate a set of two angles. In a three-dimensionalEuclidean space, the subspacesU{\displaystyle {\mathcal {U}}}andW{\displaystyle {\mathcal {W}}}are either identical, or their intersection forms a line. In the former case, bothθ1=θ2=0{\displaystyle \theta _{1}=\theta _{2}=0}. In the latter case, onlyθ1=0{\displaystyle \theta _{1}=0}, where vectorsu1{\displaystyle u_{1}}andw1{\displaystyle w_{1}}are on the line of the intersectionU∩W{\displaystyle {\mathcal {U}}\cap {\mathcal {W}}}and have the same direction. The angleθ2>0{\displaystyle \theta _{2}>0}will be the angle between the subspacesU{\displaystyle {\mathcal {U}}}andW{\displaystyle {\mathcal {W}}}in theorthogonal complementtoU∩W{\displaystyle {\mathcal {U}}\cap {\mathcal {W}}}. Imagining the angle between two planes in 3D, one intuitively thinks of the largest angle,θ2>0{\displaystyle \theta _{2}>0}. In 4-dimensional real coordinate spaceR4, let the two-dimensional subspaceU{\displaystyle {\mathcal {U}}}be spanned byu1=(1,0,0,0){\displaystyle u_{1}=(1,0,0,0)}andu2=(0,1,0,0){\displaystyle u_{2}=(0,1,0,0)}, and let the two-dimensional subspaceW{\displaystyle {\mathcal {W}}}be spanned byw1=(1,0,0,a)/1+a2{\displaystyle w_{1}=(1,0,0,a)/{\sqrt {1+a^{2}}}}andw2=(0,1,b,0)/1+b2{\displaystyle w_{2}=(0,1,b,0)/{\sqrt {1+b^{2}}}}with some reala{\displaystyle a}andb{\displaystyle b}such that|a|<|b|{\displaystyle |a|<|b|}. Thenu1{\displaystyle u_{1}}andw1{\displaystyle w_{1}}are, in fact, the pair of principal vectors corresponding to the angleθ1{\displaystyle \theta _{1}}withcos⁡(θ1)=1/1+a2{\displaystyle \cos(\theta _{1})=1/{\sqrt {1+a^{2}}}}, andu2{\displaystyle u_{2}}andw2{\displaystyle w_{2}}are the principal vectors corresponding to the angleθ2{\displaystyle \theta _{2}}withcos⁡(θ2)=1/1+b2.{\displaystyle \cos(\theta _{2})=1/{\sqrt {1+b^{2}}}.} To construct a pair of subspaces with any given set ofk{\displaystyle k}anglesθ1,…,θk{\displaystyle \theta _{1},\ldots ,\theta _{k}}in a2k{\displaystyle 2k}(or larger) dimensionalEuclidean space, take a subspaceU{\displaystyle {\mathcal {U}}}with an orthonormal basis(e1,…,ek){\displaystyle (e_{1},\ldots ,e_{k})}and complete it to an orthonormal basis(e1,…,en){\displaystyle (e_{1},\ldots ,e_{n})}of the Euclidean space, wheren≥2k{\displaystyle n\geq 2k}. 
Then, an orthonormal basis of the other subspaceW{\displaystyle {\mathcal {W}}}is, e.g., The notion of the angles and some of the variational properties can be naturally extended to arbitraryinner products[10]and subspaces with infinitedimensions.[7] Historically, the principal angles and vectors first appear in the context ofcanonical correlationand wereoriginally computedusingSVDof correspondingcovariancematrices. However, as first noticed in,[3]thecanonical correlationis related to thecosineof the principal angles, which isill-conditionedfor small angles, leading to very inaccurate computation of highly correlated principal vectors in finiteprecisioncomputer arithmetic. Thesine-based algorithm[3]fixes this issue, but creates a new problem of very inaccurate computation of highly uncorrelated principal vectors, since thesinefunction isill-conditionedfor angles close toπ/2.To produce accurate principal vectors incomputer arithmeticfor the full range of the principal angles, the combined technique[10]first compute all principal angles and vectors using the classicalcosine-based approach, and then recomputes the principal angles smaller thanπ/4and the corresponding principal vectors using thesine-based approach.[3]The combined technique[10]is implemented inopen-sourcelibrariesOctave[11]andSciPy[12]and contributed[13]and[14]toMATLAB.
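The four-dimensional example above can be checked numerically. The sketch below uses the classical cosine-based approach mentioned in the text: the principal angles are the arccosines of the singular values of the product of orthonormal bases of the two subspaces (which, as noted, loses accuracy for very small angles). The function name is illustrative.

import numpy as np

def principal_angles_cos(U, W):
    """Cosine-based principal angles between the column spaces of U and W."""
    Qu, _ = np.linalg.qr(U)                       # orthonormal bases of the subspaces
    Qw, _ = np.linalg.qr(W)
    sigma = np.linalg.svd(Qu.T @ Qw, compute_uv=False)
    return np.arccos(np.clip(sigma, -1.0, 1.0))   # angles in increasing order

a, b = 0.5, 2.0                                   # |a| < |b| as in the example
U = np.array([[1, 0], [0, 1], [0, 0], [0, 0]], dtype=float)
W = np.array([[1, 0], [0, 1], [0, b], [a, 0]], dtype=float)   # spans w1, w2 before normalization

theta = principal_angles_cos(U, W)
assert np.allclose(np.cos(theta), [1 / np.sqrt(1 + a**2), 1 / np.sqrt(1 + b**2)])

For comparison, SciPy's scipy.linalg.subspace_angles(U, W), which implements the combined sine/cosine technique referred to above, should return the same angles (possibly in a different order).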
https://en.wikipedia.org/wiki/Angles_between_flats
Regularized canonical correlation analysisis a way of usingridge regressionto solve thesingularityproblem in thecross-covariance matricesofcanonical correlation analysis. By convertingcov⁡(X,X){\displaystyle \operatorname {cov} (X,X)}andcov⁡(Y,Y){\displaystyle \operatorname {cov} (Y,Y)}intocov⁡(X,X)+λIX{\displaystyle \operatorname {cov} (X,X)+\lambda I_{X}}andcov⁡(Y,Y)+λIY{\displaystyle \operatorname {cov} (Y,Y)+\lambda I_{Y}}, it ensures that the above matrices will have reliableinverses. The idea probably dates back toHrishikesh D. Vinod's publication in 1976 where he called it "Canonical ridge".[1][2]It has been suggested for use in the analysis offunctional neuroimagingdata as such data are often singular.[3]It is possible to compute the regularized canonical vectors in the lower-dimensional space.[4]
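The construction can be sketched as follows. This is a minimal illustration, assuming centered data matrices with observations in rows, and using the standard whitening-and-SVD formulation of CCA with the ridge terms added; the function names rcca and inv_sqrtm are ours, not taken from any particular library.

import numpy as np

def inv_sqrtm(C):
    """Inverse square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def rcca(X, Y, lam=0.1):
    """Leading canonical correlation and weight vectors with ridge regularization."""
    n = X.shape[0]
    Cxx = X.T @ X / n + lam * np.eye(X.shape[1])   # regularized cov(X, X)
    Cyy = Y.T @ Y / n + lam * np.eye(Y.shape[1])   # regularized cov(Y, Y)
    Cxy = X.T @ Y / n
    K = inv_sqrtm(Cxx) @ Cxy @ inv_sqrtm(Cyy)      # whitened cross-covariance
    U, s, Vt = np.linalg.svd(K)
    a = inv_sqrtm(Cxx) @ U[:, 0]                   # canonical weights for X
    b = inv_sqrtm(Cyy) @ Vt[0]                     # canonical weights for Y
    return s[0], a, b

# With lam > 0 the regularized covariances are invertible even when there are
# more variables than observations, i.e. in the singular case described above.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 50))                  # 20 observations, 50 variables
Y = rng.standard_normal((20, 40))
rho, a, b = rcca(X - X.mean(0), Y - Y.mean(0), lam=0.5)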
https://en.wikipedia.org/wiki/Regularized_canonical_correlation_analysis
Inlinear algebra, aneigenvector(/ˈaɪɡən-/EYE-gən-) orcharacteristic vectoris avectorthat has itsdirectionunchanged (or reversed) by a givenlinear transformation. More precisely, an eigenvectorv{\displaystyle \mathbf {v} }of a linear transformationT{\displaystyle T}isscaled by a constant factorλ{\displaystyle \lambda }when the linear transformation is applied to it:Tv=λv{\displaystyle T\mathbf {v} =\lambda \mathbf {v} }. The correspondingeigenvalue,characteristic value, orcharacteristic rootis the multiplying factorλ{\displaystyle \lambda }(possibly negative). Geometrically, vectorsare multi-dimensionalquantities with magnitude and direction, often pictured as arrows. A linear transformationrotates,stretches, orshearsthe vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed.[1] The eigenvectors and eigenvalues of a linear transformation serve to characterize it, and so they play important roles in all areas where linear algebra is applied, fromgeologytoquantum mechanics. In particular, it is often the case that a system is represented by a linear transformation whose outputs are fed as inputs to the same transformation (feedback). In such an application, the largest eigenvalue is of particular importance, because it governs the long-term behavior of the system after many applications of the linear transformation, and the associated eigenvector is thesteady stateof the system. For ann×n{\displaystyle n{\times }n}matrixAand a nonzero vectorv{\displaystyle \mathbf {v} }of lengthn{\displaystyle n}, if multiplyingAbyv{\displaystyle \mathbf {v} }(denotedAv{\displaystyle A\mathbf {v} }) simply scalesv{\displaystyle \mathbf {v} }by a factorλ, whereλis ascalar, thenv{\displaystyle \mathbf {v} }is called an eigenvector ofA, andλis the corresponding eigenvalue. This relationship can be expressed as:Av=λv{\displaystyle A\mathbf {v} =\lambda \mathbf {v} }.[2] Given ann-dimensional vector spaceand a choice ofbasis, there is a direct correspondence between linear transformations from the vector space into itself andn-by-nsquare matrices. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of linear transformations, or the language ofmatrices.[3][4] Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefixeigen-is adopted from theGermanwordeigen(cognatewith theEnglishwordown) for 'proper', 'characteristic', 'own'.[5][6]Originally used to studyprincipal axesof the rotational motion ofrigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example instability analysis,vibration analysis,atomic orbitals,facial recognition, andmatrix diagonalization. In essence, an eigenvectorvof a linear transformationTis a nonzero vector that, whenTis applied to it, does not change direction. ApplyingTto the eigenvector only scales the eigenvector by the scalar valueλ, called an eigenvalue. This condition can be written as the equationT(v)=λv,{\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} ,}referred to as theeigenvalue equationoreigenequation. In general,λmay be anyscalar. For example,λmay be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero orcomplex. 
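For a concrete case, the defining equation can be checked numerically; the small matrix below is chosen only for illustration.

import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])                         # swaps the two coordinates

eigenvalues, eigenvectors = np.linalg.eig(A)       # columns of `eigenvectors` are eigenvectors
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)             # A v = lambda v

# The eigenvalues here are +1 and -1; the eigenvector belonging to -1
# (proportional to (1, -1)) has its direction reversed by A, as described above.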
The example here, based on theMona Lisa, provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called ashear mapping. Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter by the transformation. Pointsalongthe horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be adifferential operatorlikeddx{\displaystyle {\tfrac {d}{dx}}}, in which case the eigenvectors are functions calledeigenfunctionsthat are scaled by that differential operator, such asddxeλx=λeλx.{\displaystyle {\frac {d}{dx}}e^{\lambda x}=\lambda e^{\lambda x}.}Alternatively, the linear transformation could take the form of annbynmatrix, in which case the eigenvectors arenby 1 matrices. If the linear transformation is expressed in the form of annbynmatrixA, then the eigenvalue equation for a linear transformation above can be rewritten as the matrix multiplicationAv=λv,{\displaystyle A\mathbf {v} =\lambda \mathbf {v} ,}where the eigenvectorvis annby 1 matrix. For a matrix, eigenvalues and eigenvectors can be used todecompose the matrix—for example bydiagonalizingit. Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefixeigen-is applied liberally when naming them: Eigenvalues are often introduced in the context oflinear algebraormatrix theory. Historically, however, they arose in the study ofquadratic formsanddifferential equations. 
In the 18th century,Leonhard Eulerstudied the rotational motion of arigid body, and discovered the importance of theprincipal axes.[a]Joseph-Louis Lagrangerealized that the principal axes are the eigenvectors of the inertia matrix.[10] In the early 19th century,Augustin-Louis Cauchysaw how their work could be used to classify thequadric surfaces, and generalized it to arbitrary dimensions.[11]Cauchy also coined the termracine caractéristique(characteristic root), for what is now calledeigenvalue; his term survives incharacteristic equation.[b] Later,Joseph Fourierused the work of Lagrange andPierre-Simon Laplaceto solve theheat equationbyseparation of variablesin his 1822 treatiseThe Analytic Theory of Heat (Théorie analytique de la chaleur).[12]Charles-François Sturmelaborated on Fourier's ideas further, and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that realsymmetric matriceshave real eigenvalues.[11]This was extended byCharles Hermitein 1855 to what are now calledHermitian matrices.[13] Around the same time,Francesco Brioschiproved that the eigenvalues oforthogonal matriceslie on theunit circle,[11]andAlfred Clebschfound the corresponding result forskew-symmetric matrices.[13]Finally,Karl Weierstrassclarified an important aspect in thestability theorystarted by Laplace, by realizing thatdefective matricescan cause instability.[11] In the meantime,Joseph Liouvillestudied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now calledSturm–Liouville theory.[14]Schwarzstudied the first eigenvalue ofLaplace's equationon general domains towards the end of the 19th century, whilePoincaréstudiedPoisson's equationa few years later.[15] At the start of the 20th century,David Hilbertstudied the eigenvalues ofintegral operatorsby viewing the operators as infinite matrices.[16]He was the first to use theGermanwordeigen, which means "own",[6]to denote eigenvalues and eigenvectors in 1904,[c]though he may have been following a related usage byHermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today.[17] The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, whenRichard von Misespublished thepower method. One of the most popular methods today, theQR algorithm, was proposed independently byJohn G. F. Francis[18]andVera Kublanovskaya[19]in 1961.[20][21] Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices.[22][23]Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices,[3][4]which is especially common in numerical and computational applications.[24] Considern-dimensional vectors that are formed as a list ofnscalars, such as the three-dimensional vectorsx=[1−34]andy=[−2060−80].{\displaystyle \mathbf {x} ={\begin{bmatrix}1\\-3\\4\end{bmatrix}}\quad {\mbox{and}}\quad \mathbf {y} ={\begin{bmatrix}-20\\60\\-80\end{bmatrix}}.} These vectors are said to bescalar multiplesof each other, orparallelorcollinear, if there is a scalarλsuch thatx=λy.{\displaystyle \mathbf {x} =\lambda \mathbf {y} .} In this case,λ=−120{\displaystyle \lambda =-{\frac {1}{20}}}. 
Now consider the linear transformation ofn-dimensional vectors defined by annbynmatrixA,Av=w,{\displaystyle A\mathbf {v} =\mathbf {w} ,}or[A11A12⋯A1nA21A22⋯A2n⋮⋮⋱⋮An1An2⋯Ann][v1v2⋮vn]=[w1w2⋮wn]{\displaystyle {\begin{bmatrix}A_{11}&A_{12}&\cdots &A_{1n}\\A_{21}&A_{22}&\cdots &A_{2n}\\\vdots &\vdots &\ddots &\vdots \\A_{n1}&A_{n2}&\cdots &A_{nn}\\\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\\\vdots \\v_{n}\end{bmatrix}}={\begin{bmatrix}w_{1}\\w_{2}\\\vdots \\w_{n}\end{bmatrix}}}where, for each row,wi=Ai1v1+Ai2v2+⋯+Ainvn=∑j=1nAijvj.{\displaystyle w_{i}=A_{i1}v_{1}+A_{i2}v_{2}+\cdots +A_{in}v_{n}=\sum _{j=1}^{n}A_{ij}v_{j}.} If it occurs thatvandware scalar multiples, that is if thenvis aneigenvectorof the linear transformationAand the scale factorλis theeigenvaluecorresponding to that eigenvector. Equation (1) is theeigenvalue equationfor the matrixA. Equation (1) can be stated equivalently as whereIis thenbynidentity matrixand0is the zero vector. Equation (2) has a nonzero solutionvif and only ifthedeterminantof the matrix(A−λI)is zero. Therefore, the eigenvalues ofAare values ofλthat satisfy the equation Using theLeibniz formula for determinants, the left-hand side of equation (3) is apolynomialfunction of the variableλand thedegreeof this polynomial isn, the order of the matrixA. Itscoefficientsdepend on the entries ofA, except that its term of degreenis always (−1)nλn. This polynomial is called thecharacteristic polynomialofA. Equation (3) is called thecharacteristic equationor thesecular equationofA. Thefundamental theorem of algebraimplies that the characteristic polynomial of ann-by-nmatrixA, being a polynomial of degreen, can befactoredinto the product ofnlinear terms, where eachλimay be real but in general is a complex number. The numbersλ1,λ2, ...,λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues ofA. As a brief example, which is described in more detail in the examples section later, consider the matrixA=[2112].{\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.} Taking the determinant of(A−λI), the characteristic polynomial ofAisdet(A−λI)=|2−λ112−λ|=3−4λ+λ2.{\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}=3-4\lambda +\lambda ^{2}.} Setting the characteristic polynomial equal to zero, it has roots atλ=1andλ=3, which are the two eigenvalues ofA. The eigenvectors corresponding to each eigenvalue can be found by solving for the components ofvin the equation(A−λI)v=0{\displaystyle \left(A-\lambda I\right)\mathbf {v} =\mathbf {0} }.In this example, the eigenvectors are any nonzero scalar multiples ofvλ=1=[1−1],vλ=3=[11].{\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}1\\-1\end{bmatrix}},\quad \mathbf {v} _{\lambda =3}={\begin{bmatrix}1\\1\end{bmatrix}}.} If the entries of the matrixAare all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Similarly, the eigenvalues may beirrational numberseven if all the entries ofAarerational numbersor even if they are all integers. However, if the entries ofAare allalgebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers. 
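The brief 2 × 2 example above can be reproduced numerically. For a 2 × 2 matrix the characteristic polynomial det(λI − A) is λ² − tr(A)λ + det(A); its roots agree with the eigenvalues returned by NumPy.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = [1.0, -np.trace(A), np.linalg.det(A)]     # lambda^2 - 4*lambda + 3
assert np.allclose(coeffs, [1.0, -4.0, 3.0])

assert np.allclose(np.sort(np.roots(coeffs)), [1.0, 3.0])       # roots of the polynomial
assert np.allclose(np.sort(np.linalg.eigvals(A)), [1.0, 3.0])   # eigenvalues of A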
The non-real roots of a real polynomial with real coefficients can be grouped into pairs ofcomplex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by theintermediate value theoremat least one of the roots is real. Therefore, anyreal matrixwith odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs. Thespectrumof a matrix is the list of eigenvalues, repeated according to multiplicity; in an alternative notation the set of eigenvalues with their multiplicities. An important quantity associated with the spectrum is the maximum absolute value of any eigenvalue. This is known as thespectral radiusof the matrix. Letλibe an eigenvalue of annbynmatrixA. Thealgebraic multiplicityμA(λi) of the eigenvalue is itsmultiplicity as a rootof the characteristic polynomial, that is, the largest integerksuch that (λ−λi)kdivides evenlythat polynomial.[9][25][26] Suppose a matrixAhas dimensionnandd≤ndistinct eigenvalues. Whereas equation (4) factors the characteristic polynomial ofAinto the product ofnlinear terms with some terms potentially repeating, the characteristic polynomial can also be written as the product ofdterms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity,det(A−λI)=(λ1−λ)μA(λ1)(λ2−λ)μA(λ2)⋯(λd−λ)μA(λd).{\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )^{\mu _{A}(\lambda _{1})}(\lambda _{2}-\lambda )^{\mu _{A}(\lambda _{2})}\cdots (\lambda _{d}-\lambda )^{\mu _{A}(\lambda _{d})}.} Ifd=nthen the right-hand side is the product ofnlinear terms and this is the same as equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimensionnas1≤μA(λi)≤n,μA=∑i=1dμA(λi)=n.{\displaystyle {\begin{aligned}1&\leq \mu _{A}(\lambda _{i})\leq n,\\\mu _{A}&=\sum _{i=1}^{d}\mu _{A}\left(\lambda _{i}\right)=n.\end{aligned}}} IfμA(λi) = 1, thenλiis said to be asimple eigenvalue.[26]IfμA(λi) equals the geometric multiplicity ofλi,γA(λi), defined in the next section, thenλiis said to be asemisimple eigenvalue. Given a particular eigenvalueλof thenbynmatrixA, define thesetEto be all vectorsvthat satisfy equation (2),E={v:(A−λI)v=0}.{\displaystyle E=\left\{\mathbf {v} :\left(A-\lambda I\right)\mathbf {v} =\mathbf {0} \right\}.} On one hand, this set is precisely thekernelor nullspace of the matrix(A−λI). On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector ofAassociated withλ. So, the setEis theunionof the zero vector with the set of all eigenvectors ofAassociated withλ, andEequals the nullspace of(A−λI).Eis called theeigenspaceorcharacteristic spaceofAassociated withλ.[27][9]In generalλis a complex number and the eigenvectors are complexnby 1 matrices. A property of the nullspace is that it is alinear subspace, soEis a linear subspace ofCn{\displaystyle \mathbb {C} ^{n}}. Because the eigenspaceEis a linear subspace, it isclosedunder addition. That is, if two vectorsuandvbelong to the setE, writtenu,v∈E, then(u+v) ∈Eor equivalentlyA(u+v) =λ(u+v). This can be checked using thedistributive propertyof matrix multiplication. Similarly, becauseEis a linear subspace, it is closed under scalar multiplication. That is, ifv∈Eandαis a complex number,(αv) ∈Eor equivalentlyA(αv) =λ(αv). 
This can be checked by noting that multiplication of complex matrices by complex numbers iscommutative. As long asu+vandαvare not zero, they are also eigenvectors ofAassociated withλ. The dimension of the eigenspaceEassociated withλ, or equivalently the maximum number of linearly independent eigenvectors associated withλ, is referred to as the eigenvalue'sgeometric multiplicityγA(λ){\displaystyle \gamma _{A}(\lambda )}. BecauseEis also the nullspace of(A−λI), the geometric multiplicity ofλis the dimension of the nullspace of(A−λI),also called thenullityof(A−λI),which relates to the dimension and rank of(A−λI)asγA(λ)=n−rank⁡(A−λI).{\displaystyle \gamma _{A}(\lambda )=n-\operatorname {rank} (A-\lambda I).} Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceedn.1≤γA(λ)≤μA(λ)≤n{\displaystyle 1\leq \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )\leq n} To prove the inequalityγA(λ)≤μA(λ){\displaystyle \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )}, consider how the definition of geometric multiplicity implies the existence ofγA(λ){\displaystyle \gamma _{A}(\lambda )}orthonormaleigenvectorsv1,…,vγA(λ){\displaystyle {\boldsymbol {v}}_{1},\,\ldots ,\,{\boldsymbol {v}}_{\gamma _{A}(\lambda )}}, such thatAvk=λvk{\displaystyle A{\boldsymbol {v}}_{k}=\lambda {\boldsymbol {v}}_{k}}. We can therefore find a (unitary) matrixVwhose firstγA(λ){\displaystyle \gamma _{A}(\lambda )}columns are these eigenvectors, and whose remaining columns can be any orthonormal set ofn−γA(λ){\displaystyle n-\gamma _{A}(\lambda )}vectors orthogonal to these eigenvectors ofA. ThenVhas full rank and is therefore invertible. EvaluatingD:=VTAV{\displaystyle D:=V^{T}AV}, we get a matrix whose top left block is the diagonal matrixλIγA(λ){\displaystyle \lambda I_{\gamma _{A}(\lambda )}}. This can be seen by evaluating what the left-hand side does to the first column basis vectors. By reorganizing and adding−ξV{\displaystyle -\xi V}on both sides, we get(A−ξI)V=V(D−ξI){\displaystyle (A-\xi I)V=V(D-\xi I)}sinceIcommutes withV. In other words,A−ξI{\displaystyle A-\xi I}is similar toD−ξI{\displaystyle D-\xi I}, anddet(A−ξI)=det(D−ξI){\displaystyle \det(A-\xi I)=\det(D-\xi I)}. But from the definition ofD, we know thatdet(D−ξI){\displaystyle \det(D-\xi I)}contains a factor(ξ−λ)γA(λ){\displaystyle (\xi -\lambda )^{\gamma _{A}(\lambda )}}, which means that the algebraic multiplicity ofλ{\displaystyle \lambda }must satisfyμA(λ)≥γA(λ){\displaystyle \mu _{A}(\lambda )\geq \gamma _{A}(\lambda )}. SupposeAhasd≤n{\displaystyle d\leq n}distinct eigenvaluesλ1,…,λd{\displaystyle \lambda _{1},\ldots ,\lambda _{d}}, where the geometric multiplicity ofλi{\displaystyle \lambda _{i}}isγA(λi){\displaystyle \gamma _{A}(\lambda _{i})}. The total geometric multiplicity ofA,γA=∑i=1dγA(λi),d≤γA≤n,{\displaystyle {\begin{aligned}\gamma _{A}&=\sum _{i=1}^{d}\gamma _{A}(\lambda _{i}),\\d&\leq \gamma _{A}\leq n,\end{aligned}}}is the dimension of thesumof all the eigenspaces ofA's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors ofA. IfγA=n{\displaystyle \gamma _{A}=n}, then LetA{\displaystyle A}be an arbitraryn×n{\displaystyle n\times n}matrix of complex numbers with eigenvaluesλ1,…,λn{\displaystyle \lambda _{1},\ldots ,\lambda _{n}}. 
Each eigenvalue appearsμA(λi){\displaystyle \mu _{A}(\lambda _{i})}times in this list, whereμA(λi){\displaystyle \mu _{A}(\lambda _{i})}is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues: Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to aright eigenvector, namely acolumnvector thatrightmultiplies then×n{\displaystyle n\times n}matrixA{\displaystyle A}in the defining equation, equation (1),Av=λv.{\displaystyle A\mathbf {v} =\lambda \mathbf {v} .} The eigenvalue and eigenvector problem can also be defined forrowvectors thatleftmultiply matrixA{\displaystyle A}. In this formulation, the defining equation isuA=κu,{\displaystyle \mathbf {u} A=\kappa \mathbf {u} ,} whereκ{\displaystyle \kappa }is a scalar andu{\displaystyle u}is a1×n{\displaystyle 1\times n}matrix. Any row vectoru{\displaystyle u}satisfying this equation is called aleft eigenvectorofA{\displaystyle A}andκ{\displaystyle \kappa }is its associated eigenvalue. Taking the transpose of this equation,ATuT=κuT.{\displaystyle A^{\textsf {T}}\mathbf {u} ^{\textsf {T}}=\kappa \mathbf {u} ^{\textsf {T}}.} Comparing this equation to equation (1), it follows immediately that a left eigenvector ofA{\displaystyle A}is the same as the transpose of a right eigenvector ofAT{\displaystyle A^{\textsf {T}}}, with the same eigenvalue. Furthermore, since the characteristic polynomial ofAT{\displaystyle A^{\textsf {T}}}is the same as the characteristic polynomial ofA{\displaystyle A}, the left and right eigenvectors ofA{\displaystyle A}are associated with the same eigenvalues. Suppose the eigenvectors ofAform a basis, or equivalentlyAhasnlinearly independent eigenvectorsv1,v2, ...,vnwith associated eigenvaluesλ1,λ2, ...,λn. The eigenvalues need not be distinct. Define asquare matrixQwhose columns are thenlinearly independent eigenvectors ofA, Since each column ofQis an eigenvector ofA, right multiplyingAbyQscales each column ofQby its associated eigenvalue, With this in mind, define a diagonal matrix Λ where each diagonal element Λiiis the eigenvalue associated with theith column ofQ. Then Because the columns ofQare linearly independent, Q is invertible. Right multiplying both sides of the equation byQ−1, or by instead left multiplying both sides byQ−1, Acan therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called theeigendecompositionand it is asimilarity transformation. Such a matrixAis said to besimilarto the diagonal matrix Λ ordiagonalizable. The matrixQis the change of basis matrix of the similarity transformation. Essentially, the matricesAand Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ. Conversely, suppose a matrixAis diagonalizable. LetPbe a non-singular square matrix such thatP−1APis some diagonal matrixD. Left multiplying both byP,AP=PD. Each column ofPmust therefore be an eigenvector ofAwhose eigenvalue is the corresponding diagonal element ofD. Since the columns ofPmust be linearly independent forPto be invertible, there existnlinearly independent eigenvectors ofA. It then follows that the eigenvectors ofAform a basis if and only ifAis diagonalizable. 
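A numerical check of this decomposition (a sketch with NumPy: eig returns the eigenvector matrix Q and the eigenvalues forming Λ), followed by an illustration of the rank formula for geometric multiplicity on a non-diagonalizable matrix chosen for the purpose.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, Q = np.linalg.eig(A)      # columns of Q are eigenvectors of A
Lam = np.diag(eigenvalues)             # the diagonal matrix Lambda

# A = Q Lambda Q^{-1}  and  Lambda = Q^{-1} A Q
assert np.allclose(A, Q @ Lam @ np.linalg.inv(Q))
assert np.allclose(Lam, np.linalg.inv(Q) @ A @ Q)

# The matrix [[2, 1], [0, 2]] has eigenvalue 2 with algebraic multiplicity 2 but
# geometric multiplicity n - rank(A - 2I) = 2 - 1 = 1, so it is not diagonalizable.
D = np.array([[2.0, 1.0],
              [0.0, 2.0]])
assert np.linalg.matrix_rank(D - 2 * np.eye(2)) == 1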
A matrix that is not diagonalizable is said to bedefective. For defective matrices, the notion of eigenvectors generalizes togeneralized eigenvectorsand the diagonal matrix of eigenvalues generalizes to theJordan normal form. Over an algebraically closed field, any matrixAhas aJordan normal formand therefore admits a basis of generalized eigenvectors and a decomposition intogeneralized eigenspaces. In theHermitiancase, eigenvalues can be given a variational characterization. The largest eigenvalue ofH{\displaystyle H}is the maximum value of thequadratic formxTHx/xTx{\displaystyle \mathbf {x} ^{\textsf {T}}H\mathbf {x} /\mathbf {x} ^{\textsf {T}}\mathbf {x} }. A value ofx{\displaystyle \mathbf {x} }that realizes that maximum is an eigenvector. Consider the matrixA=[2112].{\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.} The figure on the right shows the effect of this transformation on point coordinates in the plane. The eigenvectorsvof this transformation satisfy equation (1), and the values ofλfor which the determinant of the matrix (A−λI) equals zero are the eigenvalues. Taking the determinant to find characteristic polynomial ofA,det(A−λI)=|[2112]−λ[1001]|=|2−λ112−λ|=3−4λ+λ2=(λ−3)(λ−1).{\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&1\\1&2\end{bmatrix}}-\lambda {\begin{bmatrix}1&0\\0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}\\[6pt]&=3-4\lambda +\lambda ^{2}\\[6pt]&=(\lambda -3)(\lambda -1).\end{aligned}}} Setting the characteristic polynomial equal to zero, it has roots atλ=1andλ=3, which are the two eigenvalues ofA. Forλ=1, equation (2) becomes,(A−I)vλ=1=[1111][v1v2]=[00]{\displaystyle (A-I)\mathbf {v} _{\lambda =1}={\begin{bmatrix}1&1\\1&1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}}1v1+1v2=0{\displaystyle 1v_{1}+1v_{2}=0} Any nonzero vector withv1= −v2solves this equation. Therefore,vλ=1=[v1−v1]=[1−1]{\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}v_{1}\\-v_{1}\end{bmatrix}}={\begin{bmatrix}1\\-1\end{bmatrix}}}is an eigenvector ofAcorresponding toλ= 1, as is any scalar multiple of this vector. Forλ=3, equation (2) becomes(A−3I)vλ=3=[−111−1][v1v2]=[00]−1v1+1v2=0;1v1−1v2=0{\displaystyle {\begin{aligned}(A-3I)\mathbf {v} _{\lambda =3}&={\begin{bmatrix}-1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}\\-1v_{1}+1v_{2}&=0;\\1v_{1}-1v_{2}&=0\end{aligned}}} Any nonzero vector withv1=v2solves this equation. Therefore,vλ=3=[v1v1]=[11]{\displaystyle \mathbf {v} _{\lambda =3}={\begin{bmatrix}v_{1}\\v_{1}\end{bmatrix}}={\begin{bmatrix}1\\1\end{bmatrix}}} is an eigenvector ofAcorresponding toλ= 3, as is any scalar multiple of this vector. Thus, the vectorsvλ=1andvλ=3are eigenvectors ofAassociated with the eigenvaluesλ=1andλ=3, respectively. 
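The worked 2×2 example above can be checked numerically (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Characteristic polynomial det(A - lambda*I) = lambda^2 - 4*lambda + 3;
# for a 2x2 matrix the coefficients are [1, -trace(A), det(A)].
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
print(np.roots(coeffs))                     # roots 3 and 1, the eigenvalues

# numpy returns normalized eigenvectors; their directions agree with
# [1, 1] for lambda = 3 and [1, -1] for lambda = 1 up to scaling.
eigvals, eigvecs = np.linalg.eig(A)
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)      # each pair satisfies A v = lambda v
    print(lam, v)
```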
Consider the matrixA=[200034049].{\displaystyle A={\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}.} The characteristic polynomial ofAisdet(A−λI)=|[200034049]−λ[100010001]|=|2−λ0003−λ4049−λ|,=(2−λ)[(3−λ)(9−λ)−16]=−λ3+14λ2−35λ+22.{\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}-\lambda {\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &0&0\\0&3-\lambda &4\\0&4&9-\lambda \end{vmatrix}},\\[6pt]&=(2-\lambda ){\bigl [}(3-\lambda )(9-\lambda )-16{\bigr ]}=-\lambda ^{3}+14\lambda ^{2}-35\lambda +22.\end{aligned}}} The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues ofA. These eigenvalues correspond to the eigenvectors[100]T{\displaystyle {\begin{bmatrix}1&0&0\end{bmatrix}}^{\textsf {T}}},[0−21]T{\displaystyle {\begin{bmatrix}0&-2&1\end{bmatrix}}^{\textsf {T}}},and[012]T{\displaystyle {\begin{bmatrix}0&1&2\end{bmatrix}}^{\textsf {T}}},or any nonzero multiple thereof. Consider thecyclic permutation matrixA=[010001100].{\displaystyle A={\begin{bmatrix}0&1&0\\0&0&1\\1&0&0\end{bmatrix}}.} This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 −λ3, whose roots areλ1=1λ2=−12+i32λ3=λ2∗=−12−i32{\displaystyle {\begin{aligned}\lambda _{1}&=1\\\lambda _{2}&=-{\frac {1}{2}}+i{\frac {\sqrt {3}}{2}}\\\lambda _{3}&=\lambda _{2}^{*}=-{\frac {1}{2}}-i{\frac {\sqrt {3}}{2}}\end{aligned}}}wherei{\displaystyle i}is animaginary unitwithi2=−1{\displaystyle i^{2}=-1}. For the real eigenvalueλ1= 1, any vector with three equal nonzero entries is an eigenvector. For example,A[555]=[555]=1⋅[555].{\displaystyle A{\begin{bmatrix}5\\5\\5\end{bmatrix}}={\begin{bmatrix}5\\5\\5\end{bmatrix}}=1\cdot {\begin{bmatrix}5\\5\\5\end{bmatrix}}.} For the complex conjugate pair of imaginary eigenvalues,λ2λ3=1,λ22=λ3,λ32=λ2.{\displaystyle \lambda _{2}\lambda _{3}=1,\quad \lambda _{2}^{2}=\lambda _{3},\quad \lambda _{3}^{2}=\lambda _{2}.} ThenA[1λ2λ3]=[λ2λ31]=λ2⋅[1λ2λ3],{\displaystyle A{\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}}={\begin{bmatrix}\lambda _{2}\\\lambda _{3}\\1\end{bmatrix}}=\lambda _{2}\cdot {\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}},}andA[1λ3λ2]=[λ3λ21]=λ3⋅[1λ3λ2].{\displaystyle A{\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}={\begin{bmatrix}\lambda _{3}\\\lambda _{2}\\1\end{bmatrix}}=\lambda _{3}\cdot {\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}.} Therefore, the other two eigenvectors ofAare complex and arevλ2=[1λ2λ3]T{\displaystyle \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}1&\lambda _{2}&\lambda _{3}\end{bmatrix}}^{\textsf {T}}}andvλ3=[1λ3λ2]T{\displaystyle \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}1&\lambda _{3}&\lambda _{2}\end{bmatrix}}^{\textsf {T}}}with eigenvaluesλ2andλ3, respectively. The two complex eigenvectors also appear in a complex conjugate pair,vλ2=vλ3∗.{\displaystyle \mathbf {v} _{\lambda _{2}}=\mathbf {v} _{\lambda _{3}}^{*}.} Matrices with entries only along the main diagonal are calleddiagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrixA=[100020003].{\displaystyle A={\begin{bmatrix}1&0&0\\0&2&0\\0&0&3\end{bmatrix}}.} The characteristic polynomial ofAisdet(A−λI)=(1−λ)(2−λ)(3−λ),{\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),} which has the rootsλ1= 1,λ2= 2, andλ3= 3. 
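The cyclic-permutation example can be verified numerically, including its complex eigenvalues (a sketch assuming NumPy):

```python
import numpy as np

# The cyclic permutation matrix from the example above.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                       # 1 and the pair -1/2 +/- i*sqrt(3)/2

# All three eigenvalues are cube roots of unity, so lambda^3 = 1.
assert np.allclose(eigvals**3, 1.0)

# The complex eigenvectors still satisfy A v = lambda v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```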
These roots are the diagonal elements as well as the eigenvalues ofA. Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors,vλ1=[100],vλ2=[010],vλ3=[001],{\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\0\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},} respectively, as well as scalar multiples of these vectors. A matrix whose elements above the main diagonal are all zero is called alowertriangular matrix, while a matrix whose elements below the main diagonal are all zero is called anupper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal. Consider the lower triangular matrix,A=[100120233].{\displaystyle A={\begin{bmatrix}1&0&0\\1&2&0\\2&3&3\end{bmatrix}}.} The characteristic polynomial ofAisdet(A−λI)=(1−λ)(2−λ)(3−λ),{\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),} which has the rootsλ1= 1,λ2= 2, andλ3= 3. These roots are the diagonal elements as well as the eigenvalues ofA. These eigenvalues correspond to the eigenvectors,vλ1=[1−112],vλ2=[01−3],vλ3=[001],{\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\-1\\{\frac {1}{2}}\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\-3\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},} respectively, as well as scalar multiples of these vectors. As in the previous example, the lower triangular matrixA=[2000120001300013],{\displaystyle A={\begin{bmatrix}2&0&0&0\\1&2&0&0\\0&1&3&0\\0&0&1&3\end{bmatrix}},}has a characteristic polynomial that is the product of its diagonal elements,det(A−λI)=|2−λ00012−λ00013−λ00013−λ|=(2−λ)2(3−λ)2.{\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &0&0&0\\1&2-\lambda &0&0\\0&1&3-\lambda &0\\0&0&1&3-\lambda \end{vmatrix}}=(2-\lambda )^{2}(3-\lambda )^{2}.} The roots of this polynomial, and hence the eigenvalues, are 2 and 3. Thealgebraic multiplicityof each eigenvalue is 2; in other words they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues isμA= 4 =n, the order of the characteristic polynomial and the dimension ofA. On the other hand, thegeometric multiplicityof the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector[01−11]T{\displaystyle {\begin{bmatrix}0&1&-1&1\end{bmatrix}}^{\textsf {T}}}and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector[0001]T{\displaystyle {\begin{bmatrix}0&0&0&1\end{bmatrix}}^{\textsf {T}}}. The total geometric multiplicityγAis 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in a later section. 
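The multiplicities in the 4×4 triangular example can be confirmed numerically (a sketch assuming NumPy):

```python
import numpy as np

# The lower triangular matrix from the example above.
A = np.array([[2.0, 0.0, 0.0, 0.0],
              [1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0, 3.0]])
n = A.shape[0]

for lam in (2.0, 3.0):
    # Algebraic multiplicity: count of lam among the eigenvalues.
    mu = int(np.sum(np.isclose(np.linalg.eigvals(A), lam)))
    # Geometric multiplicity: dimension of the nullspace of (A - lam*I).
    gamma = n - np.linalg.matrix_rank(A - lam * np.eye(n))
    print(lam, mu, gamma)            # mu = 2 but gamma = 1 for both eigenvalues
```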
For aHermitian matrix, the norm squared of thejth component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the correspondingminor matrix,|vi,j|2=∏k(λi−λk(Mj))∏k≠i(λi−λk),{\displaystyle |v_{i,j}|^{2}={\frac {\prod _{k}{(\lambda _{i}-\lambda _{k}(M_{j}))}}{\prod _{k\neq i}{(\lambda _{i}-\lambda _{k})}}},}whereMj{\textstyle M_{j}}is thesubmatrixformed by removing thejth row and column from the original matrix.[33][34][35]This identity also extends todiagonalizable matrices, and has been rediscovered many times in the literature.[34][36] The definitions of eigenvalue and eigenvectors of a linear transformationTremains valid even if the underlying vector space is an infinite-dimensionalHilbertorBanach space. A widely used class of linear transformations acting on infinite-dimensional spaces are thedifferential operatorsonfunction spaces. LetDbe a linear differential operator on the spaceC∞of infinitelydifferentiablereal functions of a real argumentt. The eigenvalue equation forDis thedifferential equationDf(t)=λf(t){\displaystyle Df(t)=\lambda f(t)} The functions that satisfy this equation are eigenvectors ofDand are commonly calledeigenfunctions. Consider the derivative operatorddt{\displaystyle {\tfrac {d}{dt}}}with eigenvalue equationddtf(t)=λf(t).{\displaystyle {\frac {d}{dt}}f(t)=\lambda f(t).} This differential equation can be solved by multiplying both sides bydt/f(t) andintegrating. Its solution, theexponential functionf(t)=f(0)eλt,{\displaystyle f(t)=f(0)e^{\lambda t},}is the eigenfunction of the derivative operator. In this case the eigenfunction is itself a function of its associated eigenvalue. In particular, forλ= 0 the eigenfunctionf(t) is a constant. The maineigenfunctionarticle gives other examples. The concept of eigenvalues and eigenvectors extends naturally to arbitrarylinear transformationson arbitrary vector spaces. LetVbe any vector space over somefieldKofscalars, and letTbe a linear transformation mappingVintoV,T:V→V.{\displaystyle T:V\to V.} We say that a nonzero vectorv∈Vis aneigenvectorofTif and only if there exists a scalarλ∈Ksuch that This equation is called the eigenvalue equation forT, and the scalarλis theeigenvalueofTcorresponding to the eigenvectorv.T(v) is the result of applying the transformationTto the vectorv, whileλvis the product of the scalarλwithv.[37][38] Given an eigenvalueλ, consider the setE={v:T(v)=λv},{\displaystyle E=\left\{\mathbf {v} :T(\mathbf {v} )=\lambda \mathbf {v} \right\},} which is the union of the zero vector with the set of all eigenvectors associated withλ.Eis called theeigenspaceorcharacteristic spaceofTassociated withλ.[39] By definition of a linear transformation,T(x+y)=T(x)+T(y),T(αx)=αT(x),{\displaystyle {\begin{aligned}T(\mathbf {x} +\mathbf {y} )&=T(\mathbf {x} )+T(\mathbf {y} ),\\T(\alpha \mathbf {x} )&=\alpha T(\mathbf {x} ),\end{aligned}}} forx,y∈Vandα∈K. Therefore, ifuandvare eigenvectors ofTassociated with eigenvalueλ, namelyu,v∈E, thenT(u+v)=λ(u+v),T(αv)=λ(αv).{\displaystyle {\begin{aligned}T(\mathbf {u} +\mathbf {v} )&=\lambda (\mathbf {u} +\mathbf {v} ),\\T(\alpha \mathbf {v} )&=\lambda (\alpha \mathbf {v} ).\end{aligned}}} So, bothu+vand αvare either zero or eigenvectors ofTassociated withλ, namelyu+v,αv∈E, andEis closed under addition and scalar multiplication. 
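The eigenvector–eigenvalue identity above can be checked numerically. The sketch below (assuming NumPy, with a randomly generated real symmetric matrix and arbitrarily chosen indices i and j) relies on the eigenvalues being distinct, which holds almost surely for a random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                    # random real symmetric (Hermitian) matrix

lam, V = np.linalg.eigh(A)           # eigenvalues (ascending); columns of V are normalized

i, j = 1, 2                          # eigenvalue index i, vector component j
Mj = np.delete(np.delete(A, j, axis=0), j, axis=1)   # minor: drop j-th row and column
mu = np.linalg.eigvalsh(Mj)          # eigenvalues of the minor M_j

lhs = abs(V[j, i]) ** 2
rhs = np.prod(lam[i] - mu) / np.prod(np.delete(lam[i] - lam, i))
assert np.isclose(lhs, rhs)          # |v_{i,j}|^2 equals the ratio of products
```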
The eigenspaceEassociated withλis therefore a linear subspace ofV.[40]If that subspace has dimension 1, it is sometimes called aneigenline.[41] Thegeometric multiplicityγT(λ) of an eigenvalueλis the dimension of the eigenspace associated withλ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue.[9][26][42]By the definition of eigenvalues and eigenvectors,γT(λ) ≥ 1 because every eigenvalue has at least one eigenvector. The eigenspaces ofTalways form adirect sum. As a consequence, eigenvectors ofdifferenteigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimensionnof the vector space on whichToperates, and there cannot be more thanndistinct eigenvalues.[d] Any subspace spanned by eigenvectors ofTis aninvariant subspaceofT, and the restriction ofTto such a subspace is diagonalizable. Moreover, if the entire vector spaceVcan be spanned by the eigenvectors ofT, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues ofTis the entire vector spaceV, then a basis ofVcalled aneigenbasiscan be formed from linearly independent eigenvectors ofT. WhenTadmits an eigenbasis,Tis diagonalizable. Ifλis an eigenvalue ofT, then the operator (T−λI) is notone-to-one, and therefore its inverse (T−λI)−1does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T−λI) may not have an inverse even ifλis not an eigenvalue. For this reason, infunctional analysiseigenvalues can be generalized to thespectrum of a linear operatorTas the set of all scalarsλfor which the operator (T−λI) has noboundedinverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them. One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with analgebra representation– anassociative algebraacting on amodule. The study of such actions is the field ofrepresentation theory. Therepresentation-theoretical concept of weightis an analog of eigenvalues, whileweight vectorsandweight spacesare the analogs of eigenvectors and eigenspaces, respectively. Hecke eigensheafis a tensor-multiple of itself and is considered inLanglands correspondence. The simplestdifference equationshave the form The solution of this equation forxin terms oftis found by using its characteristic equation which can be found by stacking into matrix form a set of equations consisting of the above difference equation and thek– 1 equationsxt−1=xt−1,…,xt−k+1=xt−k+1,{\displaystyle x_{t-1}=x_{t-1},\ \dots ,\ x_{t-k+1}=x_{t-k+1},}giving ak-dimensional system of the first order in the stacked variable vector[xt⋯xt−k+1]{\displaystyle {\begin{bmatrix}x_{t}&\cdots &x_{t-k+1}\end{bmatrix}}}in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation giveskcharacteristic rootsλ1,…,λk,{\displaystyle \lambda _{1},\,\ldots ,\,\lambda _{k},}for use in the solution equation A similar procedure is used for solving adifferential equationof the form The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice. The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetics such asfloating-point. 
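Picking up the difference-equation discussion above: the characteristic roots used in the solution are the eigenvalues of the companion matrix of the stacked first-order system. A sketch (assuming NumPy; the Fibonacci-type coefficients are chosen only for illustration):

```python
import numpy as np

# Difference equation x_t = a1*x_{t-1} + a2*x_{t-2}, written as a first-order
# system in the stacked vector [x_t, x_{t-1}] (Fibonacci-like choice a1 = a2 = 1).
a1, a2 = 1.0, 1.0
C = np.array([[a1, a2],
              [1.0, 0.0]])             # companion matrix of the stacked system

# The characteristic roots used in the closed-form solution are the
# eigenvalues of the companion matrix.
print(np.linalg.eigvals(C))            # the golden ratio and its conjugate root

# Check: iterating the recurrence matches powers of the companion matrix.
x = [1.0, 1.0]
for _ in range(10):
    x.append(a1 * x[-1] + a2 * x[-2])
v0 = np.array([x[1], x[0]])
assert np.isclose((np.linalg.matrix_power(C, 10) @ v0)[0], x[11])
```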
The eigenvalues of a matrixA{\displaystyle A}can be determined by finding the roots of the characteristic polynomial. This is easy for2×2{\displaystyle 2\times 2}matrices, but the difficulty increases rapidly with the size of the matrix. In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any requiredaccuracy.[43]However, this approach is not viable in practice because the coefficients would be contaminated by unavoidableround-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified byWilkinson's polynomial).[43]Even for matrices whose elements are integers the calculation becomes nontrivial, because the sums are very long; the constant term is thedeterminant, which for ann×n{\displaystyle n\times n}matrix is a sum ofn!{\displaystyle n!}different products.[e] Explicitalgebraic formulasfor the roots of a polynomial exist only if the degreen{\displaystyle n}is 4 or less. According to theAbel–Ruffini theoremthere is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. (Generality matters because any polynomial with degreen{\displaystyle n}is the characteristic polynomial of somecompanion matrixof ordern{\displaystyle n}.) Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximatenumerical methods. Even theexact formulafor the roots of a degree 3 polynomial is numerically impractical. Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, that becomes asystem of linear equationswith known coefficients. For example, once it is known that 6 is an eigenvalue of the matrixA=[4163]{\displaystyle A={\begin{bmatrix}4&1\\6&3\end{bmatrix}}} we can find its eigenvectors by solving the equationAv=6v{\displaystyle Av=6v}, that is[4163][xy]=6⋅[xy]{\displaystyle {\begin{bmatrix}4&1\\6&3\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}=6\cdot {\begin{bmatrix}x\\y\end{bmatrix}}} This matrix equation is equivalent to twolinear equations{4x+y=6x6x+3y=6y{\displaystyle \left\{{\begin{aligned}4x+y&=6x\\6x+3y&=6y\end{aligned}}\right.}that is{−2x+y=06x−3y=0{\displaystyle \left\{{\begin{aligned}-2x+y&=0\\6x-3y&=0\end{aligned}}\right.} Both equations reduce to the single linear equationy=2x{\displaystyle y=2x}. Therefore, any vector of the form[a2a]T{\displaystyle {\begin{bmatrix}a&2a\end{bmatrix}}^{\textsf {T}}}, for any nonzero real numbera{\displaystyle a}, is an eigenvector ofA{\displaystyle A}with eigenvalueλ=6{\displaystyle \lambda =6}. The matrixA{\displaystyle A}above has another eigenvalueλ=1{\displaystyle \lambda =1}. A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of3x+y=0{\displaystyle 3x+y=0}, that is, any vector of the form[b−3b]T{\displaystyle {\begin{bmatrix}b&-3b\end{bmatrix}}^{\textsf {T}}}, for any nonzero real numberb{\displaystyle b}. The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. 
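The 2×2 example above can be reproduced with the classical two-step method (a sketch assuming NumPy and SciPy; the loose rcond tolerance accounts for the eigenvalues being known only numerically):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[4.0, 1.0],
              [6.0, 3.0]])

# Step 1: eigenvalues as roots of the characteristic polynomial.
eigenvalues = np.roots(np.poly(A))               # approximately [6., 1.]

# Step 2: for each eigenvalue, nonzero solutions of (A - lam*I) v = 0.
for lam in eigenvalues:
    v = null_space(A - lam * np.eye(2), rcond=1e-10)[:, 0]
    print(lam, v / v[0])                         # directions [1, 2] and [1, -3]
```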
The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it with the matrix (optionally normalizing the vector to keep its elements of reasonable size); this makes the vector converge towards an eigenvector.A variationis to instead multiply the vector by(A−μI)−1{\displaystyle (A-\mu I)^{-1}};this causes it to converge to an eigenvector of the eigenvalue closest toμ∈C{\displaystyle \mu \in \mathbb {C} }. Ifv{\displaystyle \mathbf {v} }is (a good approximation of) an eigenvector ofA{\displaystyle A}, then the corresponding eigenvalue can be computed as wherev∗{\displaystyle \mathbf {v} ^{*}}denotes theconjugate transposeofv{\displaystyle \mathbf {v} }. Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until theQR algorithmwas designed in 1961.[43]Combining theHouseholder transformationwith the LU decomposition results in an algorithm with better convergence than the QR algorithm.[citation needed]For largeHermitiansparse matrices, theLanczos algorithmis one example of an efficientiterative methodto compute eigenvalues and eigenvectors, among several other possibilities.[43] Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed. Eigenvectors and eigenvalues can be useful for understanding linear transformations of geometric shapes. The following table presents some example transformations in the plane along with their 2×2 matrices, eigenvalues, and eigenvectors. The characteristic equation for a rotation is aquadratic equationwithdiscriminantD=−4(sin⁡θ)2{\displaystyle D=-4(\sin \theta )^{2}}, which is a negative number wheneverθis not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers,cos⁡θ±isin⁡θ{\displaystyle \cos \theta \pm i\sin \theta }; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane. A linear transformation that takes a square to a rectangle of the same area (asqueeze mapping) has reciprocal eigenvalues. Theeigendecompositionof asymmetricpositive semidefinite(PSD)matrixyields anorthogonal basisof eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used inmultivariate analysis, where thesamplecovariance matricesare PSD. This orthogonal decomposition is calledprincipal component analysis(PCA) in statistics. PCA studieslinear relationsamong variables. PCA is performed on thecovariance matrixor thecorrelation matrix(in which each variable is scaled to have itssample varianceequal to one). For the covariance or correlation matrix, the eigenvectors correspond toprincipal componentsand the eigenvalues to thevariance explainedby the principal components. Principal component analysis of the correlation matrix provides anorthogonal basisfor the space of the observed data: In this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data. Principal component analysis is used as a means ofdimensionality reductionin the study of largedata sets, such as those encountered inbioinformatics. 
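A sketch of the iterative approach described at the start of this passage: power iteration with normalization, followed by a Rayleigh-quotient estimate of the eigenvalue (NumPy assumed; the helper name power_iteration and the test matrix are my own):

```python
import numpy as np

def power_iteration(A, num_iters=200, seed=0):
    """Repeatedly multiply a random start vector by A, normalizing each time;
    for a dominant eigenvalue the vector converges to an eigenvector."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(num_iters):
        v = A @ v
        v /= np.linalg.norm(v)
    # Rayleigh quotient v* A v / v* v gives the corresponding eigenvalue.
    lam = (v.conj() @ A @ v) / (v.conj() @ v)
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_iteration(A)
print(lam, v)      # approximately 3 and a multiple of [1, 1]
```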
InQ methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment ofpracticalsignificance (which differs from thestatistical significanceofhypothesis testing; cf.criteria for determining the number of factors). More generally, principal component analysis can be used as a method offactor analysisinstructural equation modeling. Inspectral graph theory, an eigenvalue of agraphis defined as an eigenvalue of the graph'sadjacency matrixA{\displaystyle A}, or (increasingly) of the graph'sLaplacian matrixdue to itsdiscrete Laplace operator, which is eitherD−A{\displaystyle D-A}(sometimes called thecombinatorial Laplacian) orI−D−1/2AD−1/2{\displaystyle I-D^{-1/2}AD^{-1/2}}(sometimes called thenormalized Laplacian), whereD{\displaystyle D}is a diagonal matrix withDii{\displaystyle D_{ii}}equal to the degree of vertexvi{\displaystyle v_{i}}, and inD−1/2{\displaystyle D^{-1/2}}, thei{\displaystyle i}th diagonal entry is1/deg⁡(vi){\textstyle 1/{\sqrt {\deg(v_{i})}}}. Thek{\displaystyle k}th principal eigenvector of a graph is defined as either the eigenvector corresponding to thek{\displaystyle k}th largest ork{\displaystyle k}th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector. The principal eigenvector is used to measure thecentralityof its vertices. An example isGoogle'sPageRankalgorithm. The principal eigenvector of a modifiedadjacency matrixof the World Wide Web graph gives the page ranks as its components. This vector corresponds to thestationary distributionof theMarkov chainrepresented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, viaspectral clustering. Other methods are also available for clustering. AMarkov chainis represented by a matrix whose entries are thetransition probabilitiesbetween states of a system. In particular the entries are non-negative, and every row of the matrix sums to one, being the sum of probabilities of transitions from one state to some other state of the system. ThePerron–Frobenius theoremgives sufficient conditions for a Markov chain to have a unique dominant eigenvalue, which governs the convergence of the system to a steady state. Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with manydegrees of freedom. The eigenvalues are thenatural frequencies(oreigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed bymx¨+kx=0{\displaystyle m{\ddot {x}}+kx=0}ormx¨=−kx{\displaystyle m{\ddot {x}}=-kx} That is, acceleration is proportional to position (i.e., we expectx{\displaystyle x}to be sinusoidal in time). Inn{\displaystyle n}dimensions,m{\displaystyle m}becomes amass matrixandk{\displaystyle k}astiffness matrix. Admissible solutions are then a linear combination of solutions to thegeneralized eigenvalue problemkx=ω2mx{\displaystyle kx=\omega ^{2}mx}whereω2{\displaystyle \omega ^{2}}is the eigenvalue andω{\displaystyle \omega }is the (imaginary)angular frequency. The principalvibration modesare different from the principal compliance modes, which are the eigenvectors ofk{\displaystyle k}alone. 
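A minimal sketch of the undamped-vibration generalized eigenproblem k x = ω² m x just described (SciPy assumed; the two-degree-of-freedom mass and stiffness values are invented for illustration):

```python
import numpy as np
from scipy.linalg import eigh

# Mass and stiffness matrices for undamped vibration m x'' + k x = 0.
m = np.diag([1.0, 2.0])
k = np.array([[ 3.0, -1.0],
              [-1.0,  2.0]])

# Generalized symmetric eigenproblem k x = omega^2 m x.
omega_sq, modes = eigh(k, m)
omega = np.sqrt(omega_sq)

print(omega)        # natural frequencies (eigenfrequencies)
print(modes)        # columns are the corresponding vibration mode shapes
```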
Furthermore,damped vibration, governed bymx¨+cx˙+kx=0{\displaystyle m{\ddot {x}}+c{\dot {x}}+kx=0}leads to a so-calledquadratic eigenvalue problem,(ω2m+ωc+k)x=0.{\displaystyle \left(\omega ^{2}m+\omega c+k\right)x=0.} This can be reduced to a generalized eigenvalue problem byalgebraic manipulationat the cost of solving a larger system. The orthogonality properties of the eigenvectors allows decoupling of thedifferential equationsso that the system can be represented as linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved usingfinite element analysis, but neatly generalize the solution to scalar-valued vibration problems. Inmechanics, the eigenvectors of themoment of inertia tensordefine theprincipal axesof arigid body. Thetensorof moment ofinertiais a key quantity required to determine the rotation of a rigid body around itscenter of mass. Insolid mechanics, thestresstensor is symmetric and so can be decomposed into adiagonaltensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has noshearcomponents; the components it does have are the principal components. An example of an eigenvalue equation where the transformationT{\displaystyle T}is represented in terms of a differential operator is the time-independentSchrödinger equationinquantum mechanics: whereH{\displaystyle H}, theHamiltonian, is a second-orderdifferential operatorandψE{\displaystyle \psi _{E}}, thewavefunction, is one of its eigenfunctions corresponding to the eigenvalueE{\displaystyle E}, interpreted as itsenergy. However, in the case where one is interested only in thebound statesolutions of the Schrödinger equation, one looks forψE{\displaystyle \psi _{E}}within the space ofsquare integrablefunctions. Since this space is aHilbert spacewith a well-definedscalar product, one can introduce abasis setin whichψE{\displaystyle \psi _{E}}andH{\displaystyle H}can be represented as a one-dimensional array (i.e., a vector) and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form. Thebra–ket notationis often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by|ΨE⟩{\displaystyle |\Psi _{E}\rangle }. In this notation, the Schrödinger equation is: where|ΨE⟩{\displaystyle |\Psi _{E}\rangle }is aneigenstateofH{\displaystyle H}andE{\displaystyle E}represents the eigenvalue.H{\displaystyle H}is anobservableself-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation aboveH|ΨE⟩{\displaystyle H|\Psi _{E}\rangle }is understood to be the vector obtained by application of the transformationH{\displaystyle H}to|ΨE⟩{\displaystyle |\Psi _{E}\rangle }. Light,acoustic waves, andmicrowavesare randomlyscatterednumerous times when traversing a staticdisordered system. Even though multiple scattering repeatedly randomizes the waves, ultimately coherent wave transport through the system is a deterministic process which can be described by a field transmission matrixt{\displaystyle \mathbf {t} }.[44][45]The eigenvectors of the transmission operatort†t{\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} }form a set of disorder-specific input wavefronts which enable waves to couple into the disordered system's eigenchannels: the independent pathways waves can travel through the system. 
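The reduction of the damped-vibration quadratic eigenvalue problem to a larger generalized eigenvalue problem, mentioned above, can be sketched with a companion linearization (SciPy assumed; the mass, damping, and stiffness matrices are invented for illustration):

```python
import numpy as np
from scipy.linalg import eig

# Damped two-degree-of-freedom system: (omega^2 m + omega c + k) x = 0.
m = np.diag([1.0, 2.0])
c = np.array([[ 0.4, -0.1],
              [-0.1,  0.3]])
k = np.array([[ 3.0, -1.0],
              [-1.0,  2.0]])
n = m.shape[0]
I = np.eye(n)
Z = np.zeros((n, n))

# Companion linearization: A z = omega B z with z = [x, omega*x].
A = np.block([[ Z,  I],
              [-k, -c]])
B = np.block([[ I,  Z],
              [ Z,  m]])

omega, Zvec = eig(A, B)
print(omega)                 # complex eigenvalues of the damped system
x_modes = Zvec[:n, :]        # the top block of each eigenvector recovers x
```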
The eigenvalues,τ{\displaystyle \tau }, oft†t{\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} }correspond to the intensity transmittance associated with each eigenchannel. One of the remarkable properties of the transmission operator of diffusive systems is their bimodal eigenvalue distribution withτmax=1{\displaystyle \tau _{\max }=1}andτmin=0{\displaystyle \tau _{\min }=0}.[45]Furthermore, one of the striking properties of open eigenchannels, beyond the perfect transmittance, is the statistically robust spatial profile of the eigenchannels.[46] Inquantum mechanics, and in particular inatomicandmolecular physics, within theHartree–Focktheory, theatomicandmolecular orbitalscan be defined by the eigenvectors of theFock operator. The corresponding eigenvalues are interpreted asionization potentialsviaKoopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by aniterationprocedure, called in this caseself-consistent fieldmethod. Inquantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonalbasis set. This particular representation is ageneralized eigenvalue problemcalledRoothaan equations. Ingeology, especially in the study ofglacial till, eigenvectors and eigenvalues are used as a method by which a mass of information of aclast'sfabriccan be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can be compared graphically or as astereographic projection. Graphically, many geologists use a Tri-Plot (Sneed and Folk) diagram,.[47][48]A stereographic projection projects 3-dimensional spaces onto a two-dimensional plane. A type of stereographic projection is Wulff Net, which is commonly used incrystallographyto createstereograms.[49] The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are orderedv1,v2,v3{\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},\mathbf {v} _{3}}by their eigenvaluesE1≥E2≥E3{\displaystyle E_{1}\geq E_{2}\geq E_{3}};[50]v1{\displaystyle \mathbf {v} _{1}}then is the primary orientation/dip of clast,v2{\displaystyle \mathbf {v} _{2}}is the secondary andv3{\displaystyle \mathbf {v} _{3}}is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on acompass roseof360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values ofE1{\displaystyle E_{1}},E2{\displaystyle E_{2}}, andE3{\displaystyle E_{3}}are dictated by the nature of the sediment's fabric. IfE1=E2=E3{\displaystyle E_{1}=E_{2}=E_{3}}, the fabric is said to be isotropic. IfE1=E2>E3{\displaystyle E_{1}=E_{2}>E_{3}}, the fabric is said to be planar. IfE1>E2>E3{\displaystyle E_{1}>E_{2}>E_{3}}, the fabric is said to be linear.[51] The basic reproduction number (R0{\displaystyle R_{0}}) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, thenR0{\displaystyle R_{0}}is the average number of people that one typical infectious person will infect. 
The generation time of an infection is the time, tG{\displaystyle t_{G}}, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time tG{\displaystyle t_{G}} has passed. The value R0{\displaystyle R_{0}} is then the largest eigenvalue of the next generation matrix.[52][53] In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel.[54] The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal component analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research on eigen vision systems for recognizing hand gestures has also been carried out. Similarly, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation.
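A minimal numerical sketch of the next-generation-matrix calculation described above (NumPy assumed; the 2×2 matrix K and its entries are hypothetical):

```python
import numpy as np

# Hypothetical next-generation matrix for a two-group population:
# K[i, j] = expected number of new infections in group i caused by one
# infected individual in group j.
K = np.array([[1.2, 0.4],
              [0.3, 0.8]])

# The basic reproduction number R0 is the largest eigenvalue (spectral
# radius) of the next-generation matrix.
R0 = max(abs(np.linalg.eigvals(K)))
print(R0)
```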
https://en.wikipedia.org/wiki/Eigenvalue,_eigenvector_and_eigenspace
Ordination or gradient analysis, in multivariate analysis, is a method complementary to data clustering, and used mainly in exploratory data analysis (rather than in hypothesis testing). In contrast to cluster analysis, ordination orders quantities in a (usually lower-dimensional) latent space. In the ordination space, quantities that are near each other share attributes (i.e., are similar to some degree), and dissimilar objects are farther from each other. Such relationships between the objects, on each of several axes or latent variables, are then characterized numerically and/or graphically in a biplot. The first ordination method, principal components analysis, was suggested by Karl Pearson in 1901. Ordination methods can broadly be categorized into eigenvector-, algorithm-, or model-based methods. Many classical ordination techniques, including principal components analysis, correspondence analysis (CA) and its derivatives (detrended correspondence analysis, canonical correspondence analysis, and redundancy analysis), belong to the first group. The second group includes some distance-based methods such as non-metric multidimensional scaling, and machine learning methods such as T-distributed stochastic neighbor embedding and nonlinear dimensionality reduction. The third group includes model-based ordination methods, which can be considered as multivariate extensions of Generalized Linear Models.[1][2][3][4] Model-based ordination methods are more flexible in their application than classical ordination methods, making it possible, for example, to include random effects.[5] Unlike in the aforementioned two groups, there is no (implicit or explicit) distance measure in the ordination. Instead, a distribution needs to be specified for the responses, as is typical for statistical models. These and other assumptions, such as the assumed mean-variance relationship, can be validated with residual diagnostics, unlike in other ordination methods. Ordination can be used in the analysis of any set of multivariate objects. It is frequently used in several environmental or ecological sciences, particularly plant community ecology. It is also used in genetics and systems biology for microarray data analysis, and in psychometrics.
https://en.wikipedia.org/wiki/Ordination_(statistics)
Inarchaeology, seriation is arelative datingmethod in whichassemblagesorartifactsfrom numerous sites in the same culture are placed in chronological order. Whereabsolute datingmethods, such as radio carbon, cannot be applied, archaeologists have to userelative datingmethods to date archaeological finds and features. Seriation is a standard method of dating in archaeology. It can be used to date stone tools, pottery fragments, and other artifacts. In Europe, it has been used frequently to reconstruct the chronological sequence of graves in a cemetery (e.g. Jørgensen 1992;[1]Müssemeier, Nieveler et al. 2003).[2] Two different variants of seriation have been applied: contextual seriation and frequency seriation (Renfrew and Bahn 1996, pp. 116–117). Whereas contextual seriation is based on the presence or absence of adesign style, frequency seriation relies on measuring the proportional abundance or frequency of a design style. Contextual seriation is often used for reconstructing the chronological sequence of graves as only the presence or absence of a design style or type is important. Frequency seriation is applied in cases of large quantities of objects belonging to the same style. An example of this being assemblages of pottery shards that include roughly the same range of types, though in different proportions. Flinders Petrieexcavated atDiospolis ParvainEgyptin the late nineteenth century. He found that the graves he was uncovering contained no evidence of their dates and their discrete nature meant that a sequence could not be constructed through theirstratigraphy. Petrie listed the contents of each grave on a strip of cardboard and swapped the papers around until he arrived at a sequence he was satisfied with.[3]He reasoned that the most accurate sequence would be the one where concentrations of certain design styles had the shortest duration across the sequence of papers (Renfrew and Bahn 1996, p. 116; Kendall 1971, p. 215; Shennan 1997, p. 341[4]). Whereas Petrie is considered the inventor of contextual seriation, Brainerd (1951)[5]and Robinson (1951)[6]were the first to address the problem of frequency seriation (Shennan 1997, p. 342[4]). The assumption that design styles follow a bell curve of popularity – starting slowly, growing to a peak and then dying away as another style becomes popular – provides the basis for frequency seriation. It also assumes that design popularity will be broadly similar from site to site within the sameculture. In addition, it is vital that the lifespans of the different design styles overlap. Following these rules, anassemblageof objects can be placed into sequence so that sites with the most similar proportions of certain styles are always together (Lock 2003, p. 125). The task of identifying design styles i.e. to form groups of objects belonging to the same design style is by no means trivial. Creating atypologyfrequently is the basis of a seriation. Errors in typology result in errors in seriation: For example, if a certain design style had two peaks in popularity (bimodal distribution), this design style is not appropriate for seriation and its inclusion in the analysis may result in strange results. Some design styles were used for a very long time as the shape constructed was handy and no improvement or ornament was added. Of course, these design styles are not eligible for chronological seriation. For example, knives in early medieval times in Europe are said to show no chronological variation. 
In addition to temporal organization, seriation results may reflect assemblage differences in social status, age, sex or those resulting from regional variation (or a combination of two or more of these factors). Shennan (1997, p. 343)[4]presents a seriation result of Danishhoardsbased on artefact types like daggers, axes, and swords. The result is not a chronological sequence due to the selection of types, the ordering seems to start with extremely male hoards and ends with extremely female ones. Doran and Hodson (1975, p. 269)[7]list three conditions that must be satisfied to obtain a chronological seriation result: Nowadays, seriation results are no longer produced manually as in Petrie's times but by appropriate algorithms. Though according toDavid George Kendall(1971), Petrie's paper showed already a deep understanding of the mathematics of the seriation problem (Quote: "..in my view Petrie should be ranked with the greatest applied mathematicians of the nineteenth century"). In Baxter's (2003, p. 8) list of landmarks of statistics in archaeology the paper of Robinson (1951)[6]is the first entry. Robinson based his frequency seriation method on asimilarity matrix. In 1971, Kendall proposed the use ofmultidimensional scalingtechniques for seriation problems, and this approach has also been used by some other scientists (see Baxter 2003, pp. 202–203). Baxter also presents a review of statistical methods for seriation and a description of these approaches (pp. 202–207). In 1975, Doran and Hodson (pp. 269–281)[7]summarized the state of the art of seriation methods thoroughly, giving detailed descriptions of Kendall's and Robinson's approaches. Today, the most popular seriation method both for contextual and frequency problems is based oncorrespondence analysis. The sequence of the first axis of a correspondence analysis is considered the best seriation order (Shennan 1997,[4]p. 342; Lock 2003, p. 127; Jensen & Høilund Nielsen 1997). Using this technique, not only the sequence of the objects but also those of the design styles is established. Note that external evidence is needed to establish the direction of the sequence calculated, i.e. the method does not tell whether the first object in the sequence is the oldest or the youngest object. Kendall (1971) appliedmultidimensional scalingto the cemetery data of Münsingen. The resultingscatterplotshowed the form of a horse-shoe where the graves were arranged on the curve according to their chronological order. Similarly, a mapping of the component scores for the first two axes of thecorrespondence analysisresult will display aparabolaif the design styles considered are controlled by one factor only (like chronology). This is called thearch effectby Hill and Gauch (1980).[8]Both Kendall and Jensen & Høilund Nielsen (1997) created artificial data sets to show that the parabola results in ideal circumstances. Therefore, it is recommended inspecting the scatterplot of the first two axes ofcorrespondence analysisto find out if other factors play a role as well (see Examples 2 and 3). If more than one factor is important, the arch effect may distort the results. Hill and Gauch (1980) presented a method to remove this effect. In 2003, Groenen and Poblome adapted the correspondence analysis algorithm to combine seriation with absolute dates and stratigraphic relationships.[9][10] The small example below was inspired by Flinders Petrie's serial ordering of Egyptian pottery as published by Renfrew and Bahn (1996, p. 117). 
The raw data are stored in an unsorted binarycontingency tableindicating which design style can be found in which context by a star symbol. For example, consider the first column: context 3 contains the design stylesblackrim,bottle, andhandle. Abeakeris contained in contexts 1 and 2. Contextual seriation sorts the design styles and the contexts in such a way that the star symbols are found as close as possible to the diagonal of the table. Of course, for a small examples like this, no computer programs are needed to find the best ordering, but for larger data sets like the 900 graves studied by Petrie they are extremely helpful. The data presented in this example wassimulatedby WinBasp. Initially 60 contexts (called units in WinBasp) were created along with 50 types. The contexts were labeled in chronological order by numbers 01 to 60, the types are labeled in the form T00001 to T00050. If a type is represented by one object only this object is not relevant for the chronological sequence as it does not provide a link to another context. Similarly, contexts containing one object only are irrelevant for seriation. Therefore, the contexts with one or no object and types represented by one object or not at all were eliminated. The resulting raw simulated data consisting of 43 contexts and 34 types are shown on the left. As expected, the dots indicating the occurrence of a type in a context are close to the diagonal of the table. The image on the right hand side shows the result of the seriation for this data set. Note that the dots are even more compact along the diagonal of the table compared to the raw data. This shows a minor problem of seriation: In fact, the intervals of production may be somewhat longer than those calculated by the algorithm. In general, the sequences of contexts and types calculated by a seriation algorithm are not the correct chronological sequences but they are fairly close. The image above shows the scatterplot with the typical parabola shape of the first two axes of acorrespondence analysisfor the contexts of the simulated data set. The contingency table shows 29 contexts with ideal seriation data as created by Kendall and Jensen & Høilund Nielsen (see above). With each new context a new type appears and another type disappears. For this regular data, it seems reasonable to assume constant time intervals for contexts adjacent in time. The correspondence analysis results shown in the figures below were calculated on the basis of 49 contexts with ideal seriation data. The scatterplot of the first two correspondence analysis axes shows the typical parabola shape. The display of the scores on the first and the third axes exhibits points lying on a third degreepolynomial curve. Similarly, the plot of the scores on the first and the fourth axes will show a fourth degree polynomial for ideal data – and so on. Note that the distances of the scores for adjacent contexts on the first axis vary: At the beginning and the end, the distances are extremely small, the largest distances in the centre is about 30 times as large as the smallest distance. Hill and Gauch (1979)[8]created a similar contingency table with a regular structure with each context containing six types. They note, too, that the within-context distances are smaller at the ends than in the middle. This was one of the reasons why they proposed an adjustment which is calleddetrended correspondence analysis. 
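A sketch of seriation by correspondence analysis as described above: take the SVD of the standardized residuals of the contingency table and sort contexts and design styles by their scores on the first axis (NumPy assumed; the small presence/absence table is invented and deliberately shuffled):

```python
import numpy as np

# Toy presence/absence table (rows = contexts, columns = design styles).
N = np.array([[0, 1, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 1, 1, 0],
              [1, 0, 0, 0, 0],
              [0, 0, 0, 1, 1]], dtype=float)

# Correspondence analysis: SVD of the standardized residuals of the table.
P = N / N.sum()
r = P.sum(axis=1)                      # row masses
c = P.sum(axis=0)                      # column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sing, Vt = np.linalg.svd(S)

# Scores on the first CA axis; sorting contexts (and styles) by these
# scores gives the seriation order, up to overall direction.
row_scores = U[:, 0] / np.sqrt(r)
col_scores = Vt[0, :] / np.sqrt(c)
print(np.argsort(row_scores))          # proposed ordering of the contexts
print(np.argsort(col_scores))          # proposed ordering of the design styles
```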
Nevertheless, some archaeologists think that a linear transformation of the scores on the first axis, on the basis of some known absolute dates, will create good estimates for the unknown absolute dates; this approach is the basis of the method presented by Groenen and Poblome (see above) to combine relative and absolute dates. This ideal example shows that a linear transformation might not be appropriate in all cases, though a simulation study by van de Velden, Groenen and Poblome comes to the conclusion that the predictions of the approach are quite good.[11] The archaeological sequence (or sequence, for short) on a specific archaeological site can be defined on two levels of rigour.
https://en.wikipedia.org/wiki/Seriation_(archaeology)
Principal component analysis(PCA) is alineardimensionality reductiontechnique with applications inexploratory data analysis, visualization anddata preprocessing. The data islinearly transformedonto a newcoordinate systemsuch that the directions (principal components) capturing the largest variation in the data can be easily identified. Theprincipal componentsof a collection of points in areal coordinate spaceare a sequence ofp{\displaystyle p}unit vectors, where thei{\displaystyle i}-th vector is the direction of a line that best fits the data while beingorthogonalto the firsti−1{\displaystyle i-1}vectors. Here, a best-fitting line is defined as one that minimizes the average squaredperpendiculardistance from the points to the line. These directions (i.e., principal components) constitute anorthonormal basisin which different individual dimensions of the data arelinearly uncorrelated. Many studies use the first two principal components in order to plot the data in two dimensions and to visually identify clusters of closely related data points.[1] Principal component analysis has applications in many fields such aspopulation genetics,microbiomestudies, andatmospheric science.[2] When performing PCA, the first principal component of a set ofp{\displaystyle p}variables is the derived variable formed as a linear combination of the original variables that explains the most variance. The second principal component explains the most variance in what is left once the effect of the first component is removed, and we may proceed throughp{\displaystyle p}iterations until all the variance is explained. PCA is most commonly used when many of the variables are highly correlated with each other and it is desirable to reduce their number to anindependent set. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. Thei{\displaystyle i}-th principal component can be taken as a direction orthogonal to the firsti−1{\displaystyle i-1}principal components that maximizes the variance of the projected data. For either objective, it can be shown that the principal components areeigenvectorsof the data'scovariance matrix. Thus, the principal components are often computed byeigendecompositionof the data covariance matrix orsingular value decompositionof the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related tofactor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related tocanonical correlation analysis (CCA). 
CCA defines coordinate systems that optimally describe thecross-covariancebetween two datasets while PCA defines a neworthogonal coordinate systemthat optimally describes variance in a single dataset.[3][4][5][6]RobustandL1-norm-based variants of standard PCA have also been proposed.[7][8][9][6] PCA was invented in 1901 byKarl Pearson,[10]as an analogue of theprincipal axis theoremin mechanics; it was later independently developed and named byHarold Hotellingin the 1930s.[11]Depending on the field of application, it is also named the discreteKarhunen–Loèvetransform (KLT) insignal processing, theHotellingtransform in multivariate quality control,proper orthogonal decomposition(POD) in mechanical engineering,singular value decomposition(SVD) ofX(invented in the last quarter of the 19th century[12]),eigenvalue decomposition(EVD) ofXTXin linear algebra,factor analysis(for a discussion of the differences between PCA and factor analysis see Ch. 7 of Jolliffe'sPrincipal Component Analysis),[13]Eckart–Young theorem(Harman, 1960), orempirical orthogonal functions(EOF) in meteorological science (Lorenz, 1956), empirical eigenfunction decomposition (Sirovich, 1987), quasiharmonic modes (Brooks et al., 1988),spectral decompositionin noise and vibration, andempirical modal analysisin structural dynamics. PCA can be thought of as fitting ap-dimensionalellipsoidto the data, where each axis of the ellipsoid represents a principal component. If some axis of the ellipsoid is small, then the variance along that axis is also small. To find the axes of the ellipsoid, we must first center the values of each variable in the dataset on 0 by subtracting the mean of the variable's observed values from each of those values. These transformed values are used instead of the original observed values for each of the variables. Then, we compute thecovariance matrixof the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we mustnormalizeeach of the orthogonal eigenvectors to turn them into unit vectors. Once this is done, each of the mutually-orthogonal unit eigenvectors can be interpreted as an axis of the ellipsoid fitted to the data. This choice of basis will transform the covariance matrix into a diagonalized form, in which the diagonal elements represent the variance of each axis. The proportion of the variance that each eigenvector represents can be calculated by dividing the eigenvalue corresponding to that eigenvector by the sum of all eigenvalues. Biplotsandscree plots(degree ofexplained variance) are used to interpret findings of the PCA. PCA is defined as anorthogonallinear transformationon a realinner product spacethat transforms the data to a newcoordinate systemsuch that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.[13] Consider ann×p{\displaystyle n\times p}datamatrix,X, with column-wise zeroempirical mean(the sample mean of each column has been shifted to zero), where each of thenrows represents a different repetition of the experiment, and each of thepcolumns gives a particular kind of feature (say, the results from a particular sensor). 
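A compact sketch of the procedure described above, computing principal components by eigendecomposition of the sample covariance matrix (NumPy assumed; the correlated toy data matrix is generated only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3)) @ np.array([[3.0, 0.0, 0.0],
                                              [1.0, 1.0, 0.0],
                                              [0.5, 0.2, 0.1]])   # correlated toy data

# Center each column (zero empirical mean), then diagonalize the covariance.
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (len(Xc) - 1)            # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)     # ascending order for a symmetric matrix

# Sort descending: eigenvectors = principal axes, eigenvalues = their variances.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()      # proportion of variance per component
T = Xc @ eigvecs                         # principal component scores
print(explained)
print(np.round(np.cov(T.T), 6))          # near-diagonal: scores are uncorrelated
```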
Mathematically, the transformation is defined by a set of sizel{\displaystyle l}ofp-dimensional vectors of weights or coefficientsw(k)=(w1,…,wp)(k){\displaystyle \mathbf {w} _{(k)}=(w_{1},\dots ,w_{p})_{(k)}}that map each row vectorx(i)=(x1,…,xp)(i){\displaystyle \mathbf {x} _{(i)}=(x_{1},\dots ,x_{p})_{(i)}}ofXto a new vector of principal componentscorest(i)=(t1,…,tl)(i){\displaystyle \mathbf {t} _{(i)}=(t_{1},\dots ,t_{l})_{(i)}}, given by in such a way that the individual variablest1,…,tl{\displaystyle t_{1},\dots ,t_{l}}oftconsidered over the data set successively inherit the maximum possible variance fromX, with each coefficient vectorwconstrained to be aunit vector(wherel{\displaystyle l}is usually selected to be strictly less thanp{\displaystyle p}to reduce dimensionality). The above may equivalently be written in matrix form as whereTik=tk(i){\displaystyle {\mathbf {T} }_{ik}={t_{k}}_{(i)}},Xij=xj(i){\displaystyle {\mathbf {X} }_{ij}={x_{j}}_{(i)}}, andWjk=wj(k){\displaystyle {\mathbf {W} }_{jk}={w_{j}}_{(k)}}. In order to maximize variance, the first weight vectorw(1)thus has to satisfy Equivalently, writing this in matrix form gives Sincew(1)has been defined to be a unit vector, it equivalently also satisfies The quantity to be maximised can be recognised as aRayleigh quotient. A standard result for apositive semidefinite matrixsuch asXTXis that the quotient's maximum possible value is the largesteigenvalueof the matrix, which occurs whenwis the correspondingeigenvector. Withw(1)found, the first principal component of a data vectorx(i)can then be given as a scoret1(i)=x(i)⋅w(1)in the transformed co-ordinates, or as the corresponding vector in the original variables, {x(i)⋅w(1)}w(1). Thek-th component can be found by subtracting the firstk− 1 principal components fromX: and then finding the weight vector which extracts the maximum variance from this new data matrix It turns out that this gives the remaining eigenvectors ofXTX, with the maximum values for the quantity in brackets given by their corresponding eigenvalues. Thus the weight vectors are eigenvectors ofXTX. Thek-th principal component of a data vectorx(i)can therefore be given as a scoretk(i)=x(i)⋅w(k)in the transformed coordinates, or as the corresponding vector in the space of the original variables, {x(i)⋅w(k)}w(k), wherew(k)is thekth eigenvector ofXTX. The full principal components decomposition ofXcan therefore be given as whereWis ap-by-pmatrix of weights whose columns are the eigenvectors ofXTX. The transpose ofWis sometimes called thewhitening or sphering transformation. Columns ofWmultiplied by the square root of corresponding eigenvalues, that is, eigenvectors scaled up by the variances, are calledloadingsin PCA or in Factor analysis. XTXitself can be recognized as proportional to the empirical samplecovariance matrixof the datasetXT.[13]: 30–31 The sample covarianceQbetween two of the different principal components over the dataset is given by: where the eigenvalue property ofw(k)has been used to move from line 2 to line 3. However eigenvectorsw(j)andw(k)corresponding to eigenvalues of a symmetric matrix are orthogonal (if the eigenvalues are different), or can be orthogonalised (if the vectors happen to share an equal repeated value). The product in the final line is therefore zero; there is no sample covariance between different principal components over the dataset. 
Another way to characterise the principal components transformation is therefore as the transformation to coordinates which diagonalise the empirical sample covariance matrix. In matrix form, the empirical covariance matrix for the original variables can be written The empirical covariance matrix between the principal components becomes whereΛis the diagonal matrix of eigenvaluesλ(k)ofXTX.λ(k)is equal to the sum of the squares over the dataset associated with each componentk, that is,λ(k)= Σitk2(i)= Σi(x(i)⋅w(k))2. The transformationP=XWmaps a data vectorx(i)from an original space ofxvariables to a new space ofpvariables which are uncorrelated over the dataset. To non-dimensionalize the centered data, letXcrepresent the characteristic values of data vectorsXi, given by: for a dataset of sizen. These norms are used to transform the original space of variablesx, yto a new space of uncorrelated variablesp, q(givenYcwith same meaning), such thatpi=XiXc,qi=YiYc{\displaystyle p_{i}={\frac {X_{i}}{X_{c}}},\quad q_{i}={\frac {Y_{i}}{Y_{c}}}}; and the new variables are linearly related as:q=αp{\displaystyle q=\alpha p}. To find the optimal linear relationship, we minimize the total squared reconstruction error:E(α)=11−α2∑i=1n(αpi−qi)2{\displaystyle E(\alpha )={\frac {1}{1-\alpha ^{2}}}\sum _{i=1}^{n}(\alpha p_{i}-q_{i})^{2}}; such that setting the derivative of the error function to zero(E′(α)=0){\displaystyle (E'(\alpha )=0)}yields:α=12(−λ±λ2+4){\displaystyle \alpha ={\frac {1}{2}}\left(-\lambda \pm {\sqrt {\lambda ^{2}+4}}\right)}whereλ=p⋅p−q⋅qp⋅q{\displaystyle \lambda ={\frac {p\cdot p-q\cdot q}{p\cdot q}}}.[14] Suchdimensionality reductioncan be a very useful step for visualising and processing high-dimensional datasets, while still retaining as much of the variance in the dataset as possible. For example, selectingL= 2 and keeping only the first two principal components finds the two-dimensional plane through the high-dimensional dataset in which the data is most spread out, so if the data containsclustersthese too may be most spread out, and therefore most visible to be plotted out in a two-dimensional diagram; whereas if two directions through the data (or two of the original variables) are chosen at random, the clusters may be much less spread apart from each other, and may in fact be much more likely to substantially overlay each other, making them indistinguishable. Similarly, inregression analysis, the larger the number ofexplanatory variablesallowed, the greater is the chance ofoverfittingthe model, producing conclusions that fail to generalise to other datasets. One approach, especially when there are strong correlations between different possible explanatory variables, is to reduce them to a few principal components and then run the regression against them, a method calledprincipal component regression. Dimensionality reduction may also be appropriate when the variables in a dataset are noisy. If each column of the dataset contains independent identically distributed Gaussian noise, then the columns ofTwill also contain similarly identically distributed Gaussian noise (such a distribution is invariant under the effects of the matrixW, which can be thought of as a high-dimensional rotation of the co-ordinate axes). However, with more of the total variance concentrated in the first few principal components compared to the same noise variance, the proportionate effect of the noise is less—the first few components achieve a highersignal-to-noise ratio. 
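A minimal sketch, assuming NumPy and synthetic two-cluster data, of keeping only the first L = 2 components and measuring how much of the total variance is retained:

```python
import numpy as np

rng = np.random.default_rng(2)
# two clusters separated along a hidden direction in 10 dimensions
X = np.vstack([rng.normal(0, 1, size=(100, 10)),
               rng.normal(3, 1, size=(100, 10))])
X -= X.mean(axis=0)

eigvals, W = np.linalg.eigh(np.cov(X, rowvar=False))
W = W[:, np.argsort(eigvals)[::-1]]
eigvals = np.sort(eigvals)[::-1]

T_L = X @ W[:, :2]                             # keep only L = 2 components
print("shape of reduced data:", T_L.shape)
print("fraction of variance kept:", eigvals[:2].sum() / eigvals.sum())
```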
PCA thus can have the effect of concentrating much of the signal into the first few principal components, which can usefully be captured by dimensionality reduction; while the later principal components may be dominated by noise, and so disposed of without great loss. If the dataset is not too large, the significance of the principal components can be tested usingparametric bootstrap, as an aid in determining how many principal components to retain.[15] The principal components transformation can also be associated with another matrix factorization, thesingular value decomposition(SVD) ofX, HereΣis ann-by-prectangular diagonal matrixof positive numbersσ(k), called the singular values ofX;Uis ann-by-nmatrix, the columns of which are orthogonal unit vectors of lengthncalled the left singular vectors ofX; andWis ap-by-pmatrix whose columns are orthogonal unit vectors of lengthpand called the right singular vectors ofX. In terms of this factorization, the matrixXTXcan be written whereΣ^{\displaystyle \mathbf {\hat {\Sigma }} }is the square diagonal matrix with the singular values ofXand the excess zeros chopped off that satisfiesΣ^2=ΣTΣ{\displaystyle \mathbf {{\hat {\Sigma }}^{2}} =\mathbf {\Sigma } ^{\mathsf {T}}\mathbf {\Sigma } }. Comparison with the eigenvector factorization ofXTXestablishes that the right singular vectorsWofXare equivalent to the eigenvectors ofXTX, while the singular valuesσ(k)ofX{\displaystyle \mathbf {X} }are equal to the square-root of the eigenvaluesλ(k)ofXTX. Using the singular value decomposition the score matrixTcan be written so each column ofTis given by one of the left singular vectors ofXmultiplied by the corresponding singular value. This form is also thepolar decompositionofT. Efficient algorithms exist to calculate the SVD ofXwithout having to form the matrixXTX, so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix,[16]unless only a handful of components are required. As with the eigen-decomposition, a truncatedn×Lscore matrixTLcan be obtained by considering only the first L largest singular values and their singular vectors: The truncation of a matrixMorTusing a truncated singular value decomposition in this way produces a truncated matrix that is the nearest possible matrix ofrankLto the original matrix, in the sense of the difference between the two having the smallest possibleFrobenius norm, a result known as theEckart–Young theorem[1936]. Theorem (Optimal k‑dimensional fit).Let P be an n×m data matrix whose columns have been mean‑centered and scaled, and letP=UΣVT{\displaystyle P=U\,\Sigma \,V^{T}}be its singular value decomposition. Then the best rank‑k approximation to P in the least‑squares (Frobenius‑norm) sense isPk=UkΣkVkT{\displaystyle P_{k}=U_{k}\,\Sigma _{k}\,V_{k}^{T}}, where Vkconsists of the first k columns of V. Moreover, the relative residual variance isR(k)=∑j=k+1mσj2∑j=1mσj2{\displaystyle R(k)={\frac {\sum _{j=k+1}^{m}\sigma _{j}^{2}}{\sum _{j=1}^{m}\sigma _{j}^{2}}}}. [14] The singular values (inΣ) are the square roots of theeigenvaluesof the matrixXTX. Each eigenvalue is proportional to the portion of the "variance" (more correctly of the sum of the squared distances of the points from their multidimensional mean) that is associated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean. 
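The SVD relationships above (singular values squared equal the eigenvalues of XTX, T = UΣ, and the Eckart–Young error of a rank-L truncation) can be checked numerically; the sketch below uses NumPy on random data and is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 6))
X -= X.mean(axis=0)

U, s, Wt = np.linalg.svd(X, full_matrices=False)
eigvals = np.linalg.eigvalsh(X.T @ X)[::-1]
print(np.allclose(s**2, eigvals))            # sigma_k^2 equals lambda_k of X^T X

T = U * s                                    # scores: columns of U scaled by sigma_k
print(np.allclose(T, X @ Wt.T))              # same as projecting X onto W

L = 2
X_L = U[:, :L] @ np.diag(s[:L]) @ Wt[:L]     # truncated rank-L reconstruction
frob_err = np.linalg.norm(X - X_L)           # Eckart-Young: sqrt of the discarded sigma^2
print(np.isclose(frob_err, np.sqrt((s[L:]**2).sum())))
```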
PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions, therefore, tend to be small and may be dropped with minimal loss of information (seebelow). PCA is often used in this manner fordimensionality reduction. PCA has the distinction of being the optimal orthogonal transformation for keeping the subspace that has largest "variance" (as defined above). This advantage, however, comes at the price of greater computational requirements if compared, for example, and when applicable, to thediscrete cosine transform, and in particular to the DCT-II which is simply known as the "DCT".Nonlinear dimensionality reductiontechniques tend to be more computationally demanding than PCA. PCA is sensitive to the scaling of the variables. Mathematically this sensitivity comes from the way a rescaling changes the sample‑covariance matrix that PCA diagonalises.[14] LetXc{\displaystyle \mathbf {X} _{\text{c}}}be the *centered* data matrix (nrows,pcolumns) and define the covarianceΣ=1nXcTXc.{\displaystyle \Sigma ={\frac {1}{n}}\,\mathbf {X} _{\text{c}}^{\mathsf {T}}\mathbf {X} _{\text{c}}.}If thej{\displaystyle j}‑th variable is multiplied by a factorαj{\displaystyle \alpha _{j}}we obtainXc(α)=XcD,D=diag⁡(α1,…,αp).{\displaystyle \mathbf {X} _{\text{c}}^{(\alpha )}=\mathbf {X} _{\text{c}}D,\qquad D=\operatorname {diag} (\alpha _{1},\ldots ,\alpha _{p}).}Hence the new covariance isΣ(α)=DTΣD.{\displaystyle \Sigma ^{(\alpha )}=D^{\mathsf {T}}\,\Sigma \,D.} Because the eigenvalues and eigenvectors ofΣ(α){\displaystyle \Sigma ^{(\alpha )}}are those ofΣ{\displaystyle \Sigma }scaled byD{\displaystyle D}, the principal axes rotate toward any column whose variance has been inflated, exactly as the 2‑D example below illustrates. If we have just two variables and they have the samesample varianceand are completely correlated, then the PCA will entail a rotation by 45° and the "weights" (they are the cosines of rotation) for the two variables with respect to the principal component will be equal. But if we multiply all values of the first variable by 100, then the first principal component will be almost the same as that variable, with a small contribution from the other variable, whereas the second component will be almost aligned with the second original variable. This means that whenever the different variables have different units (like temperature and mass), PCA is a somewhat arbitrary method of analysis. (Different results would be obtained if one used Fahrenheit rather than Celsius for example.) Pearson's original paper was entitled "On Lines and Planes of Closest Fit to Systems of Points in Space" – "in space" implies physical Euclidean space where such concerns do not arise. One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data and hence use the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. However, this compresses (or expands) the fluctuations in all dimensions of the signal space to unit variance. 
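The two-variable scaling example can be reproduced directly; in the NumPy sketch below (illustrative data and names), rescaling one variable by 100 pulls the first principal axis away from the 45° diagonal and onto that variable:

```python
import numpy as np

rng = np.random.default_rng(4)
z = rng.normal(size=1000)
X = np.column_stack([z, z + 0.1 * rng.normal(size=1000)])  # two correlated variables
X -= X.mean(axis=0)

def first_axis(data):
    vals, vecs = np.linalg.eigh(np.cov(data, rowvar=False))
    return vecs[:, np.argmax(vals)]

print(first_axis(X))                    # close to the 45-degree diagonal (up to sign)
X_scaled = X * np.array([100.0, 1.0])   # express the first variable in different units
print(first_axis(X_scaled))             # essentially the rescaled variable's axis (up to sign)
```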
Classical PCA assumes the cloud of points has already been translated so its centroid is at the origin.[14] Write each observation asqi=μ+zi,μ=1n∑i=1nqi.{\displaystyle \mathbf {q} _{i}={\boldsymbol {\mu }}+\mathbf {z} _{i},\qquad {\boldsymbol {\mu }}={\tfrac {1}{n}}\sum _{i=1}^{n}\mathbf {q} _{i}.} Without subtractingμ{\displaystyle {\boldsymbol {\mu }}}we are in effect diagonalising Σunc=nμμT+1nZTZ,{\displaystyle \Sigma _{\text{unc}}\;=\;n\,{\boldsymbol {\mu }}{\boldsymbol {\mu }}^{\mathsf {T}}\;+\;{\tfrac {1}{n}}\,\mathbf {Z} ^{\mathsf {T}}\mathbf {Z} ,} whereZ{\displaystyle \mathbf {Z} }is the centered matrix. The rank‑one termnμμT{\displaystyle n\,{\boldsymbol {\mu }}{\boldsymbol {\mu }}^{\mathsf {T}}}often dominates, forcing the leading eigenvector to point almost exactly toward the mean and obliterating any structure in the centred partZ{\displaystyle \mathbf {Z} }. After mean subtraction that term vanishes and the principal axes align with the true directions of maximal variance. Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations. Correlations are derived from the cross-product of two standard scores (Z-scores) or statistical moments (hence the name:Pearson Product-Moment Correlation). Also see the article by Kromrey & Foster-Johnson (1998) on"Mean-centering in Moderated Regression: Much Ado About Nothing". Sincecovariances are correlations of normalized variables(Z- or standard-scores) a PCA based on the correlation matrix ofXisequalto a PCA based on the covariance matrix ofZ, the standardized version ofX. PCA is a popular primary technique inpattern recognition. It is not, however, optimized for class separability.[17]However, it has been used to quantify the distance between two or more classes by calculating center of mass for each class in principal component space and reporting Euclidean distance between center of mass of two or more classes.[18]Thelinear discriminant analysisis an alternative which is optimized for class separability. Some properties of PCA include:[13][page needed] The statistical implication of this property is that the last few PCs are not simply unstructured left-overs after removing the important PCs. Because these last PCs have variances as small as possible they are useful in their own right. They can help to detect unsuspected near-constant linear relationships between the elements ofx, and they may also be useful inregression, in selecting a subset of variables fromx, and in outlier detection. Before we look at its usage, we first look atdiagonalelements, Then, perhaps the main statistical implication of the result is that not only can we decompose the combined variances of all the elements ofxinto decreasing contributions due to each PC, but we can also decompose the wholecovariance matrixinto contributionsλkαkαk′{\displaystyle \lambda _{k}\alpha _{k}\alpha _{k}'}from each PC. Although not strictly decreasing, the elements ofλkαkαk′{\displaystyle \lambda _{k}\alpha _{k}\alpha _{k}'}will tend to become smaller ask{\displaystyle k}increases, asλkαkαk′{\displaystyle \lambda _{k}\alpha _{k}\alpha _{k}'}is nonincreasing for increasingk{\displaystyle k}, whereas the elements ofαk{\displaystyle \alpha _{k}}tend to stay about the same size because of the normalization constraints:αk′αk=1,k=1,…,p{\displaystyle \alpha _{k}'\alpha _{k}=1,k=1,\dots ,p}. As noted above, the results of PCA depend on the scaling of the variables. 
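A small numerical illustration of skipping mean subtraction, assuming NumPy and a synthetic cloud shifted far from the origin: the leading axis of the uncentred second-moment matrix points toward the mean, while the centred version recovers the true direction of spread.

```python
import numpy as np

rng = np.random.default_rng(5)
Z = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [0.0, 0.5]])  # centred cloud
mu = np.array([50.0, 50.0])
Q = Z + mu                                       # shift the cloud far from the origin

def leading_axis(M):
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, np.argmax(vals)]

uncentred = leading_axis(Q.T @ Q / len(Q))       # no mean subtraction
Qc = Q - Q.mean(axis=0)
centred = leading_axis(Qc.T @ Qc / len(Qc))      # after mean subtraction

print("uncentred leading axis:", np.round(uncentred, 3))  # ~ direction of the mean (up to sign)
print("centred leading axis:  ", np.round(centred, 3))    # ~ (1, 0), the true direction of spread
```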
This can be cured by scaling each feature by its standard deviation, so that one ends up with dimensionless features with unital variance.[19] The applicability of PCA as described above is limited by certain (tacit) assumptions[20]made in its derivation. In particular, PCA can capture linear correlations between the features but fails when this assumption is violated (see Figure 6a in the reference). In some cases, coordinate transformations can restore the linearity assumption and PCA can then be applied (seekernel PCA). Another limitation is the mean-removal process before constructing the covariance matrix for PCA. In fields such as astronomy, all the signals are non-negative, and the mean-removal process will force the mean of some astrophysical exposures to be zero, which consequently creates unphysical negative fluxes,[21]and forward modeling has to be performed to recover the true magnitude of the signals.[22]As an alternative method,non-negative matrix factorizationfocusing only on the non-negative elements in the matrices is well-suited for astrophysical observations.[23][24][25]See more atthe relation between PCA and non-negative matrix factorization. PCA is at a disadvantage if the data has not been standardized before applying the algorithm to it. PCA transforms the original data into data that is relevant to the principal components of that data, which means that the new data variables cannot be interpreted in the same ways that the originals were. They are linear interpretations of the original variables. Also, if PCA is not performed properly, there is a high likelihood of information loss.[26] PCA relies on a linear model. If a dataset has a pattern hidden inside it that is nonlinear, then PCA can actually steer the analysis in the complete opposite direction of progress.[27][page needed]Researchers at Kansas State University discovered that the sampling error in their experiments impacted the bias of PCA results. "If the number of subjects or blocks is smaller than 30, and/or the researcher is interested in PC's beyond the first, it may be better to first correct for the serial correlation, before PCA is conducted".[28]The researchers at Kansas State also found that PCA could be "seriously biased if the autocorrelation structure of the data is not correctly handled".[28] Dimensionality reduction results in a loss of information, in general. PCA-based dimensionality reduction tends to minimize that information loss, under certain signal and noise models. Under the assumption that that is, that the data vectorx{\displaystyle \mathbf {x} }is the sum of the desired information-bearing signals{\displaystyle \mathbf {s} }and a noise signaln{\displaystyle \mathbf {n} }one can show that PCA can be optimal for dimensionality reduction, from an information-theoretic point-of-view. 
In particular, Linsker showed that ifs{\displaystyle \mathbf {s} }is Gaussian andn{\displaystyle \mathbf {n} }is Gaussian noise with a covariance matrix proportional to the identity matrix, the PCA maximizes themutual informationI(y;s){\displaystyle I(\mathbf {y} ;\mathbf {s} )}between the desired informations{\displaystyle \mathbf {s} }and the dimensionality-reduced outputy=WLTx{\displaystyle \mathbf {y} =\mathbf {W} _{L}^{T}\mathbf {x} }.[29] If the noise is still Gaussian and has a covariance matrix proportional to the identity matrix (that is, the components of the vectorn{\displaystyle \mathbf {n} }areiid), but the information-bearing signals{\displaystyle \mathbf {s} }is non-Gaussian (which is a common scenario), PCA at least minimizes an upper bound on theinformation loss, which is defined as[30][31] The optimality of PCA is also preserved if the noisen{\displaystyle \mathbf {n} }is iid and at least more Gaussian (in terms of theKullback–Leibler divergence) than the information-bearing signals{\displaystyle \mathbf {s} }.[32]In general, even if the above signal model holds, PCA loses its information-theoretic optimality as soon as the noisen{\displaystyle \mathbf {n} }becomes dependent. The following is a detailed description of PCA using the covariance method[33]as opposed to the correlation method.[34] The goal is to transform a given data setXof dimensionpto an alternative data setYof smaller dimensionL. Equivalently, we are seeking to find the matrixY, whereYis theKarhunen–Loèvetransform (KLT) of matrixX: Y=KLT{X}{\displaystyle \mathbf {Y} =\mathbb {KLT} \{\mathbf {X} \}} Suppose you have data comprising a set of observations ofpvariables, and you want to reduce the data so that each observation can be described with onlyLvariables,L<p. Suppose further, that the data are arranged as a set ofndata vectorsx1…xn{\displaystyle \mathbf {x} _{1}\ldots \mathbf {x} _{n}}with eachxi{\displaystyle \mathbf {x} _{i}}representing a single grouped observation of thepvariables. Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data.[35]Hence we proceed by centering the data as follows: In some applications, each variable (column ofB) may also be scaled to have a variance equal to 1 (seeZ-score).[36]This step affects the calculated principal components, but makes them independent of the units used to measure the different variables. LetXbe ad-dimensional random vector expressed as column vector. Without loss of generality, assumeXhas zero mean. We want to find(∗){\displaystyle (\ast )}ad×dorthonormal transformation matrixPso thatPXhas a diagonal covariance matrix (that is,PXis a random vector with all its distinct components pairwise uncorrelated). A quick computation assumingP{\displaystyle P}were unitary yields: Hence(∗){\displaystyle (\ast )}holds if and only ifcov⁡(X){\displaystyle \operatorname {cov} (X)}were diagonalisable byP{\displaystyle P}. This is very constructive, as cov(X) is guaranteed to be a non-negative definite matrix and thus is guaranteed to be diagonalisable by some unitary matrix. In practical implementations, especially withhigh dimensional data(largep), the naive covariance method is rarely used because it is not efficient due to high computational and memory costs of explicitly determining the covariance matrix. 
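As a rough sketch of the covariance-method pipeline described above (mean subtraction, optional Z-scoring, covariance matrix, eigendecomposition, projection onto L components), assuming NumPy and synthetic data with mixed units:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 5)) * np.array([1.0, 10.0, 0.1, 5.0, 2.0])  # mixed scales

B = X - X.mean(axis=0)              # mean subtraction (centring)
B = B / B.std(axis=0, ddof=1)       # optional: unit variance per column (Z-scores)

C = (B.T @ B) / (len(B) - 1)        # covariance matrix of the centred data
vals, V = np.linalg.eigh(C)
V = V[:, np.argsort(vals)[::-1]]

L = 2
Y = B @ V[:, :L]                    # the KLT of X, truncated to L components
print(Y.shape)                      # (300, 2)
```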
The covariance-free approach avoids thenp2operations of explicitly calculating and storing the covariance matrixXTX, instead utilizing one ofmatrix-free methods, for example, based on the function evaluating the productXT(X r)at the cost of2npoperations. One way to compute the first principal component efficiently[41]is shown in the following pseudo-code, for a data matrixXwith zero mean, without ever computing its covariance matrix. Thispower iterationalgorithm simply calculates the vectorXT(X r), normalizes, and places the result back inr. The eigenvalue is approximated byrT(XTX) r, which is theRayleigh quotienton the unit vectorrfor the covariance matrixXTX. If the largest singular value is well separated from the next largest one, the vectorrgets close to the first principal component ofXwithin the number of iterationsc, which is small relative top, at the total cost2cnp. Thepower iterationconvergence can be accelerated without noticeably sacrificing the small cost per iteration using more advancedmatrix-free methods, such as theLanczos algorithmor the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method. Subsequent principal components can be computed one-by-one via deflation or simultaneously as a block. In the former approach, imprecisions in already computed approximate principal components additively affect the accuracy of the subsequently computed principal components, thus increasing the error with every new computation. The latter approach in the block power method replaces single-vectorsrandswith block-vectors, matricesRandS. Every column ofRapproximates one of the leading principal components, while all columns are iterated simultaneously. The main calculation is evaluation of the productXT(X R). Implemented, for example, inLOBPCG, efficient blocking eliminates the accumulation of the errors, allows using high-levelBLASmatrix-matrix product functions, and typically leads to faster convergence, compared to the single-vector one-by-one technique. Non-linear iterative partial least squares (NIPALS)is a variant the classicalpower iterationwith matrix deflation by subtraction implemented for computing the first few components in a principal component orpartial least squaresanalysis. For very-high-dimensional datasets, such as those generated in the *omics sciences (for example,genomics,metabolomics) it is usually only necessary to compute the first few PCs. Thenon-linear iterative partial least squares(NIPALS) algorithm updates iterative approximations to the leading scores and loadingst1andr1Tby thepower iterationmultiplying on every iteration byXon the left and on the right, that is, calculation of the covariance matrix is avoided, just as in the matrix-free implementation of the power iterations toXTX, based on the function evaluating the productXT(X r)=((X r)TX)T. 
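The article's pseudo-code is not reproduced here; the following is a minimal NumPy sketch, under the same assumptions (a centred data matrix X), of the covariance-free power iteration it describes:

```python
import numpy as np

def first_principal_component(X, n_iter=200, seed=0):
    """Power iteration on X^T X without ever forming the covariance matrix."""
    rng = np.random.default_rng(seed)
    r = rng.normal(size=X.shape[1])
    r /= np.linalg.norm(r)
    for _ in range(n_iter):
        s = X.T @ (X @ r)            # one matrix-free product, cost ~ 2np
        r = s / np.linalg.norm(s)    # normalise and place the result back in r
    eigenvalue = r @ (X.T @ (X @ r)) # Rayleigh quotient on the unit vector r
    return r, eigenvalue

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 20))
X -= X.mean(axis=0)
X[:, 0] *= 5.0                       # give the data a clearly dominant direction

w1, lam1 = first_principal_component(X)
print(np.isclose(lam1, np.linalg.eigvalsh(X.T @ X).max()))  # matches the top eigenvalue
```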
The matrix deflation by subtraction is performed by subtracting the outer product,t1r1TfromXleaving the deflated residual matrix used to calculate the subsequent leading PCs.[42]For large data matrices, or matrices that have a high degree of column collinearity, NIPALS suffers from loss of orthogonality of PCs due to machine precisionround-off errorsaccumulated in each iteration and matrix deflation by subtraction.[43]AGram–Schmidtre-orthogonalization algorithm is applied to both the scores and the loadings at each iteration step to eliminate this loss of orthogonality.[44]NIPALS reliance on single-vector multiplications cannot take advantage of high-levelBLASand results in slow convergence for clustered leading singular values—both these deficiencies are resolved in more sophisticated matrix-free block solvers, such as the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method. In an "online" or "streaming" situation with data arriving piece by piece rather than being stored in a single batch, it is useful to make an estimate of the PCA projection that can be updated sequentially. This can be done efficiently, but requires different algorithms.[45] In PCA, it is common that we want to introduce qualitative variables as supplementary elements. For example, many quantitative variables have been measured on plants. For these plants, some qualitative variables are available as, for example, the species to which the plant belongs. These data were subjected to PCA for quantitative variables. When analyzing the results, it is natural to connect the principal components to the qualitative variablespecies. For this, the following results are produced. These results are what is calledintroducing a qualitative variable as supplementary element. This procedure is detailed in and Husson, Lê, & Pagès (2009) and Pagès (2013). Few software offer this option in an "automatic" way. This is the case ofSPADthat historically, following the work ofLudovic Lebart, was the first to propose this option, and the R packageFactoMineR. The earliest application of factor analysis was in locating and measuring components of human intelligence. It was believed that intelligence had various uncorrelated components such as spatial intelligence, verbal intelligence, induction, deduction etc and that scores on these could be adduced by factor analysis from results on various tests, to give a single index known as theIntelligence Quotient(IQ). The pioneering statistical psychologistSpearmanactually developed factor analysis in 1904 for histwo-factor theoryof intelligence, adding a formal technique to the science ofpsychometrics. In 1924Thurstonelooked for 56 factors of intelligence, developing the notion of Mental Age. Standard IQ tests today are based on this early work.[46] In 1949, Shevky and Williams introduced the theory offactorial ecology, which dominated studies of residential differentiation from the 1950s to the 1970s.[47]Neighbourhoods in a city were recognizable or could be distinguished from one another by various characteristics which could be reduced to three by factor analysis. These were known as 'social rank' (an index of occupational status), 'familism' or family size, and 'ethnicity'; Cluster analysis could then be applied to divide the city into clusters or precincts according to values of the three key factor variables. 
An extensive literature developed around factorial ecology in urban geography, but the approach went out of fashion after 1980 as being methodologically primitive and having little place in postmodern geographical paradigms. One of the problems with factor analysis has always been finding convincing names for the various artificial factors. In 2000, Flood revived the factorial ecology approach to show that principal components analysis actually gave meaningful answers directly, without resorting to factor rotation. The principal components were actually dual variables or shadow prices of 'forces' pushing people together or apart in cities. The first component was 'accessibility', the classic trade-off between demand for travel and demand for space, around which classical urban economics is based. The next two components were 'disadvantage', which keeps people of similar status in separate neighbourhoods (mediated by planning), and ethnicity, where people of similar ethnic backgrounds try to co-locate.[48] About the same time, the Australian Bureau of Statistics defined distinct indexes of advantage and disadvantage taking the first principal component of sets of key variables that were thought to be important. These SEIFA indexes are regularly published for various jurisdictions, and are used frequently in spatial analysis.[49] PCA can be used as a formal method for the development of indexes. As an alternativeconfirmatory composite analysishas been proposed to develop and assess indexes.[50] The City Development Index was developed by PCA from about 200 indicators of city outcomes in a 1996 survey of 254 global cities. The first principal component was subject to iterative regression, adding the original variables singly until about 90% of its variation was accounted for. The index ultimately used about 15 indicators but was a good predictor of many more variables. Its comparative value agreed very well with a subjective assessment of the condition of each city. The coefficients on items of infrastructure were roughly proportional to the average costs of providing the underlying services, suggesting the Index was actually a measure of effective physical and social investment in the city. The country-levelHuman Development Index(HDI) fromUNDP, which has been published since 1990 and is very extensively used in development studies,[51]has very similar coefficients on similar indicators, strongly suggesting it was originally constructed using PCA. In 1978Cavalli-Sforzaand others pioneered the use of principal components analysis (PCA) to summarise data on variation in human gene frequencies across regions. The components showed distinctive patterns, including gradients and sinusoidal waves. They interpreted these patterns as resulting from specific ancient migration events. Since then, PCA has been ubiquitous in population genetics, with thousands of papers using PCA as a display mechanism. Genetics varies largely according to proximity, so the first two principal components actually show spatial distribution and may be used to map the relative geographical location of different population groups, thereby showing individuals who have wandered from their original locations.[52] PCA in genetics has been technically controversial, in that the technique has been performed on discrete non-normal variables and often on binary allele markers. The lack of any measures of standard error in PCA are also an impediment to more consistent usage. 
In August 2022, the molecular biologistEran Elhaikpublished a theoretical paper inScientific Reportsanalyzing 12 PCA applications. He concluded that it was easy to manipulate the method, which, in his view, generated results that were 'erroneous, contradictory, and absurd.' Specifically, he argued, the results achieved in population genetics were characterized by cherry-picking andcircular reasoning.[53] Market research has been an extensive user of PCA. It is used to develop customer satisfaction or customer loyalty scores for products, and with clustering, to develop market segments that may be targeted with advertising campaigns, in much the same way as factorial ecology will locate geographical areas with similar characteristics.[54] PCA rapidly transforms large amounts of data into smaller, easier-to-digest variables that can be more rapidly and readily analyzed. In any consumer questionnaire, there are series of questions designed to elicit consumer attitudes, and principal components seek out latent variables underlying these attitudes. For example, the Oxford Internet Survey in 2013 asked 2000 people about their attitudes and beliefs, and from these analysts extracted four principal component dimensions, which they identified as 'escape', 'social networking', 'efficiency', and 'problem creating'.[55] Another example from Joe Flood in 2008 extracted an attitudinal index toward housing from 28 attitude questions in a national survey of 2697 households in Australia. The first principal component represented a general attitude toward property and home ownership. The index, or the attitude questions it embodied, could be fed into a General Linear Model of tenure choice. The strongest determinant of private renting by far was the attitude index, rather than income, marital status or household type.[56] Inquantitative finance, PCA is used[57]infinancial risk management, and has been applied toother problemssuch asportfolio optimization. PCA is commonly used in problems involvingfixed incomesecurities andportfolios, andinterest rate derivatives. Valuations here depend on the entireyield curve, comprising numerous highly correlated instruments, and PCA is used to define a set of components or factors that explain rate movements,[58]thereby facilitating the modelling. One common risk management application is tocalculating value at risk, VaR, applying PCA to theMonte Carlo simulation.[59]Here, for each simulation-sample, the components are stressed, and rates, andin turn option values, are then reconstructed; with VaR calculated, finally, over the entire run. PCA is also used inhedgingexposure tointerest rate risk, givenpartial durationsand other sensitivities.[58]Under both, the first three, typically, principal components of the system are of interest (representing"shift", "twist", and "curvature"). These principal components are derived from an eigen-decomposition of thecovariance matrixofyieldat predefined maturities;[60]and where thevarianceof each component is itseigenvalue(and as the components areorthogonal, no correlation need be incorporated in subsequent modelling). Forequity, an optimal portfolio is one where theexpected returnis maximized for a given level of risk, or alternatively, where risk is minimized for a given return; seeMarkowitz modelfor discussion. Thus, one approach is to reduce portfolio risk, whereallocation strategiesare applied to the "principal portfolios" instead of the underlyingstocks. 
A second approach is to enhance portfolio return, using the principal components to select companies' stocks with upside potential.[61][62]PCA has also been used to understand relationships[57]between internationalequity markets, and within markets between groups of companies in industries orsectors. PCA may also be applied tostress testing,[63]essentially an analysis of a bank's ability to endurea hypothetical adverse economic scenario. Its utility is in "distilling the information contained in [several]macroeconomic variablesinto a more manageable data set, which can then [be used] for analysis."[63]Here, the resulting factors are linked to e.g. interest rates – based on the largest elements of the factor'seigenvector– and it is then observed how a "shock" to each of the factors affects the implied assets of each of the banks. A variant of principal components analysis is used inneuroscienceto identify the specific properties of a stimulus that increases aneuron's probability of generating anaction potential.[64][65]This technique is known asspike-triggered covariance analysis. In a typical application an experimenter presents awhite noiseprocess as a stimulus (usually either as a sensory input to a test subject, or as acurrentinjected directly into the neuron) and records a train of action potentials, or spikes, produced by the neuron as a result. Presumably, certain features of the stimulus make the neuron more likely to spike. In order to extract these features, the experimenter calculates thecovariance matrixof thespike-triggered ensemble, the set of all stimuli (defined and discretized over a finite time window, typically on the order of 100 ms) that immediately preceded a spike. Theeigenvectorsof the difference between the spike-triggered covariance matrix and the covariance matrix of theprior stimulus ensemble(the set of all stimuli, defined over the same length time window) then indicate the directions in thespaceof stimuli along which the variance of the spike-triggered ensemble differed the most from that of the prior stimulus ensemble. Specifically, the eigenvectors with the largest positive eigenvalues correspond to the directions along which the variance of the spike-triggered ensemble showed the largest positive change compared to the variance of the prior. Since these were the directions in which varying the stimulus led to a spike, they are often good approximations of the sought after relevant stimulus features. In neuroscience, PCA is also used to discern the identity of a neuron from the shape of its action potential.Spike sortingis an important procedure becauseextracellularrecording techniques often pick up signals from more than one neuron. In spike sorting, one first uses PCA to reduce the dimensionality of the space of action potential waveforms, and then performsclustering analysisto associate specific action potentials with individual neurons. PCA as a dimension reduction technique is particularly suited to detect coordinated activities of large neuronal ensembles. It has been used in determining collective variables, that is,order parameters, duringphase transitionsin the brain.[66] Correspondence analysis(CA) was developed byJean-Paul Benzécri[67]and is conceptually similar to PCA, but scales the data (which should be non-negative) so that rows and columns are treated equivalently. It is traditionally applied tocontingency tables. 
CA decomposes thechi-squared statisticassociated to this table into orthogonal factors.[68]Because CA is a descriptive technique, it can be applied to tables for which the chi-squared statistic is appropriate or not. Several variants of CA are available includingdetrended correspondence analysisandcanonical correspondence analysis. One special extension ismultiple correspondence analysis, which may be seen as the counterpart of principal component analysis for categorical data.[69] Principal component analysis creates variables that are linear combinations of the original variables. The new variables have the property that the variables are all orthogonal. The PCA transformation can be helpful as a pre-processing step before clustering. PCA is a variance-focused approach seeking to reproduce the total variable variance, in which components reflect both common and unique variance of the variable. PCA is generally preferred for purposes of data reduction (that is, translating variable space into optimal factor space) but not when the goal is to detect the latent construct or factors. Factor analysisis similar to principal component analysis, in that factor analysis also involves linear combinations of variables. Different from PCA, factor analysis is a correlation-focused approach seeking to reproduce the inter-correlations among variables, in which the factors "represent the common variance of variables, excluding unique variance".[70]In terms of the correlation matrix, this corresponds with focusing on explaining the off-diagonal terms (that is, shared co-variance), while PCA focuses on explaining the terms that sit on the diagonal. However, as a side result, when trying to reproduce the on-diagonal terms, PCA also tends to fit relatively well the off-diagonal correlations.[13]: 158Results given by PCA and factor analysis are very similar in most situations, but this is not always the case, and there are some problems where the results are significantly different. Factor analysis is generally used when the research purpose is detecting data structure (that is, latent constructs or factors) orcausal modeling. If the factor model is incorrectly formulated or the assumptions are not met, then factor analysis will give erroneous results.[71] It has been asserted that the relaxed solution ofk-means clustering, specified by the cluster indicators, is given by the principal components, and the PCA subspace spanned by the principal directions is identical to the cluster centroid subspace.[72][73]However, that PCA is a useful relaxation ofk-means clustering was not a new result,[74]and it is straightforward to uncover counterexamples to the statement that the cluster centroid subspace is spanned by the principal directions.[75] Non-negative matrix factorization(NMF) is a dimension reduction method where only non-negative elements in the matrices are used, which is therefore a promising method in astronomy,[23][24][25]in the sense that astrophysical signals are non-negative. The PCA components are orthogonal to each other, while the NMF components are all non-negative and therefore constructs a non-orthogonal basis. 
In PCA, the contribution of each component is ranked based on the magnitude of its corresponding eigenvalue, which is equivalent to the fractional residual variance (FRV) in analyzing empirical data.[21]For NMF, its components are ranked based only on the empirical FRV curves.[25]The residual fractional eigenvalue plots, that is,1−∑i=1kλi/∑j=1nλj{\displaystyle 1-\sum _{i=1}^{k}\lambda _{i}{\Big /}\sum _{j=1}^{n}\lambda _{j}}as a function of component numberk{\displaystyle k}given a total ofn{\displaystyle n}components, for PCA have a flat plateau, where no data is captured to remove the quasi-static noise, then the curves drop quickly as an indication of over-fitting (random noise).[21]The FRV curves for NMF is decreasing continuously[25]when the NMF components are constructedsequentially,[24]indicating the continuous capturing of quasi-static noise; then converge to higher levels than PCA,[25]indicating the less over-fitting property of NMF. It is often difficult to interpret the principal components when the data include many variables of various origins, or when some variables are qualitative. This leads the PCA user to a delicate elimination of several variables. If observations or variables have an excessive impact on the direction of the axes, they should be removed and then projected as supplementary elements. In addition, it is necessary to avoid interpreting the proximities between the points close to the center of the factorial plane. Theiconography of correlations, on the contrary, which is not a projection on a system of axes, does not have these drawbacks. We can therefore keep all the variables. The principle of the diagram is to underline the "remarkable" correlations of the correlation matrix, by a solid line (positive correlation) or dotted line (negative correlation). A strong correlation is not "remarkable" if it is not direct, but caused by the effect of a third variable. Conversely, weak correlations can be "remarkable". For example, if a variable Y depends on several independent variables, the correlations of Y with each of them are weak and yet "remarkable". A particular disadvantage of PCA is that the principal components are usually linear combinations of all input variables.Sparse PCAovercomes this disadvantage by finding linear combinations that contain just a few input variables. It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by adding sparsity constraint on the input variables. Several approaches have been proposed, including The methodological and theoretical developments of Sparse PCA as well as its applications in scientific studies were recently reviewed in a survey paper.[82] Most of the modern methods fornonlinear dimensionality reductionfind their theoretical and algorithmic roots in PCA or K-means. Pearson's original idea was to take a straight line (or plane) which will be "the best fit" to a set of data points.Trevor Hastieexpanded on this concept by proposingPrincipalcurves[86]as the natural extension for the geometric interpretation of PCA, which explicitly constructs a manifold for dataapproximationfollowed byprojectingthe points onto it. See also theelastic mapalgorithm andprincipal geodesic analysis.[87]Another popular generalization iskernel PCA, which corresponds to PCA performed in a reproducing kernel Hilbert space associated with a positive definite kernel. 
Inmultilinear subspace learning,[88][89][90]PCA is generalized tomultilinear PCA(MPCA) that extracts features directly from tensor representations. MPCA is solved by performing PCA in each mode of the tensor iteratively. MPCA has been applied to face recognition, gait recognition, etc. MPCA is further extended to uncorrelated MPCA, non-negative MPCA and robust MPCA. N-way principal component analysis may be performed with models such asTucker decomposition,PARAFAC, multiple factor analysis, co-inertia analysis, STATIS, and DISTATIS. While PCA finds the mathematically optimal method (as in minimizing the squared error), it is still sensitive tooutliersin the data that produce large errors, something that the method tries to avoid in the first place. It is therefore common practice to remove outliers before computing PCA. However, in some contexts, outliers can be difficult to identify.[91]For example, indata miningalgorithms likecorrelation clustering, the assignment of points to clusters and outliers is not known beforehand. A recently proposed generalization of PCA[92]based on a weighted PCA increases robustness by assigning different weights to data objects based on their estimated relevancy. Outlier-resistant variants of PCA have also been proposed, based on L1-norm formulations (L1-PCA).[7][5] Robust principal component analysis(RPCA) via decomposition in low-rank and sparse matrices is a modification of PCA that works well with respect to grossly corrupted observations.[93][94][95] Independent component analysis(ICA) is directed to similar problems as principal component analysis, but finds additively separable components rather than successive approximations. Given a matrixE{\displaystyle E}, it tries to decompose it into two matrices such thatE=AP{\displaystyle E=AP}. A key difference from techniques such as PCA and ICA is that some of the entries ofA{\displaystyle A}are constrained to be 0. HereP{\displaystyle P}is termed the regulatory layer. While in general such a decomposition can have multiple solutions, they prove that if the following conditions are satisfied : then the decomposition is unique up to multiplication by a scalar.[96] Discriminant analysis of principal components (DAPC) is a multivariate method used to identify and describe clusters of genetically related individuals. Genetic variation is partitioned into two components: variation between groups and within groups, and it maximizes the former. Linear discriminants are linear combinations of alleles which best separate the clusters. Alleles that most contribute to this discrimination are therefore those that are the most markedly different across groups. The contributions of alleles to the groupings identified by DAPC can allow identifying regions of the genome driving the genetic divergence among groups[97]In DAPC, data is first transformed using a principal components analysis (PCA) and subsequently clusters are identified using discriminant analysis (DA). A DAPC can be realized on R using the package Adegenet. (more info:adegenet on the web) Directional component analysis(DCA) is a method used in the atmospheric sciences for analysing multivariate datasets.[98]Like PCA, it allows for dimension reduction, improved visualization and improved interpretability of large data-sets. Also like PCA, it is based on a covariance matrix derived from the input dataset. The difference between PCA and DCA is that DCA additionally requires the input of a vector direction, referred to as the impact. 
Whereas PCA maximises explained variance, DCA maximises probability density given impact. The motivation for DCA is to find components of a multivariate dataset that are both likely (measured using probability density) and important (measured using the impact). DCA has been used to find the most likely and most serious heat-wave patterns in weather prediction ensembles,[99] and the most likely and most impactful changes in rainfall due to climate change.[100]
https://en.wikipedia.org/wiki/Principal_Component_Analysis
Inlinear algebra,eigendecompositionis thefactorizationof amatrixinto acanonical form, whereby the matrix is represented in terms of itseigenvalues and eigenvectors. Onlydiagonalizable matricescan be factorized in this way. When the matrix being factorized is anormalor realsymmetric matrix, the decomposition is called "spectral decomposition", derived from thespectral theorem. A (nonzero) vectorvof dimensionNis an eigenvector of a squareN×NmatrixAif it satisfies alinear equationof the formAv=λv{\displaystyle \mathbf {A} \mathbf {v} =\lambda \mathbf {v} }for some scalarλ. Thenλis called the eigenvalue corresponding tov. Geometrically speaking, the eigenvectors ofAare the vectors thatAmerely elongates or shrinks, and the amount that they elongate/shrink by is the eigenvalue. The above equation is called the eigenvalue equation or the eigenvalue problem. This yields an equation for the eigenvaluesp(λ)=det(A−λI)=0.{\displaystyle p\left(\lambda \right)=\det \left(\mathbf {A} -\lambda \mathbf {I} \right)=0.}We callp(λ)thecharacteristic polynomial, and the equation, called the characteristic equation, is anNth-order polynomial equation in the unknownλ. This equation will haveNλdistinct solutions, where1 ≤Nλ≤N. The set of solutions, that is, the eigenvalues, is called thespectrumofA.[1][2][3] If the field of scalars isalgebraically closed, then we canfactorpasp(λ)=(λ−λ1)n1(λ−λ2)n2⋯(λ−λNλ)nNλ=0.{\displaystyle p(\lambda )=\left(\lambda -\lambda _{1}\right)^{n_{1}}\left(\lambda -\lambda _{2}\right)^{n_{2}}\cdots \left(\lambda -\lambda _{N_{\lambda }}\right)^{n_{N_{\lambda }}}=0.}The integerniis termed thealgebraic multiplicityof eigenvalueλi. The algebraic multiplicities sum toN:∑i=1Nλni=N.{\textstyle \sum _{i=1}^{N_{\lambda }}{n_{i}}=N.} For each eigenvalueλi, we have a specific eigenvalue equation(A−λiI)v=0.{\displaystyle \left(\mathbf {A} -\lambda _{i}\mathbf {I} \right)\mathbf {v} =0.}There will be1 ≤mi≤nilinearly independentsolutions to each eigenvalue equation. The linear combinations of themisolutions (except the one which gives the zero vector) are the eigenvectors associated with the eigenvalueλi. The integermiis termed thegeometric multiplicityofλi. It is important to keep in mind that the algebraic multiplicityniand geometric multiplicitymimay or may not be equal, but we always havemi≤ni. The simplest case is of course whenmi=ni= 1. The total number of linearly independent eigenvectors,Nv, can be calculated by summing the geometric multiplicities∑i=1Nλmi=Nv.{\displaystyle \sum _{i=1}^{N_{\lambda }}{m_{i}}=N_{\mathbf {v} }.} The eigenvectors can be indexed by eigenvalues, using a double index, withvijbeing thejth eigenvector for theith eigenvalue. The eigenvectors can also be indexed using the simpler notation of a single indexvk, withk= 1, 2, ...,Nv. LetAbe a squaren×nmatrix withnlinearly independent eigenvectorsqi(wherei= 1, ...,n). ThenAcan befactoredasA=QΛQ−1{\displaystyle \mathbf {A} =\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}}whereQis the squaren×nmatrix whoseith column is the eigenvectorqiofA, andΛis thediagonal matrixwhose diagonal elements are the corresponding eigenvalues,Λii=λi. Note that onlydiagonalizable matricescan be factorized in this way. For example, thedefective matrix[1101]{\displaystyle \left[{\begin{smallmatrix}1&1\\0&1\end{smallmatrix}}\right]}(which is ashear matrix) cannot be diagonalized. Theneigenvectorsqiare usually normalized, but they don't have to be. A non-normalized set ofneigenvectors,vican also be used as the columns ofQ. 
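A brief numerical illustration of the factorization A = QΛQ−1, and of why the defective shear matrix mentioned above cannot be factored this way, assuming NumPy (the example matrix A is chosen arbitrarily):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, Q = np.linalg.eig(A)          # columns of Q are eigenvectors of A
Lam = np.diag(eigvals)
print(np.allclose(A, Q @ Lam @ np.linalg.inv(Q)))   # A = Q Lambda Q^{-1}

# The defective shear matrix has a repeated eigenvalue but only one
# independent eigenvector, so it cannot be diagonalised this way.
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
vals, vecs = np.linalg.eig(S)
print(vals)                                         # both eigenvalues equal 1
print(np.linalg.matrix_rank(vecs))                  # rank 1: eigenvectors not independent
```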
That can be understood by noting that the magnitude of the eigenvectors inQgets canceled in the decomposition by the presence ofQ−1. If one of the eigenvaluesλihas multiple linearly independent eigenvectors (that is, the geometric multiplicity ofλiis greater than 1), then these eigenvectors for this eigenvalueλican be chosen to be mutuallyorthogonal; however, if two eigenvectors belong to two different eigenvalues, it may be impossible for them to be orthogonal to each other (see Example below). One special case is that ifAis a normal matrix, then by the spectral theorem, it's always possible to diagonalizeAin anorthonormal basis{qi}. The decomposition can be derived from the fundamental property of eigenvectors:Av=λvAQ=QΛA=QΛQ−1.{\displaystyle {\begin{aligned}\mathbf {A} \mathbf {v} &=\lambda \mathbf {v} \\\mathbf {A} \mathbf {Q} &=\mathbf {Q} \mathbf {\Lambda } \\\mathbf {A} &=\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}.\end{aligned}}}The linearly independent eigenvectorsqiwith nonzero eigenvalues form a basis (not necessarily orthonormal) for all possible productsAx, forx∈Cn, which is the same as theimage(orrange) of the correspondingmatrix transformation, and also thecolumn spaceof the matrixA. The number of linearly independent eigenvectorsqiwith nonzero eigenvalues is equal to therankof the matrixA, and also the dimension of the image (or range) of the corresponding matrix transformation, as well as its column space. The linearly independent eigenvectorsqiwith an eigenvalue of zero form a basis (which can be chosen to be orthonormal) for thenull space(also known as the kernel) of the matrix transformationA. The 2 × 2 real matrixAA=[1013]{\displaystyle \mathbf {A} ={\begin{bmatrix}1&0\\1&3\\\end{bmatrix}}}may be decomposed into a diagonal matrix through multiplication of a non-singular matrixQQ=[abcd]∈R2×2.{\displaystyle \mathbf {Q} ={\begin{bmatrix}a&b\\c&d\end{bmatrix}}\in \mathbb {R} ^{2\times 2}.} Then[abcd]−1[1013][abcd]=[x00y],{\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}^{-1}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}x&0\\0&y\end{bmatrix}},}for some real diagonal matrix[x00y]{\displaystyle \left[{\begin{smallmatrix}x&0\\0&y\end{smallmatrix}}\right]}. 
Multiplying both sides of the equation on the left byQ:[1013][abcd]=[abcd][x00y].{\displaystyle {\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}a&b\\c&d\end{bmatrix}}{\begin{bmatrix}x&0\\0&y\end{bmatrix}}.}The above equation can be decomposed into twosimultaneous equations:{[1013][ac]=[axcx][1013][bd]=[bydy].{\displaystyle {\begin{cases}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a\\c\end{bmatrix}}={\begin{bmatrix}ax\\cx\end{bmatrix}}\\[1.2ex]{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}b\\d\end{bmatrix}}={\begin{bmatrix}by\\dy\end{bmatrix}}\end{cases}}.}Factoring out theeigenvaluesxandy:{[1013][ac]=x[ac][1013][bd]=y[bd]{\displaystyle {\begin{cases}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a\\c\end{bmatrix}}=x{\begin{bmatrix}a\\c\end{bmatrix}}\\[1.2ex]{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}b\\d\end{bmatrix}}=y{\begin{bmatrix}b\\d\end{bmatrix}}\end{cases}}}Lettinga=[ac],b=[bd],{\displaystyle \mathbf {a} ={\begin{bmatrix}a\\c\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}b\\d\end{bmatrix}},}this gives us two vector equations:{Aa=xaAb=yb{\displaystyle {\begin{cases}\mathbf {A} \mathbf {a} =x\mathbf {a} \\\mathbf {A} \mathbf {b} =y\mathbf {b} \end{cases}}}And can be represented by a single vector equation involving two solutions as eigenvalues:Au=λu{\displaystyle \mathbf {A} \mathbf {u} =\lambda \mathbf {u} }whereλrepresents the two eigenvaluesxandy, andurepresents the vectorsaandb. Shiftingλuto the left hand side and factoringuout(A−λI)u=0{\displaystyle \left(\mathbf {A} -\lambda \mathbf {I} \right)\mathbf {u} =\mathbf {0} }SinceQis non-singular, it is essential thatuis nonzero. Therefore,det(A−λI)=0{\displaystyle \det(\mathbf {A} -\lambda \mathbf {I} )=0}Thus(1−λ)(3−λ)=0{\displaystyle (1-\lambda )(3-\lambda )=0}giving us the solutions of the eigenvalues for the matrixAasλ= 1orλ= 3, and the resulting diagonal matrix from the eigendecomposition ofAis thus[1003]{\displaystyle \left[{\begin{smallmatrix}1&0\\0&3\end{smallmatrix}}\right]}. Putting the solutions back into the above simultaneous equations{[1013][ac]=1[ac][1013][bd]=3[bd]{\displaystyle {\begin{cases}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a\\c\end{bmatrix}}=1{\begin{bmatrix}a\\c\end{bmatrix}}\\[1.2ex]{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}b\\d\end{bmatrix}}=3{\begin{bmatrix}b\\d\end{bmatrix}}\end{cases}}} Solving the equations, we havea=−2candb=0,c,d∈R.{\displaystyle a=-2c\quad {\text{and}}\quad b=0,\qquad c,d\in \mathbb {R} .}Thus the matrixQrequired for the eigendecomposition ofAisQ=[−2c0cd],c,d∈R,{\displaystyle \mathbf {Q} ={\begin{bmatrix}-2c&0\\c&d\end{bmatrix}},\qquad c,d\in \mathbb {R} ,}that is:[−2c0cd]−1[1013][−2c0cd]=[1003],c,d∈R{\displaystyle {\begin{bmatrix}-2c&0\\c&d\end{bmatrix}}^{-1}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}-2c&0\\c&d\end{bmatrix}}={\begin{bmatrix}1&0\\0&3\end{bmatrix}},\qquad c,d\in \mathbb {R} } If a matrixAcan be eigendecomposed and if none of its eigenvalues are zero, thenAisinvertibleand its inverse is given byA−1=QΛ−1Q−1{\displaystyle \mathbf {A} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{-1}\mathbf {Q} ^{-1}}IfA{\displaystyle \mathbf {A} }is a symmetric matrix, sinceQ{\displaystyle \mathbf {Q} }is formed from the eigenvectors ofA{\displaystyle \mathbf {A} },Q{\displaystyle \mathbf {Q} }is guaranteed to be anorthogonal matrix, thereforeQ−1=QT{\displaystyle \mathbf {Q} ^{-1}=\mathbf {Q} ^{\mathrm {T} }}. 
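The worked example and the inverse formula above can be checked numerically; the sketch below assumes NumPy and picks c = d = 1 for concreteness:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])

c, d = 1.0, 1.0                                  # any nonzero c, d work
Q = np.array([[-2 * c, 0.0],
              [c,      d  ]])
Lam = np.diag([1.0, 3.0])

# Q^{-1} A Q = diag(1, 3), as derived above
print(np.allclose(np.linalg.inv(Q) @ A @ Q, Lam))
# A^{-1} = Q Lambda^{-1} Q^{-1}, since no eigenvalue is zero
print(np.allclose(np.linalg.inv(A),
                  Q @ np.linalg.inv(Lam) @ np.linalg.inv(Q)))
```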
Furthermore, becauseΛis adiagonal matrix, its inverse is easy to calculate:[Λ−1]ii=1λi{\displaystyle \left[\mathbf {\Lambda } ^{-1}\right]_{ii}={\frac {1}{\lambda _{i}}}} When eigendecomposition is used on a matrix of measured, realdata, theinversemay be less valid when all eigenvalues are used unmodified in the form above. This is because as eigenvalues become relatively small, their contribution to the inversion is large. Those near zero or at the "noise" of the measurement system will have undue influence and could hamper solutions (detection) using the inverse.[4] Two mitigations have been proposed: truncating small or zero eigenvalues, and extending the lowest reliable eigenvalue to those below it. See alsoTikhonov regularizationas a statistically motivated but biased method for rolling off eigenvalues as they become dominated by noise. The first mitigation method is similar to a sparse sample of the original matrix, removing components that are not considered valuable. However, if the solution or detection process is near the noise level, truncating may remove components that influence the desired solution. The second mitigation extends the eigenvalue so that lower values have much less influence over inversion, but do still contribute, such that solutions near the noise will still be found. The reliable eigenvalue can be found by assuming that eigenvalues of extremely similar and low value are a good representation of measurement noise (which is assumed low for most systems). If the eigenvalues are rank-sorted by value, then the reliable eigenvalue can be found by minimization of theLaplacianof the sorted eigenvalues:[5]min|∇2λs|{\displaystyle \min \left|\nabla ^{2}\lambda _{\mathrm {s} }\right|}where the eigenvalues are subscripted with ansto denote being sorted. The position of the minimization is the lowest reliable eigenvalue. In measurement systems, the square root of this reliable eigenvalue is the average noise over the components of the system. The eigendecomposition allows for much easier computation ofpower seriesof matrices. Iff(x)is given byf(x)=a0+a1x+a2x2+⋯{\displaystyle f(x)=a_{0}+a_{1}x+a_{2}x^{2}+\cdots }then we know thatf(A)=Qf(Λ)Q−1{\displaystyle f\!\left(\mathbf {A} \right)=\mathbf {Q} \,f\!\left(\mathbf {\Lambda } \right)\mathbf {Q} ^{-1}}BecauseΛis adiagonal matrix, functions ofΛare very easy to calculate:[f(Λ)]ii=f(λi){\displaystyle \left[f\left(\mathbf {\Lambda } \right)\right]_{ii}=f\left(\lambda _{i}\right)} The off-diagonal elements off(Λ)are zero; that is,f(Λ)is also a diagonal matrix. Therefore, calculatingf(A)reduces to just calculating the function on each of the eigenvalues. A similar technique works more generally with theholomorphic functional calculus, usingA−1=QΛ−1Q−1{\displaystyle \mathbf {A} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{-1}\mathbf {Q} ^{-1}}fromabove. 
Once again, we find that[f(Λ)]ii=f(λi){\displaystyle \left[f\left(\mathbf {\Lambda } \right)\right]_{ii}=f\left(\lambda _{i}\right)} A2=(QΛQ−1)(QΛQ−1)=QΛ(Q−1Q)ΛQ−1=QΛ2Q−1An=QΛnQ−1exp⁡A=Qexp⁡(Λ)Q−1{\displaystyle {\begin{aligned}\mathbf {A} ^{2}&=\left(\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}\right)\left(\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}\right)=\mathbf {Q} \mathbf {\Lambda } \left(\mathbf {Q} ^{-1}\mathbf {Q} \right)\mathbf {\Lambda } \mathbf {Q} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{2}\mathbf {Q} ^{-1}\\[1.2ex]\mathbf {A} ^{n}&=\mathbf {Q} \mathbf {\Lambda } ^{n}\mathbf {Q} ^{-1}\\[1.2ex]\exp \mathbf {A} &=\mathbf {Q} \exp(\mathbf {\Lambda } )\mathbf {Q} ^{-1}\end{aligned}}}which are examples for the functionsf(x)=x2,f(x)=xn,f(x)=exp⁡x{\displaystyle f(x)=x^{2},\;f(x)=x^{n},\;f(x)=\exp {x}}. Furthermore,exp⁡A{\displaystyle \exp {\mathbf {A} }}is thematrix exponential. Spectral matrices are matrices that possess distinct eigenvalues and a complete set of eigenvectors. This characteristic allows spectral matrices to be fully diagonalizable, meaning they can be decomposed into simpler forms using eigendecomposition. This decomposition process reveals fundamental insights into the matrix's structure and behavior, particularly in fields such as quantum mechanics, signal processing, and numerical analysis.[6] A complex-valued square matrixA{\displaystyle A}isnormal(meaning ,A∗A=AA∗{\displaystyle \mathbf {A} ^{*}\mathbf {A} =\mathbf {A} \mathbf {A} ^{*}}, whereA∗{\displaystyle \mathbf {A} ^{*}}is theconjugate transpose) if and only if it can be decomposed asA=UΛU∗{\displaystyle \mathbf {A} =\mathbf {U} \mathbf {\Lambda } \mathbf {U} ^{*}}, whereU{\displaystyle \mathbf {U} }is aunitary matrix(meaningU∗=U−1{\displaystyle \mathbf {U} ^{*}=\mathbf {U} ^{-1}}) andΛ={\displaystyle \mathbf {\Lambda } =}diag(λ1,…,λn{\displaystyle \lambda _{1},\ldots ,\lambda _{n}}) is adiagonal matrix.[7]The columnsu1,⋯,un{\displaystyle \mathbf {u} _{1},\cdots ,\mathbf {u} _{n}}ofU{\displaystyle \mathbf {U} }form anorthonormal basisand are eigenvectors ofA{\displaystyle \mathbf {A} }with corresponding eigenvaluesλ1,…,λn{\displaystyle \lambda _{1},\ldots ,\lambda _{n}}.[8] For example, consider the 2 x 2 normal matrixA=[1221]{\displaystyle \mathbf {A} ={\begin{bmatrix}1&2\\2&1\end{bmatrix}}}. The eigenvalues areλ1=3{\displaystyle \lambda _{1}=3}andλ2=−1{\displaystyle \lambda _{2}=-1}. The (normalized) eigenvectors corresponding to these eigenvalues areu1=12[11]{\displaystyle \mathbf {u} _{1}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\1\end{bmatrix}}}andu2=12[−11]{\displaystyle \mathbf {u} _{2}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}-1\\1\end{bmatrix}}}. The diagonalization isA=UΛU∗{\displaystyle \mathbf {A} =\mathbf {U} \mathbf {\Lambda } \mathbf {U} ^{*}}, whereU=[1/21/21/2−1/2]{\displaystyle \mathbf {U} ={\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}},Λ={\displaystyle \mathbf {\Lambda } =}[300−1]{\displaystyle {\begin{bmatrix}3&0\\0&-1\end{bmatrix}}}andU∗=U−1={\displaystyle \mathbf {U} ^{*}=\mathbf {U} ^{-1}=}[1/21/21/2−1/2]{\displaystyle {\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}}. 
The verification isUΛU∗={\displaystyle \mathbf {U} \mathbf {\Lambda } \mathbf {U} ^{*}=}[1/21/21/2−1/2]{\displaystyle {\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}}[300−1]{\displaystyle {\begin{bmatrix}3&0\\0&-1\end{bmatrix}}}[1/21/21/2−1/2]{\displaystyle {\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}}=[1221]=A{\displaystyle ={\begin{bmatrix}1&2\\2&1\end{bmatrix}}=\mathbf {A} }. This example illustrates the process of diagonalizing a normal matrixA{\displaystyle \mathbf {A} }by finding its eigenvalues and eigenvectors, forming the unitary matrixU{\displaystyle \mathbf {U} }, the diagonal matrixΛ{\displaystyle \mathbf {\Lambda } }, and verifying the decomposition. As a special case, for everyn×nrealsymmetric matrix, the eigenvalues are real and the eigenvectors can be chosen real andorthonormal. Thus a real symmetric matrixAcan be decomposed asA=QΛQT{\displaystyle \mathbf {A} =\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{\mathsf {T}}}, whereQis anorthogonal matrixwhose columns are the real, orthonormal eigenvectors ofA, andΛis a diagonal matrix whose entries are the eigenvalues ofA.[9] Diagonalizable matricescan be decomposed using eigendecomposition, provided they have a full set of linearly independent eigenvectors. They can be expressed asA=PDP−1{\displaystyle \mathbf {A} =\mathbf {P} \mathbf {D} \mathbf {P} ^{-1}}, whereP{\displaystyle \mathbf {P} }is a matrix whose columns are eigenvectors ofA{\displaystyle \mathbf {A} }andD{\displaystyle \mathbf {D} }is a diagonal matrix consisting of the corresponding eigenvalues ofA{\displaystyle \mathbf {A} }.[8] Positivedefinite matricesare matrices for which all eigenvalues are positive. They can be decomposed asA=LLT{\displaystyle \mathbf {A} =\mathbf {L} \mathbf {L} ^{\mathsf {T}}}using theCholesky decomposition, whereL{\displaystyle \mathbf {L} }is a lower triangular matrix.[10] Unitary matricessatisfyUU∗=I{\displaystyle \mathbf {U} \mathbf {U} ^{*}=\mathbf {I} }(real case) orUU†=I{\displaystyle \mathbf {U} \mathbf {U} ^{\dagger }=\mathbf {I} }(complex case), whereU∗{\displaystyle \mathbf {U} ^{*}}denotes theconjugate transposeandU†{\displaystyle \mathbf {U} ^{\dagger }}denotes the conjugate transpose. They diagonalize usingunitary transformations.[8] Hermitian matricessatisfyH=H†{\displaystyle \mathbf {H} =\mathbf {H} ^{\dagger }}, whereH†{\displaystyle \mathbf {H} ^{\dagger }}denotes the conjugate transpose. They can be diagonalized using unitary ororthogonal matrices.[8] Suppose that we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using thecharacteristic polynomial. However, this is often impossible for larger matrices, in which case we must use anumerical method. In practice, eigenvalues of large matrices are not computed using the characteristic polynomial. Computing the polynomial becomes expensive in itself, and exact (symbolic) roots of a high-degree polynomial can be difficult to compute and express: theAbel–Ruffini theoremimplies that the roots of high-degree (5 or above) polynomials cannot in general be expressed simply usingnth roots. Therefore, general algorithms to find eigenvectors and eigenvalues areiterative. Iterative numerical algorithms for approximating roots of polynomials exist, such asNewton's method, but in general it is impractical to compute the characteristic polynomial and then apply these methods. 
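The normal-matrix example above can be checked numerically; for real symmetric matrices NumPy's eigh returns orthonormal eigenvectors directly (a minimal sketch; note that eigh lists the eigenvalues in ascending order):

import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# eigh is intended for symmetric/Hermitian matrices and returns orthonormal
# eigenvectors as columns.
lam, U = np.linalg.eigh(A)
print(lam)                                        # [-1.  3.]

print(np.allclose(U @ np.diag(lam) @ U.T, A))     # A = U Lambda U^T
print(np.allclose(U.T @ U, np.eye(2)))            # U is orthogonal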
One reason is that smallround-off errorsin the coefficients of the characteristic polynomial can lead to large errors in the eigenvalues and eigenvectors: the roots are an extremelyill-conditionedfunction of the coefficients.[11] A simple and accurate iterative method is thepower method: arandomvectorvis chosen and a sequence ofunit vectorsis computed asAv‖Av‖,A2v‖A2v‖,A3v‖A3v‖,…{\displaystyle {\frac {\mathbf {A} \mathbf {v} }{\left\|\mathbf {A} \mathbf {v} \right\|}},{\frac {\mathbf {A} ^{2}\mathbf {v} }{\left\|\mathbf {A} ^{2}\mathbf {v} \right\|}},{\frac {\mathbf {A} ^{3}\mathbf {v} }{\left\|\mathbf {A} ^{3}\mathbf {v} \right\|}},\ldots } Thissequencewillalmost alwaysconverge to an eigenvector corresponding to the eigenvalue of greatest magnitude, provided thatvhas a nonzero component of this eigenvector in the eigenvector basis (and also provided that there is only one eigenvalue of greatest magnitude). This simple algorithm is useful in some practical applications; for example,Googleuses it to calculate thepage rankof documents in their search engine.[12]Also, the power method is the starting point for many more sophisticated algorithms. For instance, by keeping not just the last vector in the sequence, but instead looking at thespanofallthe vectors in the sequence, one can get a better (faster converging) approximation for the eigenvector, and this idea is the basis ofArnoldi iteration.[11]Alternatively, the importantQR algorithmis also based on a subtle transformation of a power method.[11] Once the eigenvalues are computed, the eigenvectors could be calculated by solving the equation(A−λiI)vi,j=0{\displaystyle \left(\mathbf {A} -\lambda _{i}\mathbf {I} \right)\mathbf {v} _{i,j}=\mathbf {0} }usingGaussian eliminationorany other methodfor solvingmatrix equations. However, in practical large-scale eigenvalue methods, the eigenvectors are usually computed in other ways, as a byproduct of the eigenvalue computation. Inpower iteration, for example, the eigenvector is actually computed before the eigenvalue (which is typically computed by theRayleigh quotientof the eigenvector).[11]In the QR algorithm for aHermitian matrix(or any normal matrix), the orthonormal eigenvectors are obtained as a product of theQmatrices from the steps in the algorithm.[11](For more general matrices, the QR algorithm yields theSchur decompositionfirst, from which the eigenvectors can be obtained by abacksubstitutionprocedure.[13]) For Hermitian matrices, theDivide-and-conquer eigenvalue algorithmis more efficient than the QR algorithm if both eigenvectors and eigenvalues are desired.[11] Recall that thegeometricmultiplicity of an eigenvalue can be described as the dimension of the associated eigenspace, thenullspaceofλI−A. The algebraic multiplicity can also be thought of as a dimension: it is the dimension of the associatedgeneralized eigenspace(1st sense), which is the nullspace of the matrix(λI−A)kforany sufficiently largek. That is, it is the space ofgeneralized eigenvectors(first sense), where a generalized eigenvector is any vector whicheventuallybecomes 0 ifλI−Ais applied to it enough times successively. Any eigenvector is a generalized eigenvector, and so each eigenspace is contained in the associated generalized eigenspace. This provides an easy proof that the geometric multiplicity is always less than or equal to the algebraic multiplicity. This usage should not be confused with thegeneralized eigenvalue problemdescribed below. 
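A minimal sketch of the power method described above (plain NumPy; the random starting vector and the fixed iteration count are illustrative choices):

import numpy as np

def power_method(A, iterations=200, seed=0):
    """Approximate the dominant eigenpair of A by repeated multiplication."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iterations):
        v = A @ v
        v /= np.linalg.norm(v)          # keep the iterate a unit vector
    # Rayleigh quotient gives the corresponding eigenvalue estimate.
    return (v @ A @ v) / (v @ v), v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_method(A)
print(lam)                              # close to 3.618, the largest eigenvalue of A
print(np.allclose(A @ v, lam * v))      # v is (approximately) an eigenvector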
Aconjugate eigenvectororconeigenvectoris a vector sent after transformation to a scalar multiple of its conjugate, where the scalar is called theconjugate eigenvalueorconeigenvalueof the linear transformation. The coneigenvectors and coneigenvalues represent essentially the same information and meaning as the regular eigenvectors and eigenvalues, but arise when an alternative coordinate system is used. The corresponding equation isAv=λv∗.{\displaystyle \mathbf {A} \mathbf {v} =\lambda \mathbf {v} ^{*}.}For example, in coherent electromagnetic scattering theory, the linear transformationArepresents the action performed by the scattering object, and the eigenvectors represent polarization states of the electromagnetic wave. Inoptics, the coordinate system is defined from the wave's viewpoint, known as theForward Scattering Alignment(FSA), and gives rise to a regular eigenvalue equation, whereas inradar, the coordinate system is defined from the radar's viewpoint, known as theBack Scattering Alignment(BSA), and gives rise to a coneigenvalue equation. Ageneralized eigenvalue problem(second sense) is the problem of finding a (nonzero) vectorvthat obeysAv=λBv{\displaystyle \mathbf {A} \mathbf {v} =\lambda \mathbf {B} \mathbf {v} }whereAandBare matrices. Ifvobeys this equation, with someλ, then we callvthegeneralized eigenvectorofAandB(in the second sense), andλis called thegeneralized eigenvalueofAandB(in the second sense) which corresponds to the generalized eigenvectorv. The possible values ofλmust obey the following equationdet(A−λB)=0.{\displaystyle \det(\mathbf {A} -\lambda \mathbf {B} )=0.} Ifnlinearly independent vectors{v1, …,vn}can be found, such that for everyi∈ {1, …,n},Avi=λiBvi, then we define the matricesPandDsuch thatP=[||v1⋯vn||]≡[(v1)1⋯(vn)1⋮⋮(v1)n⋯(vn)n]{\displaystyle P={\begin{bmatrix}|&&|\\\mathbf {v} _{1}&\cdots &\mathbf {v} _{n}\\|&&|\end{bmatrix}}\equiv {\begin{bmatrix}(\mathbf {v} _{1})_{1}&\cdots &(\mathbf {v} _{n})_{1}\\\vdots &&\vdots \\(\mathbf {v} _{1})_{n}&\cdots &(\mathbf {v} _{n})_{n}\end{bmatrix}}}(D)ij={λi,ifi=j0,otherwise{\displaystyle (D)_{ij}={\begin{cases}\lambda _{i},&{\text{if }}i=j\\0,&{\text{otherwise}}\end{cases}}}Then the following equality holdsA=BPDP−1{\displaystyle \mathbf {A} =\mathbf {B} \mathbf {P} \mathbf {D} \mathbf {P} ^{-1}}And the proof isAP=A[||v1⋯vn||]=[||Av1⋯Avn||]=[||λ1Bv1⋯λnBvn||]=[||Bv1⋯Bvn||]D=BPD{\displaystyle \mathbf {A} \mathbf {P} =\mathbf {A} {\begin{bmatrix}|&&|\\\mathbf {v} _{1}&\cdots &\mathbf {v} _{n}\\|&&|\end{bmatrix}}={\begin{bmatrix}|&&|\\A\mathbf {v} _{1}&\cdots &A\mathbf {v} _{n}\\|&&|\end{bmatrix}}={\begin{bmatrix}|&&|\\\lambda _{1}B\mathbf {v} _{1}&\cdots &\lambda _{n}B\mathbf {v} _{n}\\|&&|\end{bmatrix}}={\begin{bmatrix}|&&|\\B\mathbf {v} _{1}&\cdots &B\mathbf {v} _{n}\\|&&|\end{bmatrix}}\mathbf {D} =\mathbf {B} \mathbf {P} \mathbf {D} } And sincePis invertible, we multiply the equation from the right by its inverse, finishing the proof. The set of matrices of the formA−λB, whereλis a complex number, is called apencil; the termmatrix pencilcan also refer to the pair(A,B)of matrices.[14] IfBis invertible, then the original problem can be written in the formB−1Av=λv{\displaystyle \mathbf {B} ^{-1}\mathbf {A} \mathbf {v} =\lambda \mathbf {v} }which is a standard eigenvalue problem. However, in most situations it is preferable not to perform the inversion, but rather to solve the generalized eigenvalue problem as stated originally. 
This is especially important ifAandBareHermitian matrices, since in this caseB−1Ais not generally Hermitian and important properties of the solution are no longer apparent. IfAandBare both symmetric or Hermitian, andBis also apositive-definite matrix, the eigenvaluesλiare real and eigenvectorsv1andv2with distinct eigenvalues areB-orthogonal (v1*Bv2= 0).[15]In this case, eigenvectors can be chosen so that the matrixPdefined above satisfiesP∗BP=I{\displaystyle \mathbf {P} ^{*}\mathbf {B} \mathbf {P} =\mathbf {I} }orPP∗B=I,{\displaystyle \mathbf {P} \mathbf {P} ^{*}\mathbf {B} =\mathbf {I} ,}and there exists abasisof generalized eigenvectors (it is not adefectiveproblem).[14]This case is sometimes called aHermitian definite pencilordefinite pencil.[14]
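For the Hermitian definite pencil case, SciPy's eigh solves the generalized problem directly without forming B^{-1}A; a small sketch with an arbitrarily chosen symmetric A and positive-definite B (both matrices are illustrative assumptions):

import numpy as np
from scipy.linalg import eigh

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Solves A v = lambda B v for the definite pencil (A, B).
lam, P = eigh(A, B)

print(np.allclose(A @ P, B @ P @ np.diag(lam)))   # A P = B P D
print(np.allclose(P.T @ B @ P, np.eye(2)))        # eigenvectors are B-orthonormal: P* B P = I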
https://en.wikipedia.org/wiki/Eigenvalue_decomposition
The Hilbert–Huang transform (HHT) is a way to decompose a signal into so-called intrinsic mode functions (IMF) along with a trend, and to obtain instantaneous frequency data. It is designed to work well for data that are nonstationary and nonlinear. The Hilbert–Huang transform (HHT), a NASA-designated name,[1] was proposed by Norden E. Huang. It is the result of combining the empirical mode decomposition (EMD) with Hilbert spectral analysis (HSA). The HHT uses the EMD method to decompose a signal into intrinsic mode functions (IMF) with a trend, and applies the HSA method to the IMFs to obtain instantaneous frequency data. Since the signal is decomposed in the time domain and the length of the IMFs is the same as that of the original signal, HHT preserves the characteristics of the varying frequency. This is an important advantage of HHT, since a real-world signal usually has multiple causes occurring in different time intervals. The HHT provides a new method of analyzing nonstationary and nonlinear time series data. The fundamental part of the HHT is the empirical mode decomposition (EMD) method. Breaking down signals into various components, EMD can be compared with other analysis methods such as the Fourier transform and the wavelet transform. Using the EMD method, any complicated data set can be decomposed into a finite and often small number of components. These components form a complete and nearly orthogonal basis for the original signal, and they are called intrinsic mode functions (IMF).[2] Because the first IMF usually carries the most oscillatory (high-frequency) components, it can be rejected to remove high-frequency components (e.g., random noise).[3][4] EMD-based smoothing algorithms have been widely used in seismic data processing, where high-quality seismic records are in high demand.[5][6] Without leaving the time domain, EMD is adaptive and highly efficient.[7] Since the decomposition is based on the local characteristic time scale of the data, it can be applied to nonlinear and nonstationary processes.[7] An intrinsic mode function (IMF) is defined as a function that satisfies two requirements: the number of extrema and the number of zero crossings must either be equal or differ at most by one, and at every point the mean value of the envelope defined by the local maxima and the envelope defined by the local minima is zero. It represents a generally simple oscillatory mode as a counterpart to the simple harmonic function. In other words, an IMF is any function with the same number of extrema and zero crossings, whose envelopes are symmetric with respect to zero.[7] This definition guarantees a well-behaved Hilbert transform of the IMF. Hilbert spectral analysis (HSA) is a method for examining each IMF's instantaneous frequency as a function of time. The final result is a frequency–time distribution of signal amplitude (or energy), designated as the Hilbert spectrum, which permits the identification of localized features. The amplitude and frequency of an IMF can vary with time, but the IMF must satisfy the two requirements stated above. The empirical mode decomposition (EMD) method is a necessary step to reduce any given data set into a collection of intrinsic mode functions (IMF) to which Hilbert spectral analysis can be applied. An IMF represents a simple oscillatory mode as a counterpart to the simple harmonic function, but it is much more general: instead of the constant amplitude and frequency of a simple harmonic component, an IMF can have variable amplitude and frequency along the time axis. The procedure for extracting an IMF is called sifting. The sifting process starts by identifying all local extrema of the signal, connecting the local maxima by a cubic spline to form the upper envelope, and connecting the local minima in the same way to form the lower envelope. The upper and lower envelopes should cover all the data between them. Their mean is m1. 
The difference between the data and m1 is the first component h1, that is, h1 = X(t) − m1. Ideally, h1 should satisfy the definition of an IMF, since the construction of h1 described above should have made it symmetric, with all maxima positive and all minima negative. After the first round of sifting, a crest may become a local maximum. New extrema generated in this way actually reveal the proper modes lost in the initial examination. In the subsequent sifting process, h1 can only be treated as a proto-IMF. In the next step, h1 is treated as the data, and its own envelope mean m11 is subtracted to give h11 = h1 − m11. After repeated sifting up to k times, h1k = h1(k−1) − m1k becomes an IMF. Then h1k is designated as the first IMF component of the data, c1 = h1k. The stoppage criterion determines the number of sifting steps needed to produce an IMF. Four stoppage criteria are in use. The first, proposed by Huang et al. (1998), is similar to the Cauchy convergence test: a sum of the difference, SD, between two consecutive sifting results is computed (normalized by the previous result and summed over the whole time span), and sifting stops when SD falls below a preset threshold. The second is based on the so-called S-number, defined as the number of consecutive siftings for which the number of zero-crossings and the number of extrema are equal or differ at most by one. Specifically, an S-number is pre-selected, and the sifting process stops only if, for S consecutive siftings, the numbers of zero-crossings and extrema stay the same and are equal or differ at most by one. The third, a threshold method proposed by Rilling, Flandrin and Gonçalvés, sets two threshold values to guarantee globally small fluctuations while taking into account locally large excursions.[8] The fourth, an energy-difference tracking method proposed by Cheng, Yu and Yang, uses the assumption that the original signal is a composition of orthogonal signals and calculates the energy on that basis; if the result of the EMD is not an orthogonal basis of the original signal, the amount of energy will differ from the original energy.[9] Once a stoppage criterion is selected, the first IMF, c1, can be obtained. Overall, c1 should contain the finest-scale or shortest-period component of the signal. We can then separate c1 from the rest of the data by X(t)−c1=r1.{\displaystyle X(t)-c_{1}=r_{1}.\,} Since the residue, r1, still contains longer-period variations in the data, it is treated as the new data and subjected to the same sifting process as described above. This procedure is repeated for all the subsequent residues, giving r2 = r1 − c2, ..., rn = rn−1 − cn. The sifting process finally stops when the residue, rn, becomes a monotonic function from which no more IMFs can be extracted. From the above equations it follows that X(t) equals the sum of the IMF components c1, ..., cn plus the final residue rn. Thus, a decomposition of the data into n empirical modes is achieved. The components of the EMD are usually physically meaningful, for the characteristic scales are defined by the physical data. Flandrin et al. (2003) and Wu and Huang (2004) have shown that the EMD is equivalent to a dyadic filter bank.[6][10] Having obtained the intrinsic mode function components, the instantaneous frequency can be computed using the Hilbert transform. After performing the Hilbert transform on each IMF component, the original data can be expressed as the real part of a sum of the IMF amplitudes modulated by complex exponentials of their instantaneous phases. In the above, all signals are one-dimensional; for two-dimensional signals, the Hilbert–Huang transform can also be applied to image and video processing. Chen and Feng [2003] proposed a technique to improve the HHT procedure.[28] The authors noted that the EMD is limited in distinguishing different components in narrow-band signals. 
The narrow band may contain either (a) components that have adjacent frequencies or (b) components that are not adjacent in frequency but for which one of the components has a much higher energy intensity than the other components. The improved technique is based on beating-phenomenon waves. Datig and Schlurmann [2004][29] conducted a comprehensive study on the performance and limitations of HHT with particular application to irregular water waves. The authors investigated the spline interpolation extensively and discussed using additional points, both forward and backward, to determine better envelopes. They also performed a parametric study on the proposed improvement and showed significant improvement in the overall EMD computations. The authors noted that HHT is capable of differentiating between time-variant components of any given data. Their study also showed that HHT was able to distinguish between riding and carrier waves. Huang and Wu [2008][30] reviewed applications of the Hilbert–Huang transform, emphasizing that the theoretical basis of the HHT is purely empirical and noting that "one of the main drawbacks of EMD is mode mixing". They also outline outstanding open problems with HHT, which include end effects of the EMD, spline problems, and best IMF selection and uniqueness, although the ensemble EMD (EEMD) may help mitigate the latter. The end effect occurs at the beginning and end of the signal because there is no point before the first data point or after the last data point to be considered together. However, in most cases, these endpoints are not extreme values of the signal. Therefore, when doing the EMD process of the HHT, the extreme-value envelopes will diverge at the endpoints and cause significant errors. This error distorts the IMF waveform at its endpoints. Furthermore, the error in the decomposition result accumulates through each repetition of the sifting process.[31] When computing the instantaneous frequency and amplitude of IMFs, the Fast Fourier Transform (FFT) may introduce the Gibbs phenomenon and frequency leakage, leading to information loss. Several methods have been proposed to solve the end effect in HHT. One class of methods leverages the inherent variation trend of the signal to extend it, resulting in extensions that closely resemble the characteristics of the original data. Another class designs and computes the needed parameters from the original signal to build a particular mathematical model; the model then predicts the trend at the two endpoints. The mode mixing problem occurs during the EMD process. A straightforward implementation of the sifting procedure produces mode mixing due to IMF mode rectification. Specific signals may not be separated into the same IMFs every time. This problem makes it hard to implement feature extraction, model training, and pattern recognition, since the feature is no longer fixed to one labeling index. The mode mixing problem can be avoided by including an intermittence test during the HHT process.[32] The masking method[33] improves EMD by allowing the separation of components with similar frequencies: a known masking signal is injected into the data before sifting so that the components can be separated, and the optimal choice of the masking signal's amplitude depends on the frequencies of the components involved. Overall, the masking method enhances EMD by providing a means to prevent mode mixing, improving the accuracy and applicability of EMD in signal analysis. EEMD[34] adds finite-amplitude white noise to the original signal and then decomposes the noisy signal into IMFs using EMD. 
The processing steps of EEMD are as follows: a white noise series is added to the original data; the noise-added data are decomposed into IMFs using EMD; these two steps are repeated with a different white noise realization each time; and the ensemble means of the corresponding IMFs across all trials are taken as the final result. The effect of the decomposition using EEMD is that the added white noise series cancel each other out (or fill the whole scale space uniformly). The noise also enables the EMD method to act as a truly dyadic filter bank for any data, which means that signal components of a similar scale in a noisy data set are likely to be collected in a single IMF component, significantly reducing the chance of mode mixing. This approach preserves the physical uniqueness of the decomposition and represents a major improvement over the EMD method.
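The following sketch illustrates a single EMD sifting pass and the Hilbert-transform step on a toy two-tone signal; the fixed number of sifting passes stands in for a proper stoppage criterion, and the function names and signal parameters are illustrative assumptions rather than a reference implementation:

import numpy as np
from scipy.signal import argrelextrema, hilbert
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One sifting pass: subtract the mean of the cubic-spline envelopes."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    upper = CubicSpline(t[maxima], x[maxima])(t)   # upper envelope through the maxima
    lower = CubicSpline(t[minima], x[minima])(t)   # lower envelope through the minima
    return x - 0.5 * (upper + lower)               # proto-IMF: data minus envelope mean

# Toy signal: a slow 5 Hz oscillation plus a faster 40 Hz one.
t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

h = x.copy()
for _ in range(10):          # fixed number of sifting passes (stand-in for a stoppage criterion)
    h = sift_once(h, t)

# Instantaneous frequency of the extracted proto-IMF via the Hilbert transform.
phase = np.unwrap(np.angle(hilbert(h)))
inst_freq = np.gradient(phase, t) / (2.0 * np.pi)
print(np.median(inst_freq))  # close to 40 Hz, the fastest component (end effects aside)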
https://en.wikipedia.org/wiki/Empirical_mode_decomposition
In the physics ofhydrodynamics, aglobal modeof a system is one in which the system executes coherentoscillationsin time. Suppose a quantityy(x,t){\displaystyle y(x,t)}which depends on spacex{\displaystyle x}and timet{\displaystyle t}is governed by somepartial differential equationwhich does not have an explicit dependence ont{\displaystyle t}. Then a global mode is a solution of this PDE of the formy(x,t)=y^(x)eiωt{\displaystyle y(x,t)={\hat {y}}(x)e^{i\omega t}}, for somefrequencyω{\displaystyle \omega }. Ifω{\displaystyle \omega }is complex, then the imaginary part corresponds to the mode exhibitingexponential growthorexponential decay. The concept of a global mode can be compared to that of anormal mode; the PDE may be thought of as adynamical systemof infinitely many equations coupled together. Global modes are used in thestability analysisofhydrodynamical systems.Philip Drazinintroduced the concept of a global mode in his 1974 paper, and gave a technique for finding the normal modes of a linear PDE problem in which the coefficients or geometry vary slowly inx{\displaystyle x}. This technique is based on theWKBJ approximation, which is a special case ofmultiple-scale analysis.[1]His method extends theBriggs–Bers technique, which gives a stability analysis for linear PDEs with constant coefficients.[2] Since Drazin's 1974 paper, other authors have studied more realistic problems in fluid dynamics using a global mode analysis. Such problems are often highlynonlinear, and attempts to analyse them have often relied on laboratory or numerical experiment.[2]Examples of global modes in practice include the oscillatorywakesproduced when fluid flows past an object, such as avortex street.
https://en.wikipedia.org/wiki/Global_mode
Anormal modeof adynamical systemis a pattern of motion in which all parts of the system movesinusoidallywith the same frequency and with a fixed phase relation. The free motion described by the normal modes takes place at fixed frequencies. These fixed frequencies of the normal modes of a system are known as itsnatural frequenciesorresonant frequencies. A physical object, such as a building, bridge, or molecule, has a set of normal modes and their natural frequencies that depend on its structure, materials and boundary conditions. The most general motion of a linear system is asuperpositionof its normal modes. The modes are normal in the sense that they can move independently, that is to say that an excitation of one mode will never cause motion of a different mode. In mathematical terms, normal modes areorthogonalto each other. In thewave theoryof physics and engineering, amodein adynamical systemis astanding wavestate of excitation, in which all the components of the system will be affected sinusoidally at a fixed frequency associated with that mode. Because no real system can perfectly fit under the standing wave framework, themodeconcept is taken as a general characterization of specific states of oscillation, thus treating the dynamic system in alinearfashion, in which linearsuperpositionof states can be performed. Typical examples include: The concept of normal modes also finds application in other dynamical systems, such asoptics,quantum mechanics,atmospheric dynamicsandmolecular dynamics. Most dynamical systems can be excited in several modes, possibly simultaneously. Each mode is characterized by one or several frequencies,[dubious–discuss]according to the modal variable field. For example, a vibrating rope in 2D space is defined by a single-frequency (1D axial displacement), but a vibrating rope in 3D space is defined by two frequencies (2D axial displacement). For a given amplitude on the modal variable, each mode will store a specific amount of energy because of the sinusoidal excitation. Thenormalordominantmode of a system with multiple modes will be the mode storing the minimum amount of energy for a given amplitude of the modal variable, or, equivalently, for a given stored amount of energy, the dominant mode will be the mode imposing the maximum amplitude of the modal variable. A mode of vibration is characterized by a modal frequency and a mode shape. It is numbered according to the number of half waves in the vibration. For example, if a vibrating beam with both ends pinned displayed a mode shape of half of a sine wave (one peak on the vibrating beam) it would be vibrating in mode 1. If it had a full sine wave (one peak and one trough) it would be vibrating in mode 2. In a system with two or more dimensions, such as the pictured disk, each dimension is given a mode number. Usingpolar coordinates, we have a radial coordinate and an angular coordinate. If one measured from the center outward along the radial coordinate one would encounter a full wave, so the mode number in the radial direction is 2. The other direction is trickier, because only half of the disk is considered due to the anti-symmetric (also calledskew-symmetry) nature of a disk's vibration in the angular direction. Thus, measuring 180° along the angular direction you would encounter a half wave, so the mode number in the angular direction is 1. 
So the mode number of the system is 2–1 or 1–2, depending on which coordinate is considered the "first" and which is considered the "second" coordinate (so it is important to always indicate which mode number matches with each coordinate direction). In linear systems each mode is entirely independent of all other modes. In general all modes have different frequencies (with lower modes having lower frequencies) and different mode shapes. In a one-dimensional system at a given mode the vibration will have nodes, or places where the displacement is always zero. These nodes correspond to points in the mode shape where the mode shape is zero. Since the vibration of a system is given by the mode shape multiplied by a time function, the displacement of the node points remain zero at all times. When expanded to a two dimensional system, these nodes become lines where the displacement is always zero. If you watch the animation above you will see two circles (one about halfway between the edge and center, and the other on the edge itself) and a straight line bisecting the disk, where the displacement is close to zero. In an idealized system these lines equal zero exactly, as shown to the right. In the analysis ofconservative systemswith small displacements from equilibrium, important inacoustics,molecular spectra, andelectrical circuits, the system can be transformed to new coordinates callednormal coordinates.Each normal coordinate corresponds to a single vibrational frequency of the system and the corresponding motion of the system is called the normal mode of vibration.[1]: 332 Consider two equal bodies (not affected by gravity), each ofmassm, attached to three springs, each withspring constantk. They are attached in the following manner, forming a system that is physically symmetric: where the edge points are fixed and cannot move. Letx1(t)denote the horizontaldisplacementof the left mass, andx2(t)denote the displacement of the right mass. Denoting acceleration (the secondderivativeofx(t)with respect to time) asx¨{\textstyle {\ddot {x}}},theequations of motionare: mx¨1=−kx1+k(x2−x1)=−2kx1+kx2mx¨2=−kx2+k(x1−x2)=−2kx2+kx1{\displaystyle {\begin{aligned}m{\ddot {x}}_{1}&=-kx_{1}+k(x_{2}-x_{1})=-2kx_{1}+kx_{2}\\m{\ddot {x}}_{2}&=-kx_{2}+k(x_{1}-x_{2})=-2kx_{2}+kx_{1}\end{aligned}}} Since we expect oscillatory motion of a normal mode (whereωis the same for both masses), we try: x1(t)=A1eiωtx2(t)=A2eiωt{\displaystyle {\begin{aligned}x_{1}(t)&=A_{1}e^{i\omega t}\\x_{2}(t)&=A_{2}e^{i\omega t}\end{aligned}}} Substituting these into the equations of motion gives us: −ω2mA1eiωt=−2kA1eiωt+kA2eiωt−ω2mA2eiωt=kA1eiωt−2kA2eiωt{\displaystyle {\begin{aligned}-\omega ^{2}mA_{1}e^{i\omega t}&=-2kA_{1}e^{i\omega t}+kA_{2}e^{i\omega t}\\-\omega ^{2}mA_{2}e^{i\omega t}&=kA_{1}e^{i\omega t}-2kA_{2}e^{i\omega t}\end{aligned}}} Omitting the exponential factor (because it is common to all terms) and simplifying yields: (ω2m−2k)A1+kA2=0kA1+(ω2m−2k)A2=0{\displaystyle {\begin{aligned}(\omega ^{2}m-2k)A_{1}+kA_{2}&=0\\kA_{1}+(\omega ^{2}m-2k)A_{2}&=0\end{aligned}}} And inmatrixrepresentation: [ω2m−2kkkω2m−2k](A1A2)=0{\displaystyle {\begin{bmatrix}\omega ^{2}m-2k&k\\k&\omega ^{2}m-2k\end{bmatrix}}{\begin{pmatrix}A_{1}\\A_{2}\end{pmatrix}}=0} If the matrix on the left is invertible, the unique solution is the trivial solution(A1,A2) = (x1,x2) = (0, 0). The non trivial solutions are to be found for those values ofωwhereby the matrix on the left issingular; i.e. is not invertible. 
It follows that thedeterminantof the matrix must be equal to 0, so: (ω2m−2k)2−k2=0{\displaystyle (\omega ^{2}m-2k)^{2}-k^{2}=0} Solving forω, the two positive solutions are: ω1=kmω2=3km{\displaystyle {\begin{aligned}\omega _{1}&={\sqrt {\frac {k}{m}}}\\\omega _{2}&={\sqrt {\frac {3k}{m}}}\end{aligned}}} Substitutingω1into the matrix and solving for(A1,A2), yields(1, 1). Substitutingω2results in(1, −1). (These vectors areeigenvectors, and the frequencies areeigenvalues.) The first normal mode is:η→1=(x11(t)x21(t))=c1(11)cos⁡(ω1t+φ1){\displaystyle {\vec {\eta }}_{1}={\begin{pmatrix}x_{1}^{1}(t)\\x_{2}^{1}(t)\end{pmatrix}}=c_{1}{\begin{pmatrix}1\\1\end{pmatrix}}\cos {(\omega _{1}t+\varphi _{1})}} Which corresponds to both masses moving in the same direction at the same time. This mode is called antisymmetric. The second normal mode is: η→2=(x12(t)x22(t))=c2(1−1)cos⁡(ω2t+φ2){\displaystyle {\vec {\eta }}_{2}={\begin{pmatrix}x_{1}^{2}(t)\\x_{2}^{2}(t)\end{pmatrix}}=c_{2}{\begin{pmatrix}1\\-1\end{pmatrix}}\cos {(\omega _{2}t+\varphi _{2})}} This corresponds to the masses moving in the opposite directions, while the center of mass remains stationary. This mode is called symmetric. The general solution is asuperpositionof thenormal modeswherec1,c2,φ1, andφ2are determined by theinitial conditionsof the problem. The process demonstrated here can be generalized and formulated using the formalism ofLagrangian mechanicsorHamiltonian mechanics. Astanding waveis a continuous form of normal mode. In a standing wave, all the space elements (i.e.(x,y,z)coordinates) are oscillating in the samefrequencyand inphase(reaching theequilibriumpoint together), but each has a different amplitude. The general form of a standing wave is: Ψ(t)=f(x,y,z)(Acos⁡(ωt)+Bsin⁡(ωt)){\displaystyle \Psi (t)=f(x,y,z)(A\cos(\omega t)+B\sin(\omega t))} wheref(x,y,z)represents the dependence of amplitude on location and the cosine/sine are the oscillations in time. Physically, standing waves are formed by theinterference(superposition) of waves and their reflections (although one may also say the opposite; that a moving wave is asuperpositionof standing waves). The geometric shape of the medium determines what would be the interference pattern, thus determines thef(x,y,z)form of the standing wave. This space-dependence is called anormal mode. Usually, for problems with continuous dependence on(x,y,z)there is no single or finite number of normal modes, but there are infinitely many normal modes. If the problem is bounded (i.e. it is defined on a finite section of space) there arecountably manynormal modes (usually numberedn= 1, 2, 3, ...). If the problem is not bounded, there is a continuous spectrum of normal modes. In any solid at any temperature, the primary particles (e.g. atoms or molecules) are not stationary, but rather vibrate about mean positions. In insulators the capacity of the solid to store thermal energy is due almost entirely to these vibrations. Many physical properties of the solid (e.g. modulus of elasticity) can be predicted given knowledge of the frequencies with which the particles vibrate. The simplest assumption (by Einstein) is that all the particles oscillate about their mean positions with the same natural frequencyν. This is equivalent to the assumption that all atoms vibrate independently with a frequencyν. Einstein also assumed that the allowed energy states of these oscillations are harmonics, or integral multiples ofhν. 
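Returning to the two-mass, three-spring example above, the same frequencies and mode shapes can be obtained numerically from the eigendecomposition of the stiffness matrix (a sketch with m and k set to 1 for illustration):

import numpy as np

m, k = 1.0, 1.0

# Equations of motion in matrix form: m x'' = -K x, with the stiffness matrix K.
K = np.array([[2 * k, -k],
              [-k, 2 * k]])

# Normal modes: K a = m omega^2 a, so omega^2 are the eigenvalues of K / m.
omega_sq, modes = np.linalg.eigh(K / m)
print(np.sqrt(omega_sq))    # [1.0, 1.732...] = sqrt(k/m), sqrt(3k/m)
print(modes)                # columns proportional to (1, 1) and (1, -1), up to sign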
The spectrum of waveforms can be described mathematically using a Fourier series of sinusoidal density fluctuations (or thermalphonons). Debye subsequently recognized that each oscillator is intimately coupled to its neighboring oscillators at all times. Thus, by replacing Einstein's identical uncoupled oscillators with the same number of coupled oscillators, Debye correlated the elastic vibrations of a one-dimensional solid with the number of mathematically special modes of vibration of a stretched string (see figure). The pure tone of lowest pitch or frequency is referred to as the fundamental and the multiples of that frequency are called its harmonic overtones. He assigned to one of the oscillators the frequency of the fundamental vibration of the whole block of solid. He assigned to the remaining oscillators the frequencies of the harmonics of that fundamental, with the highest of all these frequencies being limited by the motion of the smallest primary unit. The normal modes of vibration of a crystal are in general superpositions of many overtones, each with an appropriate amplitude and phase. Longer wavelength (low frequency)phononsare exactly those acoustical vibrations which are considered in the theory of sound. Both longitudinal and transverse waves can be propagated through a solid, while, in general, only longitudinal waves are supported by fluids. In thelongitudinal mode, the displacement of particles from their positions of equilibrium coincides with the propagation direction of the wave. Mechanical longitudinal waves have been also referred to ascompression waves. Fortransverse modes, individual particles move perpendicular to the propagation of the wave. According to quantum theory, the mean energy of a normal vibrational mode of a crystalline solid with characteristic frequencyνis: E(ν)=12hν+hνehν/kT−1{\displaystyle E(\nu )={\frac {1}{2}}h\nu +{\frac {h\nu }{e^{h\nu /kT}-1}}} The term(1/2)hνrepresents the "zero-point energy", or the energy which an oscillator will have at absolute zero.E(ν)tends to the classic valuekTat high temperatures E(ν)=kT[1+112(hνkT)2+O(hνkT)4+⋯]{\displaystyle E(\nu )=kT\left[1+{\frac {1}{12}}\left({\frac {h\nu }{kT}}\right)^{2}+O\left({\frac {h\nu }{kT}}\right)^{4}+\cdots \right]} By knowing the thermodynamic formula, (∂S∂E)N,V=1T{\displaystyle \left({\frac {\partial S}{\partial E}}\right)_{N,V}={\frac {1}{T}}} the entropy per normal mode is: S(ν)=∫0TddTE(ν)dTT=E(ν)T−klog⁡(1−e−hνkT){\displaystyle {\begin{aligned}S\left(\nu \right)&=\int _{0}^{T}{\frac {d}{dT}}E\left(\nu \right){\frac {dT}{T}}\\[10pt]&={\frac {E\left(\nu \right)}{T}}-k\log \left(1-e^{-{\frac {h\nu }{kT}}}\right)\end{aligned}}} The free energy is: F(ν)=E−TS=kTlog⁡(1−e−hνkT){\displaystyle F(\nu )=E-TS=kT\log \left(1-e^{-{\frac {h\nu }{kT}}}\right)} which, forkT≫hν, tends to: F(ν)=kTlog⁡(hνkT){\displaystyle F(\nu )=kT\log \left({\frac {h\nu }{kT}}\right)} In order to calculate the internal energy and the specific heat, we must know the number of normal vibrational modes a frequency between the valuesνandν+dν. Allow this number to bef(ν)dν. Since the total number of normal modes is3N, the functionf(ν)is given by: ∫f(ν)dν=3N{\displaystyle \int f(\nu )\,d\nu =3N} The integration is performed over all frequencies of the crystal. Then the internal energyUwill be given by: U=∫f(ν)E(ν)dν{\displaystyle U=\int f(\nu )E(\nu )\,d\nu } Bound states inquantum mechanicsare analogous to modes. The waves in quantum systems are oscillations in probability amplitude rather than material displacement. 
The frequency of oscillation, f, relates to the mode energy by E = hf, where h is the Planck constant. Thus a system like an atom consists of a linear combination of modes of definite energy. These energies are characteristic of the particular atom. The squared magnitude of the (complex) probability amplitude at a point in space gives the probability of measuring an electron at that location. The spatial distribution of this probability is characteristic of the atom.[2]: I49–S5 Normal modes are generated in the Earth from long-wavelength seismic waves from large earthquakes interfering to form standing waves. For an elastic, isotropic, homogeneous sphere, spheroidal, toroidal and radial (or breathing) modes arise. Spheroidal modes only involve P and SV waves (like Rayleigh waves) and depend on the overtone number n and angular order l but have degeneracy in the azimuthal order m. Increasing l concentrates the fundamental branch closer to the surface, and at large l it tends to Rayleigh waves. Toroidal modes only involve SH waves (like Love waves) and do not exist in the fluid outer core. Radial modes are just a subset of spheroidal modes with l = 0. The degeneracy does not exist on Earth, as it is broken by rotation, ellipticity and 3D heterogeneous velocity and density structure. It may be assumed either that each mode can be isolated, the self-coupling approximation, or that many modes close in frequency resonate, the cross-coupling approximation. Self-coupling changes solely the phase velocity and not the number of waves around a great circle, resulting in a stretching or shrinking of the standing wave pattern. Modal cross-coupling occurs due to the rotation of the Earth, from an aspherical elastic structure, or due to Earth's ellipticity, and leads to a mixing of fundamental spheroidal and toroidal modes.
https://en.wikipedia.org/wiki/Normal_mode
The proper orthogonal decomposition (POD) is a numerical method that enables a reduction in the complexity of computationally intensive simulations such as computational fluid dynamics and structural analysis (like crash simulations). Typically in fluid dynamics and turbulence analysis, it is used to replace the Navier–Stokes equations by simpler models that are cheaper to solve.[1] It belongs to a class of algorithms called model order reduction (or, in short, model reduction). What it essentially does is train a model based on simulation data. To this extent, it can be associated with the field of machine learning. The main use of POD is to decompose a physical field (like pressure or temperature in fluid dynamics, or stress and deformation in structural analysis) according to the different variables that influence its physical behavior. As its name hints, it performs an orthogonal decomposition along the principal components of the field. It is therefore closely related to principal component analysis from Pearson in the field of statistics, and to the singular value decomposition in linear algebra, because it relies on the eigenvalues and eigenvectors of a physical field. In those domains, it is associated with the research of Karhunen[2] and Loève,[3] and their Karhunen–Loève theorem. The first idea behind the proper orthogonal decomposition, as it was originally formulated in the domain of fluid dynamics to analyze turbulence, is to decompose a random vector field u(x, t) into a set of deterministic spatial functions Φk(x) modulated by random time coefficients ak(t), so that u(x, t) is approximated by the sum over the modes k of the products ak(t)Φk(x). The first step is to sample the vector field over a period of time in what are called snapshots. This snapshot method[4] averages the samples over the space dimension n and correlates them with each other along the p time samples. The next step is to compute the covariance matrix C of the snapshots. The eigenvalues and eigenvectors of C are then computed and ordered from the largest eigenvalue to the smallest, giving n eigenvalues λ1,...,λn and a set of n eigenvectors arranged as columns in an n × n matrix Φ.
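A minimal sketch of POD on a synthetic snapshot matrix; here the modes are obtained through the SVD of the snapshot matrix, which is equivalent to the covariance-matrix eigenproblem described above (the synthetic field and its dimensions are illustrative assumptions):

import numpy as np

# Synthetic snapshot matrix: n spatial points, p snapshots in time
# (in practice the snapshots come from a simulation or an experiment).
n, p = 200, 50
x = np.linspace(0.0, 1.0, n)[:, None]
t = np.linspace(0.0, 1.0, p)[None, :]
snapshots = (np.sin(np.pi * x) * np.cos(2 * np.pi * t)
             + 0.1 * np.sin(3 * np.pi * x) * np.sin(6 * np.pi * t))

# POD via the SVD: columns of Phi are the spatial modes Phi_k(x),
# and the rows of S[:, None] * Vt are the time coefficients a_k(t).
Phi, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = S**2 / np.sum(S**2)
print(energy[:3])        # the first two modes capture essentially all the "energy"

# Low-order model: reconstruct the field from the two leading modes only.
reduced = Phi[:, :2] @ np.diag(S[:2]) @ Vt[:2, :]
print(np.linalg.norm(snapshots - reduced) / np.linalg.norm(snapshots))   # ~ 0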
https://en.wikipedia.org/wiki/Proper_orthogonal_decomposition
Inlinear algebra, thesingular value decomposition(SVD) is afactorizationof arealorcomplexmatrixinto a rotation, followed by a rescaling followed by another rotation. It generalizes theeigendecompositionof a squarenormal matrixwith an orthonormal eigenbasis to any⁠m×n{\displaystyle m\times n}⁠matrix. It is related to thepolar decomposition. Specifically, the singular value decomposition of anm×n{\displaystyle m\times n}complex matrix⁠M{\displaystyle \mathbf {M} }⁠is a factorization of the formM=UΣV∗,{\displaystyle \mathbf {M} =\mathbf {U\Sigma V^{*}} ,}where⁠U{\displaystyle \mathbf {U} }⁠is an⁠m×m{\displaystyle m\times m}⁠complexunitary matrix,Σ{\displaystyle \mathbf {\Sigma } }is anm×n{\displaystyle m\times n}rectangular diagonal matrixwith non-negative real numbers on the diagonal,⁠V{\displaystyle \mathbf {V} }⁠is ann×n{\displaystyle n\times n}complex unitary matrix, andV∗{\displaystyle \mathbf {V} ^{*}}is theconjugate transposeof⁠V{\displaystyle \mathbf {V} }⁠. Such decomposition always exists for any complex matrix. If⁠M{\displaystyle \mathbf {M} }⁠is real, then⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠can be guaranteed to be realorthogonalmatrices; in such contexts, the SVD is often denotedUΣVT.{\displaystyle \mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{\mathrm {T} }.} The diagonal entriesσi=Σii{\displaystyle \sigma _{i}=\Sigma _{ii}}ofΣ{\displaystyle \mathbf {\Sigma } }are uniquely determined by⁠M{\displaystyle \mathbf {M} }⁠and are known as thesingular valuesof⁠M{\displaystyle \mathbf {M} }⁠. The number of non-zero singular values is equal to therankof⁠M{\displaystyle \mathbf {M} }⁠. The columns of⁠U{\displaystyle \mathbf {U} }⁠and the columns of⁠V{\displaystyle \mathbf {V} }⁠are called left-singular vectors and right-singular vectors of⁠M{\displaystyle \mathbf {M} }⁠, respectively. They form two sets oforthonormal bases⁠u1,…,um{\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{m}}⁠and⁠v1,…,vn,{\displaystyle \mathbf {v} _{1},\ldots ,\mathbf {v} _{n},}⁠and if they are sorted so that the singular valuesσi{\displaystyle \sigma _{i}}with value zero are all in the highest-numbered columns (or rows), the singular value decomposition can be written as M=∑i=1rσiuivi∗,{\displaystyle \mathbf {M} =\sum _{i=1}^{r}\sigma _{i}\mathbf {u} _{i}\mathbf {v} _{i}^{*},} wherer≤min{m,n}{\displaystyle r\leq \min\{m,n\}}is the rank of⁠M.{\displaystyle \mathbf {M} .}⁠ The SVD is not unique. However, it is always possible to choose the decomposition such that the singular valuesΣii{\displaystyle \Sigma _{ii}}are in descending order. In this case,Σ{\displaystyle \mathbf {\Sigma } }(but not⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠) is uniquely determined by⁠M.{\displaystyle \mathbf {M} .}⁠ The term sometimes refers to thecompact SVD, a similar decomposition⁠M=UΣV∗{\displaystyle \mathbf {M} =\mathbf {U\Sigma V} ^{*}}⁠in which⁠Σ{\displaystyle \mathbf {\Sigma } }⁠is square diagonal of size⁠r×r,{\displaystyle r\times r,}⁠where⁠r≤min{m,n}{\displaystyle r\leq \min\{m,n\}}⁠is the rank of⁠M,{\displaystyle \mathbf {M} ,}⁠and has only the non-zero singular values. 
In this variant,⁠U{\displaystyle \mathbf {U} }⁠is an⁠m×r{\displaystyle m\times r}⁠semi-unitary matrixandV{\displaystyle \mathbf {V} }is an⁠n×r{\displaystyle n\times r}⁠semi-unitary matrix, such thatU∗U=V∗V=Ir.{\displaystyle \mathbf {U} ^{*}\mathbf {U} =\mathbf {V} ^{*}\mathbf {V} =\mathbf {I} _{r}.} Mathematical applications of the SVD include computing thepseudoinverse, matrix approximation, and determining the rank,range, andnull spaceof a matrix. The SVD is also extremely useful in many areas of science,engineering, andstatistics, such assignal processing,least squaresfitting of data, andprocess control. In the special case when⁠M{\displaystyle \mathbf {M} }⁠is an⁠m×m{\displaystyle m\times m}⁠realsquare matrix, the matrices⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠can be chosen to be real⁠m×m{\displaystyle m\times m}⁠matrices too. In that case, "unitary" is the same as "orthogonal". Then, interpreting both unitary matrices as well as the diagonal matrix, summarized here as⁠A,{\displaystyle \mathbf {A} ,}⁠as alinear transformation⁠x↦Ax{\displaystyle \mathbf {x} \mapsto \mathbf {Ax} }⁠of the space⁠Rm,{\displaystyle \mathbf {R} _{m},}⁠the matrices⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠representrotationsorreflectionof the space, while⁠Σ{\displaystyle \mathbf {\Sigma } }⁠represents thescalingof each coordinate⁠xi{\displaystyle \mathbf {x} _{i}}⁠by the factor⁠σi.{\displaystyle \sigma _{i}.}⁠Thus the SVD decomposition breaks down any linear transformation of⁠Rm{\displaystyle \mathbf {R} ^{m}}⁠into acompositionof three geometricaltransformations: a rotation or reflection(⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠),followed by a coordinate-by-coordinatescaling(⁠Σ{\displaystyle \mathbf {\Sigma } }⁠),followed by another rotation or reflection(⁠U{\displaystyle \mathbf {U} }⁠). In particular, if⁠M{\displaystyle \mathbf {M} }⁠has a positive determinant, then⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠can be chosen to be both rotations with reflections, or both rotations without reflections.[citation needed]If the determinant is negative, exactly one of them will have a reflection. If the determinant is zero, each can be independently chosen to be of either type. If the matrix⁠M{\displaystyle \mathbf {M} }⁠is real but not square, namely⁠m×n{\displaystyle m\times n}⁠with⁠m≠n,{\displaystyle m\neq n,}⁠it can be interpreted as a linear transformation from⁠Rn{\displaystyle \mathbf {R} ^{n}}⁠to⁠Rm.{\displaystyle \mathbf {R} ^{m}.}⁠Then⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠can be chosen to be rotations/reflections of⁠Rm{\displaystyle \mathbf {R} ^{m}}⁠and⁠Rn,{\displaystyle \mathbf {R} ^{n},}⁠respectively; and⁠Σ,{\displaystyle \mathbf {\Sigma } ,}⁠besides scaling the first⁠min{m,n}{\displaystyle \min\{m,n\}}⁠coordinates, also extends the vector with zeros, i.e. removes trailing coordinates, so as to turn⁠Rn{\displaystyle \mathbf {R} ^{n}}⁠into⁠Rm.{\displaystyle \mathbf {R} ^{m}.}⁠ As shown in the figure, thesingular valuescan be interpreted as the magnitude of the semiaxes of anellipsein 2D. This concept can be generalized to⁠n{\displaystyle n}⁠-dimensionalEuclidean space, with the singular values of any⁠n×n{\displaystyle n\times n}⁠square matrixbeing viewed as the magnitude of the semiaxis of an⁠n{\displaystyle n}⁠-dimensionalellipsoid. 
Similarly, the singular values of any⁠m×n{\displaystyle m\times n}⁠matrix can be viewed as the magnitude of the semiaxis of an⁠n{\displaystyle n}⁠-dimensionalellipsoidin⁠m{\displaystyle m}⁠-dimensional space, for example as an ellipse in a (tilted) 2D plane in a 3D space. Singular values encode magnitude of the semiaxis, while singular vectors encode direction. Seebelowfor further details. Since⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠are unitary, the columns of each of them form a set oforthonormal vectors, which can be regarded asbasis vectors. The matrix⁠M{\displaystyle \mathbf {M} }⁠maps the basis vector⁠Vi{\displaystyle \mathbf {V} _{i}}⁠to the stretched unit vector⁠σiUi.{\displaystyle \sigma _{i}\mathbf {U} _{i}.}⁠By the definition of a unitary matrix, the same is true for their conjugate transposes⁠U∗{\displaystyle \mathbf {U} ^{*}}⁠and⁠V,{\displaystyle \mathbf {V} ,}⁠except the geometric interpretation of the singular values as stretches is lost. In short, the columns of⁠U,{\displaystyle \mathbf {U} ,}⁠⁠U∗,{\displaystyle \mathbf {U} ^{*},}⁠⁠V,{\displaystyle \mathbf {V} ,}⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠areorthonormal bases. When⁠M{\displaystyle \mathbf {M} }⁠is apositive-semidefiniteHermitian matrix,⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠are both equal to the unitary matrix used to diagonalize⁠M.{\displaystyle \mathbf {M} .}⁠However, when⁠M{\displaystyle \mathbf {M} }⁠is not positive-semidefinite and Hermitian but stilldiagonalizable, itseigendecompositionand singular value decomposition are distinct. Because⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠are unitary, we know that the columns⁠U1,…,Um{\displaystyle \mathbf {U} _{1},\ldots ,\mathbf {U} _{m}}⁠of⁠U{\displaystyle \mathbf {U} }⁠yield anorthonormal basisof⁠Km{\displaystyle K^{m}}⁠and the columns⁠V1,…,Vn{\displaystyle \mathbf {V} _{1},\ldots ,\mathbf {V} _{n}}⁠of⁠V{\displaystyle \mathbf {V} }⁠yield an orthonormal basis of⁠Kn{\displaystyle K^{n}}⁠(with respect to the standardscalar productson these spaces). Thelinear transformation T:{Kn→Kmx↦Mx{\displaystyle T:\left\{{\begin{aligned}K^{n}&\to K^{m}\\x&\mapsto \mathbf {M} x\end{aligned}}\right.} has a particularly simple description with respect to these orthonormal bases: we have T(Vi)=σiUi,i=1,…,min(m,n),{\displaystyle T(\mathbf {V} _{i})=\sigma _{i}\mathbf {U} _{i},\qquad i=1,\ldots ,\min(m,n),} where⁠σi{\displaystyle \sigma _{i}}⁠is the⁠i{\displaystyle i}⁠-th diagonal entry of⁠Σ,{\displaystyle \mathbf {\Sigma } ,}⁠and⁠T(Vi)=0{\displaystyle T(\mathbf {V} _{i})=0}⁠for⁠i>min(m,n).{\displaystyle i>\min(m,n).}⁠ The geometric content of the SVD theorem can thus be summarized as follows: for every linear map⁠T:Kn→Km{\displaystyle T:K^{n}\to K^{m}}⁠one can find orthonormal bases of⁠Kn{\displaystyle K^{n}}⁠and⁠Km{\displaystyle K^{m}}⁠such that⁠T{\displaystyle T}⁠maps the⁠i{\displaystyle i}⁠-th basis vector of⁠Kn{\displaystyle K^{n}}⁠to a non-negative multiple of the⁠i{\displaystyle i}⁠-th basis vector of⁠Km,{\displaystyle K^{m},}⁠and sends the leftover basis vectors to zero. With respect to these bases, the map⁠T{\displaystyle T}⁠is therefore represented by a diagonal matrix with non-negative real diagonal entries. 
To get a more visual flavor of singular values and SVD factorization – at least when working on real vector spaces – consider the sphere⁠S{\displaystyle S}⁠of radius one in⁠Rn.{\displaystyle \mathbf {R} ^{n}.}⁠The linear map⁠T{\displaystyle T}⁠maps this sphere onto anellipsoidin⁠Rm.{\displaystyle \mathbf {R} ^{m}.}⁠Non-zero singular values are simply the lengths of thesemi-axesof this ellipsoid. Especially when⁠n=m,{\displaystyle n=m,}⁠and all the singular values are distinct and non-zero, the SVD of the linear map⁠T{\displaystyle T}⁠can be easily analyzed as a succession of three consecutive moves: consider the ellipsoid⁠T(S){\displaystyle T(S)}⁠and specifically its axes; then consider the directions in⁠Rn{\displaystyle \mathbf {R} ^{n}}⁠sent by⁠T{\displaystyle T}⁠onto these axes. These directions happen to be mutually orthogonal. Apply first an isometry⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠sending these directions to the coordinate axes of⁠Rn.{\displaystyle \mathbf {R} ^{n}.}⁠On a second move, apply anendomorphism⁠D{\displaystyle \mathbf {D} }⁠diagonalized along the coordinate axes and stretching or shrinking in each direction, using the semi-axes lengths of⁠T(S){\displaystyle T(S)}⁠as stretching coefficients. The composition⁠D∘V∗{\displaystyle \mathbf {D} \circ \mathbf {V} ^{*}}⁠then sends the unit-sphere onto an ellipsoid isometric to⁠T(S).{\displaystyle T(S).}⁠To define the third and last move, apply an isometry⁠U{\displaystyle \mathbf {U} }⁠to this ellipsoid to obtain⁠T(S).{\displaystyle T(S).}⁠As can be easily checked, the composition⁠U∘D∘V∗{\displaystyle \mathbf {U} \circ \mathbf {D} \circ \mathbf {V} ^{*}}⁠coincides with⁠T.{\displaystyle T.}⁠ Consider the⁠4×5{\displaystyle 4\times 5}⁠matrix M=[10002003000000002000]{\displaystyle \mathbf {M} ={\begin{bmatrix}1&0&0&0&2\\0&0&3&0&0\\0&0&0&0&0\\0&2&0&0&0\end{bmatrix}}} A singular value decomposition of this matrix is given by⁠UΣV∗{\displaystyle \mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}⁠ U=[0−100−1000000−100−10]Σ=[30000050000020000000]V∗=[00−100−0.2000−0.80−100000010−0.80000.2]{\displaystyle {\begin{aligned}\mathbf {U} &={\begin{bmatrix}\color {Green}0&\color {Blue}-1&\color {Cyan}0&\color {Emerald}0\\\color {Green}-1&\color {Blue}0&\color {Cyan}0&\color {Emerald}0\\\color {Green}0&\color {Blue}0&\color {Cyan}0&\color {Emerald}-1\\\color {Green}0&\color {Blue}0&\color {Cyan}-1&\color {Emerald}0\end{bmatrix}}\\[6pt]\mathbf {\Sigma } &={\begin{bmatrix}3&0&0&0&\color {Gray}{\mathit {0}}\\0&{\sqrt {5}}&0&0&\color {Gray}{\mathit {0}}\\0&0&2&0&\color {Gray}{\mathit {0}}\\0&0&0&\color {Red}\mathbf {0} &\color {Gray}{\mathit {0}}\end{bmatrix}}\\[6pt]\mathbf {V} ^{*}&={\begin{bmatrix}\color {Violet}0&\color {Violet}0&\color {Violet}-1&\color {Violet}0&\color {Violet}0\\\color {Plum}-{\sqrt {0.2}}&\color {Plum}0&\color {Plum}0&\color {Plum}0&\color {Plum}-{\sqrt {0.8}}\\\color {Magenta}0&\color {Magenta}-1&\color {Magenta}0&\color {Magenta}0&\color {Magenta}0\\\color {Orchid}0&\color {Orchid}0&\color {Orchid}0&\color {Orchid}1&\color {Orchid}0\\\color {Purple}-{\sqrt {0.8}}&\color {Purple}0&\color {Purple}0&\color {Purple}0&\color {Purple}{\sqrt {0.2}}\end{bmatrix}}\end{aligned}}} The scaling matrix⁠Σ{\displaystyle \mathbf {\Sigma } }⁠is zero outside of the diagonal (grey italics) and one diagonal element is zero (red bold, light blue bold in dark mode). 
Furthermore, because the matrices⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠areunitary, multiplying by their respective conjugate transposes yieldsidentity matrices, as shown below. In this case, because⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠are real valued, each is anorthogonal matrix. UU∗=[1000010000100001]=I4VV∗=[1000001000001000001000001]=I5{\displaystyle {\begin{aligned}\mathbf {U} \mathbf {U} ^{*}&={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}}=\mathbf {I} _{4}\\[6pt]\mathbf {V} \mathbf {V} ^{*}&={\begin{bmatrix}1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&1\end{bmatrix}}=\mathbf {I} _{5}\end{aligned}}} This particular singular value decomposition is not unique. For instance, we can keep⁠U{\displaystyle \mathbf {U} }⁠and⁠Σ{\displaystyle \mathbf {\Sigma } }⁠the same, but change the last two rows of⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠such that V∗=[00−100−0.2000−0.80−10000.4000.5−0.1−0.4000.50.1]{\displaystyle \mathbf {V} ^{*}={\begin{bmatrix}\color {Violet}0&\color {Violet}0&\color {Violet}-1&\color {Violet}0&\color {Violet}0\\\color {Plum}-{\sqrt {0.2}}&\color {Plum}0&\color {Plum}0&\color {Plum}0&\color {Plum}-{\sqrt {0.8}}\\\color {Magenta}0&\color {Magenta}-1&\color {Magenta}0&\color {Magenta}0&\color {Magenta}0\\\color {Orchid}{\sqrt {0.4}}&\color {Orchid}0&\color {Orchid}0&\color {Orchid}{\sqrt {0.5}}&\color {Orchid}-{\sqrt {0.1}}\\\color {Purple}-{\sqrt {0.4}}&\color {Purple}0&\color {Purple}0&\color {Purple}{\sqrt {0.5}}&\color {Purple}{\sqrt {0.1}}\end{bmatrix}}} and get an equally valid singular value decomposition. As the matrix⁠M{\displaystyle \mathbf {M} }⁠has rank 3, it has only 3 nonzero singular values. In taking the product⁠UΣV∗{\displaystyle \mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}⁠, the final column of⁠U{\displaystyle \mathbf {U} }⁠and the final two rows of⁠V∗{\displaystyle \mathbf {V^{*}} }⁠are multiplied by zero, so have no effect on the matrix product, and can be replaced by any unit vectors which are orthogonal to the first three and to each-other. 
Thecompact SVD,⁠M=UrΣrVr∗{\displaystyle \mathbf {M} =\mathbf {U} _{r}\mathbf {\Sigma } _{r}\mathbf {V} _{r}^{*}}⁠, eliminates these superfluous rows, columns, and singular values: Ur=[0−10−10000000−1]Σr=[300050002]Vr∗=[00−100−0.2000−0.80−1000]{\displaystyle {\begin{aligned}\mathbf {U} _{r}&={\begin{bmatrix}\color {Green}0&\color {Blue}-1&\color {Cyan}0\\\color {Green}-1&\color {Blue}0&\color {Cyan}0\\\color {Green}0&\color {Blue}0&\color {Cyan}0\\\color {Green}0&\color {Blue}0&\color {Cyan}-1\end{bmatrix}}\\[6pt]\mathbf {\Sigma } _{r}&={\begin{bmatrix}3&0&0\\0&{\sqrt {5}}&0\\0&0&2\end{bmatrix}}\\[6pt]\mathbf {V} _{r}^{*}&={\begin{bmatrix}\color {Violet}0&\color {Violet}0&\color {Violet}-1&\color {Violet}0&\color {Violet}0\\\color {Plum}-{\sqrt {0.2}}&\color {Plum}0&\color {Plum}0&\color {Plum}0&\color {Plum}-{\sqrt {0.8}}\\\color {Magenta}0&\color {Magenta}-1&\color {Magenta}0&\color {Magenta}0&\color {Magenta}0\end{bmatrix}}\end{aligned}}} A non-negative real number⁠σ{\displaystyle \sigma }⁠is asingular valuefor⁠M{\displaystyle \mathbf {M} }⁠if and only if there exist unit-length vectors⁠u{\displaystyle \mathbf {u} }⁠in⁠Km{\displaystyle K^{m}}⁠and⁠v{\displaystyle \mathbf {v} }⁠in⁠Kn{\displaystyle K^{n}}⁠such that Mv=σu,M∗u=σv.{\displaystyle {\begin{aligned}\mathbf {Mv} &=\sigma \mathbf {u} ,\\[3mu]\mathbf {M} ^{*}\mathbf {u} &=\sigma \mathbf {v} .\end{aligned}}} The vectors⁠u{\displaystyle \mathbf {u} }⁠and⁠v{\displaystyle \mathbf {v} }⁠are calledleft-singularandright-singular vectorsfor⁠σ,{\displaystyle \sigma ,}⁠respectively. In any singular value decomposition M=UΣV∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}} the diagonal entries of⁠Σ{\displaystyle \mathbf {\Sigma } }⁠are equal to the singular values of⁠M.{\displaystyle \mathbf {M} .}⁠The first⁠p=min(m,n){\displaystyle p=\min(m,n)}⁠columns of⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠are, respectively, left- and right-singular vectors for the corresponding singular values. Consequently, the above theorem implies that: A singular value for which we can find two left (or right) singular vectors that are linearly independent is calleddegenerate. If⁠u1{\displaystyle \mathbf {u} _{1}}⁠and⁠u2{\displaystyle \mathbf {u} _{2}}⁠are two left-singular vectors which both correspond to the singular value σ, then any normalized linear combination of the two vectors is also a left-singular vector corresponding to the singular value σ. The similar statement is true for right-singular vectors. The number of independent left and right-singular vectors coincides, and these singular vectors appear in the same columns of⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠corresponding to diagonal elements of⁠Σ{\displaystyle \mathbf {\Sigma } }⁠all with the same value⁠σ.{\displaystyle \sigma .}⁠ As an exception, the left and right-singular vectors of singular value 0 comprise all unit vectors in thecokernelandkernel, respectively, of⁠M,{\displaystyle \mathbf {M} ,}⁠which by therank–nullity theoremcannot be the same dimension if⁠m≠n.{\displaystyle m\neq n.}⁠Even if all singular values are nonzero, if⁠m>n{\displaystyle m>n}⁠then the cokernel is nontrivial, in which case⁠U{\displaystyle \mathbf {U} }⁠is padded with⁠m−n{\displaystyle m-n}⁠orthogonal vectors from the cokernel. Conversely, if⁠m<n,{\displaystyle m<n,}⁠then⁠V{\displaystyle \mathbf {V} }⁠is padded by⁠n−m{\displaystyle n-m}⁠orthogonal vectors from the kernel. 
However, if the singular value of⁠0{\displaystyle 0}⁠exists, the extra columns of⁠U{\displaystyle \mathbf {U} }⁠or⁠V{\displaystyle \mathbf {V} }⁠already appear as left or right-singular vectors. Non-degenerate singular values always have unique left- and right-singular vectors, up to multiplication by a unit-phase factor⁠eiφ{\displaystyle e^{i\varphi }}⁠(for the real case up to a sign). Consequently, if all singular values of a square matrix⁠M{\displaystyle \mathbf {M} }⁠are non-degenerate and non-zero, then its singular value decomposition is unique, up to multiplication of a column of⁠U{\displaystyle \mathbf {U} }⁠by a unit-phase factor and simultaneous multiplication of the corresponding column of⁠V{\displaystyle \mathbf {V} }⁠by the same unit-phase factor. In general, the SVD is unique up to arbitrary unitary transformations applied uniformly to the column vectors of both⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠spanning the subspaces of each singular value, and up to arbitrary unitary transformations on vectors of⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠spanning the kernel and cokernel, respectively, of⁠M.{\displaystyle \mathbf {M} .}⁠ The singular value decomposition is very general in the sense that it can be applied to any⁠m×n{\displaystyle m\times n}⁠matrix, whereaseigenvalue decompositioncan only be applied to squarediagonalizable matrices. Nevertheless, the two decompositions are related. If⁠M{\displaystyle \mathbf {M} }⁠has SVD⁠M=UΣV∗,{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*},}⁠the following two relations hold: M∗M=VΣ∗U∗UΣV∗=V(Σ∗Σ)V∗,MM∗=UΣV∗VΣ∗U∗=U(ΣΣ∗)U∗.{\displaystyle {\begin{aligned}\mathbf {M} ^{*}\mathbf {M} &=\mathbf {V} \mathbf {\Sigma } ^{*}\mathbf {U} ^{*}\,\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}=\mathbf {V} (\mathbf {\Sigma } ^{*}\mathbf {\Sigma } )\mathbf {V} ^{*},\\[3mu]\mathbf {M} \mathbf {M} ^{*}&=\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}\,\mathbf {V} \mathbf {\Sigma } ^{*}\mathbf {U} ^{*}=\mathbf {U} (\mathbf {\Sigma } \mathbf {\Sigma } ^{*})\mathbf {U} ^{*}.\end{aligned}}} The right-hand sides of these relations describe the eigenvalue decompositions of the left-hand sides. Consequently: In the special case of⁠M{\displaystyle \mathbf {M} }⁠being anormal matrix, and thus also square, thespectral theoremensures that it can beunitarilydiagonalizedusing a basis ofeigenvectors, and thus decomposed as⁠M=UDU∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {D} \mathbf {U} ^{*}}⁠for some unitary matrix⁠U{\displaystyle \mathbf {U} }⁠and diagonal matrix⁠D{\displaystyle \mathbf {D} }⁠with complex elements⁠σi{\displaystyle \sigma _{i}}⁠along the diagonal. When⁠M{\displaystyle \mathbf {M} }⁠ispositive semi-definite,⁠σi{\displaystyle \sigma _{i}}⁠will be non-negative real numbers so that the decomposition⁠M=UDU∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {D} \mathbf {U} ^{*}}⁠is also a singular value decomposition. 
Otherwise, it can be recast as an SVD by moving the phase⁠eiφ{\displaystyle e^{i\varphi }}⁠of each⁠σi{\displaystyle \sigma _{i}}⁠to either its corresponding⁠Vi{\displaystyle \mathbf {V} _{i}}⁠or⁠Ui.{\displaystyle \mathbf {U} _{i}.}⁠The natural connection of the SVD to non-normal matrices is through thepolar decompositiontheorem:⁠M=SR,{\displaystyle \mathbf {M} =\mathbf {S} \mathbf {R} ,}⁠where⁠S=UΣU∗{\displaystyle \mathbf {S} =\mathbf {U} \mathbf {\Sigma } \mathbf {U} ^{*}}⁠is positive semidefinite and normal, and⁠R=UV∗{\displaystyle \mathbf {R} =\mathbf {U} \mathbf {V} ^{*}}⁠is unitary. Thus, except for positive semi-definite matrices, the eigenvalue decomposition and SVD of⁠M,{\displaystyle \mathbf {M} ,}⁠while related, differ: the eigenvalue decomposition is⁠M=UDU−1,{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {D} \mathbf {U} ^{-1},}⁠where⁠U{\displaystyle \mathbf {U} }⁠is not necessarily unitary and⁠D{\displaystyle \mathbf {D} }⁠is not necessarily positive semi-definite, while the SVD is⁠M=UΣV∗,{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*},}⁠where⁠Σ{\displaystyle \mathbf {\Sigma } }⁠is diagonal and positive semi-definite, and⁠U{\displaystyle \mathbf {U} }⁠and⁠V{\displaystyle \mathbf {V} }⁠are unitary matrices that are not necessarily related except through the matrix⁠M.{\displaystyle \mathbf {M} .}⁠While onlynon-defectivesquare matrices have an eigenvalue decomposition, any⁠m×n{\displaystyle m\times n}⁠matrix has a SVD. The singular value decomposition can be used for computing thepseudoinverseof a matrix. The pseudoinverse of the matrix⁠M{\displaystyle \mathbf {M} }⁠with singular value decomposition⁠M=UΣV∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}⁠is M+=VΣ+U∗,{\displaystyle \mathbf {M} ^{+}=\mathbf {V} {\boldsymbol {\Sigma }}^{+}\mathbf {U} ^{\ast },} whereΣ+{\displaystyle {\boldsymbol {\Sigma }}^{+}}is the pseudoinverse ofΣ{\displaystyle {\boldsymbol {\Sigma }}}, which is formed by replacing every non-zero diagonal entry by itsreciprocaland transposing the resulting matrix. The pseudoinverse is one way to solvelinear least squaresproblems. A set ofhomogeneous linear equationscan be written as⁠Ax=0{\displaystyle \mathbf {A} \mathbf {x} =\mathbf {0} }⁠for a matrix⁠A{\displaystyle \mathbf {A} }⁠and vector⁠x.{\displaystyle \mathbf {x} .}⁠A typical situation is that⁠A{\displaystyle \mathbf {A} }⁠is known and a non-zero⁠x{\displaystyle \mathbf {x} }⁠is to be determined which satisfies the equation. Such an⁠x{\displaystyle \mathbf {x} }⁠belongs to⁠A{\displaystyle \mathbf {A} }⁠'snull spaceand is sometimes called a (right) null vector of⁠A.{\displaystyle \mathbf {A} .}⁠The vector⁠x{\displaystyle \mathbf {x} }⁠can be characterized as a right-singular vector corresponding to a singular value of⁠A{\displaystyle \mathbf {A} }⁠that is zero. This observation means that if⁠A{\displaystyle \mathbf {A} }⁠is asquare matrixand has no vanishing singular value, the equation has no non-zero⁠x{\displaystyle \mathbf {x} }⁠as a solution. It also means that if there are several vanishing singular values, any linear combination of the corresponding right-singular vectors is a valid solution. 
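As an illustration of the pseudoinverse construction and of the null-space observation above, the following sketch assumes NumPy; the rank-deficient example matrix and the cutoff tolerance are arbitrary choices made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # a rank-3, 5-by-4 matrix

U, s, Vh = np.linalg.svd(M)

# Pseudoinverse: invert the nonzero singular values and transpose the shape of Sigma
tol = max(M.shape) * np.finfo(float).eps * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)
Sigma_pinv = np.zeros((M.shape[1], M.shape[0]))
np.fill_diagonal(Sigma_pinv, s_inv)
M_pinv = Vh.T @ Sigma_pinv @ U.T

assert np.allclose(M_pinv, np.linalg.pinv(M))

# A right-singular vector whose singular value is (numerically) zero is a null vector of M
x = Vh[-1]                     # last row of V*: belongs to the smallest singular value, here ~0
assert np.allclose(M @ x, 0)
```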
Analogously to the definition of a (right) null vector, a non-zero⁠x{\displaystyle \mathbf {x} }⁠satisfying⁠x∗A=0{\displaystyle \mathbf {x} ^{*}\mathbf {A} =\mathbf {0} }⁠with⁠x∗{\displaystyle \mathbf {x} ^{*}}⁠denoting the conjugate transpose of⁠x,{\displaystyle \mathbf {x} ,}⁠is called a left null vector of⁠A.{\displaystyle \mathbf {A} .}⁠ Atotal least squaresproblem seeks the vector⁠x{\displaystyle \mathbf {x} }⁠that minimizes the2-normof a vector⁠Ax{\displaystyle \mathbf {A} \mathbf {x} }⁠under the constraint‖x‖=1.{\displaystyle \|\mathbf {x} \|=1.}The solution turns out to be the right-singular vector of⁠A{\displaystyle \mathbf {A} }⁠corresponding to the smallest singular value. Another application of the SVD is that it provides an explicit representation of therangeandnull spaceof a matrix⁠M.{\displaystyle \mathbf {M} .}⁠The right-singular vectors corresponding to vanishing singular values of⁠M{\displaystyle \mathbf {M} }⁠span the null space of⁠M{\displaystyle \mathbf {M} }⁠and the left-singular vectors corresponding to the non-zero singular values of⁠M{\displaystyle \mathbf {M} }⁠span the range of⁠M.{\displaystyle \mathbf {M} .}⁠For example, in the aboveexamplethe null space is spanned by the last row of⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠and the range is spanned by the first three columns of⁠U.{\displaystyle \mathbf {U} .}⁠ As a consequence, therankof⁠M{\displaystyle \mathbf {M} }⁠equals the number of non-zero singular values which is the same as the number of non-zero diagonal elements inΣ{\displaystyle \mathbf {\Sigma } }. In numerical linear algebra the singular values can be used to determine theeffective rankof a matrix, asrounding errormay lead to small but non-zero singular values in a rank deficient matrix. Singular values beyond a significant gap are assumed to be numerically equivalent to zero. Some practical applications need to solve the problem ofapproximatinga matrix⁠M{\displaystyle \mathbf {M} }⁠with another matrixM~{\displaystyle {\tilde {\mathbf {M} }}}, said to betruncated, which has a specific rank⁠r{\displaystyle r}⁠. In the case that the approximation is based on minimizing theFrobenius normof the difference between⁠M{\displaystyle \mathbf {M} }⁠and⁠M~{\displaystyle {\tilde {\mathbf {M} }}}⁠under the constraint thatrank⁡(M~)=r,{\displaystyle \operatorname {rank} {\bigl (}{\tilde {\mathbf {M} }}{\bigr )}=r,}it turns out that the solution is given by the SVD of⁠M,{\displaystyle \mathbf {M} ,}⁠namely M~=UΣ~V∗,{\displaystyle {\tilde {\mathbf {M} }}=\mathbf {U} {\tilde {\mathbf {\Sigma } }}\mathbf {V} ^{*},} whereΣ~{\displaystyle {\tilde {\mathbf {\Sigma } }}}is the same matrix asΣ{\displaystyle \mathbf {\Sigma } }except that it contains only the⁠r{\displaystyle r}⁠largest singular values (the other singular values are replaced by zero). This is known as theEckart–Young theorem, as it was proved by those two authors in 1936 (although it was later found to have been known to earlier authors; seeStewart 1993). One practical consequence of the low-rank approximation given by SVD is that a greyscale image represented as anm×n{\displaystyle m\times n}matrixA{\displaystyle A}, can be efficiently represented by keeping the firstk{\displaystyle k}singular values and corresponding vectors. The truncated decomposition Ak=UkΣkVkT{\displaystyle A_{k}=\mathbf {U} _{k}\mathbf {\Sigma } _{k}\mathbf {V} _{k}^{T}} gives an image which minimizes theFrobenius errorcompared to the original image. 
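The Eckart–Young property can be checked directly: truncating the SVD after r terms leaves a Frobenius error equal to the root-sum-square of the discarded singular values. A minimal sketch, assuming NumPy, with an illustrative helper named truncate:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((8, 6))
U, s, Vh = np.linalg.svd(M, full_matrices=False)

def truncate(r):
    """Best rank-r approximation of M in the Frobenius norm (Eckart-Young)."""
    return U[:, :r] @ np.diag(s[:r]) @ Vh[:r]

r = 2
M_r = truncate(r)
# The Frobenius error equals the square root of the sum of the discarded sigma_i^2
err = np.linalg.norm(M - M_r, 'fro')
assert np.isclose(err, np.sqrt(np.sum(s[r:] ** 2)))
```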
Thus, the task becomes finding a close approximationAk{\displaystyle A_{k}}that balances retaining perceptual fidelity with the number of vectors required to reconstruct the image. StoringAk{\displaystyle A_{k}}requires onlyk(n+m+1){\displaystyle k(n+m+1)}numbers compared tonm{\displaystyle nm}. This same idea extends to color images by applying this operation to each channel or stacking the channels into one matrix. Since the singular values of most natural images decay quickly, most of their variance is often captured by a smallk{\displaystyle k}. For a 1528 × 1225 greyscale image, we can achieve a relative error of.7%{\displaystyle .7\%}with as little ask=100{\displaystyle k=100}.[1]In practice, however, computing the SVD can be too computationally expensive and the resulting compression is typically less storage efficient than a specialized algorithm such asJPEG. The SVD can be thought of as decomposing a matrix into a weighted, ordered sum of separable matrices. By separable, we mean that a matrix⁠A{\displaystyle \mathbf {A} }⁠can be written as anouter productof two vectors⁠A=u⊗v,{\displaystyle \mathbf {A} =\mathbf {u} \otimes \mathbf {v} ,}⁠or, in coordinates,⁠Aij=uivj.{\displaystyle A_{ij}=u_{i}v_{j}.}⁠Specifically, the matrix⁠M{\displaystyle \mathbf {M} }⁠can be decomposed as, M=∑iAi=∑iσiUi⊗Vi.{\displaystyle \mathbf {M} =\sum _{i}\mathbf {A} _{i}=\sum _{i}\sigma _{i}\mathbf {U} _{i}\otimes \mathbf {V} _{i}.} Here⁠Ui{\displaystyle \mathbf {U} _{i}}⁠and⁠Vi{\displaystyle \mathbf {V} _{i}}⁠are the⁠i{\displaystyle i}⁠-th columns of the corresponding SVD matrices,⁠σi{\displaystyle \sigma _{i}}⁠are the ordered singular values, and each⁠Ai{\displaystyle \mathbf {A} _{i}}⁠is separable. The SVD can be used to find the decomposition of an image processing filter into separable horizontal and vertical filters. Note that the number of non-zero⁠σi{\displaystyle \sigma _{i}}⁠is exactly the rank of the matrix.[citation needed]Separable models often arise in biological systems, and the SVD factorization is useful to analyze such systems. For example, some visual area V1 simple cells' receptive fields can be well described[2]by aGabor filterin the space domain multiplied by a modulation function in the time domain. Thus, given a linear filter evaluated through, for example,reverse correlation, one can rearrange the two spatial dimensions into one dimension, thus yielding a two-dimensional filter (space, time) which can be decomposed through SVD. The first column of⁠U{\displaystyle \mathbf {U} }⁠in the SVD factorization is then a Gabor while the first column of⁠V{\displaystyle \mathbf {V} }⁠represents the time modulation (or vice versa). 
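The separable (rank-one) expansion and the storage count above can be sketched as follows, assuming NumPy; the array is a random stand-in for a small greyscale image, so its singular values decay far more slowly than those of a natural image and the reported error is correspondingly large.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((64, 48))                          # stand-in for a small greyscale image
U, s, Vh = np.linalg.svd(A, full_matrices=False)

# SVD as a weighted, ordered sum of separable (rank-one) outer products
A_full = sum(s[i] * np.outer(U[:, i], Vh[i]) for i in range(len(s)))
assert np.allclose(A_full, A)

# Keeping only the k leading terms gives the compressed approximation A_k
k = 10
A_k = sum(s[i] * np.outer(U[:, i], Vh[i]) for i in range(k))
m, n = A.shape
print("stored numbers:", k * (m + n + 1), "instead of", m * n)
print("relative Frobenius error:", np.linalg.norm(A - A_k) / np.linalg.norm(A))
```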
One may then define an index of separability α=σ12∑iσi2,{\displaystyle \alpha ={\frac {\sigma _{1}^{2}}{\sum _{i}\sigma _{i}^{2}}},} which is the fraction of the power in the matrix M which is accounted for by the first separable matrix in the decomposition.[3] It is possible to use the SVD of a square matrix⁠A{\displaystyle \mathbf {A} }⁠to determine theorthogonal matrix⁠O{\displaystyle \mathbf {O} }⁠closest to⁠A.{\displaystyle \mathbf {A} .}⁠The closeness of fit is measured by theFrobenius normof⁠O−A.{\displaystyle \mathbf {O} -\mathbf {A} .}⁠The solution is the product⁠UV∗.{\displaystyle \mathbf {U} \mathbf {V} ^{*}.}⁠[4]This intuitively makes sense because an orthogonal matrix would have the decomposition⁠UIV∗{\displaystyle \mathbf {U} \mathbf {I} \mathbf {V} ^{*}}⁠where⁠I{\displaystyle \mathbf {I} }⁠is the identity matrix, so that if⁠A=UΣV∗{\displaystyle \mathbf {A} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}⁠then the product⁠A=UV∗{\displaystyle \mathbf {A} =\mathbf {U} \mathbf {V} ^{*}}⁠amounts to replacing the singular values with ones. Equivalently, the solution is the unitary matrix⁠R=UV∗{\displaystyle \mathbf {R} =\mathbf {U} \mathbf {V} ^{*}}⁠of the Polar DecompositionM=RP=P′R{\displaystyle \mathbf {M} =\mathbf {R} \mathbf {P} =\mathbf {P} '\mathbf {R} }in either order of stretch and rotation, as described above. A similar problem, with interesting applications inshape analysis, is theorthogonal Procrustes problem, which consists of finding an orthogonal matrix⁠O{\displaystyle \mathbf {O} }⁠which most closely maps⁠A{\displaystyle \mathbf {A} }⁠to⁠B.{\displaystyle \mathbf {B} .}⁠Specifically, O=argminΩ‖AΩ−B‖Fsubject toΩTΩ=I,{\displaystyle \mathbf {O} ={\underset {\Omega }{\operatorname {argmin} }}\|\mathbf {A} {\boldsymbol {\Omega }}-\mathbf {B} \|_{F}\quad {\text{subject to}}\quad {\boldsymbol {\Omega }}^{\operatorname {T} }{\boldsymbol {\Omega }}=\mathbf {I} ,} where‖⋅‖F{\displaystyle \|\cdot \|_{F}}denotes the Frobenius norm. This problem is equivalent to finding the nearest orthogonal matrix to a given matrixM=ATB{\displaystyle \mathbf {M} =\mathbf {A} ^{\operatorname {T} }\mathbf {B} }. TheKabsch algorithm(calledWahba's problemin other fields) uses SVD to compute the optimal rotation (with respect to least-squares minimization) that will align a set of points with a corresponding set of points. It is used, among other applications, to compare the structures of molecules. The SVD can be used to construct the principal components[5]inprincipal component analysisas follows: LetX∈RN×p{\displaystyle \mathbf {X} \in \mathbb {R} ^{N\times p}}be a data matrix where each of theN{\displaystyle N}rows is a (feature-wise) mean-centered observation, each of dimensionp{\displaystyle p}. The SVD ofX{\displaystyle \mathbf {X} }is:X=VΣU∗{\displaystyle \mathbf {X} =\mathbf {V} {\boldsymbol {\Sigma }}\mathbf {U} ^{\ast }} From the same reference,[6]we see thatVΣ{\displaystyle \mathbf {V} {\boldsymbol {\Sigma }}}contains the scores of the rows ofX{\displaystyle \mathbf {X} }(i.e. each observation), andU{\displaystyle \mathbf {U} }is the matrix whose columns are principal component loading vectors. The SVD and pseudoinverse have been successfully applied tosignal processing,[7]image processing[8]andbig data(e.g., in genomic signal processing).[9][10][11][12] The SVD is also applied extensively to the study of linearinverse problemsand is useful in the analysis of regularization methods such as that ofTikhonov. 
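The nearest-orthogonal-matrix and orthogonal Procrustes constructions described above reduce to a few lines. A sketch assuming NumPy, with arbitrary random inputs:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Nearest orthogonal matrix to A in the Frobenius norm: replace the singular values by ones
U, s, Vh = np.linalg.svd(A)
O = U @ Vh
assert np.allclose(O @ O.T, np.eye(4))

# Orthogonal Procrustes: the minimizer of ||A Omega - B||_F over orthogonal Omega
# is the nearest orthogonal matrix to A^T B
Up, sp, Vhp = np.linalg.svd(A.T @ B)
Omega = Up @ Vhp
assert np.allclose(Omega @ Omega.T, np.eye(4))
```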
It is widely used in statistics, where it is related toprincipal component analysisand tocorrespondence analysis, and insignal processingandpattern recognition. It is also used in output-onlymodal analysis, where the non-scaledmode shapescan be determined from the singular vectors. Yet another usage islatent semantic indexingin natural-language text processing. In general numerical computation involving linear or linearized systems, there is a universal constant that characterizes the regularity or singularity of a problem, which is the system's "condition number"κ:=σmax/σmin{\displaystyle \kappa :=\sigma _{\text{max}}/\sigma _{\text{min}}}. It often controls the error rate or convergence rate of a given computational scheme on such systems.[13][14] The SVD also plays a crucial role in the field ofquantum information, in a form often referred to as theSchmidt decomposition. Through it, states of two quantum systems are naturally decomposed, providing a necessary and sufficient condition for them to beentangled: if the rank of theΣ{\displaystyle \mathbf {\Sigma } }matrix is larger than one. One application of SVD to rather large matrices is innumerical weather prediction, whereLanczos methodsare used to estimate the most linearly quickly growing few perturbations to the central numerical weather prediction over a given initial forward time period; i.e., the singular vectors corresponding to the largest singular values of the linearized propagator for the global weather over that time interval. The output singular vectors in this case are entire weather systems. These perturbations are then run through the full nonlinear model to generate anensemble forecast, giving a handle on some of the uncertainty that should be allowed for around the current central prediction. SVD has also been applied to reduced order modelling. The aim of reduced order modelling is to reduce the number of degrees of freedom in a complex system which is to be modeled. SVD was coupled withradial basis functionsto interpolate solutions to three-dimensional unsteady flow problems.[15] Interestingly, SVD has been used to improve gravitational waveform modeling by the ground-based gravitational-wave interferometer aLIGO.[16]SVD can help to increase the accuracy and speed of waveform generation to support gravitational-waves searches and update two different waveform models. Singular value decomposition is used inrecommender systemsto predict people's item ratings.[17]Distributed algorithms have been developed for the purpose of calculating the SVD on clusters of commodity machines.[18] Low-rank SVD has been applied for hotspot detection from spatiotemporal data with application to diseaseoutbreakdetection.[19]A combination of SVD andhigher-order SVDalso has been applied for real time event detection from complex data streams (multivariate data with space and time dimensions) indisease surveillance.[20] Inastrodynamics, the SVD and its variants are used as an option to determine suitable maneuver directions for transfer trajectory design[21]andorbital station-keeping.[22] The SVD can be used to measure the similarity between real-valued matrices.[23]By measuring the angles between the singular vectors, the inherent two-dimensional structure of matrices is accounted for. This method was shown to outperformcosine similarityandFrobenius normin most cases, including brain activity measurements fromneuroscienceexperiments. 
An eigenvalue⁠λ{\displaystyle \lambda }⁠of a matrix⁠M{\displaystyle \mathbf {M} }⁠is characterized by the algebraic relation⁠Mu=λu.{\displaystyle \mathbf {M} \mathbf {u} =\lambda \mathbf {u} .}⁠When⁠M{\displaystyle \mathbf {M} }⁠isHermitian, a variational characterization is also available. Let⁠M{\displaystyle \mathbf {M} }⁠be a real⁠n×n{\displaystyle n\times n}⁠symmetric matrix. Define f:{Rn→Rx↦xTMx{\displaystyle f:\left\{{\begin{aligned}\mathbb {R} ^{n}&\to \mathbb {R} \\\mathbf {x} &\mapsto \mathbf {x} ^{\operatorname {T} }\mathbf {M} \mathbf {x} \end{aligned}}\right.} By theextreme value theorem, this continuous function attains a maximum at some⁠u{\displaystyle \mathbf {u} }⁠when restricted to the unit sphere{‖x‖=1}.{\displaystyle \{\|\mathbf {x} \|=1\}.}By theLagrange multiplierstheorem,⁠u{\displaystyle \mathbf {u} }⁠necessarily satisfies ∇uTMu−λ⋅∇uTu=0{\displaystyle \nabla \mathbf {u} ^{\operatorname {T} }\mathbf {M} \mathbf {u} -\lambda \cdot \nabla \mathbf {u} ^{\operatorname {T} }\mathbf {u} =0} for some real number⁠λ.{\displaystyle \lambda .}⁠The nabla symbol,⁠∇{\displaystyle \nabla }⁠, is thedeloperator (differentiation with respect to⁠x{\displaystyle \mathbf {x} }⁠).Using the symmetry of⁠M{\displaystyle \mathbf {M} }⁠we obtain ∇xTMx−λ⋅∇xTx=2(M−λI)x.{\displaystyle \nabla \mathbf {x} ^{\operatorname {T} }\mathbf {M} \mathbf {x} -\lambda \cdot \nabla \mathbf {x} ^{\operatorname {T} }\mathbf {x} =2(\mathbf {M} -\lambda \mathbf {I} )\mathbf {x} .} Therefore⁠Mu=λu,{\displaystyle \mathbf {M} \mathbf {u} =\lambda \mathbf {u} ,}⁠so⁠u{\displaystyle \mathbf {u} }⁠is a unit length eigenvector of⁠M.{\displaystyle \mathbf {M} .}⁠For every unit length eigenvector⁠v{\displaystyle \mathbf {v} }⁠of⁠M{\displaystyle \mathbf {M} }⁠its eigenvalue is⁠f(v),{\displaystyle f(\mathbf {v} ),}⁠so⁠λ{\displaystyle \lambda }⁠is the largest eigenvalue of⁠M.{\displaystyle \mathbf {M} .}⁠The same calculation performed on the orthogonal complement of⁠u{\displaystyle \mathbf {u} }⁠gives the next largest eigenvalue and so on. The complex Hermitian case is similar; there⁠f(x)=x∗Mx{\displaystyle f(\mathbf {x} )=\mathbf {x} ^{*}\mathbf {M} \mathbf {x} }⁠is a real-valued function of⁠2n{\displaystyle 2n}⁠real variables. Singular values are similar in that they can be described algebraically or from variational principles. Although, unlike the eigenvalue case, Hermiticity, or symmetry, of⁠M{\displaystyle \mathbf {M} }⁠is no longer required. This section gives these two arguments for existence of singular value decomposition. LetM{\displaystyle \mathbf {M} }be an⁠m×n{\displaystyle m\times n}⁠complex matrix. SinceM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }is positive semi-definite and Hermitian, by thespectral theorem, there exists an⁠n×n{\displaystyle n\times n}⁠unitary matrixV{\displaystyle \mathbf {V} }such that V∗M∗MV=D¯=[D000],{\displaystyle \mathbf {V} ^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} ={\bar {\mathbf {D} }}={\begin{bmatrix}\mathbf {D} &0\\0&0\end{bmatrix}},} whereD{\displaystyle \mathbf {D} }is diagonal and positive definite, of dimensionℓ×ℓ{\displaystyle \ell \times \ell }, withℓ{\displaystyle \ell }the number of non-zero eigenvalues ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }(which can be shown to verifyℓ≤min(n,m){\displaystyle \ell \leq \min(n,m)}). 
Note thatV{\displaystyle \mathbf {V} }is here by definition a matrix whosei{\displaystyle i}-th column is thei{\displaystyle i}-th eigenvector ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }, corresponding to the eigenvalueD¯ii{\displaystyle {\bar {\mathbf {D} }}_{ii}}. Moreover, thej{\displaystyle j}-th column ofV{\displaystyle \mathbf {V} }, forj>ℓ{\displaystyle j>\ell }, is an eigenvector ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }with eigenvalueD¯jj=0{\displaystyle {\bar {\mathbf {D} }}_{jj}=0}. This can be expressed by writingV{\displaystyle \mathbf {V} }asV=[V1V2]{\displaystyle \mathbf {V} ={\begin{bmatrix}\mathbf {V} _{1}&\mathbf {V} _{2}\end{bmatrix}}}, where the columns ofV1{\displaystyle \mathbf {V} _{1}}andV2{\displaystyle \mathbf {V} _{2}}therefore contain the eigenvectors ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }corresponding to non-zero and zero eigenvalues, respectively. Using this rewriting ofV{\displaystyle \mathbf {V} }, the equation becomes: [V1∗V2∗]M∗M[V1V2]=[V1∗M∗MV1V1∗M∗MV2V2∗M∗MV1V2∗M∗MV2]=[D000].{\displaystyle {\begin{bmatrix}\mathbf {V} _{1}^{*}\\\mathbf {V} _{2}^{*}\end{bmatrix}}\mathbf {M} ^{*}\mathbf {M} \,{\begin{bmatrix}\mathbf {V} _{1}&\!\!\mathbf {V} _{2}\end{bmatrix}}={\begin{bmatrix}\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}&\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2}\\\mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}&\mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2}\end{bmatrix}}={\begin{bmatrix}\mathbf {D} &0\\0&0\end{bmatrix}}.} This implies that V1∗M∗MV1=D,V2∗M∗MV2=0.{\displaystyle \mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}=\mathbf {D} ,\quad \mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2}=\mathbf {0} .} Moreover, the second equation impliesMV2=0{\displaystyle \mathbf {M} \mathbf {V} _{2}=\mathbf {0} }.[24]Finally, the unitary-ness ofV{\displaystyle \mathbf {V} }translates, in terms ofV1{\displaystyle \mathbf {V} _{1}}andV2{\displaystyle \mathbf {V} _{2}}, into the following conditions: V1∗V1=I1,V2∗V2=I2,V1V1∗+V2V2∗=I12,{\displaystyle {\begin{aligned}\mathbf {V} _{1}^{*}\mathbf {V} _{1}&=\mathbf {I} _{1},\\\mathbf {V} _{2}^{*}\mathbf {V} _{2}&=\mathbf {I} _{2},\\\mathbf {V} _{1}\mathbf {V} _{1}^{*}+\mathbf {V} _{2}\mathbf {V} _{2}^{*}&=\mathbf {I} _{12},\end{aligned}}} where the subscripts on the identity matrices are used to remark that they are of different dimensions. Let us now define U1=MV1D−12.{\displaystyle \mathbf {U} _{1}=\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-{\frac {1}{2}}}.} Then, U1D12V1∗=MV1D−12D12V1∗=M(I−V2V2∗)=M−(MV2)V2∗=M,{\displaystyle \mathbf {U} _{1}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}=\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-{\frac {1}{2}}}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}=\mathbf {M} (\mathbf {I} -\mathbf {V} _{2}\mathbf {V} _{2}^{*})=\mathbf {M} -(\mathbf {M} \mathbf {V} _{2})\mathbf {V} _{2}^{*}=\mathbf {M} ,} sinceMV2=0.{\displaystyle \mathbf {M} \mathbf {V} _{2}=\mathbf {0} .}This can be also seen as immediate consequence of the fact thatMV1V1∗=M{\displaystyle \mathbf {M} \mathbf {V} _{1}\mathbf {V} _{1}^{*}=\mathbf {M} }. 
This is equivalent to the observation that if{vi}i=1ℓ{\displaystyle \{{\boldsymbol {v}}_{i}\}_{i=1}^{\ell }}is the set of eigenvectors ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }corresponding to non-vanishing eigenvalues{λi}i=1ℓ{\displaystyle \{\lambda _{i}\}_{i=1}^{\ell }}, then{Mvi}i=1ℓ{\displaystyle \{\mathbf {M} {\boldsymbol {v}}_{i}\}_{i=1}^{\ell }}is a set of orthogonal vectors, and{λi−1/2Mvi}|i=1ℓ{\displaystyle {\bigl \{}\lambda _{i}^{-1/2}\mathbf {M} {\boldsymbol {v}}_{i}{\bigr \}}{\vphantom {|}}_{i=1}^{\ell }}is a (generally not complete) set oforthonormalvectors. This matches with the matrix formalism used above denoting withV1{\displaystyle \mathbf {V} _{1}}the matrix whose columns are{vi}i=1ℓ{\displaystyle \{{\boldsymbol {v}}_{i}\}_{i=1}^{\ell }}, withV2{\displaystyle \mathbf {V} _{2}}the matrix whose columns are the eigenvectors ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }with vanishing eigenvalue, andU1{\displaystyle \mathbf {U} _{1}}the matrix whose columns are the vectors{λi−1/2Mvi}|i=1ℓ{\displaystyle {\bigl \{}\lambda _{i}^{-1/2}\mathbf {M} {\boldsymbol {v}}_{i}{\bigr \}}{\vphantom {|}}_{i=1}^{\ell }}. We see that this is almost the desired result, except thatU1{\displaystyle \mathbf {U} _{1}}andV1{\displaystyle \mathbf {V} _{1}}are in general not unitary, since they might not be square. However, we do know that the number of rows ofU1{\displaystyle \mathbf {U} _{1}}is no smaller than the number of columns, since the dimensions ofD{\displaystyle \mathbf {D} }is no greater thanm{\displaystyle m}andn{\displaystyle n}. Also, since U1∗U1=D−12V1∗M∗MV1D−12=D−12DD−12=I1,{\displaystyle \mathbf {U} _{1}^{*}\mathbf {U} _{1}=\mathbf {D} ^{-{\frac {1}{2}}}\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-{\frac {1}{2}}}=\mathbf {D} ^{-{\frac {1}{2}}}\mathbf {D} \mathbf {D} ^{-{\frac {1}{2}}}=\mathbf {I_{1}} ,} the columns inU1{\displaystyle \mathbf {U} _{1}}are orthonormal and can be extended to an orthonormal basis. This means that we can chooseU2{\displaystyle \mathbf {U} _{2}}such thatU=[U1U2]{\displaystyle \mathbf {U} ={\begin{bmatrix}\mathbf {U} _{1}&\mathbf {U} _{2}\end{bmatrix}}}is unitary. For⁠V1{\displaystyle \mathbf {V} _{1}}⁠we already have⁠V2{\displaystyle \mathbf {V} _{2}}⁠to make it unitary. Now, define Σ=[[D12000]0],{\displaystyle \mathbf {\Sigma } ={\begin{bmatrix}{\begin{bmatrix}\mathbf {D} ^{\frac {1}{2}}&0\\0&0\end{bmatrix}}\\0\end{bmatrix}},} where extra zero rows are addedor removedto make the number of zero rows equal the number of columns of⁠U2,{\displaystyle \mathbf {U} _{2},}⁠and hence the overall dimensions ofΣ{\displaystyle \mathbf {\Sigma } }equal tom×n{\displaystyle m\times n}. 
Then [U1U2][[D12000]0][V1V2]∗=[U1U2][D12V1∗0]=U1D12V1∗=M,{\displaystyle {\begin{bmatrix}\mathbf {U} _{1}&\mathbf {U} _{2}\end{bmatrix}}{\begin{bmatrix}{\begin{bmatrix}\mathbf {} D^{\frac {1}{2}}&0\\0&0\end{bmatrix}}\\0\end{bmatrix}}{\begin{bmatrix}\mathbf {V} _{1}&\mathbf {V} _{2}\end{bmatrix}}^{*}={\begin{bmatrix}\mathbf {U} _{1}&\mathbf {U} _{2}\end{bmatrix}}{\begin{bmatrix}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}\\0\end{bmatrix}}=\mathbf {U} _{1}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}=\mathbf {M} ,} which is the desired result: M=UΣV∗.{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}.} Notice the argument could begin with diagonalizing⁠MM∗{\displaystyle \mathbf {M} \mathbf {M} ^{*}}⁠rather than⁠M∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }⁠(This shows directly that⁠MM∗{\displaystyle \mathbf {M} \mathbf {M} ^{*}}⁠and⁠M∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }⁠have the same non-zero eigenvalues). The singular values can also be characterized as the maxima of⁠uTMv,{\displaystyle \mathbf {u} ^{\mathrm {T} }\mathbf {M} \mathbf {v} ,}⁠considered as a function of⁠u{\displaystyle \mathbf {u} }⁠and⁠v,{\displaystyle \mathbf {v} ,}⁠over particular subspaces. The singular vectors are the values of⁠u{\displaystyle \mathbf {u} }⁠and⁠v{\displaystyle \mathbf {v} }⁠where these maxima are attained. Let⁠M{\displaystyle \mathbf {M} }⁠denote an⁠m×n{\displaystyle m\times n}⁠matrix with real entries. Let⁠Sk−1{\displaystyle S^{k-1}}⁠be the unit(k−1){\displaystyle (k-1)}-sphere inRk{\displaystyle \mathbb {R} ^{k}}, and defineσ(u,v)=uTMv,{\displaystyle \sigma (\mathbf {u} ,\mathbf {v} )=\mathbf {u} ^{\operatorname {T} }\mathbf {M} \mathbf {v} ,}u∈Sm−1,{\displaystyle \mathbf {u} \in S^{m-1},}v∈Sn−1.{\displaystyle \mathbf {v} \in S^{n-1}.} Consider the function⁠σ{\displaystyle \sigma }⁠restricted to⁠Sm−1×Sn−1.{\displaystyle S^{m-1}\times S^{n-1}.}⁠Since both⁠Sm−1{\displaystyle S^{m-1}}⁠and⁠Sn−1{\displaystyle S^{n-1}}⁠arecompactsets, theirproductis also compact. Furthermore, since⁠σ{\displaystyle \sigma }⁠is continuous, it attains a largest value for at least one pair of vectors⁠u{\displaystyle \mathbf {u} }⁠in⁠Sm−1{\displaystyle S^{m-1}}⁠and⁠v{\displaystyle \mathbf {v} }⁠in⁠Sn−1.{\displaystyle S^{n-1}.}⁠This largest value is denoted⁠σ1{\displaystyle \sigma _{1}}⁠and the corresponding vectors are denoted⁠u1{\displaystyle \mathbf {u} _{1}}⁠and⁠v1.{\displaystyle \mathbf {v} _{1}.}⁠Since⁠σ1{\displaystyle \sigma _{1}}⁠is the largest value of⁠σ(u,v){\displaystyle \sigma (\mathbf {u} ,\mathbf {v} )}⁠it must be non-negative. If it were negative, changing the sign of either⁠u1{\displaystyle \mathbf {u} _{1}}⁠or⁠v1{\displaystyle \mathbf {v} _{1}}⁠would make it positive and therefore larger. 
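Both the construction just completed and the variational description that follows can be mirrored numerically. A sketch assuming NumPy, using an arbitrary 5 × 3 real matrix so that the compact form needs no padding of U:

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((5, 3))

# Diagonalize M*M (here M^T M), as in the spectral-theorem argument above
eigvals, V = np.linalg.eigh(M.T @ M)                  # ascending eigenvalues
order = np.argsort(eigvals)[::-1]
eigvals, V = eigvals[order], V[:, order]

s = np.sqrt(np.clip(eigvals, 0.0, None))              # singular values
r = int(np.sum(s > 1e-12 * s[0]))                     # number of nonzero singular values

U1 = (M @ V[:, :r]) / s[:r]                           # U_1 = M V_1 D^(-1/2)
assert np.allclose(U1.T @ U1, np.eye(r))              # its columns come out orthonormal
assert np.allclose(U1 @ np.diag(s[:r]) @ V[:, :r].T, M)   # M = U_1 D^(1/2) V_1^*
assert np.allclose(s, np.linalg.svd(M, compute_uv=False))

# Variational flavor: u^T M v over unit vectors never exceeds the largest singular value
u = rng.standard_normal((2000, 5)); u /= np.linalg.norm(u, axis=1, keepdims=True)
v = rng.standard_normal((2000, 3)); v /= np.linalg.norm(v, axis=1, keepdims=True)
assert np.all(np.einsum('ij,jk,ik->i', u, M, v) <= s[0] + 1e-12)
```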
Statement.⁠u1{\displaystyle \mathbf {u} _{1}}⁠and⁠v1{\displaystyle \mathbf {v} _{1}}⁠are left and right-singular vectors of⁠M{\displaystyle \mathbf {M} }⁠with corresponding singular value⁠σ1.{\displaystyle \sigma _{1}.}⁠ Proof.Similar to the eigenvalues case, by assumption the two vectors satisfy the Lagrange multiplier equation: ∇σ=∇uTMv−λ1⋅∇uTu−λ2⋅∇vTv{\displaystyle \nabla \sigma =\nabla \mathbf {u} ^{\operatorname {T} }\mathbf {M} \mathbf {v} -\lambda _{1}\cdot \nabla \mathbf {u} ^{\operatorname {T} }\mathbf {u} -\lambda _{2}\cdot \nabla \mathbf {v} ^{\operatorname {T} }\mathbf {v} } After some algebra, this becomes Mv1=2λ1u1+0,MTu1=0+2λ2v1.{\displaystyle {\begin{aligned}\mathbf {M} \mathbf {v} _{1}&=2\lambda _{1}\mathbf {u} _{1}+0,\\\mathbf {M} ^{\operatorname {T} }\mathbf {u} _{1}&=0+2\lambda _{2}\mathbf {v} _{1}.\end{aligned}}} Multiplying the first equation from left by⁠u1T{\displaystyle \mathbf {u} _{1}^{\textrm {T}}}⁠and the second equation from left by⁠v1T{\displaystyle \mathbf {v} _{1}^{\textrm {T}}}⁠and taking‖u‖=‖v‖=1{\displaystyle \|\mathbf {u} \|=\|\mathbf {v} \|=1}into account gives σ1=2λ1=2λ2.{\displaystyle \sigma _{1}=2\lambda _{1}=2\lambda _{2}.} Plugging this into the pair of equations above, we have Mv1=σ1u1,MTu1=σ1v1.{\displaystyle {\begin{aligned}\mathbf {M} \mathbf {v} _{1}&=\sigma _{1}\mathbf {u} _{1},\\\mathbf {M} ^{\operatorname {T} }\mathbf {u} _{1}&=\sigma _{1}\mathbf {v} _{1}.\end{aligned}}} This proves the statement. More singular vectors and singular values can be found by maximizing⁠σ(u,v){\displaystyle \sigma (\mathbf {u} ,\mathbf {v} )}⁠over normalized⁠u{\displaystyle \mathbf {u} }⁠and⁠v{\displaystyle \mathbf {v} }⁠which are orthogonal to⁠u1{\displaystyle \mathbf {u} _{1}}⁠and⁠v1,{\displaystyle \mathbf {v} _{1},}⁠respectively. The passage from real to complex is similar to the eigenvalue case. One-sided Jacobi algorithm is an iterative algorithm,[25]where a matrix is iteratively transformed into a matrix with orthogonal columns. The elementary iteration is given as aJacobi rotation, M←MJ(p,q,θ),{\displaystyle M\leftarrow MJ(p,q,\theta ),} where the angleθ{\displaystyle \theta }of the Jacobi rotation matrixJ(p,q,θ){\displaystyle J(p,q,\theta )}is chosen such that after the rotation the columns with numbersp{\displaystyle p}andq{\displaystyle q}become orthogonal. The indices(p,q){\displaystyle (p,q)}are swept cyclically,(p=1…m,q=p+1…m){\displaystyle (p=1\dots m,q=p+1\dots m)}, wherem{\displaystyle m}is the number of columns. After the algorithm has converged, the singular value decompositionM=USVT{\displaystyle M=USV^{T}}is recovered as follows: the matrixV{\displaystyle V}is the accumulation of Jacobi rotation matrices, the matrixU{\displaystyle U}is given bynormalisingthe columns of the transformed matrixM{\displaystyle M}, and the singular values are given as the norms of the columns of the transformed matrixM{\displaystyle M}. Two-sided Jacobi SVD algorithm—a generalization of theJacobi eigenvalue algorithm—is an iterative algorithm where a square matrix is iteratively transformed into a diagonal matrix. If the matrix is not square theQR decompositionis performed first and then the algorithm is applied to theR{\displaystyle R}matrix. 
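Before the two-sided variant is described further, the one-sided iteration above can be sketched in a few dozen lines. This is a simplified illustration assuming NumPy; the function name, sweep order and stopping rule are ad hoc choices rather than a production implementation.

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """Illustrative one-sided Jacobi SVD: rotate column pairs until all columns are orthogonal."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = A[:, p] @ A[:, p]
                beta = A[:, q] @ A[:, q]
                gamma = A[:, p] @ A[:, q]
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue
                converged = False
                zeta = (beta - alpha) / (2.0 * gamma)
                t = 1.0 if zeta == 0.0 else np.sign(zeta) / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                J = np.array([[c, s], [-s, c]])    # rotation making columns p and q orthogonal
                A[:, [p, q]] = A[:, [p, q]] @ J    # M <- M J(p, q, theta)
                V[:, [p, q]] = V[:, [p, q]] @ J    # accumulate the rotations into V
        if converged:
            break
    sigma = np.linalg.norm(A, axis=0)              # singular values = column norms
    U = A / np.where(sigma > 0, sigma, 1.0)        # normalize the columns to get U
    return U, sigma, V

A = np.random.default_rng(8).standard_normal((6, 4))
U, sigma, V = one_sided_jacobi_svd(A)
assert np.allclose((U * sigma) @ V.T, A)
assert np.allclose(np.sort(sigma), np.sort(np.linalg.svd(A, compute_uv=False)))
```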
The elementary iteration zeroes a pair of off-diagonal elements by first applying aGivens rotationto symmetrize the pair of elements and then applying aJacobi transformationto zero them, M←JTGMJ{\displaystyle M\leftarrow J^{T}GMJ} whereG{\displaystyle G}is the Givens rotation matrix with the angle chosen such that the given pair of off-diagonal elements become equal after the rotation, and whereJ{\displaystyle J}is the Jacobi transformation matrix that zeroes these off-diagonal elements. The iterations proceeds exactly as in the Jacobi eigenvalue algorithm: by cyclic sweeps over all off-diagonal elements. After the algorithm has converged the resulting diagonal matrix contains the singular values. The matricesU{\displaystyle U}andV{\displaystyle V}are accumulated as follows:U←UGTJ{\displaystyle U\leftarrow UG^{T}J},V←VJ{\displaystyle V\leftarrow VJ}. The singular value decomposition can be computed using the following observations: The SVD of a matrix⁠M{\displaystyle \mathbf {M} }⁠is typically computed by a two-step procedure. In the first step, the matrix is reduced to abidiagonal matrix. This takesorder⁠O(mn2){\displaystyle O(mn^{2})}⁠floating-point operations (flop), assuming that⁠m≥n.{\displaystyle m\geq n.}⁠The second step is to compute the SVD of the bidiagonal matrix. This step can only be done with aniterative method(as witheigenvalue algorithms). However, in practice it suffices to compute the SVD up to a certain precision, like themachine epsilon. If this precision is considered constant, then the second step takes⁠O(n){\displaystyle O(n)}⁠iterations, each costing⁠O(n){\displaystyle O(n)}⁠flops. Thus, the first step is more expensive, and the overall cost is⁠O(mn2){\displaystyle O(mn^{2})}⁠flops (Trefethen & Bau III 1997, Lecture 31). The first step can be done usingHouseholder reflectionsfor a cost of⁠4mn2−4n3/3{\displaystyle 4mn^{2}-4n^{3}/3}⁠flops, assuming that only the singular values are needed and not the singular vectors. If⁠m{\displaystyle m}⁠is much larger than⁠n{\displaystyle n}⁠then it is advantageous to first reduce the matrix⁠M{\displaystyle \mathbf {M} }⁠to a triangular matrix with theQR decompositionand then use Householder reflections to further reduce the matrix to bidiagonal form; the combined cost is⁠2mn2+2n3{\displaystyle 2mn^{2}+2n^{3}}⁠flops (Trefethen & Bau III 1997, Lecture 31). The second step can be done by a variant of theQR algorithmfor the computation of eigenvalues, which was first described byGolub & Kahan (1965). TheLAPACKsubroutine DBDSQR[26]implements this iterative method, with some modifications to cover the case where the singular values are very small (Demmel & Kahan 1990). Together with a first step using Householder reflections and, if appropriate, QR decomposition, this forms the DGESVD[27]routine for the computation of the singular value decomposition. The same algorithm is implemented in theGNU Scientific Library(GSL). The GSL also offers an alternative method that uses a one-sidedJacobi orthogonalizationin step 2 (GSL Team 2007). This method computes the SVD of the bidiagonal matrix by solving a sequence of⁠2×2{\displaystyle 2\times 2}⁠SVD problems, similar to how theJacobi eigenvalue algorithmsolves a sequence of⁠2×2{\displaystyle 2\times 2}⁠eigenvalue methods (Golub & Van Loan 1996, §8.6.3). Yet another method for step 2 uses the idea ofdivide-and-conquer eigenvalue algorithms(Trefethen & Bau III 1997, Lecture 31). 
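Assuming SciPy is available, the two LAPACK paths mentioned above can be selected explicitly through the lapack_driver argument of scipy.linalg.svd: 'gesvd' for the bidiagonalization-plus-QR-iteration routine and 'gesdd' for the divide-and-conquer variant. A small sketch:

```python
import numpy as np
from scipy import linalg

M = np.random.default_rng(9).standard_normal((300, 200))

s_qr = linalg.svd(M, compute_uv=False, lapack_driver='gesvd')   # DGESVD: QR-iteration based
s_dc = linalg.svd(M, compute_uv=False, lapack_driver='gesdd')   # DGESDD: divide and conquer (default)
assert np.allclose(s_qr, s_dc)
```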
There is an alternative way that does not explicitly use the eigenvalue decomposition.[28]Usually the singular value problem of a matrix⁠M{\displaystyle \mathbf {M} }⁠is converted into an equivalent symmetric eigenvalue problem such as⁠MM∗,{\displaystyle \mathbf {M} \mathbf {M} ^{*},}⁠⁠M∗M,{\displaystyle \mathbf {M} ^{*}\mathbf {M} ,}⁠or [0MM∗0].{\displaystyle {\begin{bmatrix}\mathbf {0} &\mathbf {M} \\\mathbf {M} ^{*}&\mathbf {0} \end{bmatrix}}.} The approaches that use eigenvalue decompositions are based on theQR algorithm, which is well-developed to be stable and fast. Note that the singular values are real and right- and left- singular vectors are not required to form similarity transformations. One can iteratively alternate between theQR decompositionand theLQ decompositionto find the real diagonalHermitian matrices. TheQR decompositiongives⁠M⇒QR{\displaystyle \mathbf {M} \Rightarrow \mathbf {Q} \mathbf {R} }⁠and theLQ decompositionof⁠R{\displaystyle \mathbf {R} }⁠gives⁠R⇒LP∗.{\displaystyle \mathbf {R} \Rightarrow \mathbf {L} \mathbf {P} ^{*}.}⁠Thus, at every iteration, we have⁠M⇒QLP∗,{\displaystyle \mathbf {M} \Rightarrow \mathbf {Q} \mathbf {L} \mathbf {P} ^{*},}⁠update⁠M⇐L{\displaystyle \mathbf {M} \Leftarrow \mathbf {L} }⁠and repeat the orthogonalizations. Eventually,[clarification needed]this iteration betweenQR decompositionandLQ decompositionproduces left- and right- unitary singular matrices. This approach cannot readily be accelerated, as the QR algorithm can with spectral shifts or deflation. This is because the shift method is not easily defined without using similarity transformations. However, this iterative approach is very simple to implement, so is a good choice when speed does not matter. This method also provides insight into how purely orthogonal/unitary transformations can obtain the SVD. The singular values of a⁠2×2{\displaystyle 2\times 2}⁠matrix can be found analytically. Let the matrix beM=z0I+z1σ1+z2σ2+z3σ3{\displaystyle \mathbf {M} =z_{0}\mathbf {I} +z_{1}\sigma _{1}+z_{2}\sigma _{2}+z_{3}\sigma _{3}} wherezi∈C{\displaystyle z_{i}\in \mathbb {C} }are complex numbers that parameterize the matrix,⁠I{\displaystyle \mathbf {I} }⁠is the identity matrix, andσi{\displaystyle \sigma _{i}}denote thePauli matrices. Then its two singular values are given by σ±=|z0|2+|z1|2+|z2|2+|z3|2±(|z0|2+|z1|2+|z2|2+|z3|2)2−|z02−z12−z22−z32|2=|z0|2+|z1|2+|z2|2+|z3|2±2(Re⁡z0z1∗)2+(Re⁡z0z2∗)2+(Re⁡z0z3∗)2+(Im⁡z1z2∗)2+(Im⁡z2z3∗)2+(Im⁡z3z1∗)2{\displaystyle {\begin{aligned}\sigma _{\pm }&={\sqrt {|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}\pm {\sqrt {{\bigl (}|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}{\bigr )}^{2}-|z_{0}^{2}-z_{1}^{2}-z_{2}^{2}-z_{3}^{2}|^{2}}}}}\\&={\sqrt {|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}\pm 2{\sqrt {(\operatorname {Re} z_{0}z_{1}^{*})^{2}+(\operatorname {Re} z_{0}z_{2}^{*})^{2}+(\operatorname {Re} z_{0}z_{3}^{*})^{2}+(\operatorname {Im} z_{1}z_{2}^{*})^{2}+(\operatorname {Im} z_{2}z_{3}^{*})^{2}+(\operatorname {Im} z_{3}z_{1}^{*})^{2}}}}}\end{aligned}}} In applications it is quite unusual for the full SVD, including a full unitary decomposition of the null-space of the matrix, to be required. Instead, it is often sufficient (as well as faster, and more economical for storage) to compute a reduced version of the SVD. 
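Before turning to the reduced forms of the decomposition, the alternating QR/LQ iteration described above can be sketched as follows, assuming NumPy; the LQ step is obtained from a QR factorization of the transpose, the function name is illustrative, and, since convergence of the off-diagonal entries can be slow, the diagonal is only printed for comparison rather than asserted.

```python
import numpy as np

def svd_by_qr_lq(M, sweeps=200):
    """Illustrative alternating QR/LQ iteration for a real m x n matrix with m >= n."""
    U_acc, V_acc, A = np.eye(M.shape[0]), np.eye(M.shape[1]), M.copy()
    for _ in range(sweeps):
        Q, R = np.linalg.qr(A)            # A = Q R
        P, Lt = np.linalg.qr(R.T)         # R^T = P L^T, i.e. R = L P^T (an LQ factorization of R)
        U_acc, V_acc, A = U_acc @ Q, V_acc @ P, Lt.T
    return U_acc, A, V_acc                # M = U_acc @ A @ V_acc^T, with A approaching diagonal form

M = np.random.default_rng(10).standard_normal((6, 4))
U, A, V = svd_by_qr_lq(M)
assert np.allclose(U @ A @ V.T, M)                 # the accumulated factors always reproduce M
print(np.sort(np.abs(np.diag(A)))[::-1])           # the diagonal of A approaches ...
print(np.linalg.svd(M, compute_uv=False))          # ... the singular values of M
```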
The following can be distinguished for an⁠m×n{\displaystyle m\times n}⁠matrix⁠M{\displaystyle \mathbf {M} }⁠of rank⁠r{\displaystyle r}⁠: The thin, or economy-sized, SVD of a matrix⁠M{\displaystyle \mathbf {M} }⁠is given by[29] M=UkΣkVk∗,{\displaystyle \mathbf {M} =\mathbf {U} _{k}\mathbf {\Sigma } _{k}\mathbf {V} _{k}^{*},} wherek=min(m,n),{\displaystyle k=\min(m,n),}the matrices⁠Uk{\displaystyle \mathbf {U} _{k}}⁠and⁠Vk{\displaystyle \mathbf {V} _{k}}⁠contain only the first⁠k{\displaystyle k}⁠columns of⁠U{\displaystyle \mathbf {U} }⁠and⁠V,{\displaystyle \mathbf {V} ,}⁠and⁠Σk{\displaystyle \mathbf {\Sigma } _{k}}⁠contains only the first⁠k{\displaystyle k}⁠singular values from⁠Σ.{\displaystyle \mathbf {\Sigma } .}⁠The matrix⁠Uk{\displaystyle \mathbf {U} _{k}}⁠is thus⁠m×k,{\displaystyle m\times k,}⁠⁠Σk{\displaystyle \mathbf {\Sigma } _{k}}⁠is⁠k×k{\displaystyle k\times k}⁠diagonal, and⁠Vk∗{\displaystyle \mathbf {V} _{k}^{*}}⁠is⁠k×n.{\displaystyle k\times n.}⁠ The thin SVD uses significantly less space and computation time if⁠k≪max(m,n).{\displaystyle k\ll \max(m,n).}⁠The first stage in its calculation will usually be aQR decompositionof⁠M,{\displaystyle \mathbf {M} ,}⁠which can make for a significantly quicker calculation in this case. The compact SVD of a matrix⁠M{\displaystyle \mathbf {M} }⁠is given by M=UrΣrVr∗.{\displaystyle \mathbf {M} =\mathbf {U} _{r}\mathbf {\Sigma } _{r}\mathbf {V} _{r}^{*}.} Only the⁠r{\displaystyle r}⁠column vectors of⁠U{\displaystyle \mathbf {U} }⁠and⁠r{\displaystyle r}⁠row vectors of⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠corresponding to the non-zero singular values⁠Σr{\displaystyle \mathbf {\Sigma } _{r}}⁠are calculated. The remaining vectors of⁠U{\displaystyle \mathbf {U} }⁠and⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠are not calculated. This is quicker and more economical than the thin SVD if⁠r≪min(m,n).{\displaystyle r\ll \min(m,n).}⁠The matrix⁠Ur{\displaystyle \mathbf {U} _{r}}⁠is thus⁠m×r,{\displaystyle m\times r,}⁠⁠Σr{\displaystyle \mathbf {\Sigma } _{r}}⁠is⁠r×r{\displaystyle r\times r}⁠diagonal, and⁠Vr∗{\displaystyle \mathbf {V} _{r}^{*}}⁠is⁠r×n.{\displaystyle r\times n.}⁠ In many applications the number⁠r{\displaystyle r}⁠of the non-zero singular values is large making even the Compact SVD impractical to compute. In such cases, the smallest singular values may need to be truncated to compute only⁠t≪r{\displaystyle t\ll r}⁠non-zero singular values. The truncated SVD is no longer an exact decomposition of the original matrix⁠M,{\displaystyle \mathbf {M} ,}⁠but rather provides the optimallow-rank matrix approximation⁠M~{\displaystyle {\tilde {\mathbf {M} }}}⁠by any matrix of a fixed rank⁠t{\displaystyle t}⁠ M~=UtΣtVt∗,{\displaystyle {\tilde {\mathbf {M} }}=\mathbf {U} _{t}\mathbf {\Sigma } _{t}\mathbf {V} _{t}^{*},} where matrix⁠Ut{\displaystyle \mathbf {U} _{t}}⁠is⁠m×t,{\displaystyle m\times t,}⁠⁠Σt{\displaystyle \mathbf {\Sigma } _{t}}⁠is⁠t×t{\displaystyle t\times t}⁠diagonal, and⁠Vt∗{\displaystyle \mathbf {V} _{t}^{*}}⁠is⁠t×n.{\displaystyle t\times n.}⁠Only the⁠t{\displaystyle t}⁠column vectors of⁠U{\displaystyle \mathbf {U} }⁠and⁠t{\displaystyle t}⁠row vectors of⁠V∗{\displaystyle \mathbf {V} ^{*}}⁠corresponding to the⁠t{\displaystyle t}⁠largest singular values⁠Σt{\displaystyle \mathbf {\Sigma } _{t}}⁠are calculated. This can be much quicker and more economical than the compact SVD if⁠t≪r,{\displaystyle t\ll r,}⁠but requires a completely different toolset of numerical solvers. 
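When only the t largest singular triplets are needed, iterative solvers are used instead of a full decomposition. A sketch assuming SciPy, whose scipy.sparse.linalg.svds also accepts dense arrays:

```python
import numpy as np
from scipy.sparse.linalg import svds

M = np.random.default_rng(11).standard_normal((1000, 300))

t = 5
u, s, vh = svds(M, k=t)                   # only the t largest singular values/vectors
order = np.argsort(s)[::-1]               # svds does not guarantee a descending order
u, s, vh = u[:, order], s[order], vh[order]

M_t = u @ np.diag(s) @ vh                 # rank-t truncated SVD of M
assert np.allclose(s, np.linalg.svd(M, compute_uv=False)[:t])
```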
In applications that require an approximation to theMoore–Penrose inverseof the matrix⁠M,{\displaystyle \mathbf {M} ,}⁠the smallest singular values of⁠M{\displaystyle \mathbf {M} }⁠are of interest, which are more challenging to compute compared to the largest ones. Truncated SVD is employed inlatent semantic indexing.[30] The sum of the⁠k{\displaystyle k}⁠largest singular values of⁠M{\displaystyle \mathbf {M} }⁠is amatrix norm, theKy Fan⁠k{\displaystyle k}⁠-norm of⁠M.{\displaystyle \mathbf {M} .}⁠[31] The first of the Ky Fan norms, the Ky Fan 1-norm, is the same as theoperator normof⁠M{\displaystyle \mathbf {M} }⁠as a linear operator with respect to the Euclidean norms of⁠Km{\displaystyle K^{m}}⁠and⁠Kn.{\displaystyle K^{n}.}⁠In other words, the Ky Fan 1-norm is the operator norm induced by the standardℓ2{\displaystyle \ell ^{2}}Euclidean inner product. For this reason, it is also called the operator 2-norm. One can easily verify the relationship between the Ky Fan 1-norm and singular values. It is true in general, for a bounded operator⁠M{\displaystyle \mathbf {M} }⁠on (possibly infinite-dimensional) Hilbert spaces ‖M‖=‖M∗M‖12{\displaystyle \|\mathbf {M} \|=\|\mathbf {M} ^{*}\mathbf {M} \|^{\frac {1}{2}}} But, in the matrix case,⁠(M∗M)1/2{\displaystyle (\mathbf {M} ^{*}\mathbf {M} )^{1/2}}⁠is anormal matrix, so‖M∗M‖1/2{\displaystyle \|\mathbf {M} ^{*}\mathbf {M} \|^{1/2}}is the largest eigenvalue of⁠(M∗M)1/2,{\displaystyle (\mathbf {M} ^{*}\mathbf {M} )^{1/2},}⁠i.e. the largest singular value of⁠M.{\displaystyle \mathbf {M} .}⁠ The last of the Ky Fan norms, the sum of all singular values, is thetrace norm(also known as the 'nuclear norm'), defined by‖M‖=Tr⁡(M∗M)1/2{\displaystyle \|\mathbf {M} \|=\operatorname {Tr} (\mathbf {M} ^{*}\mathbf {M} )^{1/2}}(the eigenvalues of⁠M∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }⁠are the squares of the singular values). The singular values are related to another norm on the space of operators. Consider theHilbert–Schmidtinner product on the⁠n×n{\displaystyle n\times n}⁠matrices, defined by ⟨M,N⟩=tr⁡(N∗M).{\displaystyle \langle \mathbf {M} ,\mathbf {N} \rangle =\operatorname {tr} \left(\mathbf {N} ^{*}\mathbf {M} \right).} So the induced norm is ‖M‖=⟨M,M⟩=tr⁡(M∗M).{\displaystyle \|\mathbf {M} \|={\sqrt {\langle \mathbf {M} ,\mathbf {M} \rangle }}={\sqrt {\operatorname {tr} \left(\mathbf {M} ^{*}\mathbf {M} \right)}}.} Since the trace is invariant under unitary equivalence, this shows ‖M‖=|∑iσi2{\displaystyle \|\mathbf {M} \|={\sqrt {{\vphantom {\bigg |}}\sum _{i}\sigma _{i}^{2}}}} where⁠σi{\displaystyle \sigma _{i}}⁠are the singular values of⁠M.{\displaystyle \mathbf {M} .}⁠This is called theFrobenius norm,Schatten 2-norm, orHilbert–Schmidt normof⁠M.{\displaystyle \mathbf {M} .}⁠Direct calculation shows that the Frobenius norm of⁠M=(mij){\displaystyle \mathbf {M} =(m_{ij})}⁠coincides with: |∑ij|mij|2.{\displaystyle {\sqrt {{\vphantom {\bigg |}}\sum _{ij}|m_{ij}|^{2}}}.} In addition, the Frobenius norm and the trace norm (the nuclear norm) are special cases of theSchatten norm. 
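These identities are straightforward to confirm numerically; a minimal sketch assuming NumPy, whose matrix-norm routine exposes the operator 2-norm, the nuclear (trace) norm and the Frobenius norm directly:

```python
import numpy as np

rng = np.random.default_rng(12)
M = rng.standard_normal((5, 7))
s = np.linalg.svd(M, compute_uv=False)

assert np.isclose(np.linalg.norm(M, 2), s[0])                        # operator 2-norm = largest sigma
assert np.isclose(np.linalg.norm(M, 'nuc'), np.sum(s))               # trace / nuclear norm = sum of sigmas
assert np.isclose(np.linalg.norm(M, 'fro'), np.sqrt(np.sum(s**2)))   # Frobenius norm
```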
The singular values of a matrix⁠A{\displaystyle \mathbf {A} }⁠are uniquely defined and are invariant with respect to left and/or right unitary transformations of⁠A.{\displaystyle \mathbf {A} .}⁠In other words, the singular values of⁠UAV,{\displaystyle \mathbf {U} \mathbf {A} \mathbf {V} ,}⁠for unitary matrices⁠U{\displaystyle \mathbf {U} }⁠and⁠V,{\displaystyle \mathbf {V} ,}⁠are equal to the singular values of⁠A.{\displaystyle \mathbf {A} .}⁠This is an important property for applications in which it is necessary to preserve Euclidean distances and invariance with respect to rotations. The Scale-Invariant SVD, or SI-SVD,[32]is analogous to the conventional SVD except that its uniquely-determined singular values are invariant with respect to diagonal transformations of⁠A.{\displaystyle \mathbf {A} .}⁠In other words, the singular values of⁠DAE,{\displaystyle \mathbf {D} \mathbf {A} \mathbf {E} ,}⁠for invertible diagonal matrices⁠D{\displaystyle \mathbf {D} }⁠and⁠E,{\displaystyle \mathbf {E} ,}⁠are equal to the singular values of⁠A.{\displaystyle \mathbf {A} .}⁠This is an important property for applications for which invariance to the choice of units on variables (e.g., metric versus imperial units) is needed. The factorization⁠M=UΣV∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}⁠can be extended to abounded operator⁠M{\displaystyle \mathbf {M} }⁠on a separable Hilbert space⁠H.{\displaystyle H.}⁠Namely, for any bounded operator⁠M,{\displaystyle \mathbf {M} ,}⁠there exist apartial isometry⁠U,{\displaystyle \mathbf {U} ,}⁠a unitary⁠V,{\displaystyle \mathbf {V} ,}⁠a measure space⁠(X,μ),{\displaystyle (X,\mu ),}⁠and a non-negative measurable⁠f{\displaystyle f}⁠such that M=UTfV∗{\displaystyle \mathbf {M} =\mathbf {U} T_{f}\mathbf {V} ^{*}} where⁠Tf{\displaystyle T_{f}}⁠is themultiplication by⁠f{\displaystyle f}⁠on⁠L2(X,μ).{\displaystyle L^{2}(X,\mu ).}⁠ This can be shown by mimicking the linear algebraic argument for the matrix case above.⁠VTfV∗{\displaystyle \mathbf {V} T_{f}\mathbf {V} ^{*}}⁠is the unique positive square root of⁠M∗M,{\displaystyle \mathbf {M} ^{*}\mathbf {M} ,}⁠as given by theBorel functional calculusforself-adjoint operators. The reason why⁠U{\displaystyle \mathbf {U} }⁠need not be unitary is that, unlike the finite-dimensional case, given an isometry⁠U1{\displaystyle U_{1}}⁠with nontrivial kernel, a suitable⁠U2{\displaystyle U_{2}}⁠may not be found such that [U1U2]{\displaystyle {\begin{bmatrix}U_{1}\\U_{2}\end{bmatrix}}} is a unitary operator. As for matrices, the singular value factorization is equivalent to thepolar decompositionfor operators: we can simply write M=UV∗⋅VTfV∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {V} ^{*}\cdot \mathbf {V} T_{f}\mathbf {V} ^{*}} and notice that⁠UV∗{\displaystyle \mathbf {U} \mathbf {V} ^{*}}⁠is still a partial isometry while⁠VTfV∗{\displaystyle \mathbf {V} T_{f}\mathbf {V} ^{*}}⁠is positive. The notion of singular values and left/right-singular vectors can be extended tocompact operator on Hilbert spaceas they have a discrete spectrum. If⁠T{\displaystyle T}⁠is compact, every non-zero⁠λ{\displaystyle \lambda }⁠in its spectrum is an eigenvalue. Furthermore, a compact self-adjoint operator can be diagonalized by its eigenvectors. If⁠M{\displaystyle \mathbf {M} }⁠is compact, so is⁠M∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }⁠. 
Applying the diagonalization result, the unitary image of its positive square root⁠Tf{\displaystyle T_{f}}⁠has a set of orthonormal eigenvectors⁠{ei}{\displaystyle \{e_{i}\}}⁠corresponding to strictly positive eigenvalues⁠{σi}{\displaystyle \{\sigma _{i}\}}⁠. For any⁠ψ{\displaystyle \psi }⁠in⁠H,{\displaystyle H,}⁠ Mψ=UTfV∗ψ=∑i⟨UTfV∗ψ,Uei⟩Uei=∑iσi⟨ψ,Vei⟩Uei,{\displaystyle \mathbf {M} \psi =\mathbf {U} T_{f}\mathbf {V} ^{*}\psi =\sum _{i}\left\langle \mathbf {U} T_{f}\mathbf {V} ^{*}\psi ,\mathbf {U} e_{i}\right\rangle \mathbf {U} e_{i}=\sum _{i}\sigma _{i}\left\langle \psi ,\mathbf {V} e_{i}\right\rangle \mathbf {U} e_{i},} where the series converges in the norm topology on⁠H.{\displaystyle H.}⁠Notice how this resembles the expression from the finite-dimensional case.⁠σi{\displaystyle \sigma _{i}}⁠are called the singular values of⁠M.{\displaystyle \mathbf {M} .}⁠⁠{Uei}{\displaystyle \{\mathbf {U} e_{i}\}}⁠(resp.⁠{Uei}{\displaystyle \{\mathbf {U} e_{i}\}}⁠) can be considered the left-singular (resp. right-singular) vectors of⁠M.{\displaystyle \mathbf {M} .}⁠ Compact operators on a Hilbert space are the closure offinite-rank operatorsin the uniform operator topology. The above series expression gives an explicit such representation. An immediate consequence of this is: The singular value decomposition was originally developed bydifferential geometers, who wished to determine whether a realbilinear formcould be made equal to another by independent orthogonal transformations of the two spaces it acts on.Eugenio BeltramiandCamille Jordandiscovered independently, in 1873 and 1874 respectively, that the singular values of the bilinear forms, represented as a matrix, form acomplete setofinvariantsfor bilinear forms under orthogonal substitutions.James Joseph Sylvesteralso arrived at the singular value decomposition for real square matrices in 1889, apparently independently of both Beltrami and Jordan. Sylvester called the singular values thecanonical multipliersof the matrix⁠A.{\displaystyle \mathbf {A} .}⁠The fourth mathematician to discover the singular value decomposition independently isAutonnein 1915, who arrived at it via thepolar decomposition. The first proof of the singular value decomposition for rectangular and complex matrices seems to be byCarl EckartandGale J. Youngin 1936;[33]they saw it as a generalization of theprincipal axistransformation forHermitian matrices. In 1907,Erhard Schmidtdefined an analog of singular values forintegral operators(which are compact, under some weak technical assumptions); it seems he was unaware of the parallel work on singular values of finite matrices. This theory was further developed byÉmile Picardin 1910, who is the first to call the numbersσk{\displaystyle \sigma _{k}}singular values(or in French,valeurs singulières). Practical methods for computing the SVD date back toKogbetliantzin 1954–1955 andHestenesin 1958,[34]resembling closely theJacobi eigenvalue algorithm, which uses plane rotations orGivens rotations. However, these were replaced by the method ofGene GolubandWilliam Kahanpublished in 1965,[35]which usesHouseholder transformationsor reflections. In 1970, Golub andChristian Reinsch[36]published a variant of the Golub/Kahan algorithm that is still the one most-used today.
https://en.wikipedia.org/wiki/Singular-value_decomposition
Thehistory ofanthropometryincludes its use as an early tool ofanthropology, use for identification, use for the purposes of understanding human physical variation inpaleoanthropologyand in various attempts to correlate physical with racial and psychological traits. At various points in history, certain anthropometrics have been cited by advocates ofdiscriminationandeugenicsoften as a part of somesocial movementor throughpseudoscientific claims. In 1716Louis-Jean-Marie Daubenton, who wrote many essays oncomparative anatomyfor theAcadémie française, published hisMemoir on the Different Positions of theOccipital Foramenin Man and Animals(Mémoire sur les différences de la situation du grand trou occipital dans l'homme et dans les animaux). Six years laterPieter Camper(1722–1789), distinguished both as an artist and as an anatomist, published some lectures that laid the foundation of much work. Camper invented the "facial angle," a measure meant to determineintelligenceamong various species. According to this technique, a "facial angle" was formed by drawing two lines: one horizontally from thenostrilto theear; and the other perpendicularly from the advancing part of the upperjawboneto the most prominent part of theforehead. Camper's measurements of facial angle were first made to compare the skulls of men with those of other animals. Camper claimed that antique statues presented an angle of 90°, Europeans of 80°, Central Africans of 70° and the orangutan of 58°. Swedish professor of anatomyAnders Retzius(1796–1860) first used thecephalic indexinphysical anthropologyto classify ancient human remains found in Europe. He classed skulls in three main categories; "dolichocephalic" (from theAncient Greekkephalê"head", anddolikhos"long and thin"), "brachycephalic" (short and broad) and "mesocephalic" (intermediate length and width). Scientific research was continued byÉtienne Geoffroy Saint-Hilaire(1772–1844) andPaul Broca(1824–1880), founder of the Anthropological Society in France in 1859. Paleoanthropologists still rely upon craniofacial anthropometry to identify species in the study of fossilized hominid bones. Specimens ofHomo erectusand athletic specimens ofHomo sapiens, for example, are virtually identical from the neck down but their skulls can easily be told apart. Samuel George Morton(1799–1851), whose two major monographs were theCrania Americana(1839),An Inquiry into the Distinctive Characteristics of the Aboriginal Race of AmericaandCrania Aegyptiaca(1844) concluded that theancient Egyptianswere not Negroid but Caucasoid and that Caucasians and Negroes were already distinct three thousand years ago. SinceThe Bibleindicated thatNoah's Arkhad washed up onMount Araratonly a thousand years before this Noah's sons could not account for every race on earth. According to Morton's theory ofpolygenismthe races had been separate from the start.[1]Josiah C. NottandGeorge Gliddoncarried Morton's ideas further.[2]Charles Darwin, who thought thesingle-origin hypothesisessential toevolutionary theory, opposed Nott and Gliddon in his 1871The Descent of Man, arguing formonogenism. In 1856, workers found in a limestone quarry the skull of aNeanderthalhominid male, thinking it to be the remains of a bear. They gave the material to amateur naturalistJohann Karl Fuhlrottwho turned the fossils over to anatomistHermann Schaaffhausen. The discovery was jointly announced in 1857, giving rise to the discipline ofpaleoanthropology. By comparing skeletons of apes to man,T. H. 
Huxley(1825–1895) backed upCharles Darwin'stheory of evolution, first expressed inOn the Origin of Species(1859). He also developed the "Pithecometra principle," which stated that man and ape were descended from a common ancestor. Eugène Dubois' (1858–1940) discovery in 1891 in Indonesia of the "Java Man", the first specimen ofHomo erectusto be discovered, demonstrated mankind's deep ancestry outside Europe.Ernst Haeckel(1834–1919) became famous for his "recapitulation theory", according to which each individual mirrors the evolution of the whole species during his life. Intelligence testingwas compared with anthropometrics.Samuel George Morton(1799–1851) collected hundreds of human skulls from all over the world and started trying to find a way to classify them according to some logical criterion. Morton claimed that he could judge intellectual capacity bycranial capacity. A large skull meant a largebrainand high intellectual capacity; a small skull indicated a small brain and decreased intellectual capacity. Modern science has since confirmed that there is a correlation between cranium size (measured in various ways) and intelligence as measured by IQ tests, although it is a weak correlation at about 0.2. Today, brain volume as measured with MRI scanners also find a correlation between brain size and intelligence at about 0.4.[4] Craniometrywas also used inphrenology, which purported to determine character, personality traits, and criminality on the basis of the shape of the head. At the turn of the 19th century,Franz Joseph Gall(1758–1822) developed "cranioscopy" (Ancient Greekkranion"skull",scopos"vision"), a method to determine the personality and development of mental and moral faculties on the basis of the external shape of the skull. Cranioscopy was later renamed phrenology (phrenos: mind,logos: study) by his studentJohann Spurzheim(1776–1832), who wrote extensively on "Drs. Gall and Spurzheim'sphysiognomicalSystem." These all claimed the ability to predict traits or intelligence and were intensively practised in the 19th and the first part of the 20th century. During the 1940s anthropometry was used byWilliam Sheldonwhen evaluating hissomatotypes, according to which characteristics of the body can be translated into characteristics of the mind. Inspired byCesare Lombroso'scriminal anthropology, he also believed thatcriminalitycould be predicted according to the body type. A basically anthropometric division ofbody typesinto the categoriesendomorphic,ectomorphicandmesomorphicderived from Sheldon'ssomatotypetheories is today popular among people doingweight training.[citation needed] In 1883, FrenchmanAlphonse Bertillonintroduced a system ofidentificationthat was named after him. The "Bertillonage" system was based on the finding that several measures of physical features, such as the dimensions of bony structures in the body, remain fairly constant throughout adult life. Bertillon concluded that when these measurements were made and recorded systematically, every individual would be distinguishable.[5]Bertillon's goal was a way of identifyingrecidivists("repeat offenders"). Previously police could only record general descriptions. Photography of criminals had become commonplace, but there was no easy way to sort the many thousands of photographs except by name. Bertillon's hope was that, through the use of measurements, a set of identifying numbers could be entered into a filing system installed in a single cabinet. 
The system involved 10 measurements;height,stretch(distance from leftshouldertomiddle fingerof raised right arm),bust(torsofrom head to seat when seated),head length(crown to forehead) andhead widthtemple to temple)widthofcheeks, and "lengths" of therightear, theleftfoot,middle finger, andcubit(elbow to tip of middle finger). It was possible, by exhaustion, to sort the cards on which these details were recorded (together with a photograph) until a small number produced the measurements of the individual sought, independently of name. The system was soon adapted to police methods: it prevented impersonation and could demonstrate wrongdoing.[6] Bertillonage was before long represented in Paris by a collection of some 100,000 cards and became popular in several other countries' justice systems. England followed suit when in 1894, a committee sent to Paris to investigate the methods and its results reported favorably on the use of measurements for primary classification and recommended also the partial adoption of the system offinger printssuggested byFrancis Galton, then in use inBengal, where measurements were abandoned in 1897 after the fingerprint system was adopted throughout British India. Three years later England followed suit, and, as the result of a fresh inquiry ordered by the Home Office, relied upon fingerprints alone.[5] Bertillonage exhibited certain defects and was gradually supplanted by the system offingerprintsand, latterly,genetics. Bertillon originally measured variables he thought were independent – such as forearm length and leg length – but Galton had realized that both were the result of a single causal variable (in this case, stature) and developed the statistical concept ofcorrelation. Other complications were: it was difficult to tell whether or not individuals arrested were first-time offenders; instruments employed were costly and liable to break down; skilled measurers were needed; errors were frequent and all but irremediable; and it was necessary to repeat measurements three times to arrive at a mean result.[5] Physiognomy claimed a correlation between physical features (especially facial features) and character traits. It was made famous byCesare Lombroso(1835–1909), the founder ofanthropological criminology, who claimed to be able to scientifically identify links between the nature of a crime and the personality or physical appearance of the offender. The originator of the concept of a "born criminal" and arguing in favor ofbiological determinism, Lombroso tried to recognize criminals by measurements of their bodies. He concluded that skull and facial features were clues to genetic criminality and that these features could be measured with craniometers and calipers with the results developed into quantitative research. A few of the 14 identified traits of a criminal included largejaws, forward projection of jaw, low sloping forehead; highcheekbones, flattened or upturned nose; handle-shapedears; hawk-likenosesor fleshylips; hard shifty eyes; scanty beard or baldness; insensitivity to pain; long arms, and so on. Phylogeography is the science of identifying and tracking majorhuman migrations, especially in prehistoric times. Linguistics can follow the movement of languages and archaeology can follow the movement of artefact styles but neither can tell whether a culture's spread was due to a source population's physically migrating or to a destination population's simply copying the technology and learning the language. 
Anthropometry was used extensively by anthropologists studying human and racial origins: some attempted racial differentiation andclassification, often seeking ways in which certain races were inferior to others.[7][8]Nott translatedArthur de Gobineau'sAn Essay on the Inequality of the Human Races(1853–1855), a founding work of racial segregationism that made three main divisions between races, based not on colour but on climatic conditions and geographic location, and privileged the "Aryan" race. Science has tested many theories aligning race and personality, which have been current sinceBoulainvilliers(1658–1722) contrasted theFrançais(French people), alleged descendants of the NordicFranks, and members of thearistocracy, to theThird Estate, considered to be indigenousGallo-Romanpeople subordinated byright of conquest. François Bernier,Carl Linnaeusand Blumenbach had examined multiple observable human characteristics in search of a typology. Bernier based his racial classification on physical type which included hair shape, nose shape and skin color. Linnaeus based a similar racial classification scheme. As anthropologists gained access to methods of skull measure they developed racial classification based on skull shape. Theories ofscientific racismbecame popular, one prominent figure beingGeorges Vacher de Lapouge(1854–1936), who inL'Aryen et son rôle social("TheAryanand his social role", 1899) dividedhumanityinto various, hierarchized, different "races", spanning from the "Aryanwhite race, dolichocephalic" to the "brachycephalic" (short and broad-headed) race. Between these Vacher de Lapouge identified the "Homo europaeus(Teutonic, Protestant, etc.), the "Homo alpinus" (Auvergnat,Turkish, etc.) and the "Homo mediterraneus" (Napolitano,Andalus, etc.). "Homo africanus" (Congo, Florida) was excluded from discussion. His racial classification ("Teutonic", "Alpine" and "Mediterranean") was also used byWilliam Z. Ripley(1867–1941) who, inThe Races of Europe(1899), made a map ofEuropeaccording to the cephalic index of its inhabitants. Vacher de Lapouge became one of the leading inspirations ofNaziantisemitismandNazi ideology.[9]Nazi Germanyrelied on anthropometric measurements to distinguishAryansfromJewsand many forms of anthropometry were used for the advocacy ofeugenics. During the 1920s and 1930s, though, members of the school ofcultural anthropologyofFranz Boasbegan to use anthropometric approaches to discredit the concept of fixed biological race. Boas used the cephalic index to show the influence of environmental factors. Researches on skulls and skeletons eventually helped liberate 19th century European science from itsethnocentricbias.[10]This school of physical anthropology generally went into decline during the 1940s. Several studies have demonstrated correlations between race and brain size, with varying results. In some studies, Caucasians were reported to have larger brains than other racial groups, whereas in recent studies and reanalysis of previous studies, East Asians were reported as having larger brains and skulls. More common among the studies was the report that Africans had smaller skulls than either Caucasians or East Asians. Criticisms have been raised against a number of these studies regarding questionable methods. 
InCrania AmericanaMorton claimed that Caucasians had the biggest brains, averaging 87 cubic inches, Indians were in the middle with an average of 82 cubic inches and Negroes had the smallest brains with an average of 78 cubic inches.[1]In 1873Paul Broca(1824–1880) found the same pattern described by Samuel Morton'sCrania Americanaby weighing brains atautopsy. Other historical studies alleging a Black–White difference in brain size include Bean (1906), Mall, (1909), Pearl, (1934) and Vint (1934). But in GermanyRudolf Virchow's study led him to denounce "Nordic mysticism" in the 1885 Anthropology Congress inKarlsruhe.Josef Kollmann, a collaborator of Virchow, stated in the same congress that the people of Europe, be them German, Italian, English or French, belonged to a "mixture of various races," furthermore declaring that the "results of craniology" led to "struggle against any theory concerning the superiority of this or that European race".[11]Virchow later rejected measure of skulls as legitimate means oftaxonomy.Paul Kretschmerquoted an 1892 discussion with him concerning these criticisms, also citingAurel von Törok's 1895 work, who basically proclaimed the failure of craniometry.[11] Stephen Jay Gould (1941–2002) claimed Samuel Morton had fudged data and "overpacked" the skulls.[12]A subsequent study by John Michael concluded that "[c]ontrary to Gould's interpretation... Morton's research was conducted with integrity."[13]In 2011 physical anthropologists at the university of, which owns Morton's collection, published a study that concluded that "Morton did not manipulate his data to support his preconceptions, contra Gould." They identified and remeasured half of the skulls used in Morton's reports, finding that in only 2% of cases did Morton's measurements differ significantly from their own and that these errors either were random or gave a larger than accurate volume to African skulls, the reverse of the bias that Dr. Gould imputed to Morton.[14]Difference in brain size, however, does not necessarily imply differences in intelligence: women tend to have smaller brains than men yet have more neural complexity and loading in certain areas of the brain.[15][16]This claim has been criticized by, among others, John S. Michael, who reported in 1988 that Morton's analysis was "conducted with integrity" while Gould's criticism was "mistaken".[17] Similar claims were previously made by Ho et al. (1980), who measured 1,261 brains at autopsy, and Beals et al. (1984), who measured approximately 20,000 skulls, finding the sameEast Asian→European→Africanpattern but warning against using the findings as indicative of racial traits, "If one merely lists such means by geographical region or race, causes of similarity by genogroup and ecotype are hopelessly confounded".[18][19]Rushton's findings have been criticized for confusing African-Americans with equatorial Africans, who generally have smaller craniums as people from hot climates often have slightly smaller crania.[20]He also compared equatorial Africans from the poorest and least educated areas of Africa with Asians from the wealthiest, most educated areas and colder climates.[20]According to Z. Z. Cernovsky Rushton's own study[21]shows that the average cranial capacity of North American blacks is similar to that of Caucasians from comparable climatic zones,[20]though a previous work by Rushton showed appreciable differences in cranial capacity between North Americans of different race.[22]This is consistent with the findings of Z. Z. 
Cernovsky that people from different climates tend to have minor differences in brain size. Observable craniofacial differences included: head shape (mesocephalic, brachycephalic, dolichocephalic) breadth of nasal aperture, nasal root height, sagittal crest appearance, jaw thickness, brow ridge size and forehead slope. Using this skull-based categorization, German philosopher Christoph Meiners in his The Outline of History of Mankind (1785) identified three racial groups: Ripley'sThe Races of Europewas rewritten in 1939 by Harvard physical anthropologistCarleton S. Coon. Coon, a 20th-century craniofacial anthropometrist, used the technique for hisThe Origin of Races(New York: Knopf, 1962). Because of the inconsistencies in the old three-part system (Caucasoid, Mongoloid, Negroid), Coon adopted a five-part scheme. He defined "Caucasoid" as a pattern of skull measurements and other phenotypical characteristics typical of populations inEurope,Central Asia,South Asia,West Asia,North Africa, and Northeast Africa (Ethiopia, andSomalia). He discarded the term "Negroid" as misleading since it implies skin tone, which is found at low latitudes around the globe and is a product of adaptation, and defined skulls typical ofsub-Saharan Africaas "Congoid" and those ofSouthern Africaas "Capoid". Finally, he split "Australoid" from "Mongoloid" along a line roughly similar to the modern distinction between sinodonts in the north and sundadonts in the south. He argued that these races had developed independently of each other over the past half-million years, developing into Homo Sapiens at different periods of time, resulting in different levels of civilization. This raised considerable controversy and led theAmerican Anthropological Associationto reject his approach without mentioning him by name.[23] InThe Races of Europe(1939) Coon classified Caucasoids into racial sub-groups named after regions or archaeological sites such as Brünn, Borreby, Alpine, Ladogan, East Baltic, Neo-Danubian, Lappish, Mediterranean, Atlanto-Mediterranean, Irano-Afghan, Nordic, Hallstatt, Keltic, Tronder, Dinaric, Noric and Armenoid. This typological view of race, however, was starting to be seen as out-of-date at the time of publication. Coon eventually resigned from theAmerican Association of Physical Anthropologists, while some of his other works were discounted because he would not agree with the evidence brought forward byFranz Boas,Stephen Jay Gould,Richard Lewontin,Leonard Liebermanand others.[24] The concept of biologically distinct races has been rendered obsolete by modern genetics.[25]Different methods of categorizing humans yield different groups, making them non-concordant.[26][27][obsolete source]Neither will the craniofacial method pin-point geographic origins reliably, due to variation in skulls within a geographic region. About one-third of "white" Americans have detectable African DNA markers,[28][29][obsolete source]and about five percent of "black" Americans have no detectable "negroid" traits at all, craniofacial or genetic.[30][obsolete source]Given three Americans who self-identify and are socially accepted as white, black and Hispanic, and given that they have precisely the same Afro-European mix of ancestries (one African great-grandparent), there is no objective test that will identify their group membership without an interview.[31][obsolete source][32][33][obsolete source]
https://en.wikipedia.org/wiki/Craniofacial_anthropometry
Human physical appearanceis the outwardphenotypeor look of human beings. There are functionally infinite variations in human phenotypes, though society reduces the variability to distinct categories. The physical appearance of humans, in particular those attributes which are regarded as important forphysical attractiveness, are believed byanthropologiststo affect the development of personality significantly andsocial relations. Many humans are acutely sensitive to their physical appearance.[1]Some differences in human appearance aregenetic, others are the result ofage,lifestyleordisease, and many are the result of personaladornment. Some people have linked some differences withethnicity, such as skeletal shape,prognathismor elongated stride. Different cultures place different degrees of emphasis on physical appearance and its importance to social status and other phenomena. Various aspects are considered relevant to the physical appearance of humans. Humans are distributed across the globe except for Antarctica and form a variable species. In adults, the average weight varies from around 40 kg (88 pounds) for the smallest and most lightly built tropical people to around 80 kg (176 pounds) for the heavier northern peoples.[2]Size also varies between the sexes, with thesexual dimorphismin humans being more pronounced than that ofchimpanzees, but less than the dimorphism found in gorillas.[3]The colouration of skin, hair and eyes also varies considerably, with darker pigmentation dominating in tropical climates and lighter in polar regions. The following are non-exhaustive lists of causes and kinds of variations which are completely or partially unintentional. Examples of unintentional causes ofvariationin body appearance: Examples of generalanatomicaloranthropometricvariations: Examples of variations of specific body parts: There are alsobody and skin unconventional variationssuch asamputationsorscars.
https://en.wikipedia.org/wiki/Human_appearance
Source separation,blind signal separation(BSS) orblind source separation, is the separation of a set of sourcesignalsfrom a set of mixed signals, without the aid of information (or with very little information) about the source signals or the mixing process. It is most commonly applied indigital signal processingand involves the analysis of mixtures ofsignals; the objective is to recover the original component signals from a mixture signal. The classical example of a source separation problem is thecocktail party problem, where a number of people are talking simultaneously in a room (for example, at acocktail party), and a listener is trying to follow one of the discussions. The human brain can handle this sort of auditory source separation problem, but it is a difficult problem in digital signal processing. This problem is in general highlyunderdetermined, but useful solutions can be derived under a surprising variety of conditions. Much of the early literature in this field focuses on the separation of temporal signals such as audio. However, blind signal separation is now routinely performed onmultidimensional data, such asimagesandtensors, which may involve no time dimension whatsoever. Several approaches have been proposed for the solution of this problem but development is currently still very much in progress. Some of the more successful approaches areprincipal components analysisandindependent component analysis, which work well when there are no delays or echoes present; that is, the problem is simplified a great deal. The field ofcomputational auditory scene analysisattempts to achieve auditory source separation using an approach that is based on human hearing. The human brain must also solve this problem in real time. In human perception this ability is commonly referred to asauditory scene analysisor thecocktail party effect. At a cocktail party, there is a group of people talking at the same time. You have multiple microphones picking up mixed signals, but you want to isolate the speech of a single person. BSS can be used to separate the individual sources by using mixed signals. In the presence of noise, dedicated optimization criteria need to be used. Figure 2 shows the basic concept of BSS. The individual source signals are shown as well as the mixed signals which are received signals. BSS is used to separate the mixed signals with only knowing mixed signals and nothing about original signal or how they were mixed. The separated signals are only approximations of the source signals. The separated images, were separated usingPythonand theShogun toolboxusing Joint Approximation Diagonalization of Eigen-matrices (JADE) algorithm which is based onindependent component analysis, ICA.[1]This toolbox method can be used with multi-dimensions but for an easy visual aspect images(2-D) were used. One of the practical applications being researched in this area ismedical imagingof the brain withmagnetoencephalography(MEG). This kind of imaging involves careful measurements ofmagnetic fieldsoutside the head which yield an accurate 3D-picture of the interior of the head. However, external sources ofelectromagnetic fields, such as a wristwatch on the subject's arm, will significantly degrade the accuracy of the measurement. Applying source separation techniques on the measured signals can help remove undesired artifacts from the signal. Inelectroencephalogram(EEG) andmagnetoencephalography(MEG), the interference from muscle activity masks the desired signal from brain activity. 
BSS, however, can be used to separate the two so an accurate representation of brain activity may be achieved.[2][3] Another application is the separation ofmusicalsignals. For a stereo mix of relatively simple signals it is now possible to make a fairly accurate separation, although someartifactsremain. Other applications:[2] The set of individual source signals,s(t)=(s1(t),…,sn(t))T{\displaystyle s(t)=(s_{1}(t),\dots ,s_{n}(t))^{T}}, is 'mixed' using a matrix,A=[aij]∈Rm×n{\displaystyle A=[a_{ij}]\in \mathbb {R} ^{m\times n}}, to produce a set of 'mixed' signals,x(t)=(x1(t),…,xm(t))T{\displaystyle x(t)=(x_{1}(t),\dots ,x_{m}(t))^{T}}, as follows. Usually,n{\displaystyle n}is equal tom{\displaystyle m}. Ifm>n{\displaystyle m>n}, then the system of equations is overdetermined and thus can be unmixed using a conventional linear method. Ifn>m{\displaystyle n>m}, the system is underdetermined and a non-linear method must be employed to recover the unmixed signals. The signals themselves can be multidimensional. x(t)=A⋅s(t){\displaystyle x(t)=A\cdot s(t)} The above equation is effectively 'inverted' as follows. Blind source separation separates the set of mixed signals,x(t){\displaystyle x(t)}, through the determination of an 'unmixing' matrix,B=[Bij]∈Rn×m{\displaystyle B=[B_{ij}]\in \mathbb {R} ^{n\times m}}, to 'recover' an approximation of the original signals,y(t)=(y1(t),…,yn(t))T{\displaystyle y(t)=(y_{1}(t),\dots ,y_{n}(t))^{T}}.[4][5][2] y(t)=B⋅x(t){\displaystyle y(t)=B\cdot x(t)} Since the chief difficulty of the problem is its underdetermination, methods for blind source separation generally seek to narrow the set of possible solutions in a way that is unlikely to exclude the desired solution. In one approach, exemplified byprincipalandindependentcomponent analysis, one seeks source signals that are minimallycorrelatedor maximallyindependentin a probabilistic orinformation-theoreticsense. A second approach, exemplified bynonnegative matrix factorization, is to impose structural constraints on the source signals. These structural constraints may be derived from a generative model of the signal, but are more commonly heuristics justified by good empirical performance. A common theme in the second approach is to impose some kind of low-complexity constraint on the signal, such assparsityin somebasisfor the signal space. This approach can be particularly effective if one requires not the whole signal, but merely its most salient features. There are different methods of blind signal separation:
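As a concrete illustration of the mixing model x(t) = A·s(t) and its approximate inversion y(t) = B·x(t), the following is a minimal Python sketch using NumPy and scikit-learn's FastICA implementation of independent component analysis; the source signals and the mixing matrix are illustrative assumptions, not taken from the text:

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two illustrative source signals s(t): a sine wave and a square wave.
s = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]
s += 0.02 * rng.standard_normal(s.shape)     # small sensor noise

# Mix with an assumed 2x2 mixing matrix A: x(t) = A s(t).
A = np.array([[1.0, 0.5],
              [0.7, 1.2]])
x = s @ A.T

# Blind separation: estimate the sources from the mixtures alone.
ica = FastICA(n_components=2, random_state=0)
y = ica.fit_transform(x)     # recovered sources, up to order and scale
B = ica.components_          # estimated unmixing matrix (the role of B above)

print(y.shape, B.shape)      # (2000, 2) (2, 2)

As noted above, the recovered components are only approximations of the sources, and they are returned up to an arbitrary permutation and scaling, an inherent ambiguity of blind separation.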
https://en.wikipedia.org/wiki/Blind_signal_separation
Digital image processingis the use of adigital computerto processdigital imagesthrough analgorithm.[1][2]As a subcategory or field ofdigital signal processing, digital image processing has many advantages overanalog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up ofnoiseanddistortionduring processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form ofmultidimensional systems. The generation and development of digital image processing are mainly affected by three factors: first, the development of computers;[3]second, the development of mathematics (especially the creation and improvement ofdiscrete mathematics theory);[4]and third, the demand for a wide range of applications in environment, agriculture, military, industry and medical science has increased.[5] Many of the techniques ofdigital imageprocessing, or digital picture processing as it often was called, were developed in the 1960s, atBell Laboratories, theJet Propulsion Laboratory,Massachusetts Institute of Technology,University of Maryland, and a few other research facilities, with application tosatellite imagery,wire-photostandards conversion,medical imaging,videophone,character recognition, and photograph enhancement.[6]The purpose of early image processing was to improve the quality of the image. It was aimed for human beings to improve the visual effect of people. In image processing, the input is a low-quality image, and the output is an image with improved quality. Common image processing include image enhancement, restoration, encoding, and compression. The first successful application was the American Jet Propulsion Laboratory (JPL). They used image processing techniques such as geometric correction, gradation transformation, noise removal, etc. on the thousands of lunar photos sent back by the Space Detector Ranger 7 in 1964, taking into account the position of the Sun and the environment of the Moon. The impact of the successful mapping of the Moon's surface map by the computer has been a success. Later, more complex image processing was performed on the nearly 100,000 photos sent back by the spacecraft, so that the topographic map, color map and panoramic mosaic of the Moon were obtained, which achieved extraordinary results and laid a solid foundation for human landing on the Moon.[7] The cost of processing was fairly high, however, with the computing equipment of that era. That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. This led to images being processed in real-time, for some dedicated problems such astelevision standards conversion. Asgeneral-purpose computersbecame faster, they started to take over the role of dedicated hardware for all but the most specialized and computer-intensive operations. With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing, and is generally used because it is not only the most versatile method, but also the cheapest. The basis for modernimage sensorsismetal–oxide–semiconductor(MOS) technology,[8]invented at Bell Labs between 1955 and 1960,[9][10][11][12][13][14]This led to the development of digitalsemiconductorimage sensors, including thecharge-coupled device(CCD) and later theCMOS sensor.[8] The charge-coupled device was invented byWillard S. BoyleandGeorge E. 
Smithat Bell Labs in 1969.[15]While researching MOS technology, they realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tinyMOS capacitor. As it was fairly straightforward tofabricatea series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next.[8]The CCD is a semiconductor circuit that was later used in the firstdigital video camerasfortelevision broadcasting.[16] TheNMOSactive-pixel sensor(APS) was invented byOlympusin Japan during the mid-1980s. This was enabled by advances in MOSsemiconductor device fabrication, withMOSFET scalingreaching smallermicron and then sub-micronlevels.[17][18]The NMOS APS was fabricated by Tsutomu Nakamura's team at Olympus in 1985.[19]TheCMOSactive-pixel sensor (CMOS sensor) was later developed byEric Fossum's team at theNASAJet Propulsion Laboratoryin 1993.[20]By 2007, sales of CMOS sensors had surpassed CCD sensors.[21] MOS image sensors are widely used inoptical mousetechnology. The first optical mouse, invented byRichard F. LyonatXeroxin 1980, used a5μmNMOSintegrated circuitsensor chip.[22][23]Since the first commercial optical mouse, theIntelliMouseintroduced in 1999, most optical mouse devices use CMOS sensors.[24][25] An important development in digitalimage compressiontechnology was thediscrete cosine transform(DCT), alossy compressiontechnique first proposed byNasir Ahmedin 1972.[26]DCT compression became the basis forJPEG, which was introduced by theJoint Photographic Experts Groupin 1992.[27]JPEG compresses images down to much smaller file sizes, and has become the most widely usedimage file formaton theInternet.[28]Its highly efficient DCT compression algorithm was largely responsible for the wide proliferation ofdigital imagesanddigital photos,[29]with several billion JPEG images produced every day as of 2015[update].[30] Medical imaging techniques produce very large amounts of data, especially from CT, MRI and PET modalities. As a result, storage and communications of electronic image data are prohibitive without the use of compression.[31][32]JPEG 2000image compression is used by theDICOMstandard for storage and transmission of medical images. The cost and feasibility of accessing large image data sets over low or various bandwidths are further addressed by use of another DICOM standard, calledJPIP, to enable efficient streaming of theJPEG 2000compressed image data.[33] Electronicsignal processingwas revolutionized by the wide adoption ofMOS technologyin the 1970s.[34]MOS integrated circuittechnology was the basis for the first single-chipmicroprocessorsandmicrocontrollersin the early 1970s,[35]and then the first single-chipdigital signal processor(DSP) chips in the late 1970s.[36][37]DSP chips have since been widely used in digital image processing.[36] Thediscrete cosine transform(DCT)image compressionalgorithm has been widely implemented in DSP chips, with many companies developing DSP chips based on DCT technology. DCTs are widely used forencoding, decoding,video coding,audio coding,multiplexing, control signals,signaling,analog-to-digital conversion, formattingluminanceand color differences, and color formats such asYUV444andYUV411. 
DCTs are also used for encoding operations such asmotion estimation,motion compensation,inter-frameprediction,quantization, perceptual weighting,entropy encoding, variable encoding, andmotion vectors, and decoding operations such as the inverse operation between different color formats (YIQ,YUVandRGB) for display purposes. DCTs are also commonly used forhigh-definition television(HDTV) encoder/decoder chips.[38] Digital image processing allows the use of much more complex algorithms, and hence, can offer both more sophisticated performance at simple tasks, and the implementation of methods which would be impossible by analogue means. In particular, digital image processing is a concrete application of, and a practical technology based on: Some techniques which are used in digital image processing include: Digital filters are used to blur and sharpen digital images. Filtering can be performed by: The following examples show both methods:[40] image = checkerboard F = Fourier Transform of image Show Image: log(1+Absolute Value(F)) Images are typically padded before being transformed to the Fourier space, thehighpass filteredimages below illustrate the consequences of different padding techniques: Notice that the highpass filter shows extra edges when zero padded compared to the repeated edge padding. MATLAB example for spatial domain highpass filtering. Affine transformationsenable basic image transformations including scale, rotate, translate, mirror and shear as is shown in the following examples:[40] To apply the affine matrix to an image, the image is converted to matrix in which each entry corresponds to the pixel intensity at that location. Then each pixel's location can be represented as a vector indicating the coordinates of that pixel in the image,[x,y], wherexandyare the row and column of a pixel in the image matrix. This allows the coordinate to be multiplied by an affine-transformation matrix, which gives the position that the pixel value will be copied to in the output image. However, to allow transformations that require translation transformations, 3-dimensionalhomogeneous coordinatesare needed. The third dimension is usually set to a non-zero constant, usually1, so that the new coordinate is[x,y, 1]. This allows the coordinate vector to be multiplied by a 3×3 matrix, enabling translation shifts. Thus, the third dimension, i.e. the constant1, allows translation. Because matrix multiplication isassociative, multiple affine transformations can be combined into a single affine transformation by multiplying the matrix of each individual transformation in the order that the transformations are done. This results in a single matrix that, when applied to a point vector, gives the same result as all the individual transformations performed on the vector[x,y, 1]in sequence. Thus a sequence of affine transformation matrices can be reduced to a single affine transformation matrix. For example, 2-dimensional coordinates only permit rotation about the origin(0, 0). But 3-dimensional homogeneous coordinates can be used to first translate any point to(0, 0), then perform the rotation, and lastly translate the origin(0, 0)back to the original point (the opposite of the first translation). 
These three affine transformations can be combined into a single matrix—thus allowing rotation around any point in the image.[41] Mathematical morphology(MM) is a nonlinear image processing framework that analyzes shapes within images by probing local pixel neighborhoods using a small, predefined function called astructuring element. In the context of grayscale images, MM is especially useful for denoising throughdilationanderosion—primitive operators that can be combined to build more complex filters. Suppose we have: Here,S{\displaystyle {\mathcal {S}}}defines the neighborhood of relative coordinates(m,n){\displaystyle (m,n)}over which local operations are computed. The values ofB(m,n){\displaystyle B(m,n)}bias the image during dilation and erosion. (f⊕B)(i,j)=max(m,n)∈S{f(i+m,j+n)+B(m,n)}.{\displaystyle (f\oplus B)(i,j)=\max _{(m,n)\in {\mathcal {S}}}{\Bigl \{}f(i+m,j+n)+B(m,n){\Bigr \}}.} (f⊕B)(1,1)=max(f(0,0)+B(−1,−1),45+1;f(1,0)+B(0,−1),50+2;f(2,0)+B(1,−1),65+1;f(0,1)+B(−1,0),40+2;f(1,1)+B(0,0),60+1;f(2,1)+B(1,0),55+1;f(0,2)+B(−1,1),25+1;f(1,2)+B(0,1),15+0;f(2,2)+B(1,1)5+3)=66.{\displaystyle {\begin{aligned}(f\oplus B)(1,1)=\max \!{\Bigl (}&f(0,0)+B(-1,-1),&\;45+1;&\\&f(1,0)+B(0,-1),&\;50+2;&\\&f(2,0)+B(1,-1),&\;65+1;&\\&f(0,1)+B(-1,0),&\;40+2;&\\&f(1,1)+B(0,0),&\;60+1;&\\&f(2,1)+B(1,0),&\;55+1;&\\&f(0,2)+B(-1,1),&\;25+1;&\\&f(1,2)+B(0,1),&\;15+0;&\\&f(2,2)+B(1,1)&\;5+3{\Bigr )}=66.\end{aligned}}} (f⊖B)(i,j)=min(m,n)∈S{f(i+m,j+n)−B(m,n)}.{\displaystyle (f\ominus B)(i,j)=\min _{(m,n)\in {\mathcal {S}}}{\Bigl \{}f(i+m,j+n)-B(m,n){\Bigr \}}.} (f⊖B)(1,1)=min(f(0,0)−B(−1,−1),45−1;f(1,0)−B(0,−1),50−2;f(2,0)−B(1,−1),65−1;f(0,1)−B(−1,0),40−2;f(1,1)−B(0,0),60−1;f(2,1)−B(1,0),55−1;f(0,2)−B(−1,1),25−1;f(1,2)−B(0,1),15−0;f(2,2)−B(1,1)5−3)=2.{\displaystyle {\begin{aligned}(f\ominus B)(1,1)=\min \!{\Bigl (}&f(0,0)-B(-1,-1),&\;45-1;&\\&f(1,0)-B(0,-1),&\;50-2;&\\&f(2,0)-B(1,-1),&\;65-1;&\\&f(0,1)-B(-1,0),&\;40-2;&\\&f(1,1)-B(0,0),&\;60-1;&\\&f(2,1)-B(1,0),&\;55-1;&\\&f(0,2)-B(-1,1),&\;25-1;&\\&f(1,2)-B(0,1),&\;15-0;&\\&f(2,2)-B(1,1)&\;5-3{\Bigr )}=2.\end{aligned}}} After applying dilation tof{\displaystyle f}:[45506540665525155]{\displaystyle {\begin{bmatrix}45&50&65\\40&66&55\\25&15&5\end{bmatrix}}} After applying erosion tof{\displaystyle f}:[4550654025525155]{\displaystyle {\begin{bmatrix}45&50&65\\40&2&55\\25&15&5\end{bmatrix}}} MM operations, such asopeningandclosing, are composite processes that utilize both dilation and erosion to modify the structure of an image. These operations are particularly useful for tasks such as noise removal, shape smoothing, and object separation. For example, applying opening to an imagef{\displaystyle f}with a structuring elementB{\displaystyle B}would first reduce small details (through erosion) and then restore the main shapes (through dilation). This ensures that unwanted noise is removed without significantly altering the size or shape of larger objects. For instance, applying closing to the same imagef{\displaystyle f}would fill in small gaps within objects, such as connecting breaks in thin lines or closing small holes, while ensuring that the surrounding areas are not significantly affected. Both opening and closing can be visualized as ways of refining the structure of an image: opening simplifies and removes small, unnecessary details, while closing consolidates and connects objects to form more cohesive structures. 
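The grayscale dilation and erosion worked out above can be reproduced with a few lines of NumPy. The image f and structuring element B below are inferred from the worked computation, since their explicit definitions do not appear in the text, and, as in the example, only interior pixels are processed, so border values pass through unchanged:

import numpy as np

# Grayscale image f and 3x3 structuring element B, inferred from the
# worked dilation/erosion example above.
f = np.array([[45, 50, 65],
              [40, 60, 55],
              [25, 15,  5]], dtype=int)
B = np.array([[1, 2, 1],
              [2, 1, 1],
              [1, 0, 3]], dtype=int)

def dilate_interior(f, B):
    out = f.copy()
    r = B.shape[0] // 2
    for i in range(r, f.shape[0] - r):
        for j in range(r, f.shape[1] - r):
            window = f[i - r:i + r + 1, j - r:j + r + 1]
            out[i, j] = np.max(window + B)    # (f dilated by B)(i, j)
    return out

def erode_interior(f, B):
    out = f.copy()
    r = B.shape[0] // 2
    for i in range(r, f.shape[0] - r):
        for j in range(r, f.shape[1] - r):
            window = f[i - r:i + r + 1, j - r:j + r + 1]
            out[i, j] = np.min(window - B)    # (f eroded by B)(i, j)
    return out

print(dilate_interior(f, B)[1, 1])   # 66, as in the worked example
print(erode_interior(f, B)[1, 1])    # 2,  as in the worked example

Opening and closing, mentioned above, are then obtained by composing these two functions: erosion followed by dilation, and dilation followed by erosion, respectively.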
Digital cameras generally include specialized digital image processing hardware – either dedicated chips or added circuitry on other chips – to convert the raw data from theirimage sensorinto acolor-correctedimage in a standardimage file format. Additional post processing techniques increase edge sharpness or color saturation to create more naturally looking images. Westworld(1973) was the first feature film to use the digital image processing topixellatephotography to simulate an android's point of view.[42]Image processing is also vastly used to produce thechroma keyeffect that replaces the background of actors with natural or artistic scenery. Face detectioncan be implemented withmathematical morphology, thediscrete cosine transform(DCT), and horizontalprojection. General method with feature-based method The feature-based method of face detection is using skin tone, edge detection, face shape, and feature of a face (like eyes, mouth, etc.) to achieve face detection. The skin tone, face shape, and all the unique elements that only the human face have can be described as features. Process explanation Image quality can be influenced by camera vibration, over-exposure, gray level distribution too centralized, and noise, etc. For example, noise problem can be solved bysmoothingmethod while gray level distribution problem can be improved byhistogram equalization. Smoothingmethod In drawing, if there is some dissatisfied color, taking some color around dissatisfied color and averaging them. This is an easy way to think of Smoothing method. Smoothing method can be implemented with mask andconvolution. Take the small image and mask for instance as below. image is[256531461283027322]{\displaystyle {\begin{bmatrix}2&5&6&5\\3&1&4&6\\1&28&30&2\\7&3&2&2\end{bmatrix}}} mask is[1/91/91/91/91/91/91/91/91/9]{\displaystyle {\begin{bmatrix}1/9&1/9&1/9\\1/9&1/9&1/9\\1/9&1/9&1/9\end{bmatrix}}} After convolution and smoothing, image is[25653910619927322]{\displaystyle {\begin{bmatrix}2&5&6&5\\3&9&10&6\\1&9&9&2\\7&3&2&2\end{bmatrix}}} Observing image[1, 1], image[1, 2], image[2, 1], and image[2, 2]. The original image pixel is 1, 4, 28, 30. After smoothing mask, the pixel becomes 9, 10, 9, 9 respectively. new image[1, 1] =19{\displaystyle {\tfrac {1}{9}}}* (image[0,0]+image[0,1]+image[0,2]+image[1,0]+image[1,1]+image[1,2]+image[2,0]+image[2,1]+image[2,2]) new image[1, 1] = floor(19{\displaystyle {\tfrac {1}{9}}}* (2+5+6+3+1+4+1+28+30)) = 9 new image[1, 2] = floor({19{\displaystyle {\tfrac {1}{9}}}* (5+6+5+1+4+6+28+30+2)) = 10 new image[2, 1] = floor(19{\displaystyle {\tfrac {1}{9}}}* (3+1+4+1+28+30+7+3+2)) = 9 new image[2, 2] = floor(19{\displaystyle {\tfrac {1}{9}}}* (1+4+6+28+30+2+3+2+2)) = 9 Gray Level Histogram method Generally, given a gray level histogram from an image as below. Changing the histogram to uniform distribution from an image is usually what we calledhistogram equalization. In discrete time, the area of gray level histogram is∑i=0kH(pi){\displaystyle \sum _{i=0}^{k}H(p_{i})}(see figure 1) while the area of uniform distribution is∑i=0kG(qi){\displaystyle \sum _{i=0}^{k}G(q_{i})}(see figure 2). It is clear that the area will not change, so∑i=0kH(pi)=∑i=0kG(qi){\displaystyle \sum _{i=0}^{k}H(p_{i})=\sum _{i=0}^{k}G(q_{i})}. 
From the uniform distribution, the probability of each gray level q_i, for 0 < i < k, is N²/(q_k − q_0). In continuous form, the equality of areas becomes ∫_{q_0}^{q} N²/(q_k − q_0) ds = ∫_{p_0}^{p} H(s) ds. Moreover, based on the definition of a function, the gray level histogram method amounts to finding a mapping f that satisfies f(p) = q. In the accompanying MATLAB example, salt-and-pepper noise with parameter 0.01 is added to the original image in order to create a noisy image.
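The 3×3 mean-mask smoothing and the gray level histogram method described above can be sketched in NumPy as follows. The 4×4 test image is the one from the smoothing example; note that rounding the neighborhood mean to the nearest integer, rather than taking the floor, reproduces the quoted values 9, 10, 9 and 9. A random 8-bit image stands in for the noisy test image of the MATLAB example:

import numpy as np

# 4x4 test image from the smoothing example above.
img = np.array([[2,  5,  6, 5],
                [3,  1,  4, 6],
                [1, 28, 30, 2],
                [7,  3,  2, 2]], dtype=float)

# 3x3 mean-mask smoothing on interior pixels.
smoothed = img.copy()
for i in range(1, img.shape[0] - 1):
    for j in range(1, img.shape[1] - 1):
        neighborhood = img[i - 1:i + 2, j - 1:j + 2]
        # Rounding the mean gives 9, 10, 9, 9, the values quoted above.
        smoothed[i, j] = np.round(neighborhood.mean())
print(smoothed[1:3, 1:3])   # [[ 9. 10.] [ 9.  9.]]

# Histogram equalization of an 8-bit grayscale image: map gray level p to q
# so that the cumulative histogram becomes approximately uniform.
def equalize(gray, levels=256):
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    cdf = hist.cumsum() / gray.size                              # cumulative form of H
    mapping = np.round((levels - 1) * cdf).astype(gray.dtype)    # q = f(p)
    return mapping[gray]

noisy = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
print(equalize(noisy).shape)    # (64, 64)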
https://en.wikipedia.org/wiki/Image_processing
Algebraic statistics is the use of algebra to advance statistics. Algebra has been useful for experimental design, parameter estimation, and hypothesis testing. Traditionally, algebraic statistics has been associated with the design of experiments and multivariate analysis (especially time series). In recent years, the term "algebraic statistics" has sometimes been restricted to label the use of algebraic geometry and commutative algebra in statistics. In the past, statisticians have used algebra to advance research in statistics. Some algebraic statistics led to the development of new topics in algebra and combinatorics, such as association schemes. For example, Ronald A. Fisher, Henry B. Mann, and Rosemary A. Bailey applied Abelian groups to the design of experiments. Experimental designs were also studied with affine geometry over finite fields and then with the introduction of association schemes by R. C. Bose. Orthogonal arrays were introduced by C. R. Rao also for experimental designs. Invariant measures on locally compact groups have long been used in statistical theory, particularly in multivariate analysis. Beurling's factorization theorem and much of the work on (abstract) harmonic analysis sought better understanding of the Wold decomposition of stationary stochastic processes, which is important in time series statistics. Encompassing previous results on probability theory on algebraic structures, Ulf Grenander developed a theory of "abstract inference". Grenander's abstract inference and his theory of patterns are useful for spatial statistics and image analysis; these theories rely on lattice theory. Partially ordered vector spaces and vector lattices are used throughout statistical theory. Garrett Birkhoff metrized the positive cone using Hilbert's projective metric and proved Jentzsch's theorem using the contraction mapping theorem.[1] Birkhoff's results have been used for maximum entropy estimation (which can be viewed as linear programming in infinite dimensions) by Jonathan Borwein and colleagues. Vector lattices and conical measures were introduced into statistical decision theory by Lucien Le Cam. In recent years, the term "algebraic statistics" has been used more restrictively, to label the use of algebraic geometry and commutative algebra to study problems related to discrete random variables with finite state spaces. Commutative algebra and algebraic geometry have applications in statistics because many commonly used classes of discrete random variables can be viewed as algebraic varieties. Consider a random variable X which can take on the values 0, 1, 2. Such a variable is completely characterized by the three probabilities p0 = Pr(X = 0), p1 = Pr(X = 1), p2 = Pr(X = 2), and these numbers satisfy p0, p1, p2 ≥ 0 and p0 + p1 + p2 = 1. Conversely, any three such numbers unambiguously specify a random variable, so we can identify the random variable X with the tuple (p0, p1, p2) ∈ R³. Now suppose X is a binomial random variable with parameter q and n = 2, i.e. X represents the number of successes when repeating a certain experiment two times, where each experiment has an individual success probability of q. Then p0 = (1 − q)², p1 = 2q(1 − q), p2 = q², and it is not hard to show that the tuples (p0, p1, p2) which arise in this way are precisely the ones satisfying 4·p0·p2 = p1². The latter is a polynomial equation defining an algebraic variety (or surface) in R³, and this variety, when intersected with the simplex given by p0 + p1 + p2 = 1, p0, p1, p2 ≥ 0, yields a piece of an algebraic curve which may be identified with the set of all 3-state Bernoulli variables.
Determining the parameterqamounts to locating one point on this curve; testing the hypothesis that a given variableXisBernoulliamounts to testing whether a certain point lies on that curve or not. Algebraic geometry has also recently found applications tostatistical learning theory, including ageneralizationof theAkaike information criteriontosingular statistical models.[2]
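The binomial example above can be verified numerically: for every success probability q, the resulting point (p0, p1, p2) lies on the variety defined by 4·p0·p2 = p1² and on the probability simplex. A short self-contained check (the particular values of q are arbitrary):

# For a binomial variable with n = 2 and success probability q:
#   p0 = (1 - q)**2,  p1 = 2*q*(1 - q),  p2 = q**2.
# These points satisfy the polynomial equation 4*p0*p2 - p1**2 = 0
# and lie on the probability simplex p0 + p1 + p2 = 1 with p_i >= 0.
for q in [0.0, 0.1, 0.25, 0.5, 0.9, 1.0]:
    p0, p1, p2 = (1 - q) ** 2, 2 * q * (1 - q), q ** 2
    assert abs(4 * p0 * p2 - p1 ** 2) < 1e-12
    assert abs(p0 + p1 + p2 - 1) < 1e-12 and min(p0, p1, p2) >= 0
print("all binomial(2, q) points lie on the variety 4*p0*p2 = p1**2")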
https://en.wikipedia.org/wiki/Algebraic_statistics
In statistics,combinatorial data analysis(CDA) is the study of data sets where the order in which objects are arranged is important. CDA can be used either to determine how well a givencombinatorialconstruct reflects the observed data, or to search for a suitable combinatorial construct that does fit the data.[1][2][3]
https://en.wikipedia.org/wiki/Combinatorial_data_analysis
Computational anatomyis an interdisciplinary field ofbiologyfocused on quantitative investigation and modelling of anatomical shapes variability.[1][2]It involves the development and application of mathematical, statistical and data-analytical methods for modelling and simulation of biological structures. The field is broadly defined and includes foundations inanatomy,applied mathematicsandpure mathematics,machine learning,computational mechanics,computational science,biological imaging,neuroscience,physics,probability, andstatistics; it also has strong connections withfluid mechanicsandgeometric mechanics. Additionally, it complements newer, interdisciplinary fields likebioinformaticsandneuroinformaticsin the sense that its interpretation uses metadata derived from the original sensor imaging modalities (of whichmagnetic resonance imagingis one example). It focuses on the anatomical structures being imaged, rather than the medical imaging devices. It is similar in spirit to the history ofcomputational linguistics, a discipline that focuses on the linguistic structures rather than thesensoracting as thetransmissionand communication media. In computational anatomy, thediffeomorphismgroup is used to study different coordinate systems viacoordinate transformationsas generated via theLagrangian and Eulerian velocities of flowinR3{\displaystyle {\mathbb {R} }^{3}}. Theflows between coordinates in computational anatomyare constrained to begeodesic flowssatisfyingthe principle of least action for the Kinetic energy of the flow. The kinetic energy is defined through aSobolev smoothnessnorm with strictly more than two generalized,square-integrablederivatives for each component of the flow velocity, which guarantees that the flows inR3{\displaystyle \mathbb {R} ^{3}}are diffeomorphisms.[3]It also implies that thediffeomorphic shape momentumtaken pointwise satisfying theEuler–Lagrange equation for geodesicsis determined by its neighbors through spatial derivatives on the velocity field. This separates the discipline from the case ofincompressible fluids[4]for which momentum is a pointwise function of velocity. Computational anatomy intersects the study ofRiemannian manifoldsand nonlinearglobal analysis, where groups of diffeomorphisms are the central focus. Emerging high-dimensional theories of shape[5]are central to many studies in computational anatomy, as are questions emerging from the fledgling field ofshape statistics. The metric structures in computational anatomy are related in spirit tomorphometrics, with the distinction that Computational anatomy focuses on an infinite-dimensional space ofcoordinate systemstransformed by adiffeomorphism, hence the central use of the terminologydiffeomorphometry, the metric space study of coordinate systems via diffeomorphisms. At computational anatomy's heart is the comparison of shape by recognizing in one shape the other. 
This connects it toD'Arcy Wentworth Thompson's developmentsOn Growth and Formwhich has led to scientific explanations ofmorphogenesis, the process by whichpatternsare formed inbiology.Albrecht Durer's Four Books on Human Proportion were arguably the earliest works on computational anatomy.[6][7][8]The efforts ofNoam Chomskyin his pioneering ofcomputational linguisticsinspired the original formulation of computational anatomy as a generative model of shape and form from exemplars acted upon via transformations.[9] Due to the availability of dense 3D measurements via technologies such asmagnetic resonance imaging(MRI), computational anatomy has emerged as a subfield ofmedical imagingandbioengineeringfor extracting anatomical coordinate systems at the morphome scale in 3D. The spirit of this discipline shares strong overlap with areas such ascomputer visionandkinematicsofrigid bodies, where objects are studied by analysing thegroupsresponsible for the movement in question. Computational anatomy departs from computer vision with its focus on rigid motions, as the infinite-dimensional diffeomorphism group is central to the analysis of Biological shapes. It is a branch of the image analysis and pattern theory school at Brown University[10]pioneered byUlf Grenander. In Grenander's general metricpattern theory, making spaces ofpatternsinto ametric spaceis one of the fundamental operations since being able to cluster and recognize anatomical configurations often requires a metric of close and far between shapes. Thediffeomorphometry metric[11]of computational anatomy measures how far two diffeomorphic changes of coordinates are from each other, which in turn induces ametric on the shapes and imagesindexed to them. The models of metric pattern theory,[12][13]in particular group action on the orbit of shapes and forms is a central tool to the formal definitions in computational anatomy. Computational anatomy is the study of shape and form at themorphomeorgross anatomymillimeter, ormorphologyscale, focusing on the study of sub-manifoldsofR3,{\displaystyle {\mathbb {R} }^{3},}points, curves surfaces and subvolumes of human anatomy. An early modern computational neuro-anatomist was David Van Essen[14]performing some of the early physical unfoldings of the human brain based on printing of a human cortex and cutting.Jean Talairach'spublication ofTalairach coordinatesis an important milestone at the morphome scale demonstrating the fundamental basis of local coordinate systems in studying neuroanatomy and therefore the clear link tocharts of differential geometry. Concurrently, virtual mapping in computational anatomy across high resolution dense image coordinates was already happening inRuzena Bajcy's[15]and Fred Bookstein's[16]earliest developments based oncomputed axial tomographyandmagnetic resonance imagery. 
The earliest introduction of the use of flows of diffeomorphisms for transformation of coordinate systems in image analysis and medical imaging was by Christensen, Joshi, Miller, and Rabbitt.[17][18][19] The first formalization of computational anatomy as an orbit of exemplar templates underdiffeomorphismgroup actionwas in the original lecture given by Grenander and Miller with that title in May 1997 at the 50th Anniversary of the Division of Applied Mathematics at Brown University,[20]and subsequent publication.[9]This was the basis for the strong departure from much of the previous work on advanced methods forspatial normalizationandimage registrationwhich were historically built on notions of addition and basis expansion. The structure preserving transformations central to the modern field of Computational Anatomy,homeomorphismsanddiffeomorphismscarry smooth submanifolds smoothly. They are generated viaLagrangian and Eulerian flowswhich satisfy a law of composition of functions forming the group property, but are not additive. The original model of computational anatomy was as the triple,(G,M,P),{\displaystyle ({\mathcal {G}},{\mathcal {M}},{\mathcal {P}})\ ,}the groupg∈G{\displaystyle g\in {\mathcal {G}}}, the orbit of shapes and formsm∈M{\displaystyle m\in {\mathcal {M}}}, and the probability lawsP{\displaystyle P}which encode the variations of the objects in the orbit. The template or collection of templates are elements in the orbitmtemp∈M{\displaystyle m_{\mathrm {temp} }\in {\mathcal {M}}}of shapes. The Lagrangian and Hamiltonian formulations of the equations of motion of computational anatomy took off post 1997 with several pivotal meetings including the 1997 Luminy meeting[21]organized by the Azencott[22]school atEcole-Normale Cachanon the "Mathematics of Shape Recognition" and the 1998 Trimestre atInstitute Henri Poincaréorganized byDavid Mumford"Questions Mathématiques en Traitement du Signal et de l'Image" which catalyzed the Hopkins-Brown-ENS Cachan groups and subsequent developments and connections of computational anatomy to developments in global analysis. The developments in computational anatomy included the establishment of the Sobolev smoothness conditions on the diffeomorphometry metric to insure existence of solutions ofvariationalproblems in the space of diffeomorphisms,[23][24]the derivation of the Euler–Lagrange equations characterizing geodesics through the group and associated conservation laws,[25][26][27]the demonstration of the metric properties of the right invariant metric,[28]the demonstration that the Euler–Lagrange equations have a well-posed initial value problem with unique solutions for all time,[29]and with the first results on sectional curvatures for the diffeomorphometry metric in landmarked spaces.[30]Following the Los Alamos meeting in 2002,[31]Joshi's[32]original large deformation singularLandmarksolutions in computational anatomy were connected to peakedsolitonsorpeakons[33]as solutions for theCamassa–Holmequation. 
Subsequently, connections were made between computational anatomy's Euler–Lagrange equations for momentum densities for the right-invariant metric satisfying Sobolev smoothness toVladimir Arnold's[4]characterization of theEuler equationfor incompressible flows as describing geodesics in the group of volume preserving diffeomorphisms.[34][35]The first algorithms, generally termed LDDMM for large deformation diffeomorphic mapping for computing connections between landmarks in volumes[32][36][37]and spherical manifolds,[38]curves,[39]currents and surfaces,[40][41][42]volumes,[43]tensors,[44]varifolds,[45]and time-series[46][47][48]have followed. These contributions of computational anatomy to the global analysis associated to the infinite dimensional manifolds of subgroups of the diffeomorphism group is far from trivial. The original idea of doing differential geometry, curvature and geodesics on infinite dimensional manifolds goes back toBernhard Riemann'sHabilitation(Ueber die Hypothesen, welche der Geometrie zu Grunde liegen[49][50]); the key modern book laying the foundations of such ideas in global analysis are from Michor.[51] The applications within medical imaging of computational anatomy continued to flourish after two organized meetings at theInstitute for Pure and Applied Mathematicsconferences[52][53]atUniversity of California, Los Angeles. Computational anatomy has been useful in creating accurate models of the atrophy of the human brain at the morphome scale, as well as Cardiac templates,[54]as well as in modeling biological systems.[55]Since the late 1990s, computational anatomy has become an important part of developing emerging technologies for the field of medical imaging. Digital atlases are a fundamental part of modern Medical-school education[56][57]and in neuroimaging research at the morphome scale.[58][59]Atlas based methods and virtual textbooks[60]which accommodate variations as in deformable templates are at the center of many neuro-image analysis platforms including Freesurfer,[61]FSL,[62]MRIStudio,[63]SPM.[64]Diffeomorphic registration,[18]introduced in the 1990s, is now an important player with existing codes bases organized around ANTS,[65]DARTEL,[66]DEMONS,[67]LDDMM,[68]StationaryLDDMM,[69]FastLDDMM,[70]are examples of actively used computational codes for constructing correspondences between coordinate systems based on sparse features and dense images.Voxel-based morphometryis an important technology built on many of these principles. The model of human anatomy is a deformable template, an orbit of exemplars under group action. Deformable template models have been central to Grenander's metric pattern theory, accounting for typicality via templates, and accounting for variability via transformation of the template. An orbit under group action as the representation of the deformable template is a classic formulation from differential geometry. 
The space of shapes are denotedm∈M{\displaystyle m\in {\mathcal {M}}}, with thegroup(G,∘){\displaystyle ({\mathcal {G}},\circ )}with law of composition∘{\displaystyle \circ }; the action of the group on shapes is denotedg⋅m{\displaystyle g\cdot m}, where the action of the groupg⋅m∈M,m∈M{\displaystyle g\cdot m\in {\mathcal {M}},m\in {\mathcal {M}}}is defined to satisfy The orbitM{\displaystyle {\mathcal {M}}}of the template becomes the space of all shapes,M≐{m=g⋅mtemp,g∈G}{\displaystyle {\mathcal {M}}\doteq \{m=g\cdot m_{\mathrm {temp} },g\in {\mathcal {G}}\}}, being homogenous under the action of the elements ofG{\displaystyle {\mathcal {G}}}. The orbit model of computational anatomy is an abstract algebra – to be compared tolinear algebra– since the groups act nonlinearly on the shapes. This is a generalization of the classical models of linear algebra, in which the set of finite dimensionalRn{\displaystyle {\mathbb {R} }^{n}}vectors are replaced by the finite-dimensional anatomical submanifolds (points, curves, surfaces and volumes) and images of them, and then×n{\displaystyle n\times n}matrices of linear algebra are replaced by coordinate transformations based on linear and affine groups and the more general high-dimensional diffeomorphism groups. The central objects are shapes or forms in computational anatomy, one set of examples being the 0,1,2,3-dimensional submanifolds ofR3{\displaystyle {\mathbb {R} }^{3}}, a second set of examples being images generated viamedical imagingsuch as viamagnetic resonance imaging(MRI) andfunctional magnetic resonance imaging. The 0-dimensional manifolds are landmarks or fiducial points; 1-dimensional manifolds are curves such as sulcal and gyral curves in the brain; 2-dimensional manifolds correspond to boundaries of substructures in anatomy such as the subcortical structures of themidbrainor the gyral surface of theneocortex; subvolumes correspond to subregions of the human body, theheart, thethalamus, the kidney. The landmarksX≐{x1,…,xn}⊂R3∈M{\displaystyle X\doteq \{x_{1},\dots ,x_{n}\}\subset {\mathbb {R} }^{3}\in {\mathcal {M}}}are a collections of points with no other structure, delineating important fiducials within human shape and form (see associated landmarked image). The sub-manifoldshapes such as surfacesX⊂R3∈M{\displaystyle X\subset {\mathbb {R} }^{3}\in {\mathcal {M}}}are collections of points modeled as parametrized by a local chart orimmersionm:U⊂R1,2→R3{\displaystyle m:U\subset {\mathbb {R} }^{1,2}\rightarrow {\mathbb {R} }^{3}},m(u),u∈U{\displaystyle m(u),u\in U}(see Figure showing shapes as mesh surfaces). The images such as MR images or DTI imagesI∈M{\displaystyle I\in {\mathcal {M}}}, and are dense functionsI(x),x∈X⊂R1,2,3{\displaystyle I(x),x\in X\subset {\mathbb {R} }^{1,2,3}}are scalars, vectors, and matrices (see Figure showing scalar image). Groupsandgroup actionsare familiar to the Engineering community with the universal popularization and standardization oflinear algebraas a basic model for analyzingsignals and systemsinmechanical engineering,electrical engineeringandapplied mathematics. 
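As a concrete, deliberately simplified illustration of the orbit construction M = {g · m_temp}, the following sketch uses a finite-dimensional matrix group of planar rotations acting on a landmark template; in computational anatomy the group would instead be the infinite-dimensional diffeomorphism group, so all names and values here are illustrative assumptions only.

```python
import numpy as np

# Toy orbit M = {g . m_temp : g in G} with G a finite-dimensional group of
# planar rotations acting on a landmark template (illustration only).
m_temp = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -0.5]])  # 4 landmarks in R^2

def rotation(theta):
    """Group element g: a 2x2 rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# A few elements of the orbit under the action g . m = m @ g^T
orbit_samples = [m_temp @ rotation(theta).T for theta in np.linspace(0.0, np.pi, 5)]
for g_m in orbit_samples:
    print(np.round(g_m, 3))
```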
In linear algebra the matrix groups (matrices with inverses) are the central structure, with group action defined by the usual definition ofA{\displaystyle A}as ann×n{\displaystyle n\times n}matrix, acting onx∈Rn{\displaystyle x\in {\mathbb {R} }^{n}}asn×1{\displaystyle n\times 1}vectors; the orbit in linear-algebra is the set ofn{\displaystyle n}-vectors given byy=A⋅x∈Rn{\displaystyle y=A\cdot x\in {\mathbb {R} }^{n}}, which is a group action of the matrices through the orbit ofRn{\displaystyle {\mathbb {R} }^{n}}. The central group in computational anatomy defined on volumes inR3{\displaystyle {\mathbb {R} }^{3}}are thediffeomorphismsG≐Diff{\displaystyle {\mathcal {G}}\doteq \operatorname {Diff} }which are mappings with 3-componentsφ(⋅)=(φ1(⋅),φ2(⋅),φ3(⋅)){\displaystyle \varphi (\cdot )=(\varphi _{1}(\cdot ),\varphi _{2}(\cdot ),\varphi _{3}(\cdot ))}, law of composition of functionsφ∘φ′(⋅)≐φ(φ′(⋅)){\displaystyle \varphi \circ \varphi ^{\prime }(\cdot )\doteq \varphi (\varphi ^{\prime }(\cdot ))}, with inverseφ∘φ−1(⋅)=φ(φ−1(⋅))=id{\displaystyle \varphi \circ \varphi ^{-1}(\cdot )=\varphi (\varphi ^{-1}(\cdot ))={\rm {id}}}. Most popular are scalar images,I(x),x∈R3{\displaystyle I(x),x\in {\mathbb {R} }^{3}}, with action on the right via the inverse. For sub-manifoldsX⊂R3∈M{\displaystyle X\subset {\mathbb {R} }^{3}\in {\mathcal {M}}}, parametrized by a chart orimmersionm(u),u∈U{\displaystyle m(u),u\in U}, the diffeomorphic action the flow of the position Severalgroup actions in computational anatomyhave been defined.[citation needed] For the study ofrigid bodykinematics, the low-dimensional matrixLie groupshave been the central focus. The matrix groups are low-dimensional mappings, which are diffeomorphisms that provide one-to-one correspondences between coordinate systems, with a smooth inverse. Thematrix groupof rotations and scales can be generated via a closed form finite-dimensional matrices which are solution of simple ordinary differential equations with solutions given by the matrix exponential. For the study of deformable shape in computational anatomy, a more general diffeomorphism group has been the group of choice, which is the infinite dimensional analogue. The high-dimensional diffeomorphism groups used in Computational Anatomy are generated via smooth flowsφt,t∈[0,1]{\displaystyle \varphi _{t},t\in [0,1]}which satisfy theLagrangian and Eulerian specification of the flow fieldsas first introduced in,[17][19][71]satisfying the ordinary differential equation: withv≐(v1,v2,v3){\displaystyle v\doteq (v_{1},v_{2},v_{3})}the vector fields onR3{\displaystyle {\mathbb {R} }^{3}}termed theEulerianvelocity of the particles at positionφ{\displaystyle \varphi }of the flow. The vector fields are functions in a function space, modelled as a smoothHilbertspace of high-dimension, with the Jacobian of the flowDφ≐(∂φi∂xj){\displaystyle \ D\varphi \doteq \left({\frac {\partial \varphi _{i}}{\partial x_{j}}}\right)}a high-dimensional field in a function space as well, rather than a low-dimensional matrix as in the matrix groups. Flows were first introduced[72][73]for large deformations in image matching;φ˙t(x){\displaystyle {\dot {\varphi }}_{t}(x)}is the instantaneous velocity of particlex{\displaystyle x}at timet{\displaystyle t}. The inverseφt−1,t∈[0,1]{\displaystyle \varphi _{t}^{-1},t\in [0,1]}required for the group is defined on the Eulerian vector-field withadvectiveinverse flow The group of diffeomorphisms is very big. 
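The flow construction just described can be illustrated numerically. The sketch below is an assumption-laden toy, not a published algorithm: it integrates the Lagrangian flow equation with forward Euler for a hand-picked smooth velocity field and checks that flowing backward approximately inverts the map.

```python
import numpy as np

# Minimal sketch: generate a large-deformation map by integrating the flow
#   d/dt phi_t(x) = v(phi_t(x)),  phi_0 = id,
# for a smooth, time-independent velocity field v on R^2 (forward Euler).

def v(x):
    """A smooth synthetic velocity field on R^2 (assumed for illustration)."""
    return 0.3 * np.stack([np.sin(np.pi * x[:, 1]), np.cos(np.pi * x[:, 0])], axis=1)

def flow(points, n_steps=50, direction=+1.0):
    """Integrate the flow acting on particle positions (Lagrangian view)."""
    phi = points.copy()
    dt = direction / n_steps
    for _ in range(n_steps):
        phi = phi + dt * v(phi)
    return phi

x = np.random.rand(5, 2)            # particles in the unit square
phi_x = flow(x)                     # phi_1(x): deformed positions
x_back = flow(phi_x, direction=-1)  # approximate inverse flow phi_1^{-1}
print("round-trip error:", np.abs(x_back - x).max())  # small for fine step sizes
```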
To ensure smooth flows of diffeomorphisms avoidingshock-like solutionsfor the inverse, the vector fields must be at least 1-time continuously differentiable in space.[74][75]For diffeomorphisms onR3{\displaystyle {\mathbb {R} }^{3}}, vector fields are modelled as elements of the Hilbert space(V,‖⋅‖V){\displaystyle (V,\|\cdot \|_{V})}using theSobolevembedding theorems so that each element has strictly greater than 2 generalized square-integrable spatial derivatives (thusvi∈H03,i=1,2,3,{\displaystyle v_{i}\in H_{0}^{3},i=1,2,3,}is sufficient), yielding 1-time continuously differentiable functions.[74][75] The diffeomorphism group are flows with vector fields absolutely integrable in Sobolev norm: where‖v‖V2≐∫XAv⋅vdx,v∈V,{\displaystyle \|v\|_{V}^{2}\doteq \int _{X}Av\cdot v\,dx,\ v\in V\ ,}with the linear operatorA{\displaystyle A}mapping to the dual spaceA:V↦V∗{\displaystyle A:V\mapsto V^{*}}, with the integral calculated by integration by parts whenAv∈V∗{\displaystyle Av\in V^{*}}is a generalized function in the dual space. The modelling approach used in computational anatomy enforces a continuous differentiability condition on the vector fields by modelling the space of vector fields(V,‖⋅‖V){\displaystyle (V,\|\cdot \|_{V})}as areproducing kernel Hilbert space(RKHS), with the norm defined by a 1-1, differential operatorA:V→V∗{\displaystyle A:V\rightarrow V^{*}}, Green's inverseK=A−1{\displaystyle K=A^{-1}}. The norm of the Hilbert space is induced by the differential operator. Forσ(v)≐Av∈V∗{\displaystyle \sigma (v)\doteq Av\in V^{*}}a generalized function or distribution, define the linear form as(σ|w)≐∫R3∑iwi(x)σi(dx){\displaystyle (\sigma |w)\doteq \int _{{\mathbb {R} }^{3}}\sum _{i}w_{i}(x)\sigma _{i}(dx)}. This determines the norm on(V,‖⋅‖V){\displaystyle (V,\|\cdot \|_{V})}according to SinceA{\displaystyle A}is a differential operator, finiteness of the norm-square(Av|v)<∞{\displaystyle (Av|v)<\infty }includes derivatives from the differential operator implying smoothness of the vector fields.TheSobolev embeddingtheorem arguments were made in[74][75]demonstrating that 1-continuous derivative is required for smooth flows. For proper choice ofA{\displaystyle A}then(V,‖⋅‖V){\displaystyle (V,\|\cdot \|_{V})}is an RKHS with the operatorK=A−1:V∗→V{\displaystyle K=A^{-1}:V^{*}\rightarrow V}termed theGreen'soperator generated from theGreen's function(scalar case) for the vector field case. The Green's kernels associated to the differential operator smooths since the kernelk(⋅,⋅){\displaystyle k(\cdot ,\cdot )}is continuously differentiable in both variables implying Whenσ≐μdx{\displaystyle \sigma \doteq \mu \,dx}, a vector density,(σ∣v)≐∫v⋅μdx.{\displaystyle (\sigma \mid v)\doteq \int v\cdot \mu \,dx.} The study of metrics on groups of diffeomorphisms and the study of metrics between manifolds and surfaces has been an area of significant investigation.[28][76][77][78][79][80]The diffeomorphometry metric measures how close and far two shapes or images are from each other; the metric length is the shortest length of the flow which carries one coordinate system into the other. Oftentimes, the familiar Euclidean metric is not directly applicable because the patterns of shapes and images do not form a vector space. In theRiemannian orbit model of computational anatomy, diffeomorphisms acting on the formsφ⋅m∈M,φ∈DiffV,m∈M{\displaystyle \varphi \cdot m\in {\mathcal {M}},\varphi \in \operatorname {Diff} _{V},m\in {\mathcal {M}}}do not act linearly. 
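A minimal sketch of the Green's-operator smoothing is given below: a singular momentum supported on a few points induces a smooth velocity field v(x) = Σ_i K(x, x_i) p_i. A scalar Gaussian kernel stands in for the Green's kernel of the operator A, which is an illustrative assumption of the example.

```python
import numpy as np

# When the momentum sigma = Av is a sum of weighted Diracs at points x_i,
# the induced smooth velocity field is v(x) = sum_i K(x, x_i) p_i.
# A scalar Gaussian kernel is used as a stand-in for the Green's kernel K = A^{-1}.

def gauss_kernel(x, y, sigma=0.25):
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

x_i = np.array([[0.3, 0.3], [0.7, 0.6]])      # momentum locations
p_i = np.array([[1.0, 0.0], [0.0, -1.0]])     # momentum vectors (weights)

grid = np.stack(np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5)), -1).reshape(-1, 2)
v_grid = gauss_kernel(grid, x_i) @ p_i        # smooth velocity induced by singular momentum
print(v_grid.shape)                           # (25, 2): a vector at every grid point
```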
There are many ways to define metrics, and for the sets associated to shapes theHausdorff metricis another. The method we use to induce theRiemannian metricis used to induce the metric on the orbit of shapes by defining it in terms of the metric length between diffeomorphic coordinate system transformations of the flows. Measuring the lengths of the geodesic flow between coordinates systems in the orbit of shapes is calleddiffeomorphometry. Define the distance on the group of diffeomorphisms this is the right-invariant metric of diffeomorphometry,[11][28]invariant to reparameterization of space since for allφ∈DiffV{\displaystyle \varphi \in \operatorname {Diff} _{V}}, The distance on shapes and forms,[81]dM:M×M→R+{\displaystyle d_{\mathcal {M}}:{\mathcal {M}}\times {\mathcal {M}}\rightarrow \mathbb {R} ^{+}}, the images[28]are denoted with the orbit asI∈I{\displaystyle I\in {\mathcal {I}}}and metric,dI{\displaystyle ,d_{\mathcal {I}}}. In classical mechanics the evolution of physical systems is described by solutions to the Euler–Lagrange equations associated to theLeast-action principleofHamilton. This is a standard way, for example of obtainingNewton's laws of motionof free particles. More generally, the Euler–Lagrange equations can be derived for systems ofgeneralized coordinates. The Euler–Lagrange equation in computational anatomy describes the geodesic shortest path flows between coordinate systems of the diffeomorphism metric. In computational anatomy the generalized coordinates are the flow of the diffeomorphism and its Lagrangian velocityφ,φ˙{\displaystyle \varphi ,{\dot {\varphi }}}, the two related via the Eulerian velocityv≐φ˙∘φ−1{\displaystyle v\doteq {\dot {\varphi }}\circ \varphi ^{-1}}.Hamilton's principlefor generating the Euler–Lagrange equation requires the action integral on the Lagrangian given by the Lagrangian is given by the kinetic energy: In computational anatomy,Av{\displaystyle Av}was first called theEulerian or diffeomorphic shape momentum[82]since when integrated against Eulerian velocityv{\displaystyle v}gives energy density, and since there is aconservation of diffeomorphic shape momentumwhich holds. The operatorA{\displaystyle A}is the generalizedmoment of inertiaor inertial operator. Classical calculation of the Euler–Lagrange equation fromHamilton's principlerequires the perturbation of the Lagrangian on the vector field in the kinetic energy with respect to first order perturbation of the flow. This requires adjustment by theLie bracket of vector field, given by operatoradv:w∈V↦V{\displaystyle ad_{v}:w\in V\mapsto V}which involves the Jacobian given by Defining the adjointadv∗:V∗→V∗,{\displaystyle ad_{v}^{*}:V^{*}\rightarrow V^{*},}then the first order variation gives the Eulerian shape momentumAv∈V∗{\displaystyle Av\in V^{*}}satisfying the generalized equation: meaning for all smoothw∈V,{\displaystyle w\in V,} Computational anatomy is the study of the motions of submanifolds, points, curves, surfaces and volumes. Momentum associated to points, curves and surfaces are all singular, implying the momentum is concentrated on subsets ofR3{\displaystyle {\mathbb {R} }^{3}}which are dimension≤2{\displaystyle \leq 2}inLebesgue measure. In such cases, the energy is still well defined(Avt∣vt){\displaystyle (Av_{t}\mid v_{t})}since althoughAvt{\displaystyle Av_{t}}is a generalized function, the vector fields are smooth and the Eulerian momentum is understood via its action on smooth functions. 
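For singular (landmark) momentum the kinetic energy reduces to a finite double sum, which the following sketch evaluates; the Gaussian kernel again stands in for the Green's kernel and is an assumption of the illustration, not a prescribed choice.

```python
import numpy as np

# Sketch of the kinetic energy for singular (landmark) momentum:
#   ||v||_V^2 = (Av | v) = sum_{ij} p_i . K(x_i, x_j) p_j
# with a Gaussian kernel standing in for the Green's kernel (illustrative choice).

def gauss_kernel(x, y, sigma=0.25):
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kinetic_energy(x, p, sigma=0.25):
    K = gauss_kernel(x, x, sigma)
    return np.einsum('ia,ij,ja->', p, K, p)   # sum_{ij} (p_i . p_j) K_ij

x = np.array([[0.2, 0.2], [0.8, 0.4], [0.5, 0.9]])
p = np.array([[0.5, 0.0], [0.0, -0.3], [0.2, 0.2]])
print(kinetic_energy(x, p))  # squared norm of the induced velocity field
```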
The perfect illustration of this is even when it is a superposition of delta-diracs, the velocity of the coordinates in the entire volume move smoothly. The Euler–Lagrange equation (EL-general) on diffeomorphisms for generalized functionsAv∈V∗{\displaystyle Av\in V^{*}}was derived in.[83]InRiemannian Metric and Lie-Bracket Interpretation of the Euler–Lagrange Equation on Geodesicsderivations are provided in terms of the adjoint operator and the Lie bracket for the group of diffeomorphisms. It has come to be called EPDiff equation for diffeomorphisms connecting to the Euler-Poincare method having been studied in the context of the inertial operatorA=identity{\displaystyle A=identity}for incompressible, divergence free, fluids.[35][84] For the momentum density case(Avt∣w)=∫Xμt⋅wdx{\displaystyle (Av_{t}\mid w)=\int _{X}\mu _{t}\cdot w\,dx}, then Euler–Lagrange equation has a classical solution: The Euler–Lagrange equation on diffeomorphisms, classically defined for momentum densities first appeared in[85]for medical image analysis. In medical imaging and computational anatomy, positioning and coordinatizing shapes are fundamental operations; the system for positioning anatomical coordinates and shapes built on the metric and the Euler–Lagrange equation a geodesic positioning system as first explicated in Miller Trouve and Younes.[11]Solving the geodesic from the initial conditionv0{\displaystyle v_{0}}is termed theRiemannian-exponential,a mappingExpid⁡(⋅):V→DiffV{\displaystyle \operatorname {Exp} _{\rm {id}}(\cdot ):V\to \operatorname {Diff} _{V}}at identity to the group. The Riemannian exponential satisfiesExpid⁡(v0)=φ1{\displaystyle \operatorname {Exp} _{\rm {id}}(v_{0})=\varphi _{1}}for initial conditionφ˙0=v0{\displaystyle {\dot {\varphi }}_{0}=v_{0}}, vector field dynamicsφ˙t=vt∘φt,t∈[0,1]{\displaystyle {\dot {\varphi }}_{t}=v_{t}\circ \varphi _{t},t\in [0,1]}, Computing the flowv0{\displaystyle v_{0}}onto coordinatesRiemannian logarithm,[11][81]mappingLogid(⋅):DiffV→V{\displaystyle Log_{\rm {id}}(\cdot ):\operatorname {Diff} _{V}\to V}at identity fromφ{\displaystyle \varphi }to vector fieldv0∈V{\displaystyle v_{0}\in V}; Extended to the entire group they become These are inverses of each other for unique solutions of Logarithm; the first is calledgeodesic positioning, the lattergeodesic coordinates(seeexponential map, Riemannian geometryfor the finite dimensional version). Thegeodesic metricis a local flattening of the Riemannian coordinate system (see figure). In computational anatomy the diffeomorphisms are used to push the coordinate systems, and the vector fields are used as the control within the anatomical orbit or morphological space. The model is that of a dynamical system, the flow of coordinatest↦φt∈DiffV{\displaystyle t\mapsto \varphi _{t}\in \operatorname {Diff} _{V}}and the control the vector fieldt↦vt∈V{\displaystyle t\mapsto v_{t}\in V}related viaφ˙t=vt⋅φt,φ0=id.{\displaystyle {\dot {\varphi }}_{t}=v_{t}\cdot \varphi _{t},\varphi _{0}={\rm {id}}.}The Hamiltonian view[81][86][87][88][89]reparameterizes the momentum distributionAv∈V∗{\displaystyle Av\in V^{*}}in terms of theconjugate momentumorcanonical momentum, introduced as a Lagrange multiplierp:φ˙↦(p∣φ˙){\displaystyle p:{\dot {\varphi }}\mapsto (p\mid {\dot {\varphi }})}constraining the Lagrangian velocityφ˙t=vt∘φt{\displaystyle {\dot {\varphi }}_{t}=v_{t}\circ \varphi _{t}}.accordingly: This function is the extended Hamiltonian. 
ThePontryagin maximum principle[81]gives the optimizing vector field which determines the geodesic flow satisfyingφ˙t=vt∘φt,φ0=id,{\displaystyle {\dot {\varphi }}_{t}=v_{t}\circ \varphi _{t},\varphi _{0}={\rm {id}},}as well as the reduced Hamiltonian The Lagrange multiplier in its action as a linear form has its own inner product of the canonical momentum acting on the velocity of the flow which is dependent on the shape, e.g. for landmarks a sum, for surfaces a surface integral, and. for volumes it is a volume integral with respect todx{\displaystyle dx}onR3{\displaystyle {\mathbb {R} }^{3}}. In all cases the Greens kernels carry weights which are the canonical momentum evolving according to an ordinary differential equation which corresponds to EL but is the geodesic reparameterization in canonical momentum. The optimizing vector field is given by with dynamics of canonical momentum reparameterizing the vector field along the geodesic Whereas the vector fields are extended across the entire background space ofR3{\displaystyle {\mathbb {R} }^{3}}, the geodesic flows associated to the submanifolds has Eulerian shape momentum which evolves as a generalized functionAvt∈V∗{\displaystyle Av_{t}\in V^{*}}concentrated to the submanifolds. For landmarks[90][91][92]thegeodesics have Eulerian shape momentum which are a superposition of delta distributionstravelling with the finite numbers of particles; the diffeomorphic flow of coordinates have velocities in the range of weighted Green's Kernels. For surfaces, the momentum is asurface integral of delta distributionstravelling with the surface.[11] The geodesics connecting coordinate systems satisfyingEL-generalhave stationarity of the Lagrangian. The Hamiltonian is given by the extremum along the patht∈[0,1]{\displaystyle t\in [0,1]},H(φ,p)=maxvH(φ,p,v){\displaystyle H(\varphi ,p)=\max _{v}H(\varphi ,p,v)}, equalling theLagrangian-kinetic-energyand is stationary alongEL-general. Defining the geodesic velocity at the identityv0=arg⁡maxvH(φ0,p0,v){\displaystyle v_{0}=\arg \max _{v}H(\varphi _{0},p_{0},v)}, then along the geodesic The stationarity of the Hamiltonian demonstrates the interpretation of the Lagrange multiplier as momentum; integrated against velocityφ˙{\displaystyle {\dot {\varphi }}}gives energy density. The canonical momentum has many names. Inoptimal control, the flowsφ{\displaystyle \varphi }is interpreted as the state, andp{\displaystyle p}is interpreted as conjugate state, or conjugate momentum.[93]The geodesi of EL implies specification of the vector fieldsv0{\displaystyle v_{0}}or Eulerian momentumAv0{\displaystyle Av_{0}}att=0{\displaystyle t=0}, or specification of canonical momentump0{\displaystyle p_{0}}determines the flow. In computational anatomy the submanifolds are pointsets, curves, surfaces and subvolumes which are the basic primitives. The geodesic flows between the submanifolds determine the distance, and form the basic measuring and transporting tools ofdiffeomorphometry. Att=0{\displaystyle t=0}the geodesic has vector fieldv0=Kp0{\displaystyle v_{0}=Kp_{0}}determined by the conjugate momentum and the Green's kernel of the inertial operator defining the Eulerian momentumK=A−1{\displaystyle K=A^{-1}}. The metric distance between coordinate systems connected via the geodesic determined by the induced distance between identity and group element: Given the least-action there is a natural definition of momentum associated to generalized coordinates; the quantity acting against velocity gives energy. 
The field has studied two forms, the momentum associated to the Eulerian vector field termedEulerian diffeomorphic shape momentum, and the momentum associated to the initial coordinates or canonical coordinates termedcanonical diffeomorphic shape momentum. Each has a conservation law. The conservation of momentum goes hand in hand with theEL-general. In computational anatomy,Av{\displaystyle Av}is the Eulerianmomentumsince when integrated against Eulerian velocityv{\displaystyle v}gives energy density; operatorA{\displaystyle A}the generalizedmoment of inertiaor inertial operator which acting on the Eulerian velocity gives momentum which is conserved along the geodesic: Conservation of Eulerian shape momentum was shown in[94]and follows fromEL-general; conservation of canonical momentum was shown in[81] The proof follow from definingwt=((Dφt)w)∘φt−1{\displaystyle w_{t}=((D\varphi _{t})w)\circ \varphi _{t}^{-1}},ddtwt=(Dvt)wt−(Dwt)vt{\displaystyle {\frac {d}{dt}}w_{t}=(Dv_{t})w_{t}-(Dw_{t})v_{t}}implyingddt(Avt∣((Dφt)w)∘φt−1)=(ddtAvt∣((Dφt)w)∘φt−1)+(Avt∣ddt((Dφt)w)∘φt−1)=(ddtAvt∣wt)+(Avt∣(Dvt)wt−(Dwt)vt)=0.{\displaystyle {\frac {d}{dt}}(Av_{t}\mid ((D\varphi _{t})w)\circ \varphi _{t}^{-1})=({\frac {d}{dt}}Av_{t}\mid ((D\varphi _{t})w)\circ \varphi _{t}^{-1})+(Av_{t}\mid {\frac {d}{dt}}((D\varphi _{t})w)\circ \varphi _{t}^{-1})=({\frac {d}{dt}}Av_{t}\mid w_{t})+(Av_{t}\mid (Dv_{t})w_{t}-(Dw_{t})v_{t})=0.} The proof on canonical momentum is shown fromp˙t=−(Dvt)|φtTpt{\displaystyle {\dot {p}}_{t}=-(Dv_{t})_{|_{\varphi _{t}}}^{T}p_{t}}: Construction of diffeomorphic correspondences between shapes calculates the initial vector field coordinatesv0∈V{\displaystyle v_{0}\in V}and associated weights on the Greens kernelsp0{\displaystyle p_{0}}. These initial coordinates are determined by matching of shapes, calledlarge-deformation diffeomorphic metric mapping (LDDMM).LDDMM has been solved for landmarks with and without correspondence[32][95][96][97][98]and for dense image matchings.[99][100]curves,[101]surfaces,[41][102]dense vector[103]and tensor[104]imagery, and varifolds removing orientation.[105]LDDMM calculates geodesic flows of theEL-generalonto target coordinates, adding to the action integral12∫01∫XAvt⋅vtdxdt{\displaystyle {\frac {1}{2}}\int _{0}^{1}\int _{X}Av_{t}\cdot v_{t}\,dx\,dt}an endpoint matching conditionE:φ1→R+{\displaystyle E:\varphi _{1}\rightarrow R^{+}}measuring the correspondence of elements in the orbit under coordinate system transformation. Existence of solutions were examined for image matching.[24]The solution of the variational problem satisfies theEL-generalfort∈[0,1){\displaystyle t\in [0,1)}with boundary condition. Conservation fromEL-generalextends the B.C. att=1{\displaystyle t=1}to the rest of the patht∈[0,1){\displaystyle t\in [0,1)}. The inexact matching problem with the endpoint matching termE(φ1){\displaystyle E(\varphi _{1})}has several alternative forms. One of the key ideas of the stationarity of the Hamiltonian along the geodesic solution is the integrated running cost reduces to initial cost att= 0, geodesics of theEL-generalare determined by their initial conditionv0{\displaystyle v_{0}}. The running cost is reduced to the initial cost determined byv0=Kp0{\displaystyle v_{0}=Kp_{0}}ofKernel-Surf.-Land.-Geodesics. The matching problem explicitly indexed to initial conditionv0{\displaystyle v_{0}}is called shooting, which can also be reparamerized via the conjugate momentump0{\displaystyle p_{0}}. 
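The shooting formulation can be sketched for landmarks as follows: geodesic shooting of the Hamiltonian landmark equations supplies the exponential map, and the inexact-matching cost adds the endpoint term to the initial kinetic energy. The Gaussian kernel, the trade-off weight and the crude random search below are all illustrative assumptions, not any particular published LDDMM implementation.

```python
import numpy as np

# Sketch of landmark matching by geodesic shooting, with cost
#   C(p0) = (1/2) p0 . K(x_temp) p0  +  lam * sum_i |phi_1(x_i) - x'_i|^2 .
sigma, lam = 0.5, 10.0
x_temp = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])        # template landmarks
x_targ = np.array([[0.1, 0.1], [1.1, -0.1], [0.6, 1.2]])       # target landmarks

def K(q):
    d2 = np.sum((q[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def shoot(q, p, n_steps=20):
    """Hamiltonian landmark geodesic: dq/dt = K(q)p, dp/dt = -dH/dq (forward Euler)."""
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        Kq, diff, pp = K(q), q[:, None, :] - q[None, :, :], p @ p.T
        q, p = (q + dt * Kq @ p,
                p + dt * np.sum((pp * Kq)[:, :, None] * diff, axis=1) / sigma ** 2)
    return q

def cost(p0):
    running = 0.5 * np.einsum('ia,ij,ja->', p0, K(x_temp), p0)   # (1/2)||v_0||_V^2
    endpoint = np.sum((shoot(x_temp, p0) - x_targ) ** 2)         # E(phi_1)
    return running + lam * endpoint

# Crude random search over initial momenta (stand-in for a gradient-based solver).
rng = np.random.default_rng(0)
best_p, best_c = np.zeros_like(x_temp), cost(np.zeros_like(x_temp))
for _ in range(500):
    cand = best_p + 0.05 * rng.standard_normal(x_temp.shape)
    c = cost(cand)
    if c < best_c:
        best_p, best_c = cand, c
print("matched cost:", round(float(best_c), 4))
```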
Dense image matching has a long history now with the earliest efforts[106][107]exploiting a small deformation framework. Large deformations began in the early 1990s,[18][19]with the first existence to solutions to the variational problem for flows of diffeomorphisms for dense image matching established in.[24]Beg solved via one of the earliest LDDMM algorithms based on solving the variational matching with endpoint defined by the dense imagery with respect to the vector fields, taking variations with respect to the vector fields.[99]Another solution for dense image matching reparameterizes the optimization problem in terms of the stateqt≐I∘φt−1,q0=I{\displaystyle q_{t}\doteq I\circ \varphi _{t}^{-1},q_{0}=I}giving the solution in terms of the infinitesimal action defined by theadvectionequation.[11][27][100] For Beg's LDDMM, denote the ImageI(x),x∈X{\displaystyle I(x),x\in X}with group actionφ⋅I≐I∘φ−1{\displaystyle \varphi \cdot I\doteq I\circ \varphi ^{-1}}. Viewing this as an optimal control problem, the state of the system is the diffeomorphic flow of coordinatesφt,t∈[0,1]{\displaystyle \varphi _{t},t\in [0,1]}, with the dynamics relating the controlvt,t∈[0,1]{\displaystyle v_{t},t\in [0,1]}to the state given byφ˙=v∘φ{\displaystyle {\dot {\varphi }}=v\circ \varphi }. The endpoint matching conditionE(φ1)≐‖I∘φ1−1−I′‖2{\displaystyle E(\varphi _{1})\doteq \|I\circ \varphi _{1}^{-1}-I^{\prime }\|^{2}}gives the variational problem Beg's iterativeLDDMM algorithmhas fixed points which satisfy the necessary optimizer conditions. The iterative algorithm is given inBeg's LDDMM algorithm for dense image matching. Denote the ImageI(x),x∈X{\displaystyle I(x),x\in X}, with stateqt≐I∘φt−1{\displaystyle q_{t}\doteq I\circ \varphi _{t}^{-1}}and the dynamics related state and control given by theadvective termq˙t=−∇qt⋅vt{\displaystyle {\dot {q}}_{t}=-\nabla q_{t}\cdot v_{t}}. The endpointE(q1)≐‖q1−I′‖2{\displaystyle E(q_{1})\doteq \|q_{1}-I^{\prime }\|^{2}}gives the variational problem Viallard's iterativeHamiltonian LDDMMhas fixed points which satisfy the necessary optimizer conditions. Dense LDDMM tensor matching[104][108]takes the images as 3x1 vectors and 3x3 tensors solving the variational problem matching between coordinate system based on the principle eigenvectors of thediffusion tensor MRIimage (DTI) denotedM(x),x∈R3{\displaystyle M(x),x\in {\mathbb {R} }^{3}}consisting of the3×3{\displaystyle 3\times 3}-tensor at every voxel. Several of the group actions defined based on the Frobeniusmatrix normbetween square matrices‖A‖F2≐trace⁡ATA{\displaystyle \|A\|_{F}^{2}\doteq \operatorname {trace} A^{T}A}. Shown in the accompanying figure is a DTI image illustrated via its color map depicting the eigenvector orientations of the DTI matrix at each voxel with color determined by the orientation of the directions. Denote the3×3{\displaystyle 3\times 3}tensor imageM(x),x∈R3{\displaystyle M(x),x\in {\mathbb {R} }^{3}}with eigen-elements{λi(x),ei(x),i=1,2,3}{\displaystyle \{\lambda _{i}(x),e_{i}(x),i=1,2,3\}},λ1≥λ2≥λ3{\displaystyle \lambda _{1}\geq \lambda _{2}\geq \lambda _{3}}. Coordinate system transformation based on DTI imaging has exploited two actionsone based on the principle eigen-vector or entire matrix. LDDMM matching based on the principal eigenvector of the diffusion tensor matrix takes the imageI(x),x∈R3{\displaystyle I(x),x\in {\mathbb {R} }^{3}}as a unit vector field defined by the first eigenvector. 
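For the dense-image formulations just described (Beg's formulation and its state-based variant), the state q_t = I ∘ φ_t^{-1} obeys the advection equation; the sketch below performs one semi-Lagrangian advection step and evaluates the sum-of-squares endpoint term. The images, velocity field and step size are synthetic placeholders assumed only for the illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# One semi-Lagrangian step approximates q_{t+dt}(x) ~ q_t(x - dt * v_t(x)),
# a discretisation of the advection equation dq/dt = -grad(q) . v.
ny, nx = 64, 64
yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
I = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 100.0)          # template image
I_target = np.exp(-((xx - 38) ** 2 + (yy - 30) ** 2) / 100.0)   # target image

v = np.stack([np.full((ny, nx), 0.5), np.full((ny, nx), -0.2)])  # (v_y, v_x), constant field
dt = 1.0

def advect(q, v, dt):
    """One step of q(x) <- q(x - dt*v(x)) via linear interpolation."""
    coords = np.stack([yy - dt * v[0], xx - dt * v[1]])
    return map_coordinates(q, coords, order=1, mode="nearest")

q1 = advect(I, v, dt)
endpoint = np.sum((q1 - I_target) ** 2)   # data-matching term ||I o phi_1^{-1} - I'||^2
print(endpoint)
```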
The group action becomes LDDMM matching based on the entire tensor matrix has group action becomesφ⋅M=(λ1e^1e^1T+λ2e^2e^2T+λ3e^3e^3T)∘φ−1,{\displaystyle \varphi \cdot M=(\lambda _{1}{\hat {e}}_{1}{\hat {e}}_{1}^{T}+\lambda _{2}{\hat {e}}_{2}{\hat {e}}_{2}^{T}+\lambda _{3}{\hat {e}}_{3}{\hat {e}}_{3}^{T})\circ \varphi ^{-1},}transformed eigenvectors The variational problem matching onto the principal eigenvector or the matrix is describedLDDMM Tensor Image Matching. High angular resolution diffusion imaging (HARDI) addresses the well-known limitation of DTI, that is, DTI can only reveal one dominant fiber orientation at each location. HARDI measures diffusion alongn{\displaystyle n}uniformly distributed directions on the sphere and can characterize more complex fiber geometries. HARDI can be used to reconstruct anorientation distribution function(ODF) that characterizes the angular profile of the diffusion probability density function of water molecules. The ODF is a function defined on a unit sphere,S2{\displaystyle {\mathbb {S} }^{2}}. Dense LDDMM ODF matching[109]takes the HARDI data as ODF at each voxel and solves the LDDMM variational problem in the space of ODF. In the field ofinformation geometry,[110]the space of ODF forms a Riemannian manifold with the Fisher-Rao metric. For the purpose of LDDMM ODF mapping, the square-root representation is chosen because it is one of the most efficient representations found to date as the various Riemannian operations, such as geodesics, exponential maps, and logarithm maps, are available in closed form. In the following, denote square-root ODF (ODF{\displaystyle {\sqrt {\text{ODF}}}}) asψ(s){\displaystyle \psi ({\bf {s}})}, whereψ(s){\displaystyle \psi ({\bf {s}})}is non-negative to ensure uniqueness and∫s∈S2ψ2(s)ds=1{\displaystyle \int _{{\bf {s}}\in {\mathbb {S} }^{2}}\psi ^{2}({\bf {s}})\,d{\bf {s}}=1}. The variational problem for matching assumes that two ODF volumes can be generated from one to another via flows of diffeomorphismsφt{\displaystyle \varphi _{t}}, which are solutions of ordinary differential equationsφ˙t=vt(φt),t∈[0,1],{\displaystyle {\dot {\varphi }}_{t}=v_{t}(\varphi _{t}),t\in [0,1],}starting from the identity mapφ0=id{\displaystyle \varphi _{0}={\rm {id}}}. Denote the action of the diffeomorphism on template asφ1⋅ψtemp(s,x){\displaystyle \varphi _{1}\cdot \psi _{\mathrm {temp} }({\bf {s}},x)},s∈S2{\displaystyle {\bf {s}}\in {\mathbb {S} }^{2}},x∈X{\displaystyle x\in X}are respectively the coordinates of the unit sphere,S2{\displaystyle {{\mathbb {S} }^{2}}}and the image domain, with the target indexed similarly,ψtarg(s,x){\displaystyle \psi _{\mathrm {targ} }({\bf {s}},x)},s∈S2{\displaystyle {\bf {s}}\in {\mathbb {S} }^{2}},x∈X{\displaystyle x\in X}. The group action of the diffeomorphism on the template is given according to where(Dφ1){\displaystyle (D\varphi _{1})}is the Jacobian of the affine-transformed ODF and is defined as This group action of diffeomorphisms on ODF reorients the ODF and reflects changes in both the magnitude ofψ{\displaystyle \psi }and the sampling directions ofs{\displaystyle {\bf {s}}}due to affine transformation. It guarantees that the volume fraction of fibers oriented toward a small patch must remain the same after the patch is transformed. 
The LDDMM variational problem is defined as where the logarithm ofψ1,ψ2∈Ψ{\displaystyle \psi _{1},\psi _{2}\in \Psi }is defined as where⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }is the normaldot productbetween points in the sphere under theL2{\displaystyle \mathrm {L} ^{2}}metric. This LDDMM-ODF mapping algorithm has been widely used to study brain white matter degeneration in aging, Alzheimer's disease, and vascular dementia.[111]The brain white matter atlas generated based on ODF is constructed via Bayesian estimation.[112]Regression analysis on ODF is developed in the ODF manifold space in.[113] The principle mode of variation represented by the orbit model is change of coordinates. For setting in which pairs of images are not related by diffeomorphisms but have photometric variation or image variation not represented by the template,active appearance modellinghas been introduced, originally by Edwards-Cootes-Taylor[114]and in 3D medical imaging in.[115]In the context of computational anatomy in which metrics on the anatomical orbit has been studied,metamorphosisfor modelling structures such as tumors and photometric changes which are not resident in the template was introduced in[28]for magnetic resonance image models, with many subsequent developments extending the metamorphosis framework.[116][117][118] For image matching the image metamorphosis framework enlarges the action so thatt↦(φt,It){\displaystyle t\mapsto (\varphi _{t},I_{t})}with actionφt⋅It≐It∘φt−1{\displaystyle \varphi _{t}\cdot I_{t}\doteq I_{t}\circ \varphi _{t}^{-1}}. In this setting metamorphosis combines both the diffeomorphic coordinate system transformation of computational anatomy as well as the earlymorphingtechnologies which only faded or modified the photometric or image intensity alone. Then the matching problem takes a form with equality boundary conditions: Transforming coordinate systems based onLandmark pointorfiducial markerfeatures dates back to Bookstein's early work on small deformation spline methods[119]for interpolating correspondences defined by fiducial points to the two-dimensional or three-dimensional background space in which the fiducials are defined. Large deformation landmark methods came on in the late 1990s.[26][32][120]The above Figure depicts a series of landmarks associated three brain structures, the amygdala, entorhinal cortex, and hippocampus. Matching geometrical objects like unlabelled point distributions, curves or surfaces is another common problem in computational anatomy. Even in the discrete setting where these are commonly given as vertices with meshes, there are no predetermined correspondences between points as opposed to the situation of landmarks described above. From the theoretical point of view, while any submanifoldX{\displaystyle X}inR3{\displaystyle {\mathbb {R} }^{3}},d=1,2,3{\displaystyle d=1,2,3}can be parameterized in local chartsm:u∈U⊂R0,1,2,3→R3{\displaystyle m:u\in U\subset {\mathbb {R} }^{0,1,2,3}\rightarrow {\mathbb {R} }^{3}}, all reparametrizations of these charts give geometrically the same manifold. Therefore, early on in computational anatomy, investigators have identified the necessity of parametrization invariant representations. One indispensable requirement is that the end-point matching term between two submanifolds is itself independent of their parametrizations. 
This can be achieved via concepts and methods borrowed fromGeometric measure theory, in particularcurrents[40]andvarifolds[45]which have been used extensively for curve and surface matching. Denoted the landmarked shapeX≐{x1,…,xn}⊂R3{\displaystyle X\doteq \{x_{1},\dots ,x_{n}\}\subset {\mathbb {R} }^{3}}with endpointE(φ1)≐∑i‖φ1(xi)−xi′‖2{\displaystyle E(\varphi _{1})\doteq \textstyle \sum _{i}\displaystyle \|\varphi _{1}(x_{i})-x_{i}^{\prime }\|^{2}}, the variational problem becomes The geodesic Eulerian momentum is a generalized functionAvt∈V∗,t∈[0,1]{\displaystyle \displaystyle Av_{t}\in V^{*}\textstyle ,t\in [0,1]}, supported on the landmarked set in the variational problem. The endpoint condition with conservation implies the initial momentum at the identity of the group: The iterative algorithmfor large deformation diffeomorphic metric mapping for landmarksis given. Glaunes and co-workers first introduced diffeomorphic matching of pointsets in the general setting of matching distributions.[121]As opposed to landmarks, this includes in particular the situation of weighted point clouds with no predefined correspondences and possibly different cardinalities. The template and target discrete point clouds are represented as two weighted sums of Diracsμm=∑i=1nρiδxi{\displaystyle \mu _{m}=\sum _{i=1}^{n}\rho _{i}\delta _{x_{i}}}andμm′=∑i=1n′ρi′δxi′{\displaystyle \mu _{m^{\prime }}=\sum _{i=1}^{n^{\prime }}\rho _{i}^{\prime }\delta _{x_{i}^{\prime }}}living in the space ofsigned measuresofR3{\displaystyle \mathbb {R} ^{3}}. The space is equipped with a Hilbert metric obtained from a real positive kernelk(x,y){\displaystyle k(x,y)}onR3{\displaystyle \mathbb {R} ^{3}}, giving the following norm: The matching problem between a template and target point cloud may be then formulated using this kernel metric for the endpoint matching term: whereμφ1⋅m=∑i=1nρiδφ1(xi){\displaystyle \mu _{\varphi _{1}\cdot m}=\sum _{i=1}^{n}\rho _{i}\delta _{\varphi _{1}(x_{i})}}is the distribution transported by the deformation. In the one dimensional case, a curve in 3D can be represented by an embeddingm:u∈[0,1]→R3{\displaystyle m:u\in [0,1]\rightarrow {\mathbb {R} }^{3}}, and the group action ofDiffbecomesφ⋅m=φ∘m{\displaystyle \varphi \cdot m=\varphi \circ m}. However, the correspondence between curves and embeddings is not one to one as the any reparametrizationm∘γ{\displaystyle m\circ \gamma }, forγ{\displaystyle \gamma }a diffeomorphism of the interval [0,1], represents geometrically the same curve. In order to preserve this invariance in the end-point matching term, several extensions of the previous 0-dimensional measure matching approach can be considered. In the situation of oriented curves, currents give an efficient setting to construct invariant matching terms. In such representation, curves are interpreted as elements of a functional space dual to the space vector fields, and compared through kernel norms on these spaces. Matching of two curvesm{\displaystyle m}andm′{\displaystyle m^{\prime }}writes eventually as the variational problem with the endpoint termE(φ1)=‖Cφ1⋅m−Cm′‖cur2/2{\displaystyle E(\varphi _{1})=\|{\mathcal {C}}_{\varphi _{1}\cdot m}-{\mathcal {C}}_{m^{\prime }}\|_{\mathrm {cur} }^{2}/2}is obtained from the norm the derivative∂m(u){\displaystyle \partial m(u)}being the tangent vector to the curve andKC{\displaystyle K_{\mathcal {C}}}a given matrix kernel ofR3{\displaystyle {\mathbb {R} }^{3}}. 
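A hedged sketch of the kernel endpoint term for weighted point clouds described above is given below; the Gaussian kernel is an assumed choice and the point sets are random placeholders, so only the structure of the expanded norm is meant to be illustrative.

```python
import numpy as np

# Kernel (RKHS) endpoint term between mu = sum_i rho_i delta_{x_i} and
# mu' = sum_j rho'_j delta_{x'_j}, expanded as
#   ||mu - mu'||^2 = sum rho_i rho_j k(x_i,x_j) - 2 sum rho_i rho'_j k(x_i,x'_j)
#                  + sum rho'_i rho'_j k(x'_i,x'_j).
def k(x, y, sigma=0.5):
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_dist2(x, rho, xp, rhop):
    return (rho @ k(x, x) @ rho
            - 2 * rho @ k(x, xp) @ rhop
            + rhop @ k(xp, xp) @ rhop)

x  = np.random.rand(10, 3); rho  = np.ones(10)   # template point cloud
xp = np.random.rand(12, 3); rhop = np.ones(12)   # target point cloud (different cardinality)
print(kernel_dist2(x, rho, xp, rhop))            # endpoint term up to a factor 1/2
```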
Such expressions are invariant to any positive reparametrizations ofm{\displaystyle m}andm′{\displaystyle m'}, and thus still depend on the orientation of the two curves. Varifold is an alternative to currents when orientation becomes an issue as for instance in situations involving multiple bundles of curves for which no "consistent" orientation may be defined. Varifolds directly extend 0-dimensional measures by adding an extra tangent space direction to the position of points, leading to represent curves as measures on the product ofR3{\displaystyle {\mathbb {R} }^{3}}and theGrassmannianof all straight lines inR3{\displaystyle {\mathbb {R} }^{3}}. The matching problem between two curves then consists in replacing the endpoint matching term byE(φ1)=‖Vφ1⋅m−Vm′‖cur2/2{\displaystyle E(\varphi _{1})=\|{\mathcal {V}}_{\varphi _{1}\cdot m}-{\mathcal {V}}_{m^{\prime }}\|_{\text{cur}}^{2}/2}with varifold norms of the form: where[∂m(u)]{\displaystyle [\partial m(u)]}is the non-oriented line directed by tangent vector∂m(u){\displaystyle \partial m(u)}andkR3,kGr{\displaystyle k_{\mathbb {R} ^{3}},k_{\mathbf {Gr} }}two scalar kernels respectively onR3{\displaystyle \mathbb {R} ^{3}}and the Grassmannian. Due to the inherent non-oriented nature of the Grassmannian representation, such expressions are invariant to positive and negative reparametrizations. Surface matching share many similarities with the case of curves. Surfaces inR3{\displaystyle {\mathbb {R} }^{3}}are parametrized in local charts by embeddingsm:u∈U⊂R2→R3{\displaystyle m:u\in U\subset {\mathbb {R} }^{2}\rightarrow {\mathbb {R} }^{3}}, with all reparametrizationsm∘γ{\displaystyle m\circ \gamma }withγ{\displaystyle \gamma }a diffeomorphism of U being equivalent geometrically. Currents and varifolds can be also used to formalize surface matching. Oriented surfaces can be represented as 2-currents which are dual to differential 2-forms. InR3{\displaystyle {\mathbb {R} }^{3}}, one can further identify 2-forms with vector fields through the standard wedge product of 3D vectors. In that setting, surface matching writes again: with the endpoint termE(φ1)=‖Cφ1⋅m−Cm′‖cur2/2{\displaystyle E(\varphi _{1})=\|{\mathcal {C}}_{\varphi _{1}\cdot m}-{\mathcal {C}}_{m^{\prime }}\|_{\mathrm {cur} }^{2}/2}given through the norm withn→=∂u1m∧∂u2m{\displaystyle {\vec {n}}=\partial _{u_{1}}m\wedge \partial _{u_{2}}m}the normal vector to the surface parametrized bym{\displaystyle m}. This surface mapping algorithm has been validated for brain cortical surfaces against CARET and FreeSurfer.[122]LDDMM mapping for multiscale surfaces is discussed in.[123] For non-orientable or non-oriented surfaces, the varifold framework is often more adequate. Identifying the parametric surfacem{\displaystyle m}with a varifoldVm{\displaystyle {\mathcal {V}}_{m}}in the space of measures on the product ofR3{\displaystyle {\mathbb {R} }^{3}}and the Grassmannian, one simply replaces the previous current metric‖Cm‖cur2{\displaystyle \|{\mathcal {C}}_{m}\|_{\mathrm {cur} }^{2}}by: where[n→(u)]{\displaystyle [{\vec {n}}(u)]}is the (non-oriented) line directed by the normal vector to the surface. There are many settings in which there are a series of measurements, a time-series to which the underlying coordinate systems will be matched and flowed onto. 
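The discrete current construction for curves can be sketched as follows: each polygonal segment contributes its centre and tangent vector, and the endpoint term expands into pairwise kernel sums. The scalar Gaussian kernel standing in for K_C and the example curves are assumptions of the illustration; practical implementations may use other matrix kernels.

```python
import numpy as np

# Discrete current distance between two polygonal curves:
#   <C_m, C_m'> = sum_{f,g} tau_f . k(c_f, c'_g) tau'_g
# with segment centres c and tangent vectors tau.
def segments(curve):
    tau = curve[1:] - curve[:-1]           # tangent vectors of each segment
    c = 0.5 * (curve[1:] + curve[:-1])     # segment centres
    return c, tau

def current_inner(c1, t1, c2, t2, sigma=0.3):
    d2 = np.sum((c1[:, None, :] - c2[None, :, :]) ** 2, axis=-1)
    return np.sum(np.exp(-d2 / (2 * sigma ** 2)) * (t1 @ t2.T))

def current_dist2(curve_a, curve_b):
    ca, ta = segments(curve_a); cb, tb = segments(curve_b)
    return (current_inner(ca, ta, ca, ta)
            - 2 * current_inner(ca, ta, cb, tb)
            + current_inner(cb, tb, cb, tb))

t = np.linspace(0, 1, 30)[:, None]
curve_a = np.hstack([t, np.sin(2 * np.pi * t), np.zeros_like(t)])
curve_b = np.hstack([t, 0.8 * np.sin(2 * np.pi * t), 0.1 * t])
print(current_dist2(curve_a, curve_b))
```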
This occurs for example in the dynamic growth and atrophy models and motion tracking such as have been explored in[46][124][125][126]An observed time sequence is given and the goal is to infer the time flow of geometric change of coordinates carrying the exemplars or templars through the period of observations. The generic time-series matching problem considers the series of times is0<t1<⋯<tK=1{\displaystyle 0<t_{1}<\cdots <t_{K}=1}. The flow optimizes at the series of costsE(tk),k=1,…,K{\displaystyle E(t_{k}),k=1,\ldots ,K}giving optimization problems of the form There have been at least three solutions offered thus far, piecewise geodesic,[46]principal geodesic[126]and splines.[127] Therandom orbit modelof computational anatomy first appeared in[128][129][130]modelling the change in coordinates associated to the randomness of the group acting on the templates, which induces the randomness on the source of images in the anatomical orbit of shapes and forms and resulting observations through the medical imaging devices. Such arandom orbit modelin which randomness on the group induces randomness on the images was examined for the Special Euclidean Group for object recognition in.[131] Depicted in the figure is a depiction of the random orbits around each exemplar,m0∈M{\displaystyle m_{0}\in {\mathcal {M}}}, generated by randomizing the flow by generating the initial tangent space vector field at the identityv0∈V{\displaystyle v_{0}\in V}, and then generating random objectn≐Expid⁡(v0)⋅m0∈M{\displaystyle n\doteq \operatorname {Exp} _{\rm {id}}(v_{0})\cdot m_{0}\in {\mathcal {M}}}. The random orbit model induces the prior on shapes and imagesI∈I{\displaystyle I\in {\mathcal {I}}}conditioned on a particular atlasIa∈I{\displaystyle I_{a}\in {\mathcal {I}}}. For this the generative model generates the mean fieldI{\displaystyle I}as a random change in coordinates of the template according toI≐φ⋅Ia{\displaystyle I\doteq \varphi \cdot I_{a}}, where the diffeomorphic change in coordinates is generated randomly via the geodesic flows. The prior on random transformationsπDiff(dφ){\displaystyle \pi _{\text{Diff}}(d\varphi )}onDiffV{\displaystyle \operatorname {Diff} _{V}}is induced by the flowExpid⁡(v){\displaystyle \operatorname {Exp} _{\rm {id}}(v)}, withv∈V{\displaystyle v\in V}constructed as a Gaussian random field priorπV(dv){\displaystyle \pi _{V}(dv)}. The density on the random observables at the output of the sensorID∈ID{\displaystyle I^{D}\in {\mathcal {I}}^{D}}are given by Shown in the Figure on the right the cartoon orbit are a random spray of the subcortical manifolds generated by randomizing the vector fieldsv0{\displaystyle v_{0}}supported over the submanifolds. The central statistical model of computational anatomy in the context ofmedical imaginghas been the source-channel model ofShannon theory;[128][129][130]the source is the deformable template of imagesI∈I{\displaystyle I\in {\mathcal {I}}}, the channel outputs are the imaging sensors with observablesID∈ID{\displaystyle I^{D}\in {\mathcal {I}}^{\mathcal {D}}}(see Figure). SeeThe Bayesian model of computational anatomyfor discussions (i) MAP estimation with multiple atlases, (ii) MAP segmentation with multiple atlases, MAP estimation of templates from populations. 
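A toy version of the random orbit construction is sketched below; to keep it short, the Riemannian exponential is replaced by its first-order approximation and the momenta are drawn i.i.d. Gaussian, both simplifying assumptions rather than the generative model described above.

```python
import numpy as np

# Random orbit idea: randomise the initial vector field at the identity and
# push the template through it to generate a random shape n = Exp_id(v0) . m0.
# Here Exp_id is replaced by the first-order surrogate m0 + v0(m0).
rng = np.random.default_rng(0)
sigma = 0.4
m0 = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # template landmarks

def gauss_kernel(x, y):
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def random_shape(scale=0.2):
    p0 = scale * rng.standard_normal(m0.shape)     # random momenta at the landmarks
    v0_at_m0 = gauss_kernel(m0, m0) @ p0           # induced smooth velocity at the template
    return m0 + v0_at_m0                           # first-order surrogate for Exp_id(v0) . m0

samples = [random_shape() for _ in range(3)]
print(np.round(samples[0], 3))
```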
Shapein computational anatomy is a local theory, indexing shapes and structures to templates to which they arebijectivelymapped.Statistical shapein computational anatomy is the empirical study of diffeomorphic correspondences between populations and common template coordinate systems. This is a strong departure fromProcrustes Analysesand shape theories pioneered byDavid G. Kendall[132]in that the central group of Kendall's theories are the finite-dimensional Lie groups, whereas the theories of shape in computational anatomy[133][134][135]have focused on the diffeomorphism group, which to first order via the Jacobian can be thought of as a field–thus infinite dimensional–of low-dimensional Lie groups of scale and rotations. The random orbit model provides the natural setting to understand empirical shape and shape statistics within computational anatomy since the non-linearity of the induced probability law on anatomical shapes and formsm∈M{\displaystyle m\in {\mathcal {M}}}is induced via the reduction to the vector fieldsv0∈V{\displaystyle v_{0}\in V}at the tangent space at the identity of the diffeomorphism group. The successive flow of the Euler equation induces the random space of shapes and formsExpid⁡(v0)⋅m∈M{\displaystyle \operatorname {Exp} _{\rm {id}}(v_{0})\cdot m\in {\mathcal {M}}}. Performing empirical statistics on this tangent space at the identity is the natural way for inducing probability laws on the statistics of shape. Since both the vector fields and the Eulerian momentumAv0{\displaystyle Av_{0}}are in a Hilbert space the natural model is one of a Gaussian random field, so that given test functionw∈V{\displaystyle w\in V}, then the inner-products with the test functions are Gaussian distributed with mean and covariance. This is depicted in the accompanying figure where sub-cortical brain structures are depicted in a two-dimensional coordinate system based on inner products of their initial vector fields that generate them from the template is shown in a 2-dimensional span of the Hilbert space. The study of shape and statistics in populations are local theories, indexing shapes and structures to templates to which they are bijectively mapped. Statistical shape is then the study of diffeomorphic correspondences relative to the template. A core operation is the generation of templates from populations, estimating a shape that is matched to the population. There are several important methods for generating templates including methods based onFrechetaveraging,[137]and statistical approaches based on theexpectation-maximization algorithmand the Bayes Random orbit models of computational anatomy.[136][138]Shown in the accompanying figure is a subcortical template reconstruction from the population of MRI subjects.[139] Software suitescontaining a variety of diffeomorphic mapping algorithms include the following:
https://en.wikipedia.org/wiki/Computational_anatomy
The manifold hypothesis posits that many high-dimensional data sets that occur in the real world actually lie along low-dimensional latent manifolds inside that high-dimensional space.[1][2][3][4] As a consequence of the manifold hypothesis, many data sets that appear to initially require many variables to describe can actually be described by a comparatively small number of variables, likened to the local coordinate system of the underlying manifold. It is suggested that this principle underpins the effectiveness of machine learning algorithms in describing high-dimensional data sets by considering a few common features. The manifold hypothesis is related to the effectiveness of nonlinear dimensionality reduction techniques in machine learning. Many techniques of dimensional reduction make the assumption that data lies along a low-dimensional submanifold, such as manifold sculpting, manifold alignment, and manifold regularization. A major implication of this hypothesis is that the ability to interpolate between samples is the key to generalization in deep learning.[5] An empirically motivated approach to the manifold hypothesis focuses on its correspondence with an effective theory for manifold learning under the assumption that robust machine learning requires encoding the dataset of interest using methods for data compression. This perspective gradually emerged using the tools of information geometry thanks to the coordinated effort of scientists working on the efficient coding hypothesis, predictive coding and variational Bayesian methods. The argument for reasoning about the information geometry on the latent space of distributions rests upon the existence and uniqueness of the Fisher information metric.[6] In this general setting, we are trying to find a stochastic embedding of a statistical manifold. From the perspective of dynamical systems, in the big data regime this manifold generally exhibits certain properties such as homeostasis: in a sense made precise by theoretical neuroscientists working on the free energy principle, the statistical manifold in question possesses a Markov blanket.[7]
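As a toy numerical illustration (synthetic data and a generic manifold learner, both assumptions of the example), the sketch below embeds a one-parameter curve in R^50 and recovers a one-dimensional coordinate that correlates strongly with the latent parameter.

```python
import numpy as np
from sklearn.manifold import Isomap

# Points that live in R^50 but are generated from a single latent parameter t
# are recovered, up to monotone reparametrisation, by a one-component manifold learner.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3, 400))                      # 1-D latent coordinate
basis = rng.standard_normal((3, 50))
X = np.column_stack([np.cos(t), np.sin(t), t]) @ basis   # nonlinear curve in R^50
X += 0.01 * rng.standard_normal(X.shape)                 # small observation noise

embedding = Isomap(n_neighbors=10, n_components=1).fit_transform(X).ravel()
corr = abs(np.corrcoef(embedding, t)[0, 1])
print(f"ambient dim = {X.shape[1]}, |corr(latent, recovered)| = {corr:.3f}")
```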
https://en.wikipedia.org/wiki/Manifold_hypothesis
Indynamical systems, aspectral submanifold(SSM) is the uniquesmoothestinvariant manifoldserving as the nonlinear extension of a spectral subspace of a linear dynamical system under the addition of nonlinearities.[2]SSM theory provides conditions for when invariant properties of eigenspaces of a linear dynamical system can be extended to a nonlinear system, and therefore motivates the use of SSMs innonlinear dimensionality reduction. SSMs are chiefly employed for the exact model reduction of dynamical systems. For the automated computation of SSMs and the analysis of the reduced dynamics, open source online software packages such asSSMToolandSSMLearnhave been published. These tools allow to study system dynamics either from the underlying equations of motion or from trajectory data, supporting both analytical and data-driven approaches.[3][4]Detailed documentation for SSMTool is provided online.[5] Consider a nonlinearordinary differential equationof the form with constant matrixA∈Rn×n{\displaystyle \ A\in \mathbb {R} ^{n\times n}}and the nonlinearities contained in the smooth functionf0=O(|x|2){\displaystyle f_{0}={\mathcal {O}}(|x|^{2})}. Assume thatReλj<0{\displaystyle {\text{Re}}\lambda _{j}<0}for all eigenvaluesλj,j=1,…,n{\displaystyle \lambda _{j},\ j=1,\ldots ,n}ofA{\displaystyle A}, that is, the origin is an asymptotically stable fixed point. Now select a spanE=span{v1E,…vmE}{\displaystyle E={\text{span}}\,\{v_{1}^{E},\ldots v_{m}^{E}\}}ofm{\displaystyle m}eigenvectorsviE{\displaystyle v_{i}^{E}}ofA{\displaystyle A}. Then, the eigenspaceE{\displaystyle E}is an invariant subspace of the linearized system Under addition of the nonlinearityf0{\displaystyle f_{0}}to the linear system,E{\displaystyle E}generally perturbs into infinitely many invariant manifolds. Among these invariant manifolds, the unique smoothest one is referred to as the spectral submanifold. An equivalent result for unstable SSMs holds forReλj>0{\displaystyle {\text{Re}}\lambda _{j}>0}. The spectral submanifold tangent toE{\displaystyle E}at the origin is guaranteed to exist provided that certain non-resonance conditions are satisfied by the eigenvaluesλiE{\displaystyle \lambda _{i}^{E}}in the spectrum ofE{\displaystyle E}.[6]In particular, there can be no linear combination ofλiE{\displaystyle \lambda _{i}^{E}}equal to one of the eigenvalues ofA{\displaystyle A}outside of the spectral subspace. If there is such an outer resonance, one can include the resonant mode intoE{\displaystyle E}and extend the analysis to a higher-dimensional SSM pertaining to the extended spectral subspace. The theory on spectral submanifolds extends to nonlinearnon-autonomous systemsof the form withf1:Rn×Tk→Rn{\displaystyle f_{1}:\mathbb {R} ^{n}\times \mathbb {T} ^{k}\to \mathbb {R} ^{n}}aquasiperiodicforcing term.[7] Spectral submanifolds are useful for rigorous nonlinear dimensionality reduction in dynamical systems. The reduction of a high-dimensional phase space to a lower-dimensional manifold can lead to major simplifications by allowing for an accurate description of the system's main asymptotic behaviour.[8]For a known dynamical system, SSMs can be computed analytically by solving the invariance equations, and reduced models on SSMs may be employed for prediction of the response to forcing.[9] Furthermore these manifolds may also be extracted directly from trajectory data of a dynamical system with the use of machine learning algorithms.[10]
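A minimal sketch of the outer non-resonance check behind SSM existence is given below; the spectrum, the choice of master subspace, the maximum order and the tolerance are all illustrative assumptions.

```python
import numpy as np
from itertools import product

# For a chosen spectral subspace with eigenvalues `master`, verify that no
# low-order nonnegative integer combination sum_i m_i * lambda_i coincides with
# an eigenvalue of A outside the subspace.
lam = np.array([-0.1 + 2.0j, -0.1 - 2.0j, -0.5 + 0.3j, -0.5 - 0.3j])  # toy spectrum of A
master = lam[:2]          # eigenvalues spanning the chosen subspace E
outer = lam[2:]           # eigenvalues outside E
max_order, tol = 5, 1e-8

resonances = []
for m in product(range(max_order + 1), repeat=len(master)):
    if sum(m) < 2:
        continue                         # only nonlinear orders matter
    combo = sum(mi * li for mi, li in zip(m, master))
    for lo in outer:
        if abs(combo - lo) < tol:
            resonances.append((m, lo))

print("outer resonances up to order", max_order, ":", resonances or "none")
```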
https://en.wikipedia.org/wiki/Spectral_submanifold
In the study ofdynamical systems, adelay embedding theoremgives the conditions under which achaoticdynamical system can be reconstructed from a sequence of observations of the state of that system. The reconstruction preserves the properties of the dynamical system that do not change under smoothcoordinate changes(i.e.,diffeomorphisms), but it does not preserve thegeometric shapeof structures inphase space. Takens' theoremis the 1981 delayembeddingtheorem ofFloris Takens. It provides the conditions under which a smoothattractorcan be reconstructed from the observations made with agenericfunction. Later results replaced the smooth attractor with a set of arbitrarybox counting dimensionand the class of generic functions with other classes of functions. It is the most commonly used method forattractor reconstruction.[1] Delay embedding theorems are simpler to state fordiscrete-time dynamical systems. The state space of the dynamical system is aν-dimensionalmanifoldM. The dynamics is given by asmooth map Assume that the dynamicsfhas astrange attractorA⊂M{\displaystyle A\subset M}withbox counting dimensiondA. Using ideas fromWhitney's embedding theorem,Acan be embedded ink-dimensionalEuclidean spacewith That is, there is adiffeomorphismφthat mapsAintoRk{\displaystyle \mathbb {R} ^{k}}such that thederivativeofφhas fullrank. A delay embedding theorem uses anobservation functionto construct the embedding function. An observation functionα:M→R{\displaystyle \alpha :M\to \mathbb {R} }must be twice-differentiable and associate a real number to any point of the attractorA. It must also betypical, so its derivative is of full rank and has no special symmetries in its components. The delay embedding theorem states that the function is anembeddingof the strange attractorAinRk.{\displaystyle \mathbb {R} ^{k}.} Suppose thed{\displaystyle d}-dimensional state vectorxt{\displaystyle x_{t}}evolves according to an unknown but continuous and (crucially) deterministic dynamic. Suppose, too, that the one-dimensional observabley{\displaystyle y}is a smooth function ofx{\displaystyle x}, and “coupled” to all the components ofx{\displaystyle x}. Now at any time we can look not just at the present measurementy(t){\displaystyle y(t)}, but also at observations made at times removed from us by multiples of some lagτ:yt+τ,yt+2τ{\displaystyle \tau :y_{t+\tau },y_{t+2\tau }}, etc. If we usek{\displaystyle k}lags, we have ak{\displaystyle k}-dimensional vector. One might expect that, as the number of lags is increased, the motion in the lagged space will become more and more predictable, and perhaps in the limitk→∞{\displaystyle k\to \infty }would become deterministic. In fact, the dynamics of the lagged vectors become deterministic at a finite dimension; not only that, but the deterministic dynamics are completely equivalent to those of the original state space (precisely, they are related by a smooth, invertible change of coordinates, or diffeomorphism). In fact, the theorem says that determinism appears once you reach dimension2d+1{\displaystyle 2d+1}, and the minimalembedding dimensionis often less.[2][3] Takens' theorem is usually used to reconstruct strange attractors out of experimental data, for which there is contamination by noise. As such, the choice of delay time becomes important. Whereas for data without noise, any choice of delay is valid, for noisy data, the attractor would be destroyed by noise for delays chosen badly. 
The optimal delay is typically around one-tenth to one-half the mean orbital period around the attractor.[4][5]
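As a concrete illustration, the delay vectors can be built directly from a scalar time series. The sketch below is a minimal example; the logistic-map observable, the number of lags, and the unit lag are arbitrary choices, not prescribed by the theorem:

```python
import numpy as np

def delay_embed(y, k, tau):
    """Stack k copies of the scalar series y, each shifted by tau samples,
    into len(y) - (k-1)*tau reconstruction vectors."""
    n = len(y) - (k - 1) * tau
    return np.column_stack([y[i * tau: i * tau + n] for i in range(k)])

# Scalar observable of a chaotic map (logistic map; an arbitrary example system).
y = np.empty(5000)
y[0] = 0.4
for t in range(1, len(y)):
    y[t] = 4.0 * y[t - 1] * (1.0 - y[t - 1])

# The underlying state is 1-dimensional, so k = 2*1 + 1 = 3 lags suffice by the theorem.
X = delay_embed(y, k=3, tau=1)
print(X.shape)   # (4998, 3): delay vectors (y_t, y_{t+tau}, y_{t+2*tau})
```

In practice, the lag tau is often chosen near the first minimum of the autocorrelation or mutual information of the observable, consistent with the delay guidance above.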
https://en.wikipedia.org/wiki/Takens%27s_theorem
Inmathematics, particularly indifferential topology, there are two Whitney embedding theorems, named afterHassler Whitney: The weak Whitney embedding is proved through a projection argument. When the manifold iscompact, one can first use a covering by finitely many local charts and then reduce the dimension with suitable projections.[1]: Ch. 1 §3[2]: Ch. 6[3]: Ch. 5 §3 The general outline of the proof is to start with an immersion⁠f:M→R2m{\displaystyle f:M\to \mathbb {R} ^{2m}}⁠withtransverseself-intersections. These are known to exist from Whitney's earlier work onthe weak immersion theorem. Transversality of the double points follows from a general-position argument. The idea is to then somehow remove all the self-intersections. IfMhas boundary, one can remove the self-intersections simply by isotopingMinto itself (the isotopy being in the domain off), to a submanifold ofMthat does not contain the double-points. Thus, we are quickly led to the case whereMhas no boundary. Sometimes it is impossible to remove the double-points via an isotopy—consider for example the figure-8 immersion of the circle in the plane. In this case, one needs to introduce a local double point. Once one has two opposite double points, one constructs a closed loop connecting the two, giving a closed path in⁠R2m.{\displaystyle \mathbb {R} ^{2m}.}⁠Since⁠R2m{\displaystyle \mathbb {R} ^{2m}}⁠issimply connected, one can assume this path bounds a disc, and provided2m> 4one can further assume (by theweak Whitney embedding theorem) that the disc is embedded in⁠R2m{\displaystyle \mathbb {R} ^{2m}}⁠such that it intersects the image ofMonly in its boundary. Whitney then uses the disc to create a1-parameter familyof immersions, in effect pushingMacross the disc, removing the two double points in the process. In the case of the figure-8 immersion with its introduced double-point, the push across move is quite simple (pictured). This process of eliminatingopposite signdouble-points by pushing the manifold along a disc is called theWhitney Trick. To introduce a local double point, Whitney created immersions⁠αm:Rm→R2m{\displaystyle \alpha _{m}:\mathbb {R} ^{m}\to \mathbb {R} ^{2m}}⁠which are approximately linear outside of the unit ball, but containing a single double point. Form= 1such an immersion is given by Notice that ifαis considered as a map to⁠R3{\displaystyle \mathbb {R} ^{3}}⁠like so: then the double point can be resolved to an embedding: Noticeβ(t, 0) = α(t)and fora≠ 0then as a function oft,β(t,a)is an embedding. For higher dimensionsm, there areαmthat can be similarly resolved in⁠R2m+1.{\displaystyle \mathbb {R} ^{2m+1}.}⁠For an embedding into⁠R5,{\displaystyle \mathbb {R} ^{5},}⁠for example, define This process ultimately leads one to the definition: where The key properties ofαmis that it is an embedding except for the double-pointαm(1, 0, ... , 0) = αm(−1, 0, ... , 0). Moreover, for|(t1, ... ,tm)|large, it is approximately the linear embedding(0,t1, 0,t2, ... , 0,tm). The Whitney trick was used byStephen Smaleto prove theh-cobordism theorem; from which follows thePoincaré conjecturein dimensionsm≥ 5, and the classification ofsmooth structureson discs (also in dimensions 5 and up). This provides the foundation forsurgery theory, which classifies manifolds in dimension 5 and above. Given two oriented submanifolds of complementary dimensions in a simply connected manifold of dimension ≥ 5, one can apply an isotopy to one of the submanifolds so that all the points of intersection have the same sign. 
The occasion of the proof byHassler Whitneyof the embedding theorem for smooth manifolds is said (rather surprisingly) to have been the first complete exposition of themanifold conceptprecisely because it brought together and unified the differing concepts of manifolds at the time: no longer was there any confusion as to whether abstract manifolds, intrinsically defined via charts, were any more or less general than manifolds extrinsically defined as submanifolds of Euclidean space. See also thehistory of manifolds and varietiesfor context. Although everyn-manifold embeds in⁠R2n,{\displaystyle \mathbb {R} ^{2n},}⁠one can frequently do better. Lete(n)denote the smallest integer so that all compact connectedn-manifolds embed in⁠Re(n).{\displaystyle \mathbb {R} ^{e(n)}.}⁠Whitney's strong embedding theorem states thate(n) ≤ 2n. Forn= 1, 2we havee(n) = 2n, as thecircleand theKlein bottleshow. More generally, forn= 2kwe havee(n) = 2n, as the2k-dimensionalreal projective spaceshow. Whitney's result can be improved toe(n) ≤ 2n− 1unlessnis a power of 2. This is a result ofAndré HaefligerandMorris Hirsch(forn> 4) andC. T. C. Wall(forn= 3); these authors used important preliminary results and particular cases proved by Hirsch,William S. Massey,Sergey NovikovandVladimir Rokhlin.[4]At present the functioneis not known in closed-form for all integers (compare to theWhitney immersion theorem, where the analogous number is known). One can strengthen the results by putting additional restrictions on the manifold. For example, then-spherealways embeds in⁠Rn+1{\displaystyle \mathbb {R} ^{n+1}}⁠– which is the best possible (closedn-manifolds cannot embed in⁠Rn{\displaystyle \mathbb {R} ^{n}}⁠). Any compactorientablesurface and any compact surfacewith non-empty boundaryembeds in⁠R3,{\displaystyle \mathbb {R} ^{3},}⁠though anyclosed non-orientablesurface needs⁠R4.{\displaystyle \mathbb {R} ^{4}.}⁠ IfNis a compact orientablen-dimensional manifold, thenNembeds in⁠R2n−1{\displaystyle \mathbb {R} ^{2n-1}}⁠(fornnot a power of 2 the orientability condition is superfluous). Forna power of 2 this is a result ofAndré HaefligerandMorris Hirsch(forn> 4), and Fuquan Fang (forn= 4); these authors used important preliminary results proved by Jacques Boéchat and Haefliger,Simon Donaldson, Hirsch andWilliam S. Massey.[4]Haefliger proved that ifNis a compactn-dimensionalk-connectedmanifold, thenNembeds in⁠R2n−k{\displaystyle \mathbb {R} ^{2n-k}}⁠provided2k+ 3 ≤n.[4] A relatively 'easy' result is to prove that any two embeddings of a 1-manifold into⁠R4{\displaystyle \mathbb {R} ^{4}}⁠are isotopic (seeKnot theory#Higher dimensions). This is proved using general position, which also allows to show that any two embeddings of ann-manifold into⁠R2n+2{\displaystyle \mathbb {R} ^{2n+2}}⁠are isotopic. This result is an isotopy version of the weak Whitney embedding theorem. Wu proved that forn≥ 2, any two embeddings of ann-manifold into⁠R2n+1{\displaystyle \mathbb {R} ^{2n+1}}⁠are isotopic. This result is an isotopy version of the strong Whitney embedding theorem. As an isotopy version of his embedding result,Haefligerproved that ifNis a compactn-dimensionalk-connected manifold, then any two embeddings ofNinto⁠R2n−k+1{\displaystyle \mathbb {R} ^{2n-k+1}}⁠are isotopic provided2k+ 2 ≤n. The dimension restriction2k+ 2 ≤nis sharp: Haefliger went on to give examples of non-trivially embedded 3-spheres in⁠R6{\displaystyle \mathbb {R} ^{6}}⁠(and, more generally,(2d− 1)-spheres in⁠R3d{\displaystyle \mathbb {R} ^{3d}}⁠). 
https://en.wikipedia.org/wiki/Whitney_embedding_theorem
Linear discriminant analysis(LDA),normal discriminant analysis(NDA),canonical variates analysis(CVA), ordiscriminant function analysisis a generalization ofFisher's linear discriminant, a method used instatisticsand other fields, to find alinear combinationof features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as alinear classifier, or, more commonly, fordimensionality reductionbefore laterclassification. LDA is closely related toanalysis of variance(ANOVA) andregression analysis, which also attempt to express onedependent variableas a linear combination of other features or measurements.[2][3]However, ANOVA usescategoricalindependent variablesand acontinuousdependent variable, whereas discriminant analysis has continuousindependent variablesand a categorical dependent variable (i.e.the class label).[4]Logistic regressionandprobit regressionare more similar to LDA than ANOVA is, as they also explain a categorical variable by the values of continuous independent variables. These other methods are preferable in applications where it is not reasonable to assume that the independent variables are normally distributed, which is a fundamental assumption of the LDA method. LDA is also closely related toprincipal component analysis(PCA) andfactor analysisin that they both look for linear combinations of variables which best explain the data.[5]LDA explicitly attempts to model the difference between the classes of data. PCA, in contrast, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made. LDA works when the measurements made on independent variables for each observation are continuous quantities. When dealing with categorical independent variables, the equivalent technique is discriminant correspondence analysis.[6][7] Discriminant analysis is used when groups are known a priori (unlike incluster analysis). Each case must have a score on one or more quantitative predictor measures, and a score on a group measure.[8]In simple terms, discriminant function analysis is classification - the act of distributing things into groups, classes or categories of the same type. The originaldichotomousdiscriminant analysis was developed by SirRonald Fisherin 1936.[9]It is different from anANOVAorMANOVA, which is used to predict one (ANOVA) or multiple (MANOVA) continuous dependent variables by one or more independent categorical variables. Discriminant function analysis is useful in determining whether a set of variables is effective in predicting category membership.[10] Consider a set of observationsx→{\displaystyle {\vec {x}}}(also called features, attributes, variables or measurements) for each sample of an object or event with known classy{\displaystyle y}. This set of samples is called thetraining setin asupervised learningcontext. 
The classification problem is then to find a good predictor for the classy{\displaystyle y}of any sample of the same distribution (not necessarily from the training set) given only an observationx→{\displaystyle {\vec {x}}}.[11]: 338 LDA approaches the problem by assuming that the conditionalprobability density functionsp(x→|y=0){\displaystyle p({\vec {x}}|y=0)}andp(x→|y=1){\displaystyle p({\vec {x}}|y=1)}are boththe normal distributionwith mean andcovarianceparameters(μ→0,Σ0){\displaystyle \left({\vec {\mu }}_{0},\Sigma _{0}\right)}and(μ→1,Σ1){\displaystyle \left({\vec {\mu }}_{1},\Sigma _{1}\right)}, respectively. Under this assumption, theBayes-optimal solutionis to predict points as being from the second class if the log of the likelihood ratios is bigger than some threshold T, so that: Without any further assumptions, the resulting classifier is referred to asquadratic discriminant analysis(QDA). LDA instead makes the additional simplifyinghomoscedasticityassumption (i.e.that the class covariances are identical, soΣ0=Σ1=Σ{\displaystyle \Sigma _{0}=\Sigma _{1}=\Sigma }) and that the covariances have full rank. In this case, several terms cancel: and the above decision criterion becomes a threshold on thedot product for some threshold constantc, where This means that the criterion of an inputx→{\displaystyle {\vec {x}}}being in a classy{\displaystyle y}is purely a function of this linear combination of the known observations. It is often useful to see this conclusion in geometrical terms: the criterion of an inputx→{\displaystyle {\vec {x}}}being in a classy{\displaystyle y}is purely a function of projection of multidimensional-space pointx→{\displaystyle {\vec {x}}}onto vectorw→{\displaystyle {\vec {w}}}(thus, we only consider its direction). In other words, the observation belongs toy{\displaystyle y}if correspondingx→{\displaystyle {\vec {x}}}is located on a certain side of a hyperplane perpendicular tow→{\displaystyle {\vec {w}}}. The location of the plane is defined by the thresholdc{\displaystyle c}. The assumptions of discriminant analysis are the same as those for MANOVA. The analysis is quite sensitive to outliers and the size of the smallest group must be larger than the number of predictor variables.[8] It has been suggested that discriminant analysis is relatively robust to slight violations of these assumptions,[12]and it has also been shown that discriminant analysis may still be reliable when using dichotomous variables (where multivariate normality is often violated).[13] Discriminant analysis works by creating one or more linear combinations of predictors, creating a newlatent variablefor each function. These functions are called discriminant functions. The number of functions possible is eitherNg−1{\displaystyle N_{g}-1}whereNg{\displaystyle N_{g}}= number of groups, orp{\displaystyle p}(the number of predictors), whichever is smaller. The first function created maximizes the differences between groups on that function. The second function maximizes differences on that function, but also must not be correlated with the previous function. This continues with subsequent functions with the requirement that the new function not be correlated with any of the previous functions. Given groupj{\displaystyle j}, withRj{\displaystyle \mathbb {R} _{j}}sets of sample space, there is a discriminant rule such that ifx∈Rj{\displaystyle x\in \mathbb {R} _{j}}, thenx∈j{\displaystyle x\in j}. 
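The resulting rule (project onto w and compare with a threshold c) is short to implement from sample estimates; the standard closed form under the shared-covariance assumption is w = Σ⁻¹(μ₁ − μ₀). A minimal sketch on synthetic data, assuming equal class priors and using the pooled covariance as the common Σ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data with a shared covariance (the LDA assumption).
cov = np.array([[1.0, 0.3], [0.3, 0.7]])
X0 = rng.multivariate_normal([0.0, 0.0], cov, size=200)
X1 = rng.multivariate_normal([2.0, 1.0], cov, size=200)

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sigma = 0.5 * (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False))   # pooled covariance

w = np.linalg.solve(Sigma, mu1 - mu0)      # w = Sigma^{-1} (mu1 - mu0)
c = w @ (mu0 + mu1) / 2.0                  # threshold for equal priors

def predict(x):
    """Assign class 1 when the projection w.x exceeds the threshold c."""
    return int(w @ x > c)

X = np.vstack([X0, X1])
y = np.array([0] * len(X0) + [1] * len(X1))
accuracy = np.mean([predict(x) for x in X] == y)
print("training accuracy:", accuracy)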
Discriminant analysis then finds "good" regions of R_j to minimize classification error, therefore leading to a high percentage of correct classifications in the classification table.[14] Each function is given a discriminant score to determine how well it predicts group placement. An eigenvalue in discriminant analysis is the characteristic root of each function. It is an indication of how well that function differentiates the groups, where the larger the eigenvalue, the better the function differentiates.[8] This, however, should be interpreted with caution, as eigenvalues have no upper limit.[10][8] The eigenvalue can be viewed as a ratio of SS_between and SS_within, as in ANOVA when the dependent variable is the discriminant function and the groups are the levels of the independent variable.[10] This means that the largest eigenvalue is associated with the first function, the second largest with the second, and so on. Some suggest the use of eigenvalues as effect size measures; however, this is generally not supported.[10] Instead, the canonical correlation is the preferred measure of effect size. It is similar to the eigenvalue, but is the square root of the ratio of SS_between and SS_total. It is the correlation between groups and the function.[10] Another popular measure of effect size is the percent of variance for each function, calculated as (λ_x / Σλ_i) × 100, where λ_x is the eigenvalue for the function and Σλ_i is the sum of all eigenvalues. This tells us how strong the prediction is for that particular function compared to the others.[10] Percent correctly classified can also be analyzed as an effect size. The kappa value can describe this while correcting for chance agreement.[10] Kappa normalizes across all categories rather than being biased by significantly well or poorly performing classes.[17] Canonical discriminant analysis (CDA) finds axes (k − 1 canonical coordinates, k being the number of classes) that best separate the categories. These linear functions are uncorrelated and define, in effect, an optimal (k − 1)-dimensional space through the n-dimensional cloud of data that best separates (the projections in that space of) the k groups. See "Multiclass LDA" below for details. Because LDA uses canonical variates, it was initially often referred to as the "method of canonical variates"[18] or canonical variates analysis (CVA).[19] The terms Fisher's linear discriminant and LDA are often used interchangeably, although Fisher's original article[2] actually describes a slightly different discriminant, which does not make some of the assumptions of LDA such as normally distributed classes or equal class covariances. Suppose two classes of observations have means μ_0, μ_1 and covariances Σ_0, Σ_1. Then the linear combination of features w^T x will have means w^T μ_i and variances w^T Σ_i w for i = 0, 1. Fisher defined the separation between these two distributions to be the ratio of the variance between the classes to the variance within the classes: S = (w^T μ_1 − w^T μ_0)^2 / (w^T (Σ_0 + Σ_1) w). This measure is, in some sense, a measure of the signal-to-noise ratio for the class labelling. It can be shown that the maximum separation occurs when w ∝ (Σ_0 + Σ_1)^{-1} (μ_1 − μ_0). When the assumptions of LDA are satisfied, the above equation is equivalent to LDA.
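The effect-size quantities described above (eigenvalues of the discriminant functions, percent of variance, canonical correlation) can be read off an eigendecomposition, using the usual within-class and between-class scatter matrices as stand-ins for SS_within and SS_between. A rough sketch on synthetic three-class data (the data and the canonical-correlation convention λ/(1+λ) are assumptions of this illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
means = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.5, 0.0]), np.array([0.5, 2.0, 1.0])]
classes = [rng.normal(m, 1.0, size=(100, 3)) for m in means]

grand_mean = np.vstack(classes).mean(axis=0)

# Within-class (S_w) and between-class (S_b) scatter matrices.
S_w = sum((X - X.mean(axis=0)).T @ (X - X.mean(axis=0)) for X in classes)
S_b = sum(len(X) * np.outer(X.mean(axis=0) - grand_mean, X.mean(axis=0) - grand_mean)
          for X in classes)

# Eigenvalues of S_w^{-1} S_b: one per discriminant function (at most n_classes - 1 nonzero).
eigvals = np.sort(np.linalg.eigvals(np.linalg.solve(S_w, S_b)).real)[::-1]
eigvals = np.clip(eigvals, 0.0, None)

percent_variance = 100.0 * eigvals / eigvals.sum()     # (lambda_x / sum lambda_i) * 100
canonical_corr = np.sqrt(eigvals / (1.0 + eigvals))    # one common convention

print("eigenvalues:        ", np.round(eigvals, 3))
print("percent of variance:", np.round(percent_variance, 1))
print("canonical corr.:    ", np.round(canonical_corr, 3))
```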
Be sure to note that the vectorw→{\displaystyle {\vec {w}}}is thenormalto the discriminanthyperplane. As an example, in a two dimensional problem, the line that best divides the two groups is perpendicular tow→{\displaystyle {\vec {w}}}. Generally, the data points to be discriminated are projected ontow→{\displaystyle {\vec {w}}}; then the threshold that best separates the data is chosen from analysis of the one-dimensional distribution. There is no general rule for the threshold. However, if projections of points from both classes exhibit approximately the same distributions, a good choice would be the hyperplane between projections of the two means,w→⋅μ→0{\displaystyle {\vec {w}}\cdot {\vec {\mu }}_{0}}andw→⋅μ→1{\displaystyle {\vec {w}}\cdot {\vec {\mu }}_{1}}. In this case the parameter c in threshold conditionw→⋅x→>c{\displaystyle {\vec {w}}\cdot {\vec {x}}>c}can be found explicitly: Otsu's methodis related to Fisher's linear discriminant, and was created to binarize the histogram of pixels in a grayscale image by optimally picking the black/white threshold that minimizes intra-class variance and maximizes inter-class variance within/between grayscales assigned to black and white pixel classes. In the case where there are more than two classes, the analysis used in the derivation of the Fisher discriminant can be extended to find asubspacewhich appears to contain all of the class variability.[20]This generalization is due toC. R. Rao.[21]Suppose that each of C classes has a meanμi{\displaystyle \mu _{i}}and the same covarianceΣ{\displaystyle \Sigma }. Then the scatter between class variability may be defined by the sample covariance of the class means whereμ{\displaystyle \mu }is the mean of the class means. The class separation in a directionw→{\displaystyle {\vec {w}}}in this case will be given by This means that whenw→{\displaystyle {\vec {w}}}is aneigenvectorofΣ−1Σb{\displaystyle \Sigma ^{-1}\Sigma _{b}}the separation will be equal to the correspondingeigenvalue. IfΣ−1Σb{\displaystyle \Sigma ^{-1}\Sigma _{b}}is diagonalizable, the variability between features will be contained in the subspace spanned by the eigenvectors corresponding to theC− 1 largest eigenvalues (sinceΣb{\displaystyle \Sigma _{b}}is of rankC− 1 at most). These eigenvectors are primarily used in feature reduction, as in PCA. The eigenvectors corresponding to the smaller eigenvalues will tend to be very sensitive to the exact choice of training data, and it is often necessary to use regularisation as described in the next section. If classification is required, instead ofdimension reduction, there are a number of alternative techniques available. For instance, the classes may be partitioned, and a standard Fisher discriminant or LDA used to classify each partition. A common example of this is "one against the rest" where the points from one class are put in one group, and everything else in the other, and then LDA applied. This will result in C classifiers, whose results are combined. Another common method is pairwise classification, where a new classifier is created for each pair of classes (givingC(C− 1)/2 classifiers in total), with the individual classifiers combined to produce a final classification. The typical implementation of the LDA technique requires that all the samples are available in advance. However, there are situations where the entire data set is not available and the input data are observed as a stream. 
In this case, it is desirable for the LDA feature extraction to have the ability to update the computed LDA features by observing the new samples without running the algorithm on the whole data set. For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is anincremental LDA algorithm, and this idea has been extensively studied over the last two decades.[22]Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features.[23]In other work, Demir and Ozmehmet proposed online local learning algorithms for updating LDA features incrementally using error-correcting and the Hebbian learning rules.[24]Later, Aliyari et al. derived fast incremental algorithms to update the LDA features by observing the new samples.[22] In practice, the class means and covariances are not known. They can, however, be estimated from the training set. Either themaximum likelihood estimateor themaximum a posterioriestimate may be used in place of the exact value in the above equations. Although the estimates of the covariance may be considered optimal in some sense, this does not mean that the resulting discriminant obtained by substituting these values is optimal in any sense, even if the assumption of normally distributed classes is correct. Another complication in applying LDA and Fisher's discriminant to real data occurs when the number of measurements of each sample (i.e., the dimensionality of each data vector) exceeds the number of samples in each class.[5]In this case, the covariance estimates do not have full rank, and so cannot be inverted. There are a number of ways to deal with this. One is to use apseudo inverseinstead of the usual matrix inverse in the above formulae. However, better numeric stability may be achieved by first projecting the problem onto the subspace spanned byΣb{\displaystyle \Sigma _{b}}.[25]Another strategy to deal with small sample size is to use ashrinkage estimatorof the covariance matrix, which can be expressed mathematically as whereI{\displaystyle I}is the identity matrix, andλ{\displaystyle \lambda }is theshrinkage intensityorregularisation parameter. This leads to the framework of regularized discriminant analysis[26]or shrinkage discriminant analysis.[27] Also, in many practical cases linear discriminants are not suitable. LDA and Fisher's discriminant can be extended for use in non-linear classification via thekernel trick. Here, the original observations are effectively mapped into a higher dimensional non-linear space. Linear classification in this non-linear space is then equivalent to non-linear classification in the original space. The most commonly used example of this is thekernel Fisher discriminant. LDA can be generalized tomultiple discriminant analysis, wherecbecomes acategorical variablewithNpossible states, instead of only two. Analogously, if the class-conditional densitiesp(x→∣c=i){\displaystyle p({\vec {x}}\mid c=i)}are normal with shared covariances, thesufficient statisticforP(c∣x→){\displaystyle P(c\mid {\vec {x}})}are the values ofNprojections, which are thesubspacespanned by theNmeans,affine projectedby the inverse covariance matrix. 
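For the small-sample case discussed above, a shrinkage estimator of the covariance takes only a few lines. This is a minimal sketch: the blend toward a scaled identity and the fixed λ are illustrative choices, and in practice λ is usually chosen by cross-validation or an analytic rule (e.g. Ledoit–Wolf):

```python
import numpy as np

def shrinkage_covariance(X, lam=0.2):
    """Blend the empirical covariance with a scaled identity:
    Sigma_reg = (1 - lam) * Sigma_hat + lam * (trace(Sigma_hat)/p) * I."""
    Sigma_hat = np.cov(X, rowvar=False)
    p = Sigma_hat.shape[0]
    target = (np.trace(Sigma_hat) / p) * np.eye(p)
    return (1.0 - lam) * Sigma_hat + lam * target

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 50))           # only 10 samples in 50 dimensions
Sigma_reg = shrinkage_covariance(X)
print("rank of empirical covariance:", np.linalg.matrix_rank(np.cov(X, rowvar=False)))
print("regularized covariance full rank:", np.linalg.matrix_rank(Sigma_reg) == 50)
```

The empirical covariance is singular here, so it cannot be inverted in the LDA formulas, whereas the shrunken estimate can.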
These projections can be found by solving ageneralized eigenvalue problem, where the numerator is the covariance matrix formed by treating the means as the samples, and the denominator is the shared covariance matrix. See “Multiclass LDA” above for details. In addition to the examples given below, LDA is applied inpositioningandproduct management. Inbankruptcy predictionbased on accounting ratios and other financial variables, linear discriminant analysis was the first statistical method applied to systematically explain which firms entered bankruptcy vs. survived. Despite limitations including known nonconformance of accounting ratios to the normal distribution assumptions of LDA,Edward Altman's1968 model[28]is still a leading model in practical applications.[29][30][31] In computerisedface recognition, each face is represented by a large number of pixel values. Linear discriminant analysis is primarily used here to reduce the number of features to a more manageable number before classification. Each of the new dimensions is a linear combination of pixel values, which form a template. The linear combinations obtained using Fisher's linear discriminant are calledFisher faces, while those obtained using the relatedprincipal component analysisare calledeigenfaces. Inmarketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data.Logistic regressionor other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps: The main application of discriminant analysis in medicine is the assessment of severity state of a patient and prognosis of disease outcome. For example, during retrospective analysis, patients are divided into groups according to severity of disease – mild, moderate, and severe form. Then results of clinical and laboratory analyses are studied to reveal statistically different variables in these groups. Using these variables, discriminant functions are built to classify disease severity in future patients. Additionally, Linear Discriminant Analysis (LDA) can help select more discriminative samples for data augmentation, improving classification performance.[32] In biology, similar principles are used in order to classify and define groups of different biological objects, for example, to define phage types of Salmonella enteritidis based on Fourier transform infrared spectra,[33]to detect animal source ofEscherichia colistudying its virulence factors[34]etc. This method can be used toseparate the alteration zones[clarification needed]. For example, when different data from various zones are available, discriminant analysis can find the pattern within the data and classify it effectively.[35] Discriminant function analysis is very similar tologistic regression, and both can be used to answer the same research questions.[10]Logistic regression does not have as many assumptions and restrictions as discriminant analysis. However, when discriminant analysis’ assumptions are met, it is more powerful than logistic regression.[36]Unlike logistic regression, discriminant analysis can be used with small sample sizes. 
It has been shown that when sample sizes are equal and homogeneity of variance/covariance holds, discriminant analysis is more accurate.[8] Despite all these advantages, logistic regression has nonetheless become the common choice, since the assumptions of discriminant analysis are rarely met.[9][8] Geometric anomalies in higher dimensions lead to the well-known curse of dimensionality. Nevertheless, proper utilization of concentration of measure phenomena can make computation easier.[37] An important case of these blessing of dimensionality phenomena was highlighted by Donoho and Tanner: if a sample is essentially high-dimensional, then each point can be separated from the rest of the sample by a linear inequality, with high probability, even for exponentially large samples.[38] These linear inequalities can be selected in the standard (Fisher's) form of the linear discriminant for a rich family of probability distributions.[39] In particular, such theorems are proven for log-concave distributions including the multidimensional normal distribution (the proof is based on the concentration inequalities for log-concave measures[40]) and for product measures on a multidimensional cube (this is proven using Talagrand's concentration inequality for product probability spaces). Data separability by classical linear discriminants simplifies the problem of error correction for artificial intelligence systems in high dimension.[41]
https://en.wikipedia.org/wiki/Discriminant_analysis
Elastic mapsprovide a tool fornonlinear dimensionality reduction. By their construction, they are a system of elasticspringsembedded in the data space.[1]This system approximates a low-dimensional manifold. The elastic coefficients of this system allow the switch from completely unstructuredk-means clustering(zero elasticity) to the estimators located closely to linearPCA manifolds(for high bending and low stretching modules). With some intermediate values of theelasticity coefficients, this system effectively approximates non-linear principal manifolds. This approach is based on amechanicalanalogy between principal manifolds, that are passing through "the middle" of the data distribution, and elastic membranes and plates. The method was developed byA.N. Gorban,A.Y. Zinovyevand A.A. Pitenko in 1996–1998. LetS{\displaystyle {\mathcal {S}}}be a data set in a finite-dimensionalEuclidean space. Elastic map is represented by a set of nodeswj{\displaystyle {\bf {w}}_{j}}in the same space. Each datapoints∈S{\displaystyle s\in {\mathcal {S}}}has ahost node, namely the closest nodewj{\displaystyle {\bf {w}}_{j}}(if there are several closest nodes then one takes the node with the smallest number). The data setS{\displaystyle {\mathcal {S}}}is divided into classesKj={s|wjis a host ofs}{\displaystyle K_{j}=\{s\ |\ {\bf {w}}_{j}{\mbox{ is a host of }}s\}}. Theapproximation energyD is the distortion which is the energy of the springs with unit elasticity which connect each data point with its host node. It is possible to apply weighting factors to the terms of this sum, for example to reflect thestandard deviationof theprobability density functionof any subset of data points{si}{\displaystyle \{s_{i}\}}. On the set of nodes an additional structure is defined. Some pairs of nodes,(wi,wj){\displaystyle ({\bf {w}}_{i},{\bf {w}}_{j})}, are connected byelastic edges. Call this set of pairsE{\displaystyle E}. Some triplets of nodes,(wi,wj,wk){\displaystyle ({\bf {w}}_{i},{\bf {w}}_{j},{\bf {w}}_{k})}, formbending ribs. Call this set of tripletsG{\displaystyle G}. whereλ{\displaystyle \lambda }andμ{\displaystyle \mu }are the stretching and bending moduli respectively. The stretching energy is sometimes referred to as themembrane, while the bending energy is referred to as thethin plateterm.[5] For example, on the 2D rectangular grid the elastic edges are just vertical and horizontal edges (pairs of closest vertices) and the bending ribs are the vertical or horizontal triplets of consecutive (closest) vertices. The position of the nodes{wj}{\displaystyle \{{\bf {w}}_{j}\}}is determined by themechanical equilibriumof the elastic map, i.e. its location is such that it minimizes the total energyU{\displaystyle U}. For a given splitting of datasetS{\displaystyle {\mathcal {S}}}in classesKj{\displaystyle K_{j}}, minimization of the quadratic functionalU{\displaystyle U}is a linear problem with the sparse matrix of coefficients. Therefore, similar toprincipal component analysisork-means, a splitting method is used: Thisexpectation-maximization algorithmguarantees a local minimum ofU{\displaystyle U}. For improving the approximation various additional methods are proposed. For example, thesofteningstrategy is used. This strategy starts with a rigid grids (small length, small bending and large elasticity modulesλ{\displaystyle \lambda }andμ{\displaystyle \mu }coefficients) and finishes with soft grids (smallλ{\displaystyle \lambda }andμ{\displaystyle \mu }). 
The training goes in several epochs, each epoch with its own grid rigidity. Another adaptive strategy is the growing net: one starts from a small number of nodes and gradually adds new nodes, each epoch proceeding with its own number of nodes. The most important applications of the method and free software[3] are in bioinformatics[7][8] for exploratory data analysis and visualisation of multidimensional data, for data visualisation in economics, social and political sciences,[9] as an auxiliary tool for data mapping in geographic information systems, and for visualisation of data of various kinds. The method is applied in quantitative biology for reconstructing the curved surface of a tree leaf from a stack of light microscopy images.[10] This reconstruction is used for quantifying the geodesic distances between trichomes and their patterning, which is a marker of the capability of a plant to resist pathogens. Recently, the method has been adapted as a support tool in the decision process underlying the selection, optimization, and management of financial portfolios.[11] The method of elastic maps has been systematically tested and compared with several machine learning methods on the applied problem of identifying the flow regime of a gas-liquid flow in a pipe.[12] There are various regimes: single-phase water or air flow, bubbly flow, bubbly-slug flow, slug flow, slug-churn flow, churn flow, churn-annular flow, and annular flow. The simplest and most common method used to identify the flow regime is visual observation. This approach is, however, subjective and unsuitable for relatively high gas and liquid flow rates. Therefore, machine learning methods have been proposed by many authors. The methods are applied to differential pressure data collected during a calibration process. The method of elastic maps provided a 2D map in which the area of each regime is represented. A comparison with some other machine learning methods was reported (Table 1 of the original study) for various pipe diameters and pressures. Here, ANN stands for backpropagation artificial neural networks, SVM for the support vector machine, and SOM for self-organizing maps. A hybrid technology was developed for engineering applications,[13] in which elastic maps are used in combination with Principal Component Analysis (PCA), Independent Component Analysis (ICA) and backpropagation ANNs. The textbook[14] provides a systematic comparison of elastic maps and self-organizing maps (SOMs) in applications to economic and financial decision-making.
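The total energy being minimized can be written out explicitly for a simple one-dimensional chain of nodes. The sketch below is an illustration only: the data, the node count, and the moduli are arbitrary assumptions, the energy terms follow the common quadratic form (λ over edge differences, μ over second differences along the ribs), and only the node-update step of the splitting algorithm is shown:

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy data sampled around a quarter circle (illustrative data set S).
t = rng.uniform(0, np.pi / 2, size=400)
data = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * rng.normal(size=(400, 2))

k = 12                                          # number of nodes in a 1-D chain
nodes = np.linspace(data.min(0), data.max(0), k)  # initial node positions
lam, mu = 0.01, 0.1                             # stretching and bending moduli (arbitrary)

E = np.diff(np.eye(k), axis=0)                  # edge (first-difference) operator
G = np.diff(np.eye(k), n=2, axis=0)             # rib (second-difference) operator

for _ in range(20):                             # splitting (EM-style) iterations
    # 1. Assign every data point to its host (closest) node.
    host = np.argmin(((data[:, None, :] - nodes[None, :, :]) ** 2).sum(-1), axis=1)
    counts = np.bincount(host, minlength=k).astype(float)
    sums = np.zeros_like(nodes)
    np.add.at(sums, host, data)
    # 2. Minimize the quadratic energy U = U_D + lam*U_E + mu*U_G over node positions.
    A = np.diag(counts) + lam * E.T @ E + mu * G.T @ G
    nodes = np.linalg.solve(A, sums)

host = np.argmin(((data[:, None, :] - nodes[None, :, :]) ** 2).sum(-1), axis=1)
U_D = ((data - nodes[host]) ** 2).sum()         # approximation energy
U_E = lam * ((E @ nodes) ** 2).sum()            # stretching (membrane) term
U_G = mu * ((G @ nodes) ** 2).sum()             # bending (thin plate) term
print("energy terms:", round(U_D, 3), round(U_E, 5), round(U_G, 5))
```

Lowering lam and mu over successive runs mimics the softening strategy described above.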
https://en.wikipedia.org/wiki/Elastic_map
A growing self-organizing map (GSOM) is a growing variant of a self-organizing map (SOM). The GSOM was developed to address the issue of identifying a suitable map size in the SOM. It starts with a minimal number of nodes (usually 4) and grows new nodes on the boundary based on a heuristic. By using a value called the spread factor (SF), the data analyst has the ability to control the growth of the GSOM. All the starting nodes of the GSOM are boundary nodes, i.e. each node has the freedom to grow in its own direction at the beginning. New nodes are grown from the boundary nodes. Once a node is selected for growing, new nodes are grown at all of its free neighboring positions; for a rectangular GSOM there are three possible node growth options. The GSOM process alternates between presenting inputs, accumulating the quantization error of each winning node, and growing new nodes from a boundary node whose accumulated error exceeds a growth threshold determined by the spread factor; a sketch of this growth loop is given below. The GSOM can be used for many preprocessing tasks in data mining, for nonlinear dimensionality reduction, for approximation of principal curves and manifolds, and for clustering and classification. It often gives a better representation of the data geometry than the SOM.
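A compact sketch of that growth loop follows. It is deliberately simplified and not the full GSOM algorithm: the smoothing phase and the boundary-node bookkeeping are omitted, the grid structure is ignored, the data and learning rate are arbitrary, and the growth-threshold formula GT = −D·ln(SF) follows the commonly cited formulation:

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.random((2000, 2))              # toy data set in [0, 1]^2, so D = 2 dimensions

SF = 0.5                                  # spread factor chosen by the analyst
D = data.shape[1]
GT = -D * np.log(SF)                      # growth threshold (common GSOM formulation)

nodes = list(rng.random((4, 2)))          # start with a minimal set of nodes
errors = [0.0] * len(nodes)
lr = 0.1                                  # learning rate for adapting the winner

for x in data:
    dists = [np.linalg.norm(x - w) for w in nodes]
    i = int(np.argmin(dists))             # best-matching (winning) node
    errors[i] += dists[i] ** 2            # accumulate its quantization error
    nodes[i] = nodes[i] + lr * (x - nodes[i])
    if errors[i] > GT:                    # grow a new node near the over-loaded winner
        nodes.append(nodes[i] + 0.05 * rng.normal(size=2))
        errors.append(0.0)
        errors[i] = 0.0

print("final number of nodes:", len(nodes))
```

A larger SF lowers GT and therefore produces a more detailed (larger) map, which is how the analyst controls growth.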
https://en.wikipedia.org/wiki/Growing_self-organizing_map
Bienenstock–Cooper–Munro (BCM) theory, BCM synaptic modification, or the BCM rule, named after Elie Bienenstock, Leon Cooper, and Paul Munro, is a physical theory of learning in the visual cortex developed in 1981. The BCM model proposes a sliding threshold for long-term potentiation (LTP) or long-term depression (LTD) induction, and states that synaptic plasticity is stabilized by a dynamic adaptation of the time-averaged postsynaptic activity. According to the BCM model, when a pre-synaptic neuron fires, a post-synaptic neuron will tend to undergo LTP if it is in a high-activity state (e.g., firing at high frequency and/or with high internal calcium concentrations), or LTD if it is in a lower-activity state (e.g., firing at low frequency, with low internal calcium concentrations).[1] This theory is often used to explain how cortical neurons can undergo both LTP or LTD depending on different conditioning stimulus protocols applied to pre-synaptic neurons (usually high-frequency stimulation, or HFS, for LTP, or low-frequency stimulation, LFS, for LTD).[2] In 1949, Donald Hebb proposed a working mechanism for memory and computational adaptation in the brain now called Hebbian learning, or the maxim that cells that fire together, wire together.[3] This notion is foundational in the modern understanding of the brain as a neural network, and though not universally true, remains a good first approximation supported by decades of evidence.[3][4] However, Hebb's rule has problems, namely that it has no mechanism for connections to get weaker and no upper bound for how strong they can get. In other words, the model is unstable, both theoretically and computationally. Later modifications gradually improved Hebb's rule, normalizing it and allowing for decay of synapses, where no activity or unsynchronized activity between neurons results in a loss of connection strength. New biological evidence brought this activity to a peak in the 1970s, when theorists formalized various approximations in the theory, such as the use of firing frequency instead of potential in determining neuron excitation, and the assumption of ideal and, more importantly, linear synaptic integration of signals. That is, there is no unexpected behavior in the adding of input currents to determine whether or not a cell will fire. These approximations resulted in the basic form of BCM below in 1979, but the final step came in the form of mathematical analysis to prove stability and computational analysis to prove applicability, culminating in Bienenstock, Cooper, and Munro's 1982 paper. Since then, experiments have shown evidence for BCM behavior in both the visual cortex and the hippocampus, the latter of which plays an important role in the formation and storage of memories. Both of these areas are well-studied experimentally, but both theory and experiment have yet to establish conclusive synaptic behavior in other areas of the brain.
It has been proposed that in the cerebellum, the parallel-fiber to Purkinje cell synapse follows an "inverse BCM rule", meaning that at the time of parallel fiber activation, a high calcium concentration in the Purkinje cell results in LTD, while a lower concentration results in LTP.[2] Furthermore, the biological implementation for synaptic plasticity in BCM has yet to be established.[5] The basic BCM rule takes the form ṁ_j(t) = φ(c(t)) d_j(t) − ε m_j(t), where m_j is the j-th synaptic weight, d_j the corresponding presynaptic input, c(t) = m(t)·d(t) the postsynaptic activity, φ a nonlinear function of that activity, and ε a small, uniform decay constant. This model is a modified form of the Hebbian learning rule, ṁ_j = c d_j, and requires a suitable choice of function φ to avoid the Hebbian problems of instability. Bienenstock et al.[6] rewrite φ(c) as a function φ(c, c̄), where c̄ is the time average of c. With this modification, and discarding the uniform decay, the rule takes the vectorial form ṁ(t) = φ(c(t), c̄(t)) d(t). The conditions for stable learning are derived rigorously in BCM: noting that c(t) = m(t)·d(t), and with the approximation of the average output c̄(t) ≈ m(t)·d̄, it is sufficient that φ changes sign at a sliding modification threshold, i.e. sgn φ(c, c̄) = sgn(c − θ_M(c̄)) for c > 0 with φ(0, c̄) = 0, where the threshold is θ_M(c̄) = (c̄/c_0)^p c̄ and p and c_0 are fixed positive constants.[6] When implemented, the theory is often taken such that φ(c, c̄) = c(c − θ_M), with the sliding threshold θ_M computed from a running temporal average of the output (commonly a low-pass filter of c² with time constant τ), where τ is a time constant of selectivity. The model has drawbacks, as it requires both long-term potentiation and long-term depression, or increases and decreases in synaptic strength, something which has not been observed in all cortical systems. Further, it requires a variable activation threshold and depends strongly on the stability of the selected fixed points c_0 and p. However, the model's strength is that it incorporates all these requirements from independently derived rules of stability, such as normalizability and a decay function with time proportional to the square of the output.[7] The following example is a particular case of the one in the chapter "Mathematical results" of Bienenstock et al.'s[6] work, assuming p = 2 and c_0 = 1. With these values θ_M = (c̄/c_0)^p c̄ = c̄³, and we choose φ(c, c̄) = c(c − θ_M), which fulfills the stability conditions stated above. Assume two presynaptic neurons providing inputs d_1 and d_2, whose activity follows a repetitive cycle with d = (d_1, d_2) = (0.9, 0.1) for half of the time and d = (0.2, 0.7) for the remainder. The time average c̄ is taken as the average of the values of c in the first and second halves of a cycle. Let the initial value of the weights be m = (0.1, 0.05). In the first half of a cycle, d = (0.9, 0.1) and m = (0.1, 0.05), so the weighted sum c equals 0.095, and we use the same value as the initial average c̄. That means θ_M = 0.001, φ = 0.009, and ṁ = (0.008, 0.001).
Adding 10% of the derivative to the weights, we obtain the new weights m = (0.101, 0.051). In the second half of the cycle, the inputs are d = (0.2, 0.7) and the weights m = (0.101, 0.051). That means c = 0.055, the average c̄ over the full cycle is 0.075, θ_M = 0.000, φ = 0.003, and ṁ = (0.001, 0.002). Adding 10% of the derivative to the weights, we obtain the new weights m = (0.110, 0.055). Repeating this cycle, we find after several hundred iterations that stability is reached with m = (3.246, −0.927), c = √8 = 2.828 (first half) and c = 0.000 (second half), c̄ = √8/2 = 1.414, θ_M = √8 = 2.828, φ = 0.000 and ṁ = (0.000, 0.000). Note how, as predicted, the final weight vector m has become orthogonal to one of the input patterns, with the final values of c in both intervals being zeros of the function φ. The first major experimental confirmation of BCM came in 1992 in investigating LTP and LTD in the hippocampus. Serena Dudek's experimental work showed qualitative agreement with the final form of the BCM activation function.[8] This experiment was later replicated in the visual cortex, which BCM was originally designed to model.[9] This work provided further evidence of the necessity for a variable threshold function for stability in Hebbian-type learning (BCM or others). Experimental evidence remained non-specific to BCM until Rittenhouse et al. confirmed BCM's prediction of synapse modification in the visual cortex when one eye is selectively closed. Specifically, the predicted decay of the closed-eye response is governed by the variance of the spontaneous activity (noise) in the closed eye and by the time t since closure. Experiments agreed with the general shape of this prediction and provided an explanation for the dynamics of monocular eye closure (monocular deprivation) versus binocular eye closure.[10] The experimental results are far from conclusive, but so far have favored BCM over competing theories of plasticity. While the algorithm of BCM is too complicated for large-scale parallel distributed processing, it has been put to use in lateral networks with some success.[11] Furthermore, some existing computational network learning algorithms have been made to correspond to BCM learning.[12]
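The worked example above is easy to simulate. The sketch below follows the example's choices (φ(c, c̄) = c(c − c̄³), a 10% step per half-cycle) and assumes that c̄ is taken as the mean of c over the two most recent half-cycles; the intermediate values depend on exactly how c̄ is updated, but the run should approach the selective fixed point reported above:

```python
import numpy as np

d1, d2 = np.array([0.9, 0.1]), np.array([0.2, 0.7])   # the two alternating input patterns
m = np.array([0.10, 0.05])                             # initial synaptic weights
c_prev = None                                          # output in the previous half-cycle

for step in range(2000):
    d = d1 if step % 2 == 0 else d2
    c = m @ d                                          # postsynaptic activity c = m.d
    c_bar = c if c_prev is None else 0.5 * (c + c_prev)  # average over the recent cycle
    theta = c_bar ** 3                                 # sliding threshold (p = 2, c0 = 1)
    phi = c * (c - theta)
    m = m + 0.1 * phi * d                              # add 10% of the derivative phi * d
    c_prev = c

print("weights:", np.round(m, 3))           # the example reports roughly (3.246, -0.927)
print("outputs:", round(m @ d1, 3), round(m @ d2, 3))   # roughly (2.828, 0.0)
```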
https://en.wikipedia.org/wiki/BCM_theory
Contrastive Hebbian learningis a biologically plausible form ofHebbian learning. It is based on the contrastive divergence algorithm, which has been used to train a variety of energy-based latent variable models.[1] In 2003, contrastive Hebbian learning was shown to be equivalent in power to thebackpropagationalgorithms commonly used inmachine learning.[2]
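A rough sketch of the two-phase mechanics usually associated with contrastive Hebbian learning is given below: activities are relaxed once with only the inputs clamped (free phase) and once with inputs and targets clamped (clamped phase), and weights are updated by the difference of the co-activity statistics. The network size, feedback strength, settling schedule, and toy task are ad hoc assumptions, and no claim is made that this particular small run reaches a perfect solution:

```python
import numpy as np

rng = np.random.default_rng(5)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy OR task and a tiny input-hidden-output network with symmetric weights (assumptions).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [1]], dtype=float)
W1 = 0.1 * rng.normal(size=(2, 4))     # input  <-> hidden
W2 = 0.1 * rng.normal(size=(4, 1))     # hidden <-> output
eta, gamma = 0.2, 0.5                  # learning rate; top-down feedback strength

def settle(x, y=None, steps=30):
    """Relax hidden/output activities toward a fixed point; clamp outputs to y if given."""
    h = np.zeros(4)
    o = np.zeros(1) if y is None else y
    for _ in range(steps):
        h = sigmoid(x @ W1 + gamma * (o @ W2.T))
        if y is None:
            o = sigmoid(h @ W2)
    return h, o

for epoch in range(3000):
    for x, y in zip(X, Y):
        h_free, o_free = settle(x)              # free (negative) phase
        h_clamp, o_clamp = settle(x, y)         # clamped (positive) phase
        # Contrastive Hebbian update: clamped correlations minus free correlations.
        W1 += eta * (np.outer(x, h_clamp) - np.outer(x, h_free))
        W2 += eta * (np.outer(h_clamp, o_clamp) - np.outer(h_free, o_free))

print("free-phase outputs:", np.round([settle(x)[1][0] for x in X], 2))
```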
https://en.wikipedia.org/wiki/Contrastive_Hebbian_learning
Thegeneralized Hebbian algorithm, also known in the literature asSanger's rule, is a linearfeedforward neural networkforunsupervised learningwith applications primarily inprincipal components analysis. First defined in 1989,[1]it is similar toOja's rulein its formulation and stability, except it can be applied to networks with multiple outputs. The name originates because of the similarity between the algorithm and a hypothesis made byDonald Hebb[2]about the way in which synaptic strengths in the brain are modified in response to experience, i.e., that changes are proportional to the correlation between the firing of pre- and post-synapticneurons.[3] Consider a problem of learning a linear code for some data. Each data is a multi-dimensional vectorx∈Rn{\displaystyle x\in \mathbb {R} ^{n}}, and can be (approximately) represented as a linear sum of linear code vectorsw1,…,wm{\displaystyle w_{1},\dots ,w_{m}}. Whenm=n{\displaystyle m=n}, it is possible to exactly represent the data. Ifm<n{\displaystyle m<n}, it is possible to approximately represent the data. To minimize the L2 loss of representation,w1,…,wm{\displaystyle w_{1},\dots ,w_{m}}should be the highest principal component vectors. The generalized Hebbian algorithm is an iterative algorithm to find the highest principal component vectors, in an algorithmic form that resemblesunsupervisedHebbian learning in neural networks. Consider a one-layered neural network withn{\displaystyle n}input neurons andm{\displaystyle m}output neuronsy1,…,ym{\displaystyle y_{1},\dots ,y_{m}}. The linear code vectors are the connection strengths, that is,wij{\displaystyle w_{ij}}is thesynaptic weightor connection strength between thej{\displaystyle j}-th input andi{\displaystyle i}-th output neurons. The generalized Hebbian algorithm learning rule is of the form whereη{\displaystyle \eta }is thelearning rateparameter.[4] In matrix form, Oja's rule can be written and the Gram-Schmidt algorithm is wherew(t)is any matrix, in this case representing synaptic weights,Q=ηxxTis the autocorrelation matrix, simply the outer product of inputs,diagis the function thatdiagonalizesa matrix, andloweris the function that sets all matrix elements on or above the diagonal equal to 0. We can combine these equations to get our original rule in matrix form, where the functionLTsets all matrix elements above the diagonal equal to 0, and note that our outputy(t) =w(t)x(t)is a linear neuron.[1] [5] Oja's ruleis the special case wherem=1{\displaystyle m=1}.[6]One can think of the generalized Hebbian algorithm as iterating Oja's rule. With Oja's rule,w1{\displaystyle w_{1}}is learned, and it has the same direction as the largest principal component vector is learned, with length determined byE[xj]=E[w1jy1]{\displaystyle E[x_{j}]=E[w_{1j}y_{1}]}for allj{\displaystyle j}, where the expectation is taken over all input-output pairs. In other words, the length of the vectorw1{\displaystyle w_{1}}is such that we have anautoencoder, with the latent codey1=∑iw1ixi{\displaystyle y_{1}=\sum _{i}w_{1i}x_{i}}, such thatE[‖x−y1w1‖2]{\displaystyle E[\|x-y_{1}w_{1}\|^{2}]}is minimized. Whenm=2{\displaystyle m=2}, the first neuron in the hidden layer of the autoencoder still learns as described, since it is unaffected by the second neuron. 
So, after the first neuron and its vectorw1{\displaystyle w_{1}}has converged, the second neuron is effectively running another Oja's rule on the modified input vectors, defined byx′=x−y1w1{\displaystyle x'=x-y_{1}w_{1}}, which we know is the input vector with the first principal component removed. Therefore, the second neuron learns to code for the second principal component. By induction, this results in finding the top-m{\displaystyle m}principal components for arbitrarym{\displaystyle m}. The generalized Hebbian algorithm is used in applications where aself-organizing mapis necessary, or where a feature orprincipal components analysiscan be used. Examples of such cases includeartificial intelligenceand speech and image processing. Its importance comes from the fact that learning is a single-layer process—that is, a synaptic weight changes only depending on the response of the inputs and outputs of that layer, thus avoiding the multi-layer dependence associated with thebackpropagationalgorithm. It also has a simple and predictable trade-off between learning speed and accuracy of convergence as set by thelearningrate parameterη.[5] As an example, (Olshausen and Field, 1996)[7]performed the generalized Hebbian algorithm on 8-by-8 patches of photos of natural scenes, and found that it results in Fourier-like features. The features are the same as the principal components found by principal components analysis, as expected, and that, the features are determined by the64×64{\displaystyle 64\times 64}variance matrix of the samples of 8-by-8 patches. In other words, it is determined by the second-order statistics of the pixels in images. They criticized this as insufficient to capture higher-order statistics which are necessary to explain the Gabor-like features of simple cells in theprimary visual cortex.
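The learning rule can be written compactly in matrix form as ΔW = η (y xᵀ − LT[y yᵀ] W), where W holds one output's weight vector per row, y = W x, and LT keeps only the lower-triangular part (including the diagonal). A small sketch that applies this rule to synthetic data and compares the result with the leading principal components (the data, learning rate, and single pass are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic zero-mean data with a known covariance structure.
C = np.array([[3.0, 1.0, 0.0], [1.0, 2.0, 0.5], [0.0, 0.5, 1.0]])
X = rng.multivariate_normal(np.zeros(3), C, size=20000)

m, n = 2, 3                        # learn the top-2 principal directions of 3-D data
W = 0.01 * rng.normal(size=(m, n))
eta = 0.01

for x in X:
    y = W @ x
    # Sanger's rule: dW = eta * (y x^T - LT[y y^T] W), LT = lower-triangular part.
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Compare with the leading eigenvectors of the sample covariance.
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
pc = eigvecs[:, ::-1][:, :m].T     # top-2 principal components, as rows
cosines = [abs(W[i] / np.linalg.norm(W[i]) @ pc[i]) for i in range(m)]
print("alignment with PCA directions:", np.round(cosines, 3))
```

Values near 1 indicate that each learned weight vector has aligned with the corresponding principal direction, as the induction argument above predicts.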
https://en.wikipedia.org/wiki/Generalized_Hebbian_algorithm
Insignal processing,independent component analysis(ICA) is a computational method for separating amultivariatesignal into additive subcomponents. This is done by assuming that at most one subcomponent is Gaussian and that the subcomponents arestatistically independentfrom each other.[1]ICA was invented by Jeanny Hérault and Christian Jutten in 1985.[2]ICA is a special case ofblind source separation. A common example application of ICA is the "cocktail party problem" of listening in on one person's speech in a noisy room.[3] Independent component analysis attempts to decompose a multivariate signal into independent non-Gaussian signals. As an example, sound is usually a signal that is composed of the numerical addition, at each time t, of signals from several sources. The question then is whether it is possible to separate these contributing sources from the observed total signal. When the statistical independence assumption is correct, blind ICA separation of a mixed signal gives very good results.[5]It is also used for signals that are not supposed to be generated by mixing for analysis purposes. A simple application of ICA is the "cocktail party problem", where the underlying speech signals are separated from a sample data consisting of people talking simultaneously in a room. Usually the problem is simplified by assuming no time delays or echoes. Note that a filtered and delayed signal is a copy of a dependent component, and thus the statistical independence assumption is not violated. Mixing weights for constructing theM{\textstyle M}observed signals from theN{\textstyle N}components can be placed in anM×N{\textstyle M\times N}matrix. An important thing to consider is that ifN{\textstyle N}sources are present, at leastN{\textstyle N}observations (e.g. microphones if the observed signal is audio) are needed to recover the original signals. When there are an equal number of observations and source signals, the mixing matrix is square (M=N{\textstyle M=N}). Other cases of underdetermined (M<N{\textstyle M<N}) and overdetermined (M>N{\textstyle M>N}) have been investigated. The success of ICA separation of mixed signals relies on two assumptions and three effects of mixing source signals. Two assumptions: Three effects of mixing source signals: Those principles contribute to the basic establishment of ICA. If the signals extracted from a set of mixtures are independent and have non-Gaussian distributions or have low complexity, then they must be source signals.[6][7] Another common example is imagesteganography, where ICA is used to embed one image within another. For instance, two grayscale images can be linearly combined to create mixed images in which the hidden content is visually imperceptible. ICA can then be used to recover the original source images from the mixtures. This technique underlies digital watermarking, which allows the embedding of ownership information into images, as well as more covert applications such as undetected information transmission. The method has even been linked to real-world cyberespionage cases. In such applications, ICA serves to unmix the data based on statistical independence, making it possible to extract hidden components that are not apparent in the observed data. Steganographic techniques, including those potentially involving ICA-based analysis, have been used in real-world cyberespionage cases. 
In 2010, the FBI uncovered a Russian spy network known as the "Illegals Program" (Operation Ghost Stories), where agents used custom-built steganography tools to conceal encrypted text messages within image files shared online.[8] In another case, a former General Electric engineer, Xiaoqing Zheng, was convicted in 2022 for economic espionage. Zheng used steganography to exfiltrate sensitive turbine technology by embedding proprietary data within image files for transfer to entities in China.[9] ICA finds the independent components (also called factors, latent variables or sources) by maximizing the statistical independence of the estimated components. We may choose one of many ways to define a proxy for independence, and this choice governs the form of the ICA algorithm. The two broadest definitions of independence for ICA are The Minimization-of-Mutual information(MMI) family of ICA algorithms uses measures likeKullback-Leibler Divergenceandmaximum entropy. The non-Gaussianity family of ICA algorithms, motivated by thecentral limit theorem, useskurtosisandnegentropy.[10] Typical algorithms for ICA use centering (subtract the mean to create a zero mean signal),whitening(usually with theeigenvalue decomposition),[11]anddimensionality reductionas preprocessing steps in order to simplify and reduce the complexity of the problem for the actual iterative algorithm. Linear independent component analysis can be divided into noiseless and noisy cases, where noiseless ICA is a special case of noisy ICA. Nonlinear ICA should be considered as a separate case. In the classical ICA model, it is assumed that the observed dataxi∈Rm{\displaystyle \mathbf {x} _{i}\in \mathbb {R} ^{m}}at timeti{\displaystyle t_{i}}is generated from source signalssi∈Rm{\displaystyle \mathbf {s} _{i}\in \mathbb {R} ^{m}}via a linear transformationxi=Asi{\displaystyle \mathbf {x} _{i}=A\mathbf {s} _{i}}, whereA{\displaystyle A}is an unknown, invertible mixing matrix. To recover the source signals, the data is first centered (zero mean), and then whitened so that the transformed data has unit covariance. This whitening reduces the problem from estimating a general matrixA{\displaystyle A}to estimating an orthogonal matrixV{\displaystyle V}, significantly simplifying the search for independent components. If the covariance matrix of the centered data isΣx=AA⊤{\displaystyle \Sigma _{x}=AA^{\top }}, then using the eigen-decompositionΣx=QDQ⊤{\displaystyle \Sigma _{x}=QDQ^{\top }}, the whitening transformation can be taken asD−1/2Q⊤{\displaystyle D^{-1/2}Q^{\top }}. This step ensures that the recovered sources are uncorrelated and of unit variance, leaving only the task of rotating the whitened data to maximize statistical independence. This general derivation underlies many ICA algorithms and is foundational in understanding the ICA model.[12] Independent component analysis(ICA) addresses the problem of recovering a set of unobserved source signalssi=(si1,si2,…,sim)T{\displaystyle s_{i}=(s_{i1},s_{i2},\dots ,s_{im})^{T}}from observed mixed signalsxi=(xi1,xi2,…,xim)T{\displaystyle x_{i}=(x_{i1},x_{i2},\dots ,x_{im})^{T}}, based on the linear mixing model: xi=Asi,{\displaystyle x_{i}=A\,s_{i},} where theA{\displaystyle A}is anm×m{\displaystyle m\times m}invertible matrix called themixing matrix,si{\displaystyle s_{i}}represents the m‑dimensional vector containing the values of the sources at timeti{\displaystyle t_{i}}, andxi{\displaystyle x_{i}}is the corresponding vector of observed values at timeti{\displaystyle t_{i}}. 
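The centering and whitening preprocessing described above takes only a few lines. A minimal sketch (the two sources and the mixing matrix are arbitrary toy choices) that applies the transform D^{-1/2} Qᵀ and verifies the whitened data have approximately identity covariance:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two independent non-Gaussian sources, mixed by an arbitrary matrix A.
S = np.column_stack([np.sign(rng.normal(size=5000)) * rng.exponential(1.0, 5000),
                     rng.uniform(-1, 1, 5000)])
A = np.array([[1.0, 0.6], [0.4, 1.2]])
X = S @ A.T                                   # observed mixtures, one row per sample

Xc = X - X.mean(axis=0)                       # centering
d, Q = np.linalg.eigh(np.cov(Xc, rowvar=False))   # eigendecomposition of the covariance
whiten = np.diag(1.0 / np.sqrt(d)) @ Q.T      # whitening transform D^{-1/2} Q^T
Z = Xc @ whiten.T                             # whitened data

print(np.round(np.cov(Z, rowvar=False), 3))   # approximately the identity matrix
```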
The goal is to estimate bothA{\displaystyle A}and the source signals{si}{\displaystyle \{s_{i}\}}solely from the observed data{xi}{\displaystyle \{x_{i}\}}. After centering, the Gram matrix is computed as:(X∗)TX∗=QDQT,{\displaystyle (X^{*})^{T}X^{*}=Q\,D\,Q^{T},}where D is a diagonal matrix with positive entries (assumingX∗{\displaystyle X^{*}}has maximum rank), and Q is an orthogonal matrix.[13]Writing the SVD of the mixing matrixA=UΣVT{\displaystyle A=U\Sigma V^{T}}and comparing withAAT=UΣ2UT{\displaystyle AA^{T}=U\Sigma ^{2}U^{T}}the mixing A has the formA=QD1/2VT.{\displaystyle A=Q\,D^{1/2}\,V^{T}.}So, the normalized source values satisfysi∗=Vyi∗{\displaystyle s_{i}^{*}=V\,y_{i}^{*}}, whereyi∗=D−12QTxi∗.{\displaystyle y_{i}^{*}=D^{-{\tfrac {1}{2}}}Q^{T}x_{i}^{*}.}Thus, ICA reduces to finding the orthogonal matrixV{\displaystyle V}. This matrix can be computed using optimization techniques via projection pursuit methods (seeProjection Pursuit).[14] Well-known algorithms for ICA includeinfomax,FastICA,JADE, andkernel-independent component analysis, among others. In general, ICA cannot identify the actual number of source signals, a uniquely correct ordering of the source signals, nor the proper scaling (including sign) of the source signals. ICA is important toblind signal separationand has many practical applications. It is closely related to (or even a special case of) the search for afactorial codeof the data, i.e., a new vector-valued representation of each data vector such that it gets uniquely encoded by the resulting code vector (loss-free coding), but the code components are statistically independent. The componentsxi{\displaystyle x_{i}}of the observed random vectorx=(x1,…,xm)T{\displaystyle {\boldsymbol {x}}=(x_{1},\ldots ,x_{m})^{T}}are generated as a sum of the independent componentssk{\displaystyle s_{k}},k=1,…,n{\displaystyle k=1,\ldots ,n}: xi=ai,1s1+⋯+ai,ksk+⋯+ai,nsn{\displaystyle x_{i}=a_{i,1}s_{1}+\cdots +a_{i,k}s_{k}+\cdots +a_{i,n}s_{n}} weighted by the mixing weightsai,k{\displaystyle a_{i,k}}. The same generative model can be written in vector form asx=∑k=1nskak{\displaystyle {\boldsymbol {x}}=\sum _{k=1}^{n}s_{k}{\boldsymbol {a}}_{k}}, where the observed random vectorx{\displaystyle {\boldsymbol {x}}}is represented by the basis vectorsak=(a1,k,…,am,k)T{\displaystyle {\boldsymbol {a}}_{k}=({\boldsymbol {a}}_{1,k},\ldots ,{\boldsymbol {a}}_{m,k})^{T}}. The basis vectorsak{\displaystyle {\boldsymbol {a}}_{k}}form the columns of the mixing matrixA=(a1,…,an){\displaystyle {\boldsymbol {A}}=({\boldsymbol {a}}_{1},\ldots ,{\boldsymbol {a}}_{n})}and the generative formula can be written asx=As{\displaystyle {\boldsymbol {x}}={\boldsymbol {A}}{\boldsymbol {s}}}, wheres=(s1,…,sn)T{\displaystyle {\boldsymbol {s}}=(s_{1},\ldots ,s_{n})^{T}}. Given the model and realizations (samples)x1,…,xN{\displaystyle {\boldsymbol {x}}_{1},\ldots ,{\boldsymbol {x}}_{N}}of the random vectorx{\displaystyle {\boldsymbol {x}}}, the task is to estimate both the mixing matrixA{\displaystyle {\boldsymbol {A}}}and the sourcess{\displaystyle {\boldsymbol {s}}}. This is done by adaptively calculating thew{\displaystyle {\boldsymbol {w}}}vectors and setting up a cost function which either maximizes the non-gaussianity of the calculatedsk=wTx{\displaystyle s_{k}={\boldsymbol {w}}^{T}{\boldsymbol {x}}}or minimizes the mutual information. In some cases, a priori knowledge of the probability distributions of the sources can be used in the cost function. 
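As a concrete illustration of this estimation task, the following is a minimal sketch of separating two artificially mixed signals with scikit-learn's FastICA (one of the well-known algorithms named above), assuming NumPy and scikit-learn are available; the source signals and the 2×2 mixing matrix are illustrative choices, not taken from the article.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)

# Two statistically independent, non-Gaussian sources (a sine and a sawtooth).
s1 = np.sin(2 * np.pi * 1.0 * t)
s2 = 2 * (t % 1.0) - 1.0
S = np.column_stack([s1, s2])

# Mix them with an assumed, invertible mixing matrix A: x = A s.
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
X = S @ A.T

# FastICA centers and whitens internally, then searches for maximally
# non-Gaussian projections of the whitened data.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)   # estimated sources
A_est = ica.mixing_            # estimated mixing matrix
print(A_est)
```

Note that the recovered sources match the originals only up to permutation, sign and scale, which is exactly the inherent ambiguity of ICA discussed above.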
The original sourcess{\displaystyle {\boldsymbol {s}}}can be recovered by multiplying the observed signalsx{\displaystyle {\boldsymbol {x}}}with the inverse of the mixing matrixW=A−1{\displaystyle {\boldsymbol {W}}={\boldsymbol {A}}^{-1}}, also known as the unmixing matrix. Here it is assumed that the mixing matrix is square (n=m{\displaystyle n=m}). If the number of basis vectors is greater than the dimensionality of the observed vectors,n>m{\displaystyle n>m}, the task is overcomplete but is still solvable with thepseudo inverse. With the added assumption of zero-mean and uncorrelated Gaussian noisen∼N(0,diag⁡(Σ)){\displaystyle n\sim N(0,\operatorname {diag} (\Sigma ))}, the ICA model takes the formx=As+n{\displaystyle {\boldsymbol {x}}={\boldsymbol {A}}{\boldsymbol {s}}+n}. The mixing of the sources does not need to be linear. Using a nonlinear mixing functionf(⋅|θ){\displaystyle f(\cdot |\theta )}with parametersθ{\displaystyle \theta }thenonlinear ICAmodel isx=f(s|θ)+n{\displaystyle x=f(s|\theta )+n}. The independent components are identifiable up to a permutation and scaling of the sources.[15]This identifiability requires that: A special variant of ICA is binary ICA in which both signal sources and monitors are in binary form and observations from monitors are disjunctive mixtures of binary independent sources. The problem was shown to have applications in many domains includingmedical diagnosis,multi-cluster assignment,network tomographyandinternet resource management. Letx1,x2,…,xm{\displaystyle {x_{1},x_{2},\ldots ,x_{m}}}be the set of binary variables fromm{\displaystyle m}monitors andy1,y2,…,yn{\displaystyle {y_{1},y_{2},\ldots ,y_{n}}}be the set of binary variables fromn{\displaystyle n}sources. Source-monitor connections are represented by the (unknown) mixing matrixG{\textstyle {\boldsymbol {G}}}, wheregij=1{\displaystyle g_{ij}=1}indicates that signal from thei-th source can be observed by thej-th monitor. The system works as follows: at any time, if a sourcei{\displaystyle i}is active (yi=1{\displaystyle y_{i}=1}) and it is connected to the monitorj{\displaystyle j}(gij=1{\displaystyle g_{ij}=1}) then the monitorj{\displaystyle j}will observe some activity (xj=1{\displaystyle x_{j}=1}). Formally we have: where∧{\displaystyle \wedge }is Boolean AND and∨{\displaystyle \vee }is Boolean OR. Noise is not explicitly modelled, rather, can be treated as independent sources. The above problem can be heuristically solved[16]by assuming variables are continuous and runningFastICAon binary observation data to get the mixing matrixG{\textstyle {\boldsymbol {G}}}(real values), then applyround numbertechniques onG{\textstyle {\boldsymbol {G}}}to obtain the binary values. This approach has been shown to produce a highly inaccurate result.[citation needed] Another method is to usedynamic programming: recursively breaking the observation matrixX{\textstyle {\boldsymbol {X}}}into its sub-matrices and run the inference algorithm on these sub-matrices. The key observation which leads to this algorithm is the sub-matrixX0{\textstyle {\boldsymbol {X}}^{0}}ofX{\textstyle {\boldsymbol {X}}}wherexij=0,∀j{\textstyle x_{ij}=0,\forall j}corresponds to the unbiased observation matrix of hidden components that do not have connection to thei{\displaystyle i}-th monitor. Experimental results from[17]show that this approach is accurate under moderate noise levels. 
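Before moving on, the Boolean observation model just described can be made concrete with a short sketch, assuming NumPy; the connection matrix G and the source activation probability are illustrative, and no noise sources are added.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sources, n_monitors, n_samples = 3, 4, 10

# Unknown mixing matrix G: g[i, j] = 1 if source i is connected to monitor j.
G = rng.integers(0, 2, size=(n_sources, n_monitors))

# Binary source activity y: each source is active independently with prob. 0.3.
Y = (rng.random((n_samples, n_sources)) < 0.3).astype(int)

# Monitor j observes activity if ANY connected source is active:
# x_j = OR over i of ( y_i AND g_ij ), i.e. a disjunctive mixture.
X = (Y[:, :, None] & G[None, :, :]).any(axis=1).astype(int)

print(X.shape)  # (n_samples, n_monitors) binary observations
```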
The Generalized Binary ICA framework[18]introduces a broader problem formulation which does not necessitate any knowledge on the generative model. In other words, this method attempts to decompose a source into its independent components (as much as possible, and without losing any information) with no prior assumption on the way it was generated. Although this problem appears quite complex, it can be accurately solved with abranch and boundsearch tree algorithm or tightly upper bounded with a single multiplication of a matrix with a vector. Signal mixtures tend to have Gaussian probability density functions, and source signals tend to have non-Gaussian probability density functions. Each source signal can be extracted from a set of signal mixtures by taking the inner product of a weight vector and those signal mixtures where this inner product provides an orthogonal projection of the signal mixtures. The remaining challenge is finding such a weight vector. One type of method for doing so isprojection pursuit.[19][20] Projection pursuit seeks one projection at a time such that the extracted signal is as non-Gaussian as possible. This contrasts with ICA, which typically extractsMsignals simultaneously fromMsignal mixtures, which requires estimating aM×Munmixing matrix. One practical advantage of projection pursuit over ICA is that fewer thanMsignals can be extracted if required, where each source signal is extracted fromMsignal mixtures using anM-element weight vector. We can usekurtosisto recover the multiple source signal by finding the correct weight vectors with the use of projection pursuit. The kurtosis of the probability density function of a signal, for a finite sample, is computed as wherey¯{\displaystyle \mathbf {\overline {y}} }is thesample meanofy{\displaystyle \mathbf {y} }, the extracted signals. The constant 3 ensures that Gaussian signals have zero kurtosis, Super-Gaussian signals have positive kurtosis, and Sub-Gaussian signals have negative kurtosis. The denominator is thevarianceofy{\displaystyle \mathbf {y} }, and ensures that the measured kurtosis takes account of signal variance. The goal of projection pursuit is to maximize the kurtosis, and make the extracted signal as non-normal as possible. Using kurtosis as a measure of non-normality, we can now examine how the kurtosis of a signaly=wTx{\displaystyle \mathbf {y} =\mathbf {w} ^{T}\mathbf {x} }extracted from a set ofMmixturesx=(x1,x2,…,xM)T{\displaystyle \mathbf {x} =(x_{1},x_{2},\ldots ,x_{M})^{T}}varies as the weight vectorw{\displaystyle \mathbf {w} }is rotated around the origin. Given our assumption that each source signals{\displaystyle \mathbf {s} }is super-gaussian we would expect: For multiple source mixture signals, we can use kurtosis andGram-SchmidtOrthogonalization (GSO) to recover the signals. GivenMsignal mixtures in anM-dimensional space, GSO project these data points onto an (M-1)-dimensional space by using the weight vector. We can guarantee the independence of the extracted signals with the use of GSO. In order to find the correct value ofw{\displaystyle \mathbf {w} }, we can usegradient descentmethod. We first of all whiten the data, and transformx{\displaystyle \mathbf {x} }into a new mixturez{\displaystyle \mathbf {z} }, which has unit variance, andz=(z1,z2,…,zM)T{\displaystyle \mathbf {z} =(z_{1},z_{2},\ldots ,z_{M})^{T}}. 
This process can be achieved by applyingSingular value decompositiontox{\displaystyle \mathbf {x} }, Rescaling each vectorUi=Ui/E⁡(Ui2){\displaystyle U_{i}=U_{i}/\operatorname {E} (U_{i}^{2})}, and letz=U{\displaystyle \mathbf {z} =\mathbf {U} }. The signal extracted by a weighted vectorw{\displaystyle \mathbf {w} }isy=wTz{\displaystyle \mathbf {y} =\mathbf {w} ^{T}\mathbf {z} }. If the weight vectorwhas unit length, then the variance ofyis also 1, that isE⁡[(wTz)2]=1{\displaystyle \operatorname {E} [(\mathbf {w} ^{T}\mathbf {z} )^{2}]=1}. The kurtosis can thus be written as: The updating process forw{\displaystyle \mathbf {w} }is: whereη{\displaystyle \eta }is a small constant to guarantee thatw{\displaystyle \mathbf {w} }converges to the optimal solution. After each update, we normalizewnew=wnew|wnew|{\displaystyle \mathbf {w} _{new}={\frac {\mathbf {w} _{new}}{|\mathbf {w} _{new}|}}}, and setwold=wnew{\displaystyle \mathbf {w} _{old}=\mathbf {w} _{new}}, and repeat the updating process until convergence. We can also use another algorithm to update the weight vectorw{\displaystyle \mathbf {w} }. Another approach is usingnegentropy[10][21]instead of kurtosis. Using negentropy is a more robust method than kurtosis, as kurtosis is very sensitive to outliers. The negentropy methods are based on an important property of Gaussian distribution: a Gaussian variable has the largest entropy among all continuous random variables of equal variance. This is also the reason why we want to find the most nongaussian variables. A simple proof can be found inDifferential entropy. y is a Gaussian random variable of the same covariance matrix as x An approximation for negentropy is A proof can be found in the original papers of Comon;[22][10]it has been reproduced in the bookIndependent Component Analysisby Aapo Hyvärinen, Juha Karhunen, andErkki Oja[23]This approximation also suffers from the same problem as kurtosis (sensitivity to outliers). Other approaches have been developed.[24] A choice ofG1{\displaystyle G_{1}}andG2{\displaystyle G_{2}}are Infomax ICA[25]is essentially a multivariate, parallel version of projection pursuit. Whereas projection pursuit extracts a series of signals one at a time from a set ofMsignal mixtures, ICA extractsMsignals in parallel. This tends to make ICA more robust than projection pursuit.[26] The projection pursuit method usesGram-Schmidtorthogonalization to ensure the independence of the extracted signal, while ICA useinfomaxandmaximum likelihoodestimate to ensure the independence of the extracted signal. The Non-Normality of the extracted signal is achieved by assigning an appropriate model, or prior, for the signal. The process of ICA based oninfomaxin short is: given a set of signal mixturesx{\displaystyle \mathbf {x} }and a set of identical independent modelcumulative distribution functions(cdfs)g{\displaystyle g}, we seek the unmixing matrixW{\displaystyle \mathbf {W} }which maximizes the jointentropyof the signalsY=g(y){\displaystyle \mathbf {Y} =g(\mathbf {y} )}, wherey=Wx{\displaystyle \mathbf {y} =\mathbf {Wx} }are the signals extracted byW{\displaystyle \mathbf {W} }. Given the optimalW{\displaystyle \mathbf {W} }, the signalsY{\displaystyle \mathbf {Y} }have maximum entropy and are therefore independent, which ensures that the extracted signalsy=g−1(Y){\displaystyle \mathbf {y} =g^{-1}(\mathbf {Y} )}are also independent.g{\displaystyle g}is an invertible function, and is the signal model. 
Note that if the source signal modelprobability density functionps{\displaystyle p_{s}}matches theprobability density functionof the extracted signalpy{\displaystyle p_{\mathbf {y} }}, then maximizing the joint entropy ofY{\displaystyle Y}also maximizes the amount ofmutual informationbetweenx{\displaystyle \mathbf {x} }andY{\displaystyle \mathbf {Y} }. For this reason, using entropy to extract independent signals is known asinfomax. Consider the entropy of the vector variableY=g(y){\displaystyle \mathbf {Y} =g(\mathbf {y} )}, wherey=Wx{\displaystyle \mathbf {y} =\mathbf {Wx} }is the set of signals extracted by the unmixing matrixW{\displaystyle \mathbf {W} }. For a finite set of values sampled from a distribution with pdfpy{\displaystyle p_{\mathbf {y} }}, the entropy ofY{\displaystyle \mathbf {Y} }can be estimated as: The joint pdfpY{\displaystyle p_{\mathbf {Y} }}can be shown to be related to the joint pdfpy{\displaystyle p_{\mathbf {y} }}of the extracted signals by the multivariate form: whereJ=∂Y∂y{\displaystyle \mathbf {J} ={\frac {\partial \mathbf {Y} }{\partial \mathbf {y} }}}is theJacobian matrix. We have|J|=g′(y){\displaystyle |\mathbf {J} |=g'(\mathbf {y} )}, andg′{\displaystyle g'}is the pdf assumed for source signalsg′=ps{\displaystyle g'=p_{s}}, therefore, therefore, We know that whenpy=ps{\displaystyle p_{\mathbf {y} }=p_{s}},pY{\displaystyle p_{\mathbf {Y} }}is of uniform distribution, andH(Y){\displaystyle H({\mathbf {Y} })}is maximized. Since where|W|{\displaystyle |\mathbf {W} |}is the absolute value of the determinant of the unmixing matrixW{\displaystyle \mathbf {W} }. Therefore, so, sinceH(x)=−1N∑t=1Nln⁡px(xt){\displaystyle H(\mathbf {x} )=-{\frac {1}{N}}\sum _{t=1}^{N}\ln p_{\mathbf {x} }(\mathbf {x} ^{t})}, and maximizingW{\displaystyle \mathbf {W} }does not affectHx{\displaystyle H_{\mathbf {x} }}, so we can maximize the function to achieve the independence of the extracted signal. If there areMmarginal pdfs of the model joint pdfps{\displaystyle p_{\mathbf {s} }}are independent and use the commonly super-gaussian model pdf for the source signalsps=(1−tanh⁡(s)2){\displaystyle p_{\mathbf {s} }=(1-\tanh(\mathbf {s} )^{2})}, then we have In the sum, given an observed signal mixturex{\displaystyle \mathbf {x} }, the corresponding set of extracted signalsy{\displaystyle \mathbf {y} }and source signal modelps=g′{\displaystyle p_{\mathbf {s} }=g'}, we can find the optimal unmixing matrixW{\displaystyle \mathbf {W} }, and make the extracted signals independent and non-gaussian. Like the projection pursuit situation, we can use gradient descent method to find the optimal solution of the unmixing matrix. Maximum likelihoodestimation (MLE)is a standard statistical tool for finding parameter values (e.g. the unmixing matrixW{\displaystyle \mathbf {W} }) that provide the best fit of some data (e.g., the extracted signalsy{\displaystyle y}) to a given a model (e.g., the assumed joint probability density function (pdf)ps{\displaystyle p_{s}}of source signals).[26] TheML"model" includes a specification of a pdf, which in this case is the pdfps{\displaystyle p_{s}}of the unknown source signalss{\displaystyle s}. UsingML ICA, the objective is to find an unmixing matrix that yields extracted signalsy=Wx{\displaystyle y=\mathbf {W} x}with a joint pdf as similar as possible to the joint pdfps{\displaystyle p_{s}}of the unknown source signalss{\displaystyle s}. 
MLEis thus based on the assumption that if the model pdfps{\displaystyle p_{s}}and the model parametersA{\displaystyle \mathbf {A} }are correct then a high probability should be obtained for the datax{\displaystyle x}that were actually observed. Conversely, ifA{\displaystyle \mathbf {A} }is far from the correct parameter values then a low probability of the observed data would be expected. UsingMLE, we call the probability of the observed data for a given set of model parameter values (e.g., a pdfps{\displaystyle p_{s}}and a matrixA{\displaystyle \mathbf {A} }) thelikelihoodof the model parameter values given the observed data. We define alikelihoodfunctionL(W){\displaystyle \mathbf {L(W)} }ofW{\displaystyle \mathbf {W} }: L(W)=ps(Wx)|detW|.{\displaystyle \mathbf {L(W)} =p_{s}(\mathbf {W} x)|\det \mathbf {W} |.} This equals to the probability density atx{\displaystyle x}, sinces=Wx{\displaystyle s=\mathbf {W} x}. Thus, if we wish to find aW{\displaystyle \mathbf {W} }that is most likely to have generated the observed mixturesx{\displaystyle x}from the unknown source signalss{\displaystyle s}with pdfps{\displaystyle p_{s}}then we need only find thatW{\displaystyle \mathbf {W} }which maximizes thelikelihoodL(W){\displaystyle \mathbf {L(W)} }. The unmixing matrix that maximizes equation is known as theMLEof the optimal unmixing matrix. It is common practice to use the loglikelihood, because this is easier to evaluate. As the logarithm is a monotonic function, theW{\displaystyle \mathbf {W} }that maximizes the functionL(W){\displaystyle \mathbf {L(W)} }also maximizes its logarithmln⁡L(W){\displaystyle \ln \mathbf {L(W)} }. This allows us to take the logarithm of equation above, which yields the loglikelihoodfunction ln⁡L(W)=∑i∑tln⁡ps(wiTxt)+Nln⁡|detW|{\displaystyle \ln \mathbf {L(W)} =\sum _{i}\sum _{t}\ln p_{s}(w_{i}^{T}x_{t})+N\ln |\det \mathbf {W} |} If we substitute a commonly used high-Kurtosismodel pdf for the source signalsps=(1−tanh⁡(s)2){\displaystyle p_{s}=(1-\tanh(s)^{2})}then we have ln⁡L(W)=1N∑iM∑tNln⁡(1−tanh⁡(wiTxt)2)+ln⁡|detW|{\displaystyle \ln \mathbf {L(W)} ={1 \over N}\sum _{i}^{M}\sum _{t}^{N}\ln(1-\tanh(w_{i}^{T}x_{t})^{2})+\ln |\det \mathbf {W} |} This matrixW{\displaystyle \mathbf {W} }that maximizes this function is themaximum likelihoodestimation. The early general framework for independent component analysis was introduced by Jeanny Hérault and Bernard Ans from 1984,[27]further developed by Christian Jutten in 1985 and 1986,[2][28][29]and refined by Pierre Comon in 1991,[22]and popularized in his paper of 1994.[10]In 1995, Tony Bell andTerry Sejnowskiintroduced a fast and efficient ICA algorithm based oninfomax, a principle introduced by Ralph Linsker in 1987. A link exists between maximum-likelihood estimation and Infomax approaches.[30]A quite comprehensive tutorial on the maximum-likelihood approach to ICA has been published by J-F. Cardoso in 1998.[31] There are many algorithms available in the literature which do ICA. A largely used one, including in industrial applications, is the FastICA algorithm, developed by Hyvärinen and Oja,[32]which uses thenegentropyas cost function, already proposed 7 years before by Pierre Comon in this context.[10]Other examples are rather related toblind source separationwhere a more general approach is used. For example, one can drop the independence assumption and separate mutually correlated signals, thus, statistically "dependent" signals. 
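As a concrete illustration of the maximum-likelihood formulation above, the following is a minimal sketch of plain gradient ascent on the log-likelihood with the super-Gaussian model pdf p_s = 1 − tanh(s)², assuming NumPy; the step size, initialization and iteration count are illustrative choices, and the data are assumed to be centered.

```python
import numpy as np

def ml_ica(X, n_iter=2000, eta=0.01, seed=0):
    """Gradient ascent on
        ln L(W) = (1/N) sum_t sum_i ln(1 - tanh(w_i^T x_t)^2) + ln |det W|.
    X has shape (N, M): N samples of M mixed (centered) signals."""
    N, M = X.shape
    rng = np.random.default_rng(seed)
    W = np.eye(M) + 0.1 * rng.standard_normal((M, M))
    for _ in range(n_iter):
        Y = X @ W.T                             # extracted signals, shape (N, M)
        # Data term: d/dW of the sum of ln(1 - tanh^2) is -2*tanh(y) x^T, averaged.
        grad = -2.0 * (np.tanh(Y).T @ X) / N
        # d/dW of ln|det W| is the inverse transpose of W.
        grad += np.linalg.inv(W).T
        W += eta * grad
    return W
```

In practice the mixtures would be centered (and usually whitened) first, and faster natural-gradient or FastICA-style updates are preferred; this direct form simply mirrors the log-likelihood derived in the text.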
Sepp Hochreiter and Jürgen Schmidhuber showed how to obtain non-linear ICA or source separation as a by-product of regularization (1999).[33] Their method does not require a priori knowledge about the number of independent sources. ICA can be extended to analyze non-physical signals. For instance, ICA has been applied to discover discussion topics on a bag of news list archives. Some ICA applications are listed below:[6] ICA can be applied through the following software:
https://en.wikipedia.org/wiki/Independent_components_analysis
Inneuroscience,synaptic plasticityis the ability ofsynapsestostrengthen or weakenover time, in response to increases or decreases in their activity.[1]Sincememoriesare postulated to be represented by vastly interconnectedneural circuitsin thebrain, synaptic plasticity is one of the important neurochemical foundations oflearningandmemory(seeHebbian theory). Plastic change often results from the alteration of the number ofneurotransmitter receptorslocated on a synapse.[2]There are several underlying mechanisms that cooperate to achieve synaptic plasticity, including changes in the quantity ofneurotransmittersreleased into a synapse and changes in how effectively cells respond to those neurotransmitters.[3]Synaptic plasticity in bothexcitatoryandinhibitorysynapses has been found to be dependent uponpostsynapticcalciumrelease.[2] In 1973,Terje LømoandTim Blissfirst described the now widely studied phenomenon oflong-term potentiation(LTP) in a publication in theJournal of Physiology. The experiment described was conducted on the synapse between theperforant pathanddentate gyrusin thehippocampiof anaesthetised rabbits. They were able to show a burst of tetanic (100 Hz) stimulus on perforant path fibres led to a dramatic and long-lasting augmentation in the post-synaptic response of cells onto which these fibres synapse in the dentate gyrus. In the same year, the pair published very similar data recorded from awake rabbits. This discovery was of particular interest due to the proposed role of the hippocampus in certain forms of memory. Two molecular mechanisms for synaptic plasticity involve theNMDAandAMPAglutamate receptors. Opening of NMDA channels (which relates to the level of cellulardepolarization) leads to a rise in post-synaptic Ca2+concentration and this has been linked to long-term potentiation, LTP (as well as to proteinkinaseactivation); strong depolarization of the post-synaptic cell completely displaces themagnesiumions that block NMDA ion channels and allows calcium ions to enter a cell – probably causing LTP, while weaker depolarization only partially displaces the Mg2+ions, resulting in less Ca2+entering the post-synaptic neuron and lower intracellular Ca2+concentrations (which activate protein phosphatases and inducelong-term depression, LTD).[4] These activated protein kinases serve to phosphorylate post-synaptic excitatory receptors (e.g.AMPA receptors), improving cation conduction, and thereby potentiating the synapse. Also, these signals recruit additional receptors into the post-synaptic membrane, stimulating the production of a modified receptor type, thereby facilitating an influx of calcium. This in turn increases post-synaptic excitation by a given pre-synaptic stimulus. This process can be reversed via the activity of protein phosphatases, which act to dephosphorylate these cation channels.[5] The second mechanism depends on asecond messengercascade regulatinggene transcriptionand changes in the levels of key proteins such asCaMKIIand PKAII. Activation of the second messenger pathway leads to increased levels of CaMKII and PKAII within thedendritic spine. These protein kinases have been linked to growth in dendritic spine volume and LTP processes such as the addition of AMPA receptors to theplasma membraneand phosphorylation of ion channels for enhanced permeability.[6]Localization or compartmentalization of activated proteins occurs in the presence of their given stimulus which creates local effects in the dendritic spine. 
Calcium influx from NMDA receptors is necessary for the activation of CaMKII. This activation is localized to spines with focal stimulation and is inactivated before spreading to adjacent spines or the shaft, indicating an important mechanism of LTP in that particular changes in protein activation can be localized or compartmentalized to enhance the responsivity of single dendritic spines. Individual dendritic spines are capable of forming unique responses to presynaptic cells.[7]This second mechanism can be triggered byprotein phosphorylationbut takes longer and lasts longer, providing the mechanism for long-lasting memory storage. The duration of the LTP can be regulated by breakdown of thesesecond messengers.Phosphodiesterase, for example, breaks down the secondary messengercAMP, which has been implicated in increased AMPA receptor synthesis in the post-synaptic neuron[citation needed]. Long-lasting changes in the efficacy of synaptic connections (long-term potentiation, or LTP) between two neurons can involve the making and breaking of synaptic contacts. Genes such as activin ß-A, which encodes a subunit ofactivin A, are up-regulated during early stage LTP. The activin molecule modulates the actin dynamics in dendritic spines through theMAP-kinase pathway. By changing theF-actincytoskeletalstructure of dendritic spines, spine necks are lengthened producing increased electrical isolation.[8]The end result is long-term maintenance of LTP.[9] The number ofion channelson the post-synaptic membrane affects the strength of the synapse.[10]Research suggests that the density of receptors on post-synaptic membranes changes, affecting the neuron's excitability in response to stimuli. In a dynamic process that is maintained in equilibrium,N-methyl D-aspartate receptor (NMDA receptor)and AMPA receptors are added to the membrane byexocytosisand removed byendocytosis.[11][12][13]These processes, and by extension the number of receptors on the membrane, can be altered by synaptic activity.[11][13]Experiments have shown that AMPA receptors are delivered to the synapse through vesicularmembrane fusionwith the postsynaptic membrane via the protein kinase CaMKII, which is activated by the influx of calcium through NMDA receptors. CaMKII also improves AMPA ionic conductance through phosphorylation.[14]When there is high-frequency NMDA receptor activation, there is an increase in the expression of a proteinPSD-95that increases synaptic capacity for AMPA receptors.[15]This is what leads to a long-term increase in AMPA receptors and thus synaptic strength and plasticity. If the strength of a synapse is only reinforced by stimulation or weakened by its lack, apositive feedback loopwill develop, causing some cells never to fire and some to fire too much. 
But two regulatory forms of plasticity, called scaling andmetaplasticity, also exist to providenegative feedback.[13]Synaptic scaling is a primary mechanism by which a neuron is able to stabilize firing rates up or down.[16] Synaptic scalingserves to maintain the strengths of synapses relative to each other, lowering amplitudes of smallexcitatory postsynaptic potentialsin response to continual excitation and raising them after prolonged blockage or inhibition.[13]This effect occurs gradually over hours or days, by changing the numbers ofNMDA receptorsat the synapse (Pérez-Otaño and Ehlers, 2005).Metaplasticityvaries the threshold level at which plasticity occurs, allowing integrated responses to synaptic activity spaced over time and preventing saturated states of LTP and LTD. Since LTP and LTD (long-term depression) rely on the influx ofCa2+through NMDA channels, metaplasticity may be due to changes in NMDA receptors, altered calcium buffering, altered states of kinases or phosphatases and a priming of protein synthesis machinery.[17]Synaptic scaling is a primary mechanism by which a neuron to be selective to its varying inputs.[18]The neuronal circuitry affected by LTP/LTD and modified by scaling and metaplasticity leads to reverberatory neural circuit development and regulation in a Hebbian manner which is manifested as memory, whereas the changes in neural circuitry, which begin at the level of the synapse, are an integral part in the ability of an organism to learn.[19] There is also a specificity element of biochemical interactions to create synaptic plasticity, namely the importance of location. Processes occur at microdomains – such asexocytosisof AMPA receptors is spatially regulated by thet-SNARESTX4.[20]Specificity is also an important aspect of CAMKII signaling involving nanodomain calcium.[7]The spatial gradient of PKA between dendritic spines and shafts is also important for the strength and regulation of synaptic plasticity.[6]It is important to remember that the biochemical mechanisms altering synaptic plasticity occur at the level of individual synapses of a neuron. Since the biochemical mechanisms are confined to these "microdomains," the resulting synaptic plasticity affects only the specific synapse at which it took place. A bidirectional model, describing both LTP and LTD, of synaptic plasticity has proved necessary for a number of different learning mechanisms incomputational neuroscience,neural networks, andbiophysics. Three major hypotheses for the molecular nature of this plasticity have been well-studied, and none are required to be the exclusive mechanism: Of these, the latter two hypotheses have been recently mathematically examined to have identical calcium-dependent dynamics which provides strong theoretical evidence for a calcium-based model of plasticity, which in a linear model where the total number of receptors are conserved looks like where BothΩ{\displaystyle \Omega }andτ{\displaystyle \tau }are found experimentally and agree on results from both hypotheses. The model makes important simplifications that make it unsuited for actual experimental predictions, but provides a significant basis for the hypothesis of a calcium-based synaptic plasticity dependence.[21] Short-term synaptic plasticity acts on a timescale of tens of milliseconds to a few minutes unlike long-term plasticity, which lasts from minutes to hours. Short-term plasticity can either strengthen or weaken a synapse. 
Short-term synaptic enhancement results from an increased probability of synaptic terminals releasing transmitters in response to pre-synaptic action potentials. Synapses will strengthen for a short time because of an increase in the amount of packaged transmitter released in response to each action potential.[22]Depending on the time scales over which it acts synaptic enhancement is classified asneural facilitation,synaptic augmentationorpost-tetanic potentiation. Synaptic fatigueor depression is usually attributed to the depletion of the readily releasable vesicles. Depression can also arise from post-synaptic processes and from feedback activation of presynaptic receptors.[23]heterosynapticdepression is thought to be linked to the release ofadenosine triphosphate(ATP) fromastrocytes.[24] Long-term depression(LTD) andlong-term potentiation(LTP) are two forms of long-term plasticity, lasting minutes or more, that occur at excitatory synapses.[2]NMDA-dependent LTD and LTP have been extensively researched, and are found to require the binding ofglutamate, andglycineorD-serinefor activation of NMDA receptors.[24]The turning point for the synaptic modification of a synapse has been found to be modifiable itself, depending on the history of the synapse.[25]Recently, a number of attempts have been made to offer a comprehensive model that could account for most forms of synaptic plasticity.[26] Brief activation of an excitatory pathway can produce what is known as long-term depression (LTD) of synaptic transmission in many areas of the brain. LTD is induced by a minimum level of postsynaptic depolarization and simultaneous increase in the intracellular calcium concentration at the postsynaptic neuron. LTD can be initiated at inactive synapses if the calcium concentration is raised to the minimum required level by heterosynaptic activation, or if the extracellular concentration is raised. These alternative conditions capable of causing LTD differ from the Hebb rule, and instead depend on synaptic activity modifications.D-serinerelease byastrocyteshas been found to lead to a significant reduction of LTD in the hippocampus.[24]Activity-dependent LTD was investigated in 2011 for the electrical synapses (modification of Gap Junctions efficacy through their activity).[27]In the brain, cerebellum is one of the structures where LTD is a form of neuroplasticity.[28] Long-term potentiation, commonly referred to as LTP, is an increase in synaptic response following potentiating pulses of electrical stimuli that sustains at a level above the baseline response for hours or longer. LTP involves interactions between postsynaptic neurons and the specific presynaptic inputs that form a synaptic association, and is specific to the stimulated pathway of synaptic transmission. The long-term stabilization of synaptic changes is determined by a parallel increase of pre- and postsynaptic structures such asaxonal bouton,dendritic spineandpostsynaptic density.[15]On the molecular level, an increase of the postsynaptic scaffolding proteinsPSD-95andHomer1chas been shown to correlate with the stabilization of synaptic enlargement.[15] Modification of astrocyte coverage at the synapses in the hippocampus has been found to result from theinduction of LTP, which has been found to be linked to the release ofD-serine,nitric oxide, and thechemokine,s100Bbyastrocytes.[24]LTP is also a model for studying the synaptic basis of Hebbian plasticity. 
Induction conditions resemble those described for the initiation of long-term depression (LTD), but a stronger depolarization and a greater increase of calcium are necessary to achieve LTP.[29] Experiments performed by stimulating an array of individual dendritic spines have shown that synaptic cooperativity by as few as two adjacent dendritic spines prevents LTD, allowing only LTP.[30] The modification of synaptic strength is referred to as functional plasticity. Changes in synaptic strength involve distinct mechanisms of particular types of glial cells, the most researched type being astrocytes.[24] Every kind of synaptic plasticity has different computational uses.[31] Short-term facilitation has been demonstrated to serve as both working memory and mapping input for readout, and short-term depression as a means of removing auto-correlation. Long-term potentiation is used for spatial memory storage, while long-term depression serves to encode space features, selectively weaken synapses and clear old memory traces. Forward spike-timing-dependent plasticity is used for long-range temporal correlation, temporal coding and spatiotemporal coding, while reversed spike-timing-dependent plasticity acts as sensory filtering.
https://en.wikipedia.org/wiki/Synaptic_plasticity
Instatistics,Procrustes analysisis a form ofstatistical shape analysisused to analyse the distribution of a set ofshapes. The nameProcrustes(Greek:Προκρούστης) refers to a bandit from Greek mythology who made his victims fit his bed either by stretching their limbs or cutting them off. In mathematics: When a shape is compared to another, or a set of shapes is compared to an arbitrarily selected reference shape, Procrustes analysis is sometimes further qualified asclassicalorordinary, as opposed togeneralized Procrustes analysis(GPA), which compares three or more shapes to an optimally determined "mean shape". To compare the shapes of two or more objects, the objects must be first optimally "superimposed".Procrustes superimposition(PS) is performed by optimallytranslating,rotatinganduniformly scalingthe objects. In other words, both theplacement in spaceand the size of the objects are freely adjusted. The aim is to obtain a similar placement and size, by minimizing a measure of shape difference called the Procrustes distance between the objects. This is sometimes calledfull, as opposed topartialPS, in which scaling is not performed (i.e. the size of the objects is preserved). Notice that, after full PS, the objects will exactly coincide if theirshapeis identical. For instance, with full PS two spheres with different radii will always coincide, because they have exactly the same shape. Conversely, with partial PS they will never coincide. This implies that, by the strict definition of the termshapeingeometry, shape analysis should be performed using full PS. A statistical analysis based on partial PS is not a pure shape analysis as it is not only sensitive to shape differences, but also to size differences. Both full and partial PS will never manage to perfectly match two objects with different shape, such as a cube and a sphere, or a right hand and a left hand. In some cases, both full and partial PS may also includereflection. Reflection allows, for instance, a successful (possibly perfect) superimposition of a right hand to a left hand. Thus, partial PS with reflection enabled preserves size but allows translation, rotation and reflection, while full PS with reflection enabled allows translation, rotation, scaling and reflection. Optimal translation and scaling are determined with much simpler operations (see below). Here we just consider objects made up from a finite numberkof points inndimensions. Often, these points are selected on the continuous surface of complex objects, such as a human bone, and in this case they are calledlandmark points. The shape of an object can be considered as a member of anequivalence classformed by removing thetranslational,rotationalanduniform scalingcomponents. For example, translational components can be removed from an object by translating the object so that themeanof all the object's points (i.e. itscentroid) lies at the origin. Mathematically: takek{\displaystyle k}points in two dimensions, say The mean of these points is(x¯,y¯){\displaystyle ({\bar {x}},{\bar {y}})}where Now translate these points so that their mean is translated to the origin(x,y)→(x−x¯,y−y¯){\displaystyle (x,y)\to (x-{\bar {x}},y-{\bar {y}})}, giving the point(x1−x¯,y1−y¯),…{\displaystyle (x_{1}-{\bar {x}},y_{1}-{\bar {y}}),\dots }. Likewise, the scale component can be removed by scaling the object so that theroot mean squaredistance (RMSD) from the points to the translated origin is 1. 
This RMSD is a statistical measure of the object'sscaleorsize: The scale becomes 1 when the point coordinates are divided by the object's initial scale: Notice that other methods for defining and removing the scale are sometimes used in the literature. Removing the rotational component is more complex, as a standard reference orientation is not always available. Consider two objects composed of the same number of points with scale and translation removed. Let the points of these be((x1,y1),…){\displaystyle ((x_{1},y_{1}),\ldots )},((w1,z1),…){\displaystyle ((w_{1},z_{1}),\ldots )}. One of these objects can be used to provide a reference orientation. Fix the reference object and rotate the other around the origin, until you find an optimum angle of rotationθ{\displaystyle \theta \,\!}such that the sum of the squared distances (SSD) between the corresponding points is minimised (an example ofleast squarestechnique). A rotation by angleθ{\displaystyle \theta \,\!}gives where (u,v) are the coordinates of a rotated point. Taking the derivative of(u1−x1)2+(v1−y1)2+⋯{\displaystyle (u_{1}-x_{1})^{2}+(v_{1}-y_{1})^{2}+\cdots }with respect toθ{\displaystyle \theta }and solving forθ{\displaystyle \theta }when the derivative is zero gives When the object is three-dimensional, the optimum rotation is represented by a 3-by-3rotation matrixR, rather than a simple angle, and in this casesingular value decompositioncan be used to find the optimum value forR(see the solution for the constrainedorthogonal Procrustes problem, subject todet(R) = 1). The difference between the shape of two objects can be evaluated only after "superimposing" the two objects by translating, scaling and optimally rotating them as explained above. Thesquare rootof the above mentioned SSD between corresponding points can be used as a statistical measure of this difference in shape: This measure is often calledProcrustes distance. Notice that other more complex definitions of Procrustes distance, and other measures of "shape difference" are sometimes used in the literature. We showed how to superimpose two shapes. The same method can be applied to superimpose a set of three or more shapes, as far as the above mentioned reference orientation is used for all of them. However, Generalized Procrustes analysis provides a better method to achieve this goal. GPA applies the Procrustes analysis method to optimally superimpose a set of objects, instead of superimposing them to an arbitrarily selected shape. Generalized and ordinary Procrustes analysis differ only in their determination of a referenceorientationfor the objects, which in the former technique is optimally determined, and in the latter one is arbitrarily selected. Scaling and translation are performed the same way by both techniques. When only two shapes are compared, GPA is equivalent to ordinary Procrustes analysis. The algorithm outline is the following: There are many ways of representing the shape of an object. The shape of an object can be considered as a member of an equivalence class formed by taking the set of all sets ofkpoints inndimensions, that isRknand factoring out the set of all translations, rotations and scalings. A particular representation of shape is found by choosing a particular representation of the equivalence class. This will give amanifoldof dimensionkn-4. Procrustes is one method of doing this with particular statistical justification. Bookstein obtains a representation of shape by fixing the position of two points called the bases line. 
One point will be fixed at the origin and the other at (1,0); the remaining points form the Bookstein coordinates. It is also common to consider shape and scale, that is, with only the translational and rotational components removed. Shape analysis is used in biological data to identify the variations of anatomical features characterised by landmark data, for example in considering the shape of jaw bones.[1] One study by David George Kendall examined the triangles formed by standing stones to deduce whether these were often arranged in straight lines. The shape of a triangle can be represented as a point on the sphere, and the distribution of all shapes can be thought of as a distribution over the sphere. The sample distribution from the standing stones was compared with the theoretical distribution to show that the occurrence of straight lines was no more than average.[2]
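The steps described above (removing translation, removing scale, then finding the optimal rotation) can be collected into a short sketch of ordinary full Procrustes superimposition, assuming NumPy; the two triangle configurations at the end are illustrative.

```python
import numpy as np

def procrustes_distance(X, Y):
    """X, Y: (k, n) arrays of k corresponding landmark points in n dimensions."""
    # Remove translation: move each centroid to the origin.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Remove scale: make the RMS distance of the points from the origin equal 1.
    Xc = Xc / np.sqrt((Xc ** 2).sum() / len(Xc))
    Yc = Yc / np.sqrt((Yc ** 2).sum() / len(Yc))
    # Optimal rotation of Yc onto Xc (orthogonal Procrustes problem, solved by SVD).
    # Note: this permits reflection; constraining det(R) = +1 would forbid it.
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt
    Y_fit = Yc @ R
    # Procrustes distance: square root of the remaining sum of squared differences.
    return np.sqrt(((Xc - Y_fit) ** 2).sum())

triangle_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
triangle_b = 2.0 * triangle_a + np.array([1.0, 1.0])   # same shape, moved and scaled
print(procrustes_distance(triangle_a, triangle_b))      # ~0, identical shapes
```

SciPy ships a comparable routine, scipy.spatial.procrustes, which standardizes two configurations and reports a closely related disparity measure.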
https://en.wikipedia.org/wiki/Procrustes_analysis
Instatistics,Deming regression, named afterW. Edwards Deming, is anerrors-in-variables modelthat tries to find theline of best fitfor a two-dimensional data set. It differs from thesimple linear regressionin that it accounts forerrorsin observations on both thex- and they- axis. It is a special case oftotal least squares, which allows for any number of predictors and a more complicated error structure. Deming regression is equivalent to themaximum likelihoodestimation of anerrors-in-variables modelin which the errors for the two variables are assumed to be independent andnormally distributed, and the ratio of their variances, denotedδ, is known.[1]In practice, this ratio might be estimated from related data-sources; however the regression procedure takes no account for possible errors in estimating this ratio. The Deming regression is only slightly more difficult to compute than thesimple linear regression. Most statistical software packages used in clinical chemistry offer Deming regression. The model was originally introduced byAdcock (1878)who considered the caseδ= 1, and then more generally byKummell (1879)with arbitraryδ. However their ideas remained largely unnoticed for more than 50 years, until they were revived byKoopmans (1936)and later propagated even more byDeming (1943). The latter book became so popular inclinical chemistryand related fields that the method was even dubbedDeming regressionin those fields.[2] Assume that the available data (yi,xi) are measured observations of the "true" values (yi*,xi*), which lie on the regression line: where errorsεandηare independent and the ratio of their variances is assumed to be known: In practice, the variances of thex{\displaystyle x}andy{\displaystyle y}parameters are often unknown, which complicates the estimate ofδ{\displaystyle \delta }. Note that when the measurement method forx{\displaystyle x}andy{\displaystyle y}is the same, these variances are likely to be equal, soδ=1{\displaystyle \delta =1}for this case. We seek to find the line of "best fit" such that the weighted sum of squared residuals of the model is minimized:[3] SeeJensen (2007)for a full derivation. The solution can be expressed in terms of the second-degree sample moments. That is, we first calculate the following quantities (all sums go fromi= 1 ton): Finally, the least-squares estimates of model's parameters will be[4] For the case of equal error variances, i.e., whenδ=1{\displaystyle \delta =1}, Deming regression becomesorthogonal regression: it minimizes the sum of squaredperpendicular distances from the data points to the regression line. In this case, denote each observation as a pointzj=xj+iyj{\displaystyle z_{j}=x_{j}+iy_{j}}in the complex plane (i.e., the point(xj,yj){\displaystyle (x_{j},y_{j})}wherei{\displaystyle i}is theimaginary unit). Denote asS=∑(zj−z¯)2{\displaystyle S=\sum {(z_{j}-{\overline {z}})^{2}}}the sum of the squared differences of the data points from thecentroidz¯=1n∑zj{\displaystyle {\overline {z}}={\tfrac {1}{n}}\sum z_{j}}(also denoted in complex coordinates), which is the point whose horizontal and vertical locations are the averages of those of the data points. Then:[5] Atrigonometricrepresentation of the orthogonal regression line was given by Coolidge in 1913.[6] In the case of threenon-collinearpoints in the plane, thetrianglewith these points as itsverticeshas a uniqueSteiner inellipsethat is tangent to the triangle's sides at their midpoints. 
The major axis of this ellipse falls on the orthogonal regression line for the three vertices.[7] A biological cell's intrinsic cellular noise can be quantified by applying Deming regression to the observed behavior of a two-reporter synthetic biological circuit.[8] When humans are asked to draw a linear regression on a scatterplot by guessing, their answers are closer to orthogonal regression than to ordinary least squares regression.[9] The York regression extends Deming regression by allowing correlated errors in x and y.[10]
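The moment-based solution sketched above can be written out in a few lines, assuming NumPy and the commonly quoted closed form for the slope; here δ is taken as the ratio of the y-error variance to the x-error variance (so δ → ∞ recovers ordinary least squares of y on x, and δ = 1 gives orthogonal regression), and dividing the moments by n − 1 is a convention.

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming regression slope and intercept from second-degree sample moments.
    delta: assumed known ratio of y-error variance to x-error variance."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    n = len(x)
    sxx = ((x - xbar) ** 2).sum() / (n - 1)
    syy = ((y - ybar) ** 2).sum() / (n - 1)
    sxy = ((x - xbar) * (y - ybar)).sum() / (n - 1)
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = ybar - slope * xbar
    return slope, intercept

# Illustrative use: points near y = 2x with noise on both axes.
rng = np.random.default_rng(0)
x_true = np.linspace(0, 10, 40)
x_obs = x_true + rng.normal(scale=0.3, size=x_true.size)
y_obs = 2 * x_true + rng.normal(scale=0.3, size=x_true.size)
print(deming_fit(x_obs, y_obs, delta=1.0))   # slope near 2
```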
https://en.wikipedia.org/wiki/Deming_regression
In statistical data analysis the total sum of squares (TSS or SST) is a quantity that appears as part of a standard way of presenting results of such analyses. For a set of observations y_i, i ≤ n, it is defined as the sum over all squared differences between the observations and their overall mean ȳ:[1] {\displaystyle {\text{TSS}}=\sum _{i=1}^{n}\left(y_{i}-{\bar {y}}\right)^{2}.} For wide classes of linear models, the total sum of squares equals the explained sum of squares plus the residual sum of squares. For proof of this in the multivariate OLS case, see partitioning in the general OLS model. In analysis of variance (ANOVA) the total sum of squares is the sum of the so-called "within-samples" sum of squares and the "between-samples" sum of squares, i.e., a partitioning of the sum of squares. In multivariate analysis of variance (MANOVA) the following equation applies:[2] {\displaystyle \mathbf {T} =\mathbf {W} +\mathbf {B} ,} where T is the total sum of squares and products (SSP) matrix, W is the within-samples SSP matrix and B is the between-samples SSP matrix. Similar terminology may also be used in linear discriminant analysis, where W and B are respectively referred to as the within-groups and between-groups SSP matrices.[2]
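The partition of the total sum of squares into explained and residual parts can be checked numerically; the following is a minimal sketch for a simple linear regression with an intercept, assuming NumPy, with illustrative data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.7 * x + rng.normal(scale=1.0, size=x.size)

# Ordinary least-squares fit of a line with intercept.
slope, intercept = np.polyfit(x, y, 1)
y_hat = intercept + slope * x

tss = ((y - y.mean()) ** 2).sum()      # total sum of squares
ess = ((y_hat - y.mean()) ** 2).sum()  # explained sum of squares
rss = ((y - y_hat) ** 2).sum()         # residual sum of squares

print(np.isclose(tss, ess + rss))      # True: TSS = ESS + RSS for this model
```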
https://en.wikipedia.org/wiki/Total_sum_of_squares
Insignal processing,multitaperanalysis is aspectral density estimationtechnique developed byDavid J. Thomson.[1][2]It canestimatethepower spectrumSXof astationaryergodicfinite-variancerandom processX, given a finite contiguousrealizationofXas data. The multitaper method overcomes some of the limitations of non-parametricFourier analysis. When applying theFourier transformto extract spectral information from a signal, we assume that each Fourier coefficient is a reliable representation of the amplitude and relative phase of the corresponding component frequency. This assumption, however, is not generally valid for empirical data. For instance, a single trial represents only one noisy realization of the underlying process of interest. A comparable situation arises in statistics when estimating measures ofcentral tendencyi.e., it is bad practice to estimate qualities of a population using individuals or very small samples. Likewise, a single sample of a process does not necessarily provide a reliable estimate of its spectral properties. Moreover, the naivepower spectral densityobtained from the signal's raw Fourier transform is abiasedestimate of the true spectral content. These problems are often overcome by averaging over many realizations of the same event after applying ataperto each trial. However, this method is unreliable with small data sets and undesirable when one does not wish to attenuate signal components that vary across trials. Furthermore, even when many trials are available the untaperedperiodogramis generally biased (with the exception of white noise) and the bias depends upon the length of each realization, not the number of realizations recorded. Applying a single taper reduces bias but at the cost of increased estimator variance due to attenuation of activity at the start and end of each recorded segment of the signal. The multitaper method partially obviates these problems by obtaining multiple independent estimates from the same sample. Eachdata taperis multiplied element-wise by the signal to provide a windowed trial from which one estimates the power at each component frequency. As each taper is pairwise orthogonal to all other tapers, the window functions are uncorrelated with one another. The final spectrum is obtained by averaging over all the tapered spectra thus recovering some of the information that is lost due to partial attenuation of the signal that results from applying individual tapers. This method is especially useful when a small number of trials is available as it reduces the estimator variance beyond what is possible with single taper methods. Moreover, even when many trials are available the multitaper approach is useful as it permits more rigorous control of the trade-off between bias and variance than what is possible in the single taper case. Thomson chose the Slepian functions[4]or discrete prolate spheroidal sequences as tapers since these vectors are mutually orthogonal and possess desirablespectral concentrationproperties (see the section on Slepian sequences). In practice, aweighted averageis often used to compensate for increased energy loss at higher order tapers.[5] Consider a p-dimensional zero meanstationary stochastic process HereTdenotes the matrix transposition. Inneurophysiologyfor example,prefers to the total number of channels and henceX(t){\displaystyle \mathbf {X} (t)}can represent simultaneous measurement of electrical activity of thosepchannels. 
Let the sampling interval between observations beΔt{\displaystyle \Delta t}, so that theNyquist frequencyisfN=1/(2Δt){\displaystyle f_{N}=1/(2\Delta t)}. The multitaper spectral estimator utilizes several different data tapers which are orthogonal to each other. The multitaper cross-spectral estimator between channellandmis the average of K direct cross-spectral estimators between the same pair of channels (landm) and hence takes the form Here,S^klm(f){\displaystyle {\hat {S}}_{k}^{lm}(f)}(for0≤k≤K−1{\displaystyle 0\leq k\leq K-1}) is thekthdirect cross spectral estimator between channellandmand is given by where The sequence{ht,k}{\displaystyle \lbrace h_{t,k}\rbrace }is the data taper for thekthdirect cross-spectral estimatorS^klm(f){\displaystyle {\hat {S}}_{k}^{lm}(f)}and is chosen as follows: We choose a set ofKorthogonal data tapers such that each one provides a good protection against leakage. These are given by the Slepian sequences,[6]afterDavid Slepian(also known in literature as discrete prolate spheroidal sequences or DPSS for short) with parameterWand ordersk= 0 toK− 1. The maximum orderKis chosen to be less than theShannon number2NWΔt{\displaystyle 2NW\Delta t}. The quantity 2Wdefines the resolution bandwidth for thespectral concentration problemandW∈(0,fN){\displaystyle W\in (0,f_{N})}. Whenl=m, we get the multitaper estimator for the auto-spectrum of thelthchannel. In recent years, a dictionary based on modulated DPSS was proposed as an overcomplete alternative to DPSS.[7] See alsoWindow function:DPSS or Slepian window Not limited to time series, the multitaper method is easily extensible to multiple Cartesian dimensions using custom Slepian functions,[8]and can be reformulated for spectral estimation on the sphere using Slepian functions constructed fromspherical harmonics[9]for applications ingeophysicsandcosmology[10][11]among others. An extensive treatment about the application of this method to analyze multi-trial, multi-channel data generated inneuroscience,biomedical engineeringand elsewhere can be foundhere. This technique is currently used in thespectral analysistoolkit ofChronux.
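As a concrete single-channel illustration of the estimator above, the following sketch forms K direct spectral estimates with Slepian (DPSS) tapers and averages them, assuming NumPy and SciPy; the test signal, the time-bandwidth product NW and the number of tapers K are illustrative choices, and a simple unweighted average is used rather than Thomson's adaptive weighting.

```python
import numpy as np
from scipy.signal.windows import dpss

fs = 200.0                        # sampling rate, so dt = 1/fs
N = 1024
t = np.arange(N) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 35.0 * t) + 0.5 * rng.standard_normal(N)

NW = 4.0                          # standardized time-bandwidth product
K = 7                             # number of tapers, kept below the Shannon number 2*NW
tapers = dpss(N, NW, Kmax=K)      # shape (K, N): mutually orthogonal Slepian tapers

# K direct spectral estimates, one per tapered copy of the data, then average.
freqs = np.fft.rfftfreq(N, d=1.0 / fs)
Sk = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2 / fs
S_multitaper = Sk.mean(axis=0)

print(freqs[np.argmax(S_multitaper)])   # peak near 35 Hz
```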
https://en.wikipedia.org/wiki/Multitaper
Theshort-time Fourier transform(STFT) is aFourier-related transformused to determine the sinusoidal frequency and phase content of local sections of a signal as it changes over time.[1]In practice, the procedure for computing STFTs is to divide a longer time signal into shorter segments of equal length and then compute the Fourier transform separately on each shorter segment. This reveals the Fourier spectrum on each shorter segment. One then usually plots the changing spectra as a function of time, known as aspectrogramorwaterfall plot, such as commonly used insoftware defined radio(SDR) based spectrum displays. Full bandwidth displays covering the whole range of an SDR commonly use fast Fourier transforms (FFTs) with 2^24 points on desktop computers.[citation needed] Simply, in the continuous-time case, the function to be transformed is multiplied by awindow functionwhich is nonzero for only a short period of time. TheFourier transform(a one-dimensional function) of the resulting signal is taken, then the window is slid along the time axis until the end resulting in a two-dimensional representation of the signal. Mathematically, this is written as: wherew(τ){\displaystyle w(\tau )}is thewindow function, commonly aHann windoworGaussian windowcentered around zero, andx(t){\displaystyle x(t)}is the signal to be transformed (note the difference between the window functionw{\displaystyle w}and the frequencyω{\displaystyle \omega }).X(τ,ω){\displaystyle X(\tau ,\omega )}is essentially the Fourier transform ofx(t)w(t−τ){\displaystyle x(t)w(t-\tau )}, acomplex functionrepresenting the phase and magnitude of the signal over time and frequency. Oftenphase unwrappingis employed along either or both the time axis,τ{\displaystyle \tau }, and frequency axis,ω{\displaystyle \omega }, to suppress anyjump discontinuityof the phase result of the STFT. The time indexτ{\displaystyle \tau }is normally considered to be "slow" time and usually not expressed in as high resolution as timet{\displaystyle t}. Given that the STFT is essentially a Fourier transform times a window function, the STFT is also called windowed Fourier transform or time-dependent Fourier transform. In the discrete time case, the data to be transformed could be broken up into chunks or frames (which usually overlap each other, to reduce artifacts at the boundary). Each chunk isFourier transformed, and the complex result is added to a matrix, which records magnitude and phase for each point in time and frequency. This can be expressed as: likewise, with signalx[n]{\displaystyle x[n]}and windoww[n]{\displaystyle w[n]}. In this case,mis discrete and ω is continuous, but in most typical applications the STFT is performed on a computer using thefast Fourier transform, so both variables are discrete andquantized. Themagnitudesquared of the STFT yields thespectrogramrepresentation of the power spectral density of the function: See also themodified discrete cosine transform(MDCT), which is also a Fourier-related transform that uses overlapping windows. If only a small number of ω are desired, or if the STFT is desired to be evaluated for every shiftmof the window, then the STFT may be more efficiently evaluated using asliding DFTalgorithm.[2] The STFT isinvertible, that is, the original signal can be recovered from the transform by the inverse STFT. The most widely accepted way of inverting the STFT is by using theoverlap-add (OLA) method, which also allows for modifications to the STFT complex spectrum. 
This makes for a versatile signal processing method,[3]referred to as theoverlap and add with modificationsmethod. Given the width and definition of the window functionw(t), we initially require the area of the window function to be scaled so that It easily follows that and The continuous Fourier transform is Substitutingx(t) from above: Swapping order of integration: So the Fourier transform can be seen as a sort of phase coherent sum of all of the STFTs ofx(t). Since the inverse Fourier transform is thenx(t) can be recovered fromX(τ,ω) as or It can be seen, comparing to above that windowed "grain" or "wavelet" ofx(t) is the inverse Fourier transform ofX(τ,ω) for τ fixed. An alternative definition that is valid only in the vicinity of τ, the inverse transform is: In general, the window functionw(t){\displaystyle w(t)}has the following properties: One of the pitfalls of the STFT is that it has a fixed resolution. The width of the windowing function relates to how the signal is represented—it determines whether there is good frequency resolution (frequency components close together can be separated) or good time resolution (the time at which frequencies change). A wide window gives better frequency resolution but poor time resolution. A narrower window gives good time resolution but poor frequency resolution. These are called narrowband and wideband transforms, respectively. This is one of the reasons for the creation of thewavelet transformandmultiresolution analysis, which can give good time resolution for high-frequency events and good frequency resolution for low-frequency events, the combination best suited for many real signals. This property is related to theHeisenberguncertainty principle, but not directly – seeGabor limitfor discussion. The product of the standard deviation in time and frequency is limited. The boundary of the uncertainty principle (best simultaneous resolution of both) is reached with a Gaussian window function (or mask function), as the Gaussian minimizes theFourier uncertainty principle. This is called theGabor transform(and with modifications for multiresolution becomes theMorlet wavelettransform). One can consider the STFT for varying window size as a two-dimensional domain (time and frequency), as illustrated in the example below, which can be calculated by varying the window size. However, this is no longer a strictly time-frequency representation – the kernel is not constant over the entire signal. When the original function is: We can have a simple example: w(t) = 1 for |t| smaller than or equal B w(t) = 0 otherwise B = window Now the original function of the Short-time Fourier transform can be changed as Another example: Using the following sample signalx(t){\displaystyle x(t)}that is composed of a set of four sinusoidal waveforms joined together in sequence. Each waveform is only composed of one of four frequencies (10, 25, 50, 100Hz). The definition ofx(t){\displaystyle x(t)}is: Then it is sampled at 400 Hz. The following spectrograms were produced: The 25 ms window allows us to identify a precise time at which the signals change but the precise frequencies are difficult to identify. At the other end of the scale, the 1000 ms window allows the frequencies to be precisely seen but the time between frequency changes is blurred. Other examples: Normally we callexp(σ−t2){\displaystyle exp(\sigma -t^{2})}aGaussian functionor Gabor function. When we use it, the short-time Fourier transform is called the "Gabor transform". 
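The time-frequency resolution trade-off described above can be reproduced numerically. The sketch below (an illustration, not the article's figures) builds a signal from the four frequencies 10, 25, 50 and 100 Hz sampled at 400 Hz, assuming 5-second segments for each tone, and compares a 25 ms window against a 1000 ms window using SciPy's spectrogram.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 400                                    # sampling rate used in the example
t = np.arange(0, 5, 1 / fs)                 # assumed 5 s per tone
freqs = [10, 25, 50, 100]
x = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

# Short window: good time resolution, blurred frequencies.
f_s, t_s, S_short = spectrogram(x, fs, window='hann', nperseg=25 * fs // 1000)   # 25 ms
# Long window: sharp frequencies, blurred transition times.
f_l, t_l, S_long = spectrogram(x, fs, window='hann', nperseg=1 * fs)             # 1000 ms

print(S_short.shape, S_long.shape)
```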
It can also be explained with reference to the sampling andNyquist frequency. Take a window ofNsamples from an arbitrary real-valued signal at sampling ratefs. Taking the Fourier transform producesNcomplex coefficients. Of these coefficients only half are useful (the lastN/2being thecomplex conjugateof the firstN/2in reverse order, as this is a real valued signal). TheseN/2coefficients represent the frequencies 0 tofs/2 (Nyquist) and two consecutive coefficients are spaced apart byfs/NHz. To increase the frequency resolution of the window the frequency spacing of the coefficients needs to be reduced. There are only two variables, but decreasingfs(and keepingNconstant) will cause the window size to increase — since there are now fewer samples per unit time. The other alternative is to increaseN, but this again causes the window size to increase. So any attempt to increase the frequency resolution causes a larger window size and therefore a reduction in time resolution—and vice versa. As theNyquist frequencyis a limitation on the maximum frequency that can be meaningfully analysed, so is the Rayleigh frequency a limitation on the minimum frequency. The Rayleigh frequency is the minimum frequency that can be resolved by a finite duration time window.[4][5] Given a time window that is Τ seconds long, the minimum frequency that can be resolved is 1/Τ Hz. The Rayleigh frequency is an important consideration in applications of the short-time Fourier transform (STFT), as well as any other method of harmonic analysis on a signal of finite record-length.[6][7] STFTs as well as standard Fourier transforms and other tools are frequently used to analyze music. Thespectrogramcan, for example, show frequency on the horizontal axis, with the lowest frequencies at left, and the highest at the right. The height of each bar (augmented by color) represents theamplitudeof the frequencies within that band. The depth dimension represents time, where each new bar was a separate distinct transform. Audio engineers use this kind of visual to gain information about an audio sample, for example, to locate the frequencies of specific noises (especially when used with greater frequency resolution) or to find frequencies which may be more or less resonant in the space where the signal was recorded. This information can be used forequalizationor tuning other audio effects. Original function Converting into the discrete form: Suppose that Then we can write the original function in discrete form, subject to the following constraints: a.ΔtΔf=1N{\displaystyle \Delta _{t}\Delta _{f}={\tfrac {1}{N}}}, whereN{\displaystyle N}is an integer; b.N≥2Q+1{\displaystyle N\geq 2Q+1}; c. the Nyquist criterion (avoiding the aliasing effect); d. only for implementing therectangular-STFT. The rectangular window imposes the constraint Substitution gives: Change of variablen-1forn: CalculateX(minnΔt,mΔf){\displaystyle X(\min {n}\Delta _{t},m\Delta _{f})}by theN-point FFT: where Applying the recursive formula to calculateX(nΔt,mΔf){\displaystyle X(n\Delta _{t},m\Delta _{f})} so Other time-frequency transforms:
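A small numeric check of the two limits discussed above (an illustration with arbitrarily chosen fs and N): the DFT bin spacing fs/N and the Rayleigh frequency 1/T coincide for a window of duration T = N/fs.

```python
fs = 1000.0          # sampling rate in Hz (arbitrary for the illustration)
N = 500              # window length in samples
T = N / fs           # window duration in seconds

bin_spacing = fs / N          # spacing of the useful DFT coefficients
rayleigh = 1.0 / T            # minimum resolvable frequency for a T-second window

print(bin_spacing, rayleigh)  # both 2.0 Hz for this choice of fs and N
```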
https://en.wikipedia.org/wiki/Short-time_Fourier_transform
Instatistical signal processing, the goal ofspectral density estimation(SDE) or simplyspectral estimationis toestimatethespectral density(also known as thepower spectral density) of a signal from a sequence of time samples of the signal.[1]Intuitively speaking, the spectral density characterizes thefrequencycontent of the signal. One purpose of estimating the spectral density is to detect anyperiodicitiesin the data, by observing peaks at the frequencies corresponding to these periodicities. Some SDE techniques assume that a signal is composed of a limited (usually small) number of generating frequencies plus noise and seek to find the location and intensity of the generated frequencies. Others make no assumption on the number of components and seek to estimate the whole generating spectrum. Spectrum analysis, also referred to asfrequency domainanalysis or spectral density estimation, is the technical process of decomposing a complex signal into simpler parts. As described above, many physical processes are best described as a sum of many individual frequency components. Any process that quantifies the various amounts (e.g. amplitudes, powers, intensities) versus frequency (orphase) can be calledspectrum analysis. Spectrum analysis can be performed on the entire signal. Alternatively, a signal can be broken into short segments (sometimes calledframes), and spectrum analysis may be applied to these individual segments.Periodic functions(such assin⁡(t){\displaystyle \sin(t)}) are particularly well-suited for this sub-division. General mathematical techniques for analyzing non-periodic functions fall into the category ofFourier analysis. TheFourier transformof a function produces a frequency spectrum which contains all of the information about the original signal, but in a different form. This means that the original function can be completely reconstructed (synthesized) by aninverse Fourier transform. For perfect reconstruction, the spectrum analyzer must preserve both theamplitudeandphaseof each frequency component. These two pieces of information can be represented as a 2-dimensional vector, as acomplex number, or as magnitude (amplitude) and phase inpolar coordinates(i.e., as aphasor). A common technique in signal processing is to consider the squared amplitude, orpower; in this case the resulting plot is referred to as apower spectrum. Because of reversibility, the Fourier transform is called arepresentationof the function, in terms of frequency instead of time; thus, it is afrequency domainrepresentation. Linear operations that could be performed in the time domain have counterparts that can often be performed more easily in the frequency domain. Frequency analysis also simplifies the understanding and interpretation of the effects of various time-domain operations, both linear and non-linear. For instance, onlynon-linearortime-variantoperations can create new frequencies in the frequency spectrum. In practice, nearly all software and electronic devices that generate frequency spectra utilize adiscrete Fourier transform(DFT), which operates onsamplesof the signal, and which provides a mathematical approximation to the full integral solution. The DFT is almost invariably implemented by an efficient algorithm calledfast Fourier transform(FFT). The array of squared-magnitude components of a DFT is a type of power spectrum calledperiodogram, which is widely used for examining the frequency characteristics of noise-free functions such asfilter impulse responsesandwindow functions. 
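As a concrete illustration of the periodogram mentioned above, here is a minimal NumPy sketch (the scaling convention and test signal are illustrative choices, not taken from the article): the array of squared DFT magnitudes, normalized so that it approximates a power spectral density.

```python
import numpy as np

def periodogram(x, fs):
    """Squared-magnitude DFT, scaled as a power spectral density estimate."""
    N = len(x)
    X = np.fft.rfft(x)
    psd = (np.abs(X) ** 2) / (fs * N)
    psd[1:-1] *= 2                     # fold in negative frequencies (real input)
    return np.fft.rfftfreq(N, 1 / fs), psd

fs = 1000.0
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 123 * t) + np.random.randn(t.size)
f, P = periodogram(x, fs)
print(f[np.argmax(P)])                 # peak near 123 Hz
```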
But the periodogram does not provide processing-gain when applied to noiselike signals or even sinusoids at low signal-to-noise ratios[why?]. In other words, the variance of its spectral estimate at a given frequency does not decrease as the number of samples used in the computation increases. This can be mitigated by averaging over time (Welch's method[2])  or over frequency (smoothing). Welch's method is widely used for spectral density estimation (SDE). However, periodogram-based techniques introduce small biases that are unacceptable in some applications. So other alternatives are presented in the next section. Many other techniques for spectral estimation have been developed to mitigate the disadvantages of the basic periodogram. These techniques can generally be divided intonon-parametric,parametric,and more recentlysemi-parametric(also called sparse) methods.[3]The non-parametric approaches explicitly estimate thecovarianceor the spectrum of the process without assuming that the process has any particular structure. Some of the most common estimators in use for basic applications (e.g.Welch's method) are non-parametric estimators closely related to the periodogram. By contrast, the parametric approaches assume that the underlyingstationary stochastic processhas a certain structure that can be described using a small number of parameters (for example, using anauto-regressive or moving-average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. When using the semi-parametric methods, the underlying process is modeled using a non-parametric framework, with the additional assumption that the number of non-zero components of the model is small (i.e., the model is sparse). Similar approaches may also be used for missing data recovery[4]as well assignal reconstruction. Following is a partial list of spectral density estimation techniques: In parametric spectral estimation, one assumes that the signal is modeled by astationary processwhich has a spectral density function (SDF)S(f;a1,…,ap){\displaystyle S(f;a_{1},\ldots ,a_{p})}that is a function of the frequencyf{\displaystyle f}andp{\displaystyle p}parametersa1,…,ap{\displaystyle a_{1},\ldots ,a_{p}}.[8]The estimation problem then becomes one of estimating these parameters. The most common form of parametric SDF estimate uses as a model anautoregressive modelAR(p){\displaystyle {\text{AR}}(p)}of orderp{\displaystyle p}.[8]: 392A signal sequence{Yt}{\displaystyle \{Y_{t}\}}obeying a zero meanAR(p){\displaystyle {\text{AR}}(p)}process satisfies the equation where theϕ1,…,ϕp{\displaystyle \phi _{1},\ldots ,\phi _{p}}are fixed coefficients andϵt{\displaystyle \epsilon _{t}}is a white noise process with zero mean andinnovation varianceσp2{\displaystyle \sigma _{p}^{2}}. The SDF for this process is withΔt{\displaystyle \Delta t}the sampling time interval andfN{\displaystyle f_{N}}theNyquist frequency. There are a number of approaches to estimating the parametersϕ1,…,ϕp,σp2{\displaystyle \phi _{1},\ldots ,\phi _{p},\sigma _{p}^{2}}of theAR(p){\displaystyle {\text{AR}}(p)}process and thus the spectral density:[8]: 452-453 Alternative parametric methods include fitting to amoving-average model(MA) and to a fullautoregressive moving-average model(ARMA). 
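The following sketch contrasts the two families described above on the same data: Welch's non-parametric averaging of segment periodograms, and a parametric AR(p) fit via the Yule–Walker equations whose SDF follows the formula quoted in the text. The AR order, segment length, and test signal are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz, solve
from scipy.signal import welch

fs = 1000.0
rng = np.random.default_rng(0)
t = np.arange(8192) / fs
x = np.sin(2 * np.pi * 100 * t) + rng.standard_normal(t.size)

# Non-parametric: Welch's method averages periodograms of overlapping segments,
# trading frequency resolution for reduced variance.
f_w, P_w = welch(x, fs, nperseg=1024)

# Parametric: fit an AR(p) model by the Yule-Walker equations and evaluate
# S(f) = sigma_p^2 * dt / |1 - sum_k phi_k exp(-i 2 pi f k dt)|^2.
def ar_yule_walker_psd(x, p, fs, nfreq=512):
    x = x - x.mean()
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(p + 1)]) / len(x)
    phi = solve(toeplitz(r[:p]), r[1:p + 1])        # AR coefficients phi_1..phi_p
    sigma2 = r[0] - phi @ r[1:p + 1]                # innovation variance
    f = np.linspace(0, fs / 2, nfreq)
    dt = 1.0 / fs
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(f, np.arange(1, p + 1)) * dt) @ phi) ** 2
    return f, sigma2 * dt / denom

f_ar, P_ar = ar_yule_walker_psd(x, p=20, fs=fs)
print(f_w[np.argmax(P_w)], f_ar[np.argmax(P_ar)])   # both peaks near 100 Hz
```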
Frequency estimationis the process ofestimatingthefrequency, amplitude, and phase-shift of asignalin the presence ofnoisegiven assumptions about the number of the components.[10]This contrasts with the general methods above, which do not make prior assumptions about the components. If one only wants to estimate the frequency of the single loudestpure-tone signal, one can use apitch detection algorithm. If the dominant frequency changes over time, then the problem becomes the estimation of theinstantaneous frequencyas defined in thetime–frequency representation. Methods for instantaneous frequency estimation include those based on theWigner–Ville distributionand higher orderambiguity functions.[11] If one wants to knowallthe (possibly complex) frequency components of a received signal (including transmitted signal and noise), one uses a multiple-tone approach. A typical model for a signalx(n){\displaystyle x(n)}consists of a sum ofp{\displaystyle p}complex exponentials in the presence ofwhite noise,w(n){\displaystyle w(n)} The power spectral density ofx(n){\displaystyle x(n)}is composed ofp{\displaystyle p}impulse functionsin addition to the spectral density function due to noise. The most common methods for frequency estimation involve identifying the noisesubspaceto extract these components. These methods are based oneigen decompositionof theautocorrelation matrixinto a signal subspace and a noise subspace. After these subspaces are identified, a frequency estimation function is used to find the component frequencies from the noise subspace. The most popular methods of noise subspace based frequency estimation arePisarenko's method, themultiple signal classification(MUSIC) method, the eigenvector method, and the minimum norm method. Supposexn{\displaystyle x_{n}}, fromn=0{\displaystyle n=0}toN−1{\displaystyle N-1}is a time series (discrete time) with zero mean. Suppose that it is a sum of a finite number of periodic components (all frequencies are positive): The variance ofxn{\displaystyle x_{n}}is, for a zero-mean function as above, given by If these data were samples taken from an electrical signal, this would be its average power (power is energy per unit time, so it is analogous to variance if energy is analogous to the amplitude squared). Now, for simplicity, suppose the signal extends infinitely in time, so we pass to the limit asN→∞.{\displaystyle N\to \infty .}If the average power is bounded, which is almost always the case in reality, then the following limit exists and is the variance of the data. Again, for simplicity, we will pass to continuous time, and assume that the signal extends infinitely in time in both directions. Then these two formulas become and The root mean square ofsin{\displaystyle \sin }is1/2{\displaystyle 1/{\sqrt {2}}}, so the variance ofAksin⁡(2πνkt+ϕk){\displaystyle A_{k}\sin(2\pi \nu _{k}t+\phi _{k})}is12Ak2.{\displaystyle {\tfrac {1}{2}}A_{k}^{2}.}Hence, the contribution to the average power ofx(t){\displaystyle x(t)}coming from the component with frequencyνk{\displaystyle \nu _{k}}is12Ak2.{\displaystyle {\tfrac {1}{2}}A_{k}^{2}.}All these contributions add up to the average power ofx(t).{\displaystyle x(t).} Then the power as a function of frequency is12Ak2,{\displaystyle {\tfrac {1}{2}}A_{k}^{2},}and its statisticalcumulative distribution functionS(ν){\displaystyle S(\nu )}will be S{\displaystyle S}is astep function, monotonically non-decreasing. 
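A compact sketch in the spirit of the noise-subspace approach (MUSIC-like): form a sample autocorrelation matrix from overlapping snapshots, eigendecompose it, and scan a pseudospectrum built from the noise eigenvectors. The snapshot length, frequency grid, and test signal are illustrative assumptions, not a reference implementation of any of the named methods.

```python
import numpy as np

def music_pseudospectrum(x, p, m=32, nfreq=2048):
    """Noise-subspace frequency estimation: peaks of 1 / ||E_n^H a(f)||^2,
    where E_n spans the m - p smallest eigenvectors of the autocorrelation matrix."""
    N = len(x)
    snaps = np.array([x[i:i + m] for i in range(N - m + 1)])   # length-m snapshots
    R = (snaps.T @ snaps.conj()) / snaps.shape[0]              # sample autocorrelation matrix
    w, V = np.linalg.eigh(R)                                   # ascending eigenvalues
    En = V[:, :m - p]                                          # noise subspace
    f = np.linspace(0, 0.5, nfreq)                             # normalized frequencies
    A = np.exp(2j * np.pi * np.outer(np.arange(m), f))         # steering vectors a(f)
    P = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return f, P

rng = np.random.default_rng(1)
n = np.arange(1024)
x = (np.exp(2j * np.pi * 0.12 * n) + np.exp(2j * np.pi * 0.27 * n)
     + 0.1 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size)))
f, P = music_pseudospectrum(x, p=2)
peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1   # local maxima
print(np.sort(f[peaks[np.argsort(P[peaks])[-2:]]]))               # ~0.12 and ~0.27
```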
Its jumps occur at the frequencies of theperiodiccomponents ofx{\displaystyle x}, and the value of each jump is the power or variance of that component. The variance is the covariance of the data with itself. If we now consider the same data but with a lag ofτ{\displaystyle \tau }, we can take thecovarianceofx(t){\displaystyle x(t)}withx(t+τ){\displaystyle x(t+\tau )}, and define this to be theautocorrelation functionc{\displaystyle c}of the signal (or data)x{\displaystyle x}: If it exists, it is an even function ofτ.{\displaystyle \tau .}If the average power is bounded, thenc{\displaystyle c}exists everywhere, is finite, and is bounded byc(0),{\displaystyle c(0),}which is the average power or variance of the data. It can be shown thatc{\displaystyle c}can be decomposed into periodic components with the same periods asx{\displaystyle x}: This is in fact the spectral decomposition ofc{\displaystyle c}over the different frequencies, and is related to the distribution of power ofx{\displaystyle x}over the frequencies: the amplitude of a frequency component ofc{\displaystyle c}is its contribution to the average power of the signal. The power spectrum of this example is not continuous, and therefore does not have a derivative, and therefore this signal does not have a power spectral density function. In general, the power spectrum will usually be the sum of two parts: a line spectrum such as in this example, which is not continuous and does not have a density function, and a residue, which is absolutely continuous and does have a density function.
https://en.wikipedia.org/wiki/Spectral_density_estimation
In the theory ofstochastic processes, theKarhunen–Loève theorem(named afterKari KarhunenandMichel Loève), also known as theKosambi–Karhunen–Loève theorem[1][2]states that astochastic processcan be represented as an infinite linear combination oforthogonal functions, analogous to aFourier seriesrepresentation of a function on a bounded interval. The transformation is also known asHotellingtransform andeigenvectortransform, and is closely related toprincipal component analysis(PCA) technique widely used in image processing and in data analysis in many fields.[3] There exist many such expansions of a stochastic process: if the process is indexed over[a,b], anyorthonormal basisofL2([a,b])yields an expansion thereof in that form. The importance of the Karhunen–Loève theorem is that it yields the best such basis in the sense that it minimizes the totalmean squared error. In contrast to a Fourier series where the coefficients are fixed numbers and the expansion basis consists ofsinusoidal functions(that is,sineandcosinefunctions), the coefficients in the Karhunen–Loève theorem arerandom variablesand the expansion basis depends on the process. In fact, the orthogonal basis functions used in this representation are determined by thecovariance functionof the process. One can think that the Karhunen–Loève transform adapts to the process in order to produce the best possible basis for its expansion. In the case of acenteredstochastic process{Xt}t∈ [a,b](centeredmeansE[Xt] = 0for allt∈ [a,b]) satisfying a technical continuity condition,Xadmits a decomposition whereZkare pairwiseuncorrelatedrandom variables and the functionsekare continuous real-valued functions on[a,b]that are pairwiseorthogonalinL2([a,b]). It is therefore sometimes said that the expansion isbi-orthogonalsince the random coefficientsZkare orthogonal in the probability space while the deterministic functionsekare orthogonal in the time domain. The general case of a processXtthat is not centered can be brought back to the case of a centered process by consideringXt−E[Xt]which is a centered process. Moreover, if the process isGaussian, then the random variablesZkare Gaussian andstochastically independent. This result generalizes theKarhunen–Loève transform. An important example of a centered real stochastic process on[0, 1]is theWiener process; the Karhunen–Loève theorem can be used to provide a canonical orthogonal representation for it. In this case the expansion consists of sinusoidal functions. The above expansion into uncorrelated random variables is also known as theKarhunen–Loève expansionorKarhunen–Loève decomposition. Theempiricalversion (i.e., with the coefficients computed from a sample) is known as theKarhunen–Loève transform(KLT),principal component analysis,proper orthogonal decomposition(POD),empirical orthogonal functions(a term used inmeteorologyandgeophysics), or theHotellingtransform. The square-integrable conditionE[Xt2]<∞{\displaystyle \mathbf {E} [X_{t}^{2}]<\infty }is logically equivalent toKX(s,t){\displaystyle K_{X}(s,t)}being finite for alls,t∈[a,b]{\displaystyle s,t\in [a,b]}.[4] Theorem. LetXtbe a zero-mean square-integrable stochastic process defined over a probability space(Ω,F,P)and indexed over a closed and bounded interval [a,b], with continuous covariance functionKX(s,t). 
ThenKX(s,t)is aMercer kerneland lettingekbe an orthonormal basis onL2([a,b])formed by the eigenfunctions ofTKXwith respective eigenvaluesλk, Xtadmits the following representation where the convergence is inL2, uniform intand Furthermore, the random variablesZkhave zero-mean, are uncorrelated and have varianceλk Note that by generalizations of Mercer's theorem we can replace the interval [a,b] with other compact spacesCand the Lebesgue measure on [a,b] with a Borel measure whose support isC. Since the limit in the mean of jointly Gaussian random variables is jointly Gaussian, and jointly Gaussian random (centered) variables are independent if and only if they are orthogonal, we can also conclude: Theorem. The variablesZihave a joint Gaussian distribution and are stochastically independent if the original process{Xt}tis Gaussian. In the Gaussian case, since the variablesZiare independent, we can say more: almost surely. This is a consequence of the independence of theZk. In the introduction, we mentioned that the truncated Karhunen–Loeve expansion was the best approximation of the original process in the sense that it reduces the total mean-square error resulting of its truncation. Because of this property, it is often said that the KL transform optimally compacts the energy. More specifically, given any orthonormal basis{fk} ofL2([a,b]), we may decompose the processXtas: where and we may approximateXtby the finite sum for some integerN. Claim. Of all such approximations, the KL approximation is the one that minimizes the total mean square error (provided we have arranged the eigenvalues in decreasing order). Consider the error resulting from the truncation at theN-th term in the following orthonormal expansion: The mean-square errorεN2(t) can be written as: We then integrate this last equality over [a,b]. The orthonormality of thefkyields: The problem of minimizing the total mean-square error thus comes down to minimizing the right hand side of this equality subject to the constraint that thefkbe normalized. We hence introduceβk, the Lagrangian multipliers associated with these constraints, and aim at minimizing the following function: Differentiating with respect tofi(t) (this is afunctional derivative) and setting the derivative to 0 yields: which is satisfied in particular when In other words, when thefkare chosen to be the eigenfunctions ofTKX, hence resulting in the KL expansion. An important observation is that since the random coefficientsZkof the KL expansion are uncorrelated, theBienaymé formulaasserts that the variance ofXtis simply the sum of the variances of the individual components of the sum: Integrating over [a,b] and using the orthonormality of theek, we obtain that the total variance of the process is: In particular, the total variance of theN-truncated approximation is As a result, theN-truncated expansion explains of the variance; and if we are content with an approximation that explains, say, 95% of the variance, then we just have to determine anN∈N{\displaystyle N\in \mathbb {N} }such that Given a representation ofXt=∑k=1∞Wkφk(t){\displaystyle X_{t}=\sum _{k=1}^{\infty }W_{k}\varphi _{k}(t)}, for some orthonormal basisφk(t){\displaystyle \varphi _{k}(t)}and randomWk{\displaystyle W_{k}}, we letpk=E[|Wk|2]/E[|Xt|L22]{\displaystyle p_{k}=\mathbb {E} [|W_{k}|^{2}]/\mathbb {E} [|X_{t}|_{L^{2}}^{2}]}, so that∑k=1∞pk=1{\displaystyle \sum _{k=1}^{\infty }p_{k}=1}. 
We may then define the representationentropyto beH({φk})=−∑ipklog⁡(pk){\displaystyle H(\{\varphi _{k}\})=-\sum _{i}p_{k}\log(p_{k})}. Then we haveH({φk})≥H({ek}){\displaystyle H(\{\varphi _{k}\})\geq H(\{e_{k}\})}, for all choices ofφk{\displaystyle \varphi _{k}}. That is, the KL-expansion has minimal representation entropy. Proof: Denote the coefficients obtained for the basisek(t){\displaystyle e_{k}(t)}aspk{\displaystyle p_{k}}, and forφk(t){\displaystyle \varphi _{k}(t)}asqk{\displaystyle q_{k}}. ChooseN≥1{\displaystyle N\geq 1}. Note that sinceek{\displaystyle e_{k}}minimizes the mean squared error, we have that Expanding the right hand size, we get: Using the orthonormality ofφk(t){\displaystyle \varphi _{k}(t)}, and expandingXt{\displaystyle X_{t}}in theφk(t){\displaystyle \varphi _{k}(t)}basis, we get that the right hand size is equal to: We may perform identical analysis for theek(t){\displaystyle e_{k}(t)}, and so rewrite the above inequality as: Subtracting the common first term, and dividing byE[|Xt|L22]{\displaystyle \mathbb {E} [|X_{t}|_{L^{2}}^{2}]}, we obtain that: This implies that: Consider a whole class of signals we want to approximate over the firstMvectors of a basis. These signals are modeled as realizations of a random vectorY[n]of sizeN. To optimize the approximation we design a basis that minimizes the average approximation error. This section proves that optimal bases are Karhunen–Loeve bases that diagonalize the covariance matrix ofY. The random vectorYcan be decomposed in an orthogonal basis as follows: where each is a random variable. The approximation from the firstM≤Nvectors of the basis is The energy conservation in an orthogonal basis implies This error is related to the covariance ofYdefined by For any vectorx[n]we denote byKthe covariance operator represented by this matrix, The errorε[M]is therefore a sum of the lastN−Mcoefficients of the covariance operator The covariance operatorKis Hermitian and Positive and is thus diagonalized in an orthogonal basis called a Karhunen–Loève basis. The following theorem states that a Karhunen–Loève basis is optimal for linear approximations. Theorem (Optimality of Karhunen–Loève basis).LetKbe a covariance operator. For allM≥ 1, the approximation error is minimum if and only if is a Karhunen–Loeve basis ordered by decreasing eigenvalues. Linear approximations project the signal onMvectors a priori. The approximation can be made more precise by choosing theMorthogonal vectors depending on the signal properties. This section analyzes the general performance of these non-linear approximations. A signalf∈H{\displaystyle f\in \mathrm {H} }is approximated with M vectors selected adaptively in an orthonormal basis forH{\displaystyle \mathrm {H} }[definition needed] LetfM{\displaystyle f_{M}}be the projection of f over M vectors whose indices are inIM: The approximation error is the sum of the remaining coefficients To minimize this error, the indices inIMmust correspond to the M vectors having the largest inner product amplitude These are the vectors that best correlate f. They can thus be interpreted as the main features of f. The resulting error is necessarily smaller than the error of a linear approximation which selects the M approximation vectors independently of f. 
Let us sort in decreasing order The best non-linear approximation is It can also be written as inner product thresholding: with The non-linear error is this error goes quickly to zero as M increases, if the sorted values of|⟨f,gmk⟩|{\displaystyle \left|\left\langle f,g_{m_{k}}\right\rangle \right|}have a fast decay as k increases. This decay is quantified by computing theIP{\displaystyle \mathrm {I} ^{\mathrm {P} }}norm of the signal inner products in B: The following theorem relates the decay ofε[M]to‖f‖B,p{\displaystyle \|f\|_{\mathrm {B} ,p}} Theorem (decay of error).If‖f‖B,p<∞{\displaystyle \|f\|_{\mathrm {B} ,p}<\infty }withp< 2then and Conversely, ifε[M]=o(M1−2p){\displaystyle \varepsilon [M]=o\left(M^{1-{\frac {2}{p}}}\right)}then ‖f‖B,q<∞{\displaystyle \|f\|_{\mathrm {B} ,q}<\infty }for anyq>p. To further illustrate the differences between linear and non-linear approximations, we study the decomposition of a simple non-Gaussian random vector in a Karhunen–Loève basis. Processes whose realizations have a random translation are stationary. The Karhunen–Loève basis is then a Fourier basis and we study its performance. To simplify the analysis, consider a random vectorY[n] of sizeNthat is random shift moduloNof a deterministic signalf[n] of zero mean The random shiftPis uniformly distributed on [0,N− 1]: Clearly and Hence Since RYis N periodic, Y is a circular stationary random vector. The covariance operator is a circular convolution with RYand is therefore diagonalized in the discrete Fourier Karhunen–Loève basis The power spectrum is Fourier transform ofRY: Example:Consider an extreme case wheref[n]=δ[n]−δ[n−1]{\displaystyle f[n]=\delta [n]-\delta [n-1]}. A theorem stated above guarantees that the Fourier Karhunen–Loève basis produces a smaller expected approximation error than a canonical basis of Diracs{gm[n]=δ[n−m]}0≤m<N{\displaystyle \left\{g_{m}[n]=\delta [n-m]\right\}_{0\leq m<N}}. Indeed, we do not know a priori the abscissa of the non-zero coefficients ofY, so there is no particular Dirac that is better adapted to perform the approximation. But the Fourier vectors cover the whole support of Y and thus absorb a part of the signal energy. Selecting higher frequency Fourier coefficients yields a better mean-square approximation than choosing a priori a few Dirac vectors to perform the approximation. The situation is totally different for non-linear approximations. Iff[n]=δ[n]−δ[n−1]{\displaystyle f[n]=\delta [n]-\delta [n-1]}then the discrete Fourier basis is extremely inefficient because f and hence Y have an energy that is almost uniformly spread among all Fourier vectors. In contrast, since f has only two non-zero coefficients in the Dirac basis, a non-linear approximation of Y withM≥ 2gives zero error.[5] We have established the Karhunen–Loève theorem and derived a few properties thereof. We also noted that one hurdle in its application was the numerical cost of determining the eigenvalues and eigenfunctions of its covariance operator through the Fredholm integral equation of the second kind However, when applied to a discrete and finite process(Xn)n∈{1,…,N}{\displaystyle \left(X_{n}\right)_{n\in \{1,\ldots ,N\}}}, the problem takes a much simpler form and standard algebra can be used to carry out the calculations. Note that a continuous process can also be sampled atNpoints in time in order to reduce the problem to a finite version. We henceforth consider a randomN-dimensional vectorX=(X1X2…XN)T{\displaystyle X=\left(X_{1}~X_{2}~\ldots ~X_{N}\right)^{T}}. 
As mentioned above,Xcould containNsamples of a signal but it can hold many more representations depending on the field of application. For instance it could be the answers to a survey or economic data in an econometrics analysis. As in the continuous version, we assume thatXis centered, otherwise we can letX:=X−μX{\displaystyle X:=X-\mu _{X}}(whereμX{\displaystyle \mu _{X}}is themean vectorofX) which is centered. Let us adapt the procedure to the discrete case. Recall that the main implication and difficulty of the KL transformation is computing the eigenvectors of the linear operator associated to the covariance function, which are given by the solutions to the integral equation written above. Define Σ, the covariance matrix ofX, as anN×Nmatrix whose elements are given by: Rewriting the above integral equation to suit the discrete case, we observe that it turns into: wheree=(e1e2…eN)T{\displaystyle e=(e_{1}~e_{2}~\ldots ~e_{N})^{T}}is anN-dimensional vector. The integral equation thus reduces to a simple matrix eigenvalue problem, which explains why the PCA has such a broad domain of applications. Since Σ is a positive definite symmetric matrix, it possesses a set of orthonormal eigenvectors forming a basis ofRN{\displaystyle \mathbb {R} ^{N}}, and we write{λi,φi}i∈{1,…,N}{\displaystyle \{\lambda _{i},\varphi _{i}\}_{i\in \{1,\ldots ,N\}}}this set of eigenvalues and corresponding eigenvectors, listed in decreasing values ofλi. Let alsoΦbe the orthonormal matrix consisting of these eigenvectors: It remains to perform the actual KL transformation, called theprincipal component transformin this case. Recall that the transform was found by expanding the process with respect to the basis spanned by the eigenvectors of the covariance function. In this case, we hence have: In a more compact form, the principal component transform ofXis defined by: Thei-th component ofYisYi=φiTX{\displaystyle Y_{i}=\varphi _{i}^{T}X}, the projection ofXonφi{\displaystyle \varphi _{i}}and the inverse transformX= ΦYyields the expansion ofXon the space spanned by theφi{\displaystyle \varphi _{i}}: As in the continuous case, we may reduce the dimensionality of the problem by truncating the sum at someK∈{1,…,N}{\displaystyle K\in \{1,\ldots ,N\}}such that where α is the explained variance threshold we wish to set. We can also reduce the dimensionality through the use of multilevel dominant eigenvector estimation (MDEE).[6] There are numerous equivalent characterizations of theWiener processwhich is a mathematical formalization ofBrownian motion. Here we regard it as the centered standard Gaussian processWtwith covariance function We restrict the time domain to [a,b]=[0,1] without loss of generality. The eigenvectors of the covariance kernel are easily determined. These are and the corresponding eigenvalues are In order to find the eigenvalues and eigenvectors, we need to solve the integral equation: differentiating once with respect totyields: a second differentiation produces the following differential equation: The general solution of which has the form: whereAandBare two constants to be determined with the boundary conditions. Settingt= 0 in the initial integral equation givese(0) = 0 which implies thatB= 0 and similarly, settingt= 1 in the first differentiation yieldse'(1) = 0, whence: which in turn implies that eigenvalues ofTKXare: The corresponding eigenfunctions are thus of the form: Ais then chosen so as to normalizeek: This gives the following representation of the Wiener process: Theorem. 
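A minimal sketch of the discrete Karhunen–Loève (principal component) transform just described, assuming NumPy: center the data, eigendecompose the covariance matrix, project onto the eigenvectors, and truncate at the explained-variance threshold α. The test data and function names are illustrative.

```python
import numpy as np

def klt(X, alpha=0.95):
    """Discrete Karhunen-Loeve / principal component transform.
    Rows of X are observations of an N-dimensional random vector."""
    Xc = X - X.mean(axis=0)                       # center the data
    Sigma = np.cov(Xc, rowvar=False)              # N x N covariance matrix
    lam, Phi = np.linalg.eigh(Sigma)              # orthonormal eigenvectors
    order = np.argsort(lam)[::-1]                 # list eigenvalues in decreasing order
    lam, Phi = lam[order], Phi[:, order]
    K = np.searchsorted(np.cumsum(lam) / lam.sum(), alpha) + 1   # truncation level
    Y = Xc @ Phi[:, :K]                           # components Y_i = phi_i^T X
    return Y, Phi[:, :K], lam, K

# usage: 3-D data that is essentially 2-dimensional
rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 2)) @ np.array([[3.0, 0.0, 0.5],
                                              [0.0, 1.0, 0.2]])
X = Z + 0.05 * rng.standard_normal((500, 3))
Y, Phi, lam, K = klt(X)
print(K, np.round(lam, 3))   # K should come out as 2: two eigenvalues dominate
```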
There is a sequence {Zi}iof independent Gaussian random variables with mean zero and variance 1 such that Note that this representation is only valid fort∈[0,1].{\displaystyle t\in [0,1].}On larger intervals, the increments are not independent. As stated in the theorem, convergence is in the L2norm and uniform int. Similarly theBrownian bridgeBt=Wt−tW1{\displaystyle B_{t}=W_{t}-tW_{1}}which is astochastic processwith covariance function can be represented as the series Adaptive opticssystems sometimes use K–L functions to reconstruct wave-front phase information (Dai 1996, JOSA A). Karhunen–Loève expansion is closely related to theSingular Value Decomposition. The latter has myriad applications in image processing, radar, seismology, and the like. If one has independent vector observations from a vector valued stochastic process then the left singular vectors aremaximum likelihoodestimates of the ensemble KL expansion. In communication, we usually have to decide whether a signal from a noisy channel contains valuable information. The following hypothesis testing is used for detecting continuous signals(t) from channel outputX(t),N(t) is the channel noise, which is usually assumed zero mean Gaussian process with correlation functionRN(t,s)=E[N(t)N(s)]{\displaystyle R_{N}(t,s)=E[N(t)N(s)]} When the channel noise is white, its correlation function is and it has constant power spectrum density. In physically practical channel, the noise power is finite, so: Then the noise correlation function is sinc function with zeros atn2ω,n∈Z.{\displaystyle {\frac {n}{2\omega }},n\in \mathbf {Z} .}Since are uncorrelated and gaussian, they are independent. Thus we can take samples fromX(t) with time spacing LetXi=X(iΔt){\displaystyle X_{i}=X(i\,\Delta t)}. We have a total ofn=TΔt=T(2ω)=2ωT{\displaystyle n={\frac {T}{\Delta t}}=T(2\omega )=2\omega T}i.i.d observations{X1,X2,…,Xn}{\displaystyle \{X_{1},X_{2},\ldots ,X_{n}\}}to develop the likelihood-ratio test. Define signalSi=S(iΔt){\displaystyle S_{i}=S(i\,\Delta t)}, the problem becomes, The log-likelihood ratio Ast→ 0, let: ThenGis the test statistics and theNeyman–Pearson optimum detectoris AsGis Gaussian, we can characterize it by finding its mean and variances. Then we get where is the signal energy. The false alarm error And the probability of detection: where Φ is the cdf of standard normal, or Gaussian, variable. When N(t) is colored (correlated in time) Gaussian noise with zero mean and covariance functionRN(t,s)=E[N(t)N(s)],{\displaystyle R_{N}(t,s)=E[N(t)N(s)],}we cannot sample independent discrete observations by evenly spacing the time. Instead, we can use K–L expansion to decorrelate the noise process and get independent Gaussian observation 'samples'. The K–L expansion ofN(t): whereNi=∫N(t)Φi(t)dt{\displaystyle N_{i}=\int N(t)\Phi _{i}(t)\,dt}and the orthonormal bases{Φit}{\displaystyle \{\Phi _{i}{t}\}}are generated by kernelRN(t,s){\displaystyle R_{N}(t,s)}, i.e., solution to Do the expansion: whereSi=∫0TS(t)Φi(t)dt{\displaystyle S_{i}=\int _{0}^{T}S(t)\Phi _{i}(t)\,dt}, then under H andNi+Si{\displaystyle N_{i}+S_{i}}under K. LetX¯={X1,X2,…}{\displaystyle {\overline {X}}=\{X_{1},X_{2},\dots \}}, we have Hence, the log-LR is given by and the optimum detector is Define thenG=∫0Tk(t)x(t)dt.{\displaystyle G=\int _{0}^{T}k(t)x(t)\,dt.} Since k(t) is the solution to IfN(t)is wide-sense stationary, which is known as theWiener–Hopf equation. 
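The Wiener-process expansion above is easy to simulate. The sketch below (an illustration, with arbitrary truncation level and sample counts) draws independent standard normal coefficients Z_k and sums the sinusoidal eigenfunctions; the sample variance at t = 1 should be close to 1, as the covariance min(s, t) requires.

```python
import numpy as np

# Truncated Karhunen-Loeve expansion of the standard Wiener process on [0, 1]:
# W_t ~ sum_{k=1}^K Z_k * sqrt(2) * sin((k - 1/2) pi t) / ((k - 1/2) pi),  Z_k ~ N(0, 1) i.i.d.
K, n_paths = 500, 2000
t = np.linspace(0.0, 1.0, 1001)
k = np.arange(1, K + 1)

rng = np.random.default_rng(0)
Z = rng.standard_normal((n_paths, K))                        # independent coefficients
basis = np.sqrt(2) * np.sin(np.outer((k - 0.5) * np.pi, t))  # eigenfunctions e_k(t)
paths = (Z / ((k - 0.5) * np.pi)) @ basis                    # shape (n_paths, len(t))

print(paths[:, -1].var(), paths[:, 500].var())   # roughly Var(W_1) = 1 and Var(W_0.5) = 0.5
```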
The equation can be solved by taking fourier transform, but not practically realizable since infinite spectrum needs spatial factorization. A special case which is easy to calculatek(t) is white Gaussian noise. The corresponding impulse response ish(t) =k(T−t) =CS(T−t). LetC= 1, this is just the result we arrived at in previous section for detecting of signal in white noise. Since X(t) is a Gaussian process, is a Gaussian random variable that can be characterized by its mean and variance. Hence, we obtain the distributions ofHandK: The false alarm error is So the test threshold for the Neyman–Pearson optimum detector is Its power of detection is When the noise is white Gaussian process, the signal power is For some type of colored noise, a typical practise is to add a prewhitening filter before the matched filter to transform the colored noise into white noise. For example, N(t) is a wide-sense stationary colored noise with correlation function The transfer function of prewhitening filter is When the signal we want to detect from the noisy channel is also random, for example, a white Gaussian processX(t), we can still implement K–L expansion to get independent sequence of observation. In this case, the detection problem is described as follows: X(t) is a random process with correlation functionRX(t,s)=E{X(t)X(s)}{\displaystyle R_{X}(t,s)=E\{X(t)X(s)\}} The K–L expansion ofX(t) is where andΦi(t){\displaystyle \Phi _{i}(t)}are solutions to SoXi{\displaystyle X_{i}}'s are independent sequence of r.v's with zero mean and varianceλi{\displaystyle \lambda _{i}}. ExpandingY(t) andN(t) byΦi(t){\displaystyle \Phi _{i}(t)}, we get where AsN(t) is Gaussian white noise,Ni{\displaystyle N_{i}}'s are i.i.d sequence of r.v with zero mean and variance12N0{\displaystyle {\tfrac {1}{2}}N_{0}}, then the problem is simplified as follows, The Neyman–Pearson optimal test: so the log-likelihood ratio is Since is just the minimum-mean-square estimate ofXi{\displaystyle X_{i}}givenYi{\displaystyle Y_{i}}'s, K–L expansion has the following property: If where then So let Noncausal filterQ(t,s) can be used to get the estimate through Byorthogonality principle,Q(t,s) satisfies However, for practical reasons, it's necessary to further derive the causal filterh(t,s), whereh(t,s) = 0 fors>t, to get estimateX^(t∣t){\displaystyle {\widehat {X}}(t\mid t)}. Specifically,
https://en.wikipedia.org/wiki/Karhunen%E2%80%93Lo%C3%A8ve_theorem
Inmathematics, atransformation,transform, orself-map[1]is afunctionf, usually with somegeometricalunderpinning, that maps asetXto itself, i.e.f:X→X.[2][3][4]Examples includelinear transformationsofvector spacesandgeometric transformations, which includeprojective transformations,affine transformations, and specific affine transformations, such asrotations,reflectionsandtranslations.[5][6] While it is common to use the termtransformationfor any function of a set into itself (especially in terms like "transformation semigroup" and similar), there exists an alternative form of terminological convention in which the term "transformation" is reserved only for bijections. When such a narrow notion of transformation is generalized topartial functions, then apartial transformationis a functionf:A→B, where bothAandBaresubsetsof some setX.[7] The set of all transformations on a given base set, together withfunction composition, forms aregular semigroup. For a finite set ofcardinalityn, there arenntransformations and (n+1)npartial transformations.[8]
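The counting statements can be checked directly by enumeration; the following small sketch (illustrative, using None to mark where a partial transformation is undefined) lists all transformations and partial transformations of a three-element set.

```python
from itertools import product

X = [0, 1, 2]                       # a base set of cardinality n = 3
n = len(X)

# All transformations f: X -> X, encoded as tuples (f(0), f(1), f(2)).
transformations = list(product(X, repeat=n))
print(len(transformations), n ** n)          # 27 27

# Partial transformations additionally allow "undefined" (None here) as a value.
partial = list(product(X + [None], repeat=n))
print(len(partial), (n + 1) ** n)            # 64 64
```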
https://en.wikipedia.org/wiki/Transformation_(function)
The method ofiteratively reweighted least squares(IRLS) is used to solve certain optimization problems withobjective functionsof the form of ap-norm: argminβ⁡∑i=1n|yi−fi(β)|p,{\displaystyle \mathop {\operatorname {arg\,min} } _{\boldsymbol {\beta }}\sum _{i=1}^{n}{\big |}y_{i}-f_{i}({\boldsymbol {\beta }}){\big |}^{p},} by aniterative methodin which each step involves solving aweighted least squaresproblem of the form:[1] β(t+1)=argminβ∑i=1nwi(β(t))|yi−fi(β)|2.{\displaystyle {\boldsymbol {\beta }}^{(t+1)}={\underset {\boldsymbol {\beta }}{\operatorname {arg\,min} }}\sum _{i=1}^{n}w_{i}({\boldsymbol {\beta }}^{(t)}){\big |}y_{i}-f_{i}({\boldsymbol {\beta }}){\big |}^{2}.} IRLS is used to find themaximum likelihoodestimates of ageneralized linear model, and inrobust regressionto find anM-estimator, as a way of mitigating the influence of outliers in an otherwise normally-distributed data set, for example, by minimizing theleast absolute errorsrather than theleast square errors. One of the advantages of IRLS overlinear programmingandconvex programmingis that it can be used withGauss–NewtonandLevenberg–Marquardtnumerical algorithms. IRLS can be used forℓ1minimization and smoothedℓpminimization,p< 1, incompressed sensingproblems. It has been proved that the algorithm has a linear rate of convergence forℓ1norm and superlinear forℓtwitht< 1, under therestricted isometry property, which is generally a sufficient condition for sparse solutions.[2][3] To find the parametersβ= (β1, …,βk)Twhich minimize theLpnormfor thelinear regressionproblem, argminβ‖y−Xβ‖p=argminβ∑i=1n|yi−Xiβ|p,{\displaystyle {\underset {\boldsymbol {\beta }}{\operatorname {arg\,min} }}{\big \|}\mathbf {y} -X{\boldsymbol {\beta }}\|_{p}={\underset {\boldsymbol {\beta }}{\operatorname {arg\,min} }}\sum _{i=1}^{n}\left|y_{i}-X_{i}{\boldsymbol {\beta }}\right|^{p},} the IRLS algorithm at stept+ 1 involves solving theweighted linear least squaresproblem:[4] β(t+1)=argminβ∑i=1nwi(t)|yi−Xiβ|2=(XTW(t)X)−1XTW(t)y,{\displaystyle {\boldsymbol {\beta }}^{(t+1)}={\underset {\boldsymbol {\beta }}{\operatorname {arg\,min} }}\sum _{i=1}^{n}w_{i}^{(t)}\left|y_{i}-X_{i}{\boldsymbol {\beta }}\right|^{2}=(X^{\rm {T}}W^{(t)}X)^{-1}X^{\rm {T}}W^{(t)}\mathbf {y} ,} whereW(t)is thediagonal matrixof weights, usually with all elements set initially to: wi(0)=1{\displaystyle w_{i}^{(0)}=1} and updated after each iteration to: wi(t)=|yi−Xiβ(t)|p−2.{\displaystyle w_{i}^{(t)}={\big |}y_{i}-X_{i}{\boldsymbol {\beta }}^{(t)}{\big |}^{p-2}.} In the casep= 1, this corresponds toleast absolute deviationregression (in this case, the problem would be better approached by use oflinear programmingmethods,[5]so the result would be exact) and the formula is: wi(t)=1|yi−Xiβ(t)|.{\displaystyle w_{i}^{(t)}={\frac {1}{{\big |}y_{i}-X_{i}{\boldsymbol {\beta }}^{(t)}{\big |}}}.} To avoid dividing by zero,regularizationmust be done, so in practice the formula is: wi(t)=1max{δ,|yi−Xiβ(t)|}.{\displaystyle w_{i}^{(t)}={\frac {1}{\max \left\{\delta ,\left|y_{i}-X_{i}{\boldsymbol {\beta }}^{(t)}\right|\right\}}}.} whereδ{\displaystyle \delta }is some small value, like 0.0001.[5]Note the use ofδ{\displaystyle \delta }in the weighting function is equivalent to theHuber lossfunction in robust estimation.[6]
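A minimal NumPy sketch of the IRLS iteration for the linear-regression case described above, with the weights |r_i|^(p-2) clipped at a small δ exactly as in the text; the starting point, iteration count, and test data are illustrative assumptions.

```python
import numpy as np

def irls(X, y, p=1.0, n_iter=50, delta=1e-4):
    """Iteratively reweighted least squares for  argmin_beta sum_i |y_i - X_i beta|^p.
    Each step solves a weighted least-squares problem with weights w_i = |r_i|^{p-2},
    with the residual magnitude clipped at delta to avoid division by zero."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # start from the OLS fit
    for _ in range(n_iter):
        r = np.abs(y - X @ beta)
        w = np.maximum(r, delta) ** (p - 2)
        Xw = X * w[:, None]                              # W X
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)       # (X^T W X)^{-1} X^T W y
    return beta

# usage: L1 (least absolute deviations) line fit that shrugs off a gross outlier
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 + 3.0 * x + 0.05 * rng.standard_normal(50)
y[10] += 20.0                                            # gross outlier
X = np.column_stack([np.ones_like(x), x])
print(irls(X, y, p=1.0))                                 # close to [2, 3]
```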
https://en.wikipedia.org/wiki/Iteratively_reweighted_least_squares
The topic ofheteroskedasticity-consistent(HC)standard errorsarises instatisticsandeconometricsin the context oflinear regressionandtime series analysis. These are also known asheteroskedasticity-robust standard errors(or simplyrobust standard errors),Eicker–Huber–White standard errors(alsoHuber–White standard errorsorWhite standard errors),[1]to recognize the contributions ofFriedhelm Eicker,[2]Peter J. Huber,[3]andHalbert White.[4] In regression and time-series modelling, basic forms of models make use of the assumption that the errors or disturbancesuihave the same variance across all observation points. When this is not the case, the errors are said to be heteroskedastic, or to haveheteroskedasticity, and this behaviour will be reflected in the residualsu^i{\textstyle {\widehat {u}}_{i}}estimated from a fitted model. Heteroskedasticity-consistent standard errors are used to allow the fitting of a model that does contain heteroskedastic residuals. The first such approach was proposed by Huber (1967), and further improved procedures have been produced since for cross-sectional data,time-seriesdata andGARCH estimation. Heteroskedasticity-consistent standard errors that differ from classical standard errors may indicate model misspecification. Substituting heteroskedasticity-consistent standard errors does not resolve this misspecification, which may lead to bias in the coefficients. In most situations, the problem should be found and fixed.[5]Other types of standard error adjustments, such asclustered standard errorsorHAC standard errors, may be considered as extensions to HC standard errors. Heteroskedasticity-consistent standard errors are introduced byFriedhelm Eicker,[6][7]and popularized in econometrics byHalbert White. Consider the linear regression model for the scalary{\displaystyle y}. wherex{\displaystyle \mathbf {x} }is akx 1 column vector of explanatory variables (features),β{\displaystyle {\boldsymbol {\beta }}}is ak× 1 column vector of parameters to be estimated, andε{\displaystyle \varepsilon }is theresidual error. Theordinary least squares(OLS) estimator is wherey{\displaystyle \mathbf {y} }is a vector of observationsyi{\displaystyle y_{i}}, andX{\displaystyle \mathbf {X} }denotes the matrix of stackedxi{\displaystyle \mathbf {x} _{i}}values observed in the data. If thesample errorshave equal varianceσ2{\displaystyle \sigma ^{2}}and areuncorrelated, then the least-squares estimate ofβ{\displaystyle {\boldsymbol {\beta }}}isBLUE(best linear unbiased estimator), and its variance is estimated with whereε^i=yi−xi⊤β^OLS{\displaystyle {\widehat {\varepsilon }}_{i}=y_{i}-\mathbf {x} _{i}^{\top }{\widehat {\boldsymbol {\beta }}}_{\mathrm {OLS} }}are the regression residuals. When the error terms do not have constant variance (i.e., the assumption ofE[uu⊤]=σ2In{\displaystyle \mathbb {E} [\mathbf {u} \mathbf {u} ^{\top }]=\sigma ^{2}\mathbf {I} _{n}}is untrue), the OLS estimator loses its desirable properties. The formula for variance now cannot be simplified: whereΣ=V[u].{\displaystyle \mathbf {\Sigma } =\mathbb {V} [\mathbf {u} ].} While the OLS point estimator remains unbiased, it is not "best" in the sense of having minimum mean square error, and the OLS variance estimatorV^[β^OLS]{\displaystyle {\hat {\mathbb {V} }}\left[{\widehat {\boldsymbol {\beta }}}_{\mathrm {OLS} }\right]}does not provide a consistent estimate of the variance of the OLS estimates. 
For any non-linear model (for instancelogitandprobitmodels), however, heteroskedasticity has more severe consequences: themaximum likelihood estimatesof the parameters will be biased (in an unknown direction), as well as inconsistent (unless the likelihood function is modified to correctly take into account the precise form of heteroskedasticity).[8][9]As pointed out byGreene, “simply computing a robust covariance matrix for an otherwise inconsistent estimator does not give it redemption.”[10] If the regression errorsεi{\displaystyle \varepsilon _{i}}are independent, but have distinct variancesσi2{\displaystyle \sigma _{i}^{2}}, thenΣ=diag⁡(σ12,…,σn2){\displaystyle \mathbf {\Sigma } =\operatorname {diag} (\sigma _{1}^{2},\ldots ,\sigma _{n}^{2})}which can be estimated withσ^i2=ε^i2{\displaystyle {\widehat {\sigma }}_{i}^{2}={\widehat {\varepsilon }}_{i}^{2}}. This provides White's (1980) estimator, often referred to asHCE(heteroskedasticity-consistent estimator): where as aboveX{\displaystyle \mathbf {X} }denotes the matrix of stackedxi⊤{\displaystyle \mathbf {x} _{i}^{\top }}values from the data. The estimator can be derived in terms of thegeneralized method of moments(GMM). Also often discussed in the literature (including White's paper) is the covariance matrixΩ^n{\displaystyle {\widehat {\mathbf {\Omega } }}_{n}}of then{\displaystyle {\sqrt {n}}}-consistent limiting distribution: where and Thus, and Precisely which covariance matrix is of concern is a matter of context. Alternative estimators have been proposed in MacKinnon & White (1985) that correct for unequal variances of regression residuals due to differentleverage.[11]Unlike the asymptotic White's estimator, their estimators are unbiased when the data are homoscedastic. Of the four widely available different options, often denoted as HC0-HC3, the HC3 specification appears to work best, with tests relying on the HC3 estimator featuring better power and closer proximity to the targetedsize, especially in small samples. The larger the sample, the smaller the difference between the different estimators.[12] An alternative to explicitly modelling the heteroskedasticity is using aresampling methodsuch as thewild bootstrap. Given that thestudentized bootstrap, which standardizes the resampled statistic by its standard error, yields an asymptotic refinement,[13]heteroskedasticity-robust standard errors remain nevertheless useful. Instead of accounting for the heteroskedastic errors, most linear models can be transformed to feature homoskedastic error terms (unless the error term is heteroskedastic by construction, e.g. in alinear probability model). One way to do this is usingweighted least squares, which also features improved efficiency properties.
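The sandwich form of White's estimator, and the leverage-adjusted HC3 variant mentioned above, can be written in a few lines. The sketch below is an illustration in NumPy (the HC3 rescaling by (1 - h_i)^2 follows the MacKinnon and White adjustments; the simulated data and function names are assumptions).

```python
import numpy as np

def ols_with_robust_se(X, y, kind="HC3"):
    """OLS point estimates with heteroskedasticity-consistent standard errors.
    HC0 is White's original estimator; HC3 rescales squared residuals by (1 - h_i)^2,
    where h_i is the leverage of observation i."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)          # hat-matrix diagonal (leverages)
    if kind == "HC0":
        omega = resid ** 2
    elif kind == "HC3":
        omega = resid ** 2 / (1.0 - h) ** 2
    else:
        raise ValueError(kind)
    meat = X.T @ (omega[:, None] * X)                    # sum_i omega_i x_i x_i^T
    cov = XtX_inv @ meat @ XtX_inv                       # sandwich covariance estimate
    return beta, np.sqrt(np.diag(cov))

# usage: errors whose spread grows with the regressor
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = 1.0 + 0.5 * x + rng.standard_normal(500) * (0.2 + 0.3 * x)
X = np.column_stack([np.ones_like(x), x])
beta, se = ols_with_robust_se(X, y)
print(beta, se)
```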
https://en.wikipedia.org/wiki/Heteroscedasticity-consistent_standard_errors
Theweighted arithmetic meanis similar to an ordinaryarithmetic mean(the most common type ofaverage), except that instead of each of the data points contributing equally to the final average, some data points contribute more than others. The notion of weighted mean plays a role indescriptive statisticsand also occurs in a more general form in several other areas of mathematics. If all the weights are equal, then the weighted mean is the same as thearithmetic mean. While weighted means generally behave in a similar fashion to arithmetic means, they do have a few counterintuitive properties, as captured for instance inSimpson's paradox. Given two schoolclasses—onewith 20 students, one with 30students—andtest grades in each class as follows: The mean for the morning class is 80 and the mean of the afternoon class is 90. The unweighted mean of the two means is 85. However, this does not account for the difference in number of students in each class (20 versus 30); hence the value of 85 does not reflect the average student grade (independent of class). The average student grade can be obtained by averaging all the grades, without regard to classes (add all the grades up and divide by the total number of students):x¯=430050=86.{\displaystyle {\bar {x}}={\frac {4300}{50}}=86.} Or, this can be accomplished by weighting the class means by the number of students in each class. The larger class is given more "weight": Thus, the weighted mean makes it possible to find the mean average student grade without knowing each student's score. Only the class means and the number of students in each class are needed. Since only therelativeweights are relevant, any weighted mean can be expressed using coefficients that sum to one. Such a linear combination is called aconvex combination. Using the previous example, we would get the following weights: Then, apply the weights like this: Formally, the weighted mean of a non-empty finitetupleof data(x1,x2,…,xn){\displaystyle \left(x_{1},x_{2},\dots ,x_{n}\right)}, with corresponding non-negativeweights(w1,w2,…,wn){\displaystyle \left(w_{1},w_{2},\dots ,w_{n}\right)}is which expands to: Therefore, data elements with a high weight contribute more to the weighted mean than do elements with a low weight. The weights may not be negative in order for the equation to work[a]. Some may be zero, but not all of them (since division by zero is not allowed). The formulas are simplified when the weights are normalized such that they sum up to 1, i.e.,∑i=1nwi′=1{\textstyle \sum \limits _{i=1}^{n}{w_{i}'}=1}. For such normalized weights, the weighted mean is equivalently: One can always normalize the weights by making the following transformation on the original weights: Theordinary mean1n∑i=1nxi{\textstyle {\frac {1}{n}}\sum \limits _{i=1}^{n}{x_{i}}}is a special case of the weighted mean where all data have equal weights. 
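The class-size example works out as follows in a few lines of Python (plain arithmetic, reproducing the value 86 quoted in the text), both with raw weights and with weights normalized to sum to one.

```python
# The class-size example: means 80 (20 students) and 90 (30 students).
means = [80, 90]
sizes = [20, 30]

weighted_mean = sum(m * w for m, w in zip(means, sizes)) / sum(sizes)
print(weighted_mean)                                # 86.0, the all-student average

# Equivalently, with weights normalized to sum to one (a convex combination):
norm_w = [w / sum(sizes) for w in sizes]            # [0.4, 0.6]
print(sum(m * w for m, w in zip(means, norm_w)))    # 86.0
```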
If the data elements areindependent and identically distributed random variableswith varianceσ2{\displaystyle \sigma ^{2}}, thestandard error of the weighted mean,σx¯{\displaystyle \sigma _{\bar {x}}}, can be shown viauncertainty propagationto be: For the weighted mean of a list of data for which each elementxi{\displaystyle x_{i}}potentially comes from a differentprobability distributionwith knownvarianceσi2{\displaystyle \sigma _{i}^{2}}, all having the same mean, one possible choice for the weights is given by the reciprocal of variance: The weighted mean in this case is: and thestandard error of the weighted mean (with inverse-variance weights)is: Note this reduces toσx¯2=σ02/n{\displaystyle \sigma _{\bar {x}}^{2}=\sigma _{0}^{2}/n}when allσi=σ0{\displaystyle \sigma _{i}=\sigma _{0}}. It is a special case of the general formula in previous section, The equations above can be combined to obtain: The significance of this choice is that this weighted mean is themaximum likelihood estimatorof the mean of the probability distributions under the assumption that they are independent andnormally distributedwith the same mean. The weighted sample mean,x¯{\displaystyle {\bar {x}}}, is itself a random variable. Its expected value and standard deviation are related to the expected values and standard deviations of the observations, as follows. For simplicity, we assume normalized weights (weights summing to one). If the observations have expected valuesE(xi)=μi,{\displaystyle E(x_{i})={\mu _{i}},}then the weighted sample mean has expectationE(x¯)=∑i=1nwi′μi.{\displaystyle E({\bar {x}})=\sum _{i=1}^{n}{w_{i}'\mu _{i}}.}In particular, if the means are equal,μi=μ{\displaystyle \mu _{i}=\mu }, then the expectation of the weighted sample mean will be that value,E(x¯)=μ.{\displaystyle E({\bar {x}})=\mu .} When treating the weights as constants, and having a sample ofnobservations fromuncorrelatedrandom variables, all with the samevarianceandexpectation(as is the case fori.i.drandom variables), then the variance of the weighted mean can be estimated as the multiplication of the unweighted variance byKish's design effect(seeproof): Withσ^y2=∑i=1n(yi−y¯)2n−1{\displaystyle {\hat {\sigma }}_{y}^{2}={\frac {\sum _{i=1}^{n}(y_{i}-{\bar {y}})^{2}}{n-1}}},w¯=∑i=1nwin{\displaystyle {\bar {w}}={\frac {\sum _{i=1}^{n}w_{i}}{n}}}, andw2¯=∑i=1nwi2n{\displaystyle {\overline {w^{2}}}={\frac {\sum _{i=1}^{n}w_{i}^{2}}{n}}} However, this estimation is rather limited due to the strong assumption about theyobservations. This has led to the development of alternative, more general, estimators. From amodel basedperspective, we are interested in estimating the variance of the weighted mean when the differentyi{\displaystyle y_{i}}are noti.i.drandom variables. An alternative perspective for this problem is that of some arbitrarysampling designof the data in which units areselected with unequal probabilities(with replacement).[1]: 306 InSurvey methodology, the population mean, of some quantity of interesty, is calculated by taking an estimation of the total ofyover all elements in the population (Yor sometimesT) and dividing it by the population size – either known (N{\displaystyle N}) or estimated (N^{\displaystyle {\hat {N}}}). In this context, each value ofyis considered constant, and the variability comes from the selection procedure. This in contrast to "model based" approaches in which the randomness is often described in the y values. 
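A short sketch of the inverse-variance case described above, using made-up observations and standard deviations purely for illustration: the weights are 1/σ_i² and the standard error of the weighted mean is the square root of 1/Σw_i.

```python
import numpy as np

# Each x_i has a known variance sigma_i^2 and the same (unknown) mean.
x     = np.array([10.2,  9.8, 10.5, 10.1])   # illustrative measurements
sigma = np.array([ 0.5,  0.2,  1.0,  0.4])   # their standard deviations

w = 1.0 / sigma ** 2                          # w_i = 1 / sigma_i^2
xbar = np.sum(w * x) / np.sum(w)              # inverse-variance weighted mean
se_xbar = np.sqrt(1.0 / np.sum(w))            # its standard error

print(xbar, se_xbar)
# Sanity check: with equal sigmas this reduces to the plain mean and sigma / sqrt(n).
```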
Thesurvey samplingprocedure yields a series ofBernoulliindicator values (Ii{\displaystyle I_{i}}) that get 1 if some observationiis in the sample and 0 if it was not selected. This can occur with fixed sample size, or varied sample size sampling (e.g.:Poisson sampling). The probability of some element to be chosen, given a sample, is denoted asP(Ii=1∣Some sample of sizen)=πi{\displaystyle P(I_{i}=1\mid {\text{Some sample of size }}n)=\pi _{i}}, and the one-draw probability of selection isP(Ii=1|one sample draw)=pi≈πin{\displaystyle P(I_{i}=1|{\text{one sample draw}})=p_{i}\approx {\frac {\pi _{i}}{n}}}(If N is very large and eachpi{\displaystyle p_{i}}is very small). For the following derivation we'll assume that the probability of selecting each element is fully represented by these probabilities.[2]: 42, 43, 51I.e.: selecting some element will not influence the probability of drawing another element (this doesn't apply for things such ascluster samplingdesign). Since each element (yi{\displaystyle y_{i}}) is fixed, and the randomness comes from it being included in the sample or not (Ii{\displaystyle I_{i}}), we often talk about the multiplication of the two, which is a random variable. To avoid confusion in the following section, let's call this term:yi′=yiIi{\displaystyle y'_{i}=y_{i}I_{i}}. With the following expectancy:E[yi′]=yiE[Ii]=yiπi{\displaystyle E[y'_{i}]=y_{i}E[I_{i}]=y_{i}\pi _{i}}; and variance:V[yi′]=yi2V[Ii]=yi2πi(1−πi){\displaystyle V[y'_{i}]=y_{i}^{2}V[I_{i}]=y_{i}^{2}\pi _{i}(1-\pi _{i})}. When each element of the sample is inflated by the inverse of its selection probability, it is termed theπ{\displaystyle \pi }-expandedyvalues, i.e.:yˇi=yiπi{\displaystyle {\check {y}}_{i}={\frac {y_{i}}{\pi _{i}}}}. A related quantity isp{\displaystyle p}-expandedyvalues:yipi=nyˇi{\displaystyle {\frac {y_{i}}{p_{i}}}=n{\check {y}}_{i}}.[2]: 42, 43, 51, 52As above, we can add a tick mark if multiplying by the indicator function. I.e.:yˇi′=Iiyˇi=Iiyiπi{\displaystyle {\check {y}}'_{i}=I_{i}{\check {y}}_{i}={\frac {I_{i}y_{i}}{\pi _{i}}}} In thisdesign basedperspective, the weights, used in the numerator of the weighted mean, are obtained from taking the inverse of the selection probability (i.e.: the inflation factor). I.e.:wi=1πi≈1n×pi{\displaystyle w_{i}={\frac {1}{\pi _{i}}}\approx {\frac {1}{n\times p_{i}}}}. If the population sizeNis known we can estimate the population mean usingY¯^knownN=Y^pwrN≈∑i=1nwiyi′N{\displaystyle {\hat {\bar {Y}}}_{{\text{known }}N}={\frac {{\hat {Y}}_{pwr}}{N}}\approx {\frac {\sum _{i=1}^{n}w_{i}y'_{i}}{N}}}. If thesampling designis one that results in a fixed sample sizen(such as inpps sampling), then the variance of this estimator is: The general formula can be developed like this: The population total is denoted asY=∑i=1Nyi{\displaystyle Y=\sum _{i=1}^{N}y_{i}}and it may be estimated by the (unbiased)Horvitz–Thompson estimator, also called theπ{\displaystyle \pi }-estimator. This estimator can be itself estimated using thepwr-estimator (i.e.:p{\displaystyle p}-expanded with replacement estimator, or "probability with replacement" estimator). 
With the above notation, it is:Y^pwr=1n∑i=1nyi′pi=∑i=1nyi′npi≈∑i=1nyi′πi=∑i=1nwiyi′{\displaystyle {\hat {Y}}_{pwr}={\frac {1}{n}}\sum _{i=1}^{n}{\frac {y'_{i}}{p_{i}}}=\sum _{i=1}^{n}{\frac {y'_{i}}{np_{i}}}\approx \sum _{i=1}^{n}{\frac {y'_{i}}{\pi _{i}}}=\sum _{i=1}^{n}w_{i}y'_{i}}.[2]: 51 The estimated variance of thepwr-estimator is given by:[2]: 52Var⁡(Y^pwr)=nn−1∑i=1n(wiyi−wy¯)2{\displaystyle \operatorname {Var} ({\hat {Y}}_{pwr})={\frac {n}{n-1}}\sum _{i=1}^{n}\left(w_{i}y_{i}-{\overline {wy}}\right)^{2}}wherewy¯=∑i=1nwiyin{\displaystyle {\overline {wy}}=\sum _{i=1}^{n}{\frac {w_{i}y_{i}}{n}}}. The above formula was taken from Sarndal et al. (1992) (also presented in Cochran 1977), but was written differently.[2]: 52[1]: 307 (11.35)The left side is how the variance was written and the right side is how we've developed the weighted version: Var⁡(Y^pwr)=1n1n−1∑i=1n(yipi−Y^pwr)2=1n1n−1∑i=1n(nnyipi−nn∑i=1nwiyi)2=1n1n−1∑i=1n(nyiπi−n∑i=1nwiyin)2=n2n1n−1∑i=1n(wiyi−wy¯)2=nn−1∑i=1n(wiyi−wy¯)2{\displaystyle {\begin{aligned}\operatorname {Var} ({\hat {Y}}_{\text{pwr}})&={\frac {1}{n}}{\frac {1}{n-1}}\sum _{i=1}^{n}\left({\frac {y_{i}}{p_{i}}}-{\hat {Y}}_{pwr}\right)^{2}\\&={\frac {1}{n}}{\frac {1}{n-1}}\sum _{i=1}^{n}\left({\frac {n}{n}}{\frac {y_{i}}{p_{i}}}-{\frac {n}{n}}\sum _{i=1}^{n}w_{i}y_{i}\right)^{2}={\frac {1}{n}}{\frac {1}{n-1}}\sum _{i=1}^{n}\left(n{\frac {y_{i}}{\pi _{i}}}-n{\frac {\sum _{i=1}^{n}w_{i}y_{i}}{n}}\right)^{2}\\&={\frac {n^{2}}{n}}{\frac {1}{n-1}}\sum _{i=1}^{n}\left(w_{i}y_{i}-{\overline {wy}}\right)^{2}\\&={\frac {n}{n-1}}\sum _{i=1}^{n}\left(w_{i}y_{i}-{\overline {wy}}\right)^{2}\end{aligned}}} And we got to the formula from above. An alternative term, for when the sampling has a random sample size (as inPoisson sampling), is presented in Sarndal et al. (1992) as:[2]: 182 Var⁡(Y¯^pwr (knownN))=1N2∑i=1n∑j=1n(Δˇijyˇiyˇj){\displaystyle \operatorname {Var} ({\hat {\bar {Y}}}_{{\text{pwr (known }}N{\text{)}}})={\frac {1}{N^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}\left({\check {\Delta }}_{ij}{\check {y}}_{i}{\check {y}}_{j}\right)} Withyˇi=yiπi{\displaystyle {\check {y}}_{i}={\frac {y_{i}}{\pi _{i}}}}. 
Also,C(Ii,Ij)=πij−πiπj=Δij{\displaystyle C(I_{i},I_{j})=\pi _{ij}-\pi _{i}\pi _{j}=\Delta _{ij}}whereπij{\displaystyle \pi _{ij}}is the probability of selecting both i and j.[2]: 36AndΔˇij=1−πiπjπij{\displaystyle {\check {\Delta }}_{ij}=1-{\frac {\pi _{i}\pi _{j}}{\pi _{ij}}}}, and for i=j:Δˇii=1−πiπiπi=1−πi{\displaystyle {\check {\Delta }}_{ii}=1-{\frac {\pi _{i}\pi _{i}}{\pi _{i}}}=1-\pi _{i}}.[2]: 43 If the selection probability are uncorrelated (i.e.:∀i≠j:C(Ii,Ij)=0{\displaystyle \forall i\neq j:C(I_{i},I_{j})=0}), and when assuming the probability of each element is very small, then: We assume that(1−πi)≈1{\displaystyle (1-\pi _{i})\approx 1}and thatVar⁡(Y^pwr (knownN))=1N2∑i=1n∑j=1n(Δˇijyˇiyˇj)=1N2∑i=1n(Δˇiiyˇiyˇi)=1N2∑i=1n((1−πi)yiπiyiπi)=1N2∑i=1n(wiyi)2{\displaystyle {\begin{aligned}\operatorname {Var} ({\hat {Y}}_{{\text{pwr (known }}N{\text{)}}})&={\frac {1}{N^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}\left({\check {\Delta }}_{ij}{\check {y}}_{i}{\check {y}}_{j}\right)\\&={\frac {1}{N^{2}}}\sum _{i=1}^{n}\left({\check {\Delta }}_{ii}{\check {y}}_{i}{\check {y}}_{i}\right)\\&={\frac {1}{N^{2}}}\sum _{i=1}^{n}\left((1-\pi _{i}){\frac {y_{i}}{\pi _{i}}}{\frac {y_{i}}{\pi _{i}}}\right)\\&={\frac {1}{N^{2}}}\sum _{i=1}^{n}\left(w_{i}y_{i}\right)^{2}\end{aligned}}} The previous section dealt with estimating the population mean as a ratio of an estimated population total (Y^{\displaystyle {\hat {Y}}}) with a known population size (N{\displaystyle N}), and the variance was estimated in that context. Another common case is that the population size itself (N{\displaystyle N}) is unknown and is estimated using the sample (i.e.:N^{\displaystyle {\hat {N}}}). The estimation ofN{\displaystyle N}can be described as the sum of weights. So whenwi=1πi{\displaystyle w_{i}={\frac {1}{\pi _{i}}}}we getN^=∑i=1nwiIi=∑i=1nIiπi=∑i=1n1ˇi′{\displaystyle {\hat {N}}=\sum _{i=1}^{n}w_{i}I_{i}=\sum _{i=1}^{n}{\frac {I_{i}}{\pi _{i}}}=\sum _{i=1}^{n}{\check {1}}'_{i}}. With the above notation, the parameter we care about is the ratio of the sums ofyi{\displaystyle y_{i}}s, and 1s. I.e.:R=Y¯=∑i=1Nyiπi∑i=1N1πi=∑i=1Nyˇi∑i=1N1ˇi=∑i=1Nwiyi∑i=1Nwi{\displaystyle R={\bar {Y}}={\frac {\sum _{i=1}^{N}{\frac {y_{i}}{\pi _{i}}}}{\sum _{i=1}^{N}{\frac {1}{\pi _{i}}}}}={\frac {\sum _{i=1}^{N}{\check {y}}_{i}}{\sum _{i=1}^{N}{\check {1}}_{i}}}={\frac {\sum _{i=1}^{N}w_{i}y_{i}}{\sum _{i=1}^{N}w_{i}}}}. We can estimate it using our sample with:R^=Y¯^=∑i=1NIiyiπi∑i=1NIi1πi=∑i=1Nyˇi′∑i=1N1ˇi′=∑i=1Nwiyi′∑i=1Nwi1i′=∑i=1nwiyi′∑i=1nwi1i′=y¯w{\displaystyle {\hat {R}}={\hat {\bar {Y}}}={\frac {\sum _{i=1}^{N}I_{i}{\frac {y_{i}}{\pi _{i}}}}{\sum _{i=1}^{N}I_{i}{\frac {1}{\pi _{i}}}}}={\frac {\sum _{i=1}^{N}{\check {y}}'_{i}}{\sum _{i=1}^{N}{\check {1}}'_{i}}}={\frac {\sum _{i=1}^{N}w_{i}y'_{i}}{\sum _{i=1}^{N}w_{i}1'_{i}}}={\frac {\sum _{i=1}^{n}w_{i}y'_{i}}{\sum _{i=1}^{n}w_{i}1'_{i}}}={\bar {y}}_{w}}. As we moved from using N to using n, we actually know that all the indicator variables get 1, so we could simply write:y¯w=∑i=1nwiyi∑i=1nwi{\displaystyle {\bar {y}}_{w}={\frac {\sum _{i=1}^{n}w_{i}y_{i}}{\sum _{i=1}^{n}w_{i}}}}. 
This will be theestimandfor specific values of y and w, but the statistical properties comes when including the indicator variabley¯w=∑i=1nwiyi′∑i=1nwi1i′{\displaystyle {\bar {y}}_{w}={\frac {\sum _{i=1}^{n}w_{i}y'_{i}}{\sum _{i=1}^{n}w_{i}1'_{i}}}}.[2]: 162, 163, 176 This is called aRatio estimatorand it is approximately unbiased forR.[2]: 182 In this case, the variability of theratiodepends on the variability of the random variables both in the numerator and the denominator - as well as their correlation. Since there is no closed analytical form to compute this variance, various methods are used for approximate estimation. PrimarilyTaylor seriesfirst-order linearization, asymptotics, and bootstrap/jackknife.[2]: 172The Taylor linearization method could lead to under-estimation of the variance for small sample sizes in general, but that depends on the complexity of the statistic. For the weighted mean, the approximate variance is supposed to be relatively accurate even for medium sample sizes.[2]: 176For when the sampling has a random sample size (as inPoisson sampling), it is as follows:[2]: 182 Ifπi≈pin{\displaystyle \pi _{i}\approx p_{i}n}, then either usingwi=1πi{\displaystyle w_{i}={\frac {1}{\pi _{i}}}}orwi=1pi{\displaystyle w_{i}={\frac {1}{p_{i}}}}would give the same estimator, since multiplyingwi{\displaystyle w_{i}}by some factor would lead to the same estimator. It also means that if we scale the sum of weights to be equal to a known-from-before population sizeN, the variance calculation would look the same. When all weights are equal to one another, this formula is reduced to the standard unbiased variance estimator. The Taylor linearization states that for a general ratio estimator of two sums (R^=Y^Z^{\displaystyle {\hat {R}}={\frac {\hat {Y}}{\hat {Z}}}}), they can be expanded around the true value R, and give:[2]: 178 R^=Y^Z^=∑i=1nwiyi′∑i=1nwizi′≈R+1Z∑i=1n(yi′πi−Rzi′πi){\displaystyle {\hat {R}}={\frac {\hat {Y}}{\hat {Z}}}={\frac {\sum _{i=1}^{n}w_{i}y'_{i}}{\sum _{i=1}^{n}w_{i}z'_{i}}}\approx R+{\frac {1}{Z}}\sum _{i=1}^{n}\left({\frac {y'_{i}}{\pi _{i}}}-R{\frac {z'_{i}}{\pi _{i}}}\right)} And the variance can be approximated by:[2]: 178, 179 V(R^)^=1Z^2∑i=1n∑j=1n(Δˇijyi−R^ziπiyj−R^zjπj)=1Z^2[V(Y^)^+R^V(Z^)^−2R^C^(Y^,Z^)]{\displaystyle {\widehat {V({\hat {R}})}}={\frac {1}{{\hat {Z}}^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}\left({\check {\Delta }}_{ij}{\frac {y_{i}-{\hat {R}}z_{i}}{\pi _{i}}}{\frac {y_{j}-{\hat {R}}z_{j}}{\pi _{j}}}\right)={\frac {1}{{\hat {Z}}^{2}}}\left[{\widehat {V({\hat {Y}})}}+{\hat {R}}{\widehat {V({\hat {Z}})}}-2{\hat {R}}{\hat {C}}({\hat {Y}},{\hat {Z}})\right]}. The termC^(Y^,Z^){\displaystyle {\hat {C}}({\hat {Y}},{\hat {Z}})}is the estimated covariance between the estimated sum of Y and estimated sum of Z. Since this is thecovariance of two sums of random variables, it would include many combinations of covariances that will depend on the indicator variables. If the selection probability are uncorrelated (i.e.:∀i≠j:Δij=C(Ii,Ij)=0{\displaystyle \forall i\neq j:\Delta _{ij}=C(I_{i},I_{j})=0}), this term would still include a summation ofncovariances for each elementibetweenyi′=Iiyi{\displaystyle y'_{i}=I_{i}y_{i}}andzi′=Iizi{\displaystyle z'_{i}=I_{i}z_{i}}. This helps illustrate that this formula incorporates the effect of correlation between y and z on the variance of the ratio estimators. 
When definingzi=1{\displaystyle z_{i}=1}the above becomes:[2]: 182 V(R^)^=V(y¯w)^=1N^2∑i=1n∑j=1n(Δˇijyi−y¯wπiyj−y¯wπj).{\displaystyle {\widehat {V({\hat {R}})}}={\widehat {V({\bar {y}}_{w})}}={\frac {1}{{\hat {N}}^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}\left({\check {\Delta }}_{ij}{\frac {y_{i}-{\bar {y}}_{w}}{\pi _{i}}}{\frac {y_{j}-{\bar {y}}_{w}}{\pi _{j}}}\right).} If the selection probability are uncorrelated (i.e.:∀i≠j:Δij=C(Ii,Ij)=0{\displaystyle \forall i\neq j:\Delta _{ij}=C(I_{i},I_{j})=0}), and when assuming the probability of each element is very small (i.e.:(1−πi)≈1{\displaystyle (1-\pi _{i})\approx 1}), then the above reduced to the following:V(y¯w)^=1N^2∑i=1n((1−πi)yi−y¯wπi)2=1(∑i=1nwi)2∑i=1nwi2(yi−y¯w)2.{\displaystyle {\widehat {V({\bar {y}}_{w})}}={\frac {1}{{\hat {N}}^{2}}}\sum _{i=1}^{n}\left((1-\pi _{i}){\frac {y_{i}-{\bar {y}}_{w}}{\pi _{i}}}\right)^{2}={\frac {1}{(\sum _{i=1}^{n}w_{i})^{2}}}\sum _{i=1}^{n}w_{i}^{2}(y_{i}-{\bar {y}}_{w})^{2}.} A similar re-creation of the proof (up to some mistakes at the end) was provided by Thomas Lumley in crossvalidated.[3] We have (at least) two versions of variance for the weighted mean: one with known and one with unknown population size estimation. There is no uniformly better approach, but the literature presents several arguments to prefer using the population estimation version (even when the population size is known).[2]: 188For example: if all y values are constant, the estimator with unknown population size will give the correct result, while the one with known population size will have some variability. Also, when the sample size itself is random (e.g.: inPoisson sampling), the version with unknown population mean is considered more stable. Lastly, if the proportion of sampling is negatively correlated with the values (i.e.: smaller chance to sample an observation that is large), then the un-known population size version slightly compensates for that. For the trivial case in which all the weights are equal to 1, the above formula is just like the regular formula for the variance of the mean (but notice that it uses the maximum likelihood estimator for the variance instead of the unbiased variance. I.e.: dividing it by n instead of (n-1)). It has been shown, by Gatz et al. (1995), that in comparison tobootstrappingmethods, the following (variance estimation of ratio-mean usingTaylor serieslinearization) is a reasonable estimation for the square of the standard error of the mean (when used in the context of measuring chemical constituents):[4]: 1186 wherew¯=∑win{\displaystyle {\bar {w}}={\frac {\sum w_{i}}{n}}}. Further simplification leads to Gatz et al. mention that the above formulation was published by Endlich et al. (1988) when treating the weighted mean as a combination of a weighted total estimator divided by an estimator of the population size,[5]based on the formulation published by Cochran (1977), as an approximation to the ratio mean. However, Endlich et al. didn't seem to publish this derivation in their paper (even though they mention they used it), and Cochran's book includes a slightly different formulation.[1]: 155Still, it's almost identical to the formulations described in previous sections. 
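The design-based quantities above can be put together in a short sketch: the π-estimator of the total, the ratio (Hájek-type) estimator of the mean, and the simplified variance approximation for uncorrelated selection with small inclusion probabilities. All numbers are illustrative.

    y  = [12.0, 7.5, 9.0, 14.0]       # sampled y values
    pi = [0.10, 0.05, 0.20, 0.10]     # inclusion probabilities pi_i
    N  = 60                           # population size, if known

    w = [1.0 / p for p in pi]         # inflation factors w_i = 1 / pi_i

    Y_hat = sum(wi * yi for wi, yi in zip(w, y))   # pi-estimator of the total
    N_hat = sum(w)                                 # estimated population size
    ybar_known = Y_hat / N                         # mean using the known N
    ybar_w     = Y_hat / N_hat                     # ratio (Hajek-type) estimator

    # Approximate variance of ybar_w for uncorrelated selection, small pi_i:
    v_hat = sum(wi**2 * (yi - ybar_w) ** 2 for wi, yi in zip(w, y)) / N_hat**2
    print(ybar_known, ybar_w, v_hat)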
Because there is no closed analytical form for the variance of the weighted mean, it was proposed in the literature to rely on replication methods such as theJackknifeandBootstrapping.[1]: 321 For uncorrelated observations with variancesσi2{\displaystyle \sigma _{i}^{2}}, the variance of the weighted sample mean is[citation needed] whose square rootσx¯{\displaystyle \sigma _{\bar {x}}}can be called thestandard error of the weighted mean (general case).[citation needed] Consequently, if all the observations have equal variance,σi2=σ02{\displaystyle \sigma _{i}^{2}=\sigma _{0}^{2}}, the weighted sample mean will have variance where1/n≤∑i=1nwi′2≤1{\textstyle 1/n\leq \sum _{i=1}^{n}{w_{i}'^{2}}\leq 1}. The variance attains its maximum value,σ02{\displaystyle \sigma _{0}^{2}}, when all weights except one are zero. Its minimum value is found when all weights are equal (i.e., unweighted mean), in which case we haveσx¯=σ0/n{\textstyle \sigma _{\bar {x}}=\sigma _{0}/{\sqrt {n}}}, i.e., it degenerates into thestandard error of the mean, squared. Because one can always transform non-normalized weights to normalized weights, all formulas in this section can be adapted to non-normalized weights by replacing allwi′=wi∑i=1nwi{\displaystyle w_{i}'={\frac {w_{i}}{\sum _{i=1}^{n}{w_{i}}}}}. Typically when a mean is calculated it is important to know thevarianceandstandard deviationabout that mean. When a weighted meanμ∗{\displaystyle \mu ^{*}}is used, the variance of the weighted sample is different from the variance of the unweighted sample. Thebiasedweightedsample varianceσ^w2{\displaystyle {\hat {\sigma }}_{\mathrm {w} }^{2}}is defined similarly to the normalbiasedsample varianceσ^2{\displaystyle {\hat {\sigma }}^{2}}: where∑i=1Nwi=1{\displaystyle \sum _{i=1}^{N}w_{i}=1}for normalized weights. If the weights arefrequency weights(and thus are random variables), it can be shown[citation needed]thatσ^w2{\displaystyle {\hat {\sigma }}_{\mathrm {w} }^{2}}is the maximum likelihood estimator ofσ2{\displaystyle \sigma ^{2}}foriidGaussian observations. For small samples, it is customary to use anunbiased estimatorfor the population variance. In normal unweighted samples, theNin the denominator (corresponding to the sample size) is changed toN− 1 (seeBessel's correction). In the weighted setting, there are actually two different unbiased estimators, one for the case offrequency weightsand another for the case ofreliability weights. If the weights arefrequency weights(where a weight equals the number of occurrences), then the unbiased estimator is: This effectively applies Bessel's correction for frequency weights. For example, if values{2,2,4,5,5,5}{\displaystyle \{2,2,4,5,5,5\}}are drawn from the same distribution, then we can treat this set as an unweighted sample, or we can treat it as the weighted sample{2,4,5}{\displaystyle \{2,4,5\}}with corresponding weights{2,1,3}{\displaystyle \{2,1,3\}}, and we get the same result either way. If the frequency weights{wi}{\displaystyle \{w_{i}\}}are normalized to 1, then the correct expression after Bessel's correction becomes where the total number of samples is∑i=1Nwi{\displaystyle \sum _{i=1}^{N}w_{i}}(notN{\displaystyle N}). In any case, the information on total number of samples is necessary in order to obtain an unbiased correction, even ifwi{\displaystyle w_{i}}has a different meaning other than frequency weight. 
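The frequency-weight case above can be checked numerically: treating {2, 4, 5} with weights {2, 1, 3} as a weighted sample gives the same unbiased variance as the expanded sample {2, 2, 4, 5, 5, 5}. A minimal sketch:

    import statistics

    values  = [2, 4, 5]
    weights = [2, 1, 3]               # frequency weights (numbers of occurrences)

    n = sum(weights)                  # total number of samples
    mean_w = sum(w * x for w, x in zip(weights, values)) / n
    var_w = sum(w * (x - mean_w) ** 2 for w, x in zip(weights, values)) / (n - 1)

    print(var_w)
    print(statistics.variance([2, 2, 4, 5, 5, 5]))   # same result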
The estimator can be unbiased only if the weights are notstandardizednornormalized, these processes changing the data's mean and variance and thus leading to aloss of the base rate(the population count, which is a requirement for Bessel's correction). If the weights are insteadreliability weights(non-random values reflecting the sample's relative trustworthiness, often derived from sample variance), we can determine a correction factor to yield an unbiased estimator. Assuming each random variable is sampled from the same distribution with meanμ{\displaystyle \mu }and actual varianceσactual2{\displaystyle \sigma _{\text{actual}}^{2}}, taking expectations we have, whereV1=∑i=1Nwi{\displaystyle V_{1}=\sum _{i=1}^{N}w_{i}}andV2=∑i=1Nwi2{\displaystyle V_{2}=\sum _{i=1}^{N}w_{i}^{2}}. Therefore, the bias in our estimator is(1−V2V12){\displaystyle \left(1-{\frac {V_{2}}{V_{1}^{2}}}\right)}, analogous to the(N−1N){\displaystyle \left({\frac {N-1}{N}}\right)}bias in the unweighted estimator (also notice thatV12/V2=Neff{\displaystyle \ V_{1}^{2}/V_{2}=N_{eff}}is theeffective sample size). This means that to unbias our estimator we need to pre-divide by1−(V2/V12){\displaystyle 1-\left(V_{2}/V_{1}^{2}\right)}, ensuring that the expected value of the estimated variance equals the actual variance of the sampling distribution. The final unbiased estimate of sample variance is: whereE⁡[sw2]=σactual2{\displaystyle \operatorname {E} [s_{\mathrm {w} }^{2}]=\sigma _{\text{actual}}^{2}}. The degrees of freedom of this weighted, unbiased sample variance vary accordingly fromN− 1 down to 0. The standard deviation is simply the square root of the variance above. As a side note, other approaches have been described to compute the weighted sample variance.[7] In a weighted sample, each row vectorxi{\displaystyle \mathbf {x} _{i}}(each set of single observations on each of theKrandom variables) is assigned a weightwi≥0{\displaystyle w_{i}\geq 0}. Then theweighted meanvectorμ∗{\displaystyle \mathbf {\mu ^{*}} }is given by And the weighted covariance matrix is given by:[8] Similarly to weighted sample variance, there are two different unbiased estimators depending on the type of the weights. If the weights arefrequency weights, theunbiasedweighted estimate of the covariance matrixC{\displaystyle \textstyle \mathbf {C} }, with Bessel's correction, is given by:[8] This estimator can be unbiased only if the weights are notstandardizednornormalized, these processes changing the data's mean and variance and thus leading to aloss of the base rate(the population count, which is a requirement for Bessel's correction). In the case ofreliability weights, the weights arenormalized: (If they are not, divide the weights by their sum to normalize prior to calculatingV1{\displaystyle V_{1}}: Then theweighted meanvectorμ∗{\displaystyle \mathbf {\mu ^{*}} }can be simplified to and theunbiasedweighted estimate of the covariance matrixC{\displaystyle \mathbf {C} }is:[9] The reasoning here is the same as in the previous section. Since we are assuming the weights are normalized, thenV1=1{\displaystyle V_{1}=1}and this reduces to: If all weights are the same, i.e.wi/V1=1/N{\displaystyle w_{i}/V_{1}=1/N}, then the weighted mean and covariance reduce to the unweighted sample mean and covariance above. The above generalizes easily to the case of taking the mean of vector-valued estimates. For example, estimates of position on a plane may have less certainty in one direction than another. 
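Before moving on to vector-valued estimates, here is a sketch of the reliability-weight correction just described, dividing the biased weighted variance by 1 − V2/V1²; the data and weights are illustrative.

    x = [3.1, 2.9, 3.4, 3.0]
    w = [0.5, 1.0, 0.8, 1.2]          # reliability weights (non-random)

    V1 = sum(w)
    V2 = sum(wi**2 for wi in w)
    mean_w = sum(wi * xi for wi, xi in zip(w, x)) / V1
    biased = sum(wi * (xi - mean_w) ** 2 for wi, xi in zip(w, x)) / V1
    unbiased = biased / (1 - V2 / V1**2)   # undo the bias factor derived above
    print(mean_w, unbiased)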
As in the scalar case, the weighted mean of multiple estimates can provide amaximum likelihoodestimate. We simply replace the varianceσ2{\displaystyle \sigma ^{2}}by thecovariance matrixC{\displaystyle \mathbf {C} }and thearithmetic inverseby thematrix inverse(both denoted in the same way, via superscripts); the weight matrix then reads:[10] Wi=Ci−1.{\displaystyle \mathbf {W} _{i}=\mathbf {C} _{i}^{-1}.} The weighted mean in this case is:x¯=Cx¯(∑i=1nWixi),{\displaystyle {\bar {\mathbf {x} }}=\mathbf {C} _{\bar {\mathbf {x} }}\left(\sum _{i=1}^{n}\mathbf {W} _{i}\mathbf {x} _{i}\right),}(where the order of thematrix–vector productis notcommutative), in terms of the covariance of the weighted mean:Cx¯=(∑i=1nWi)−1,{\displaystyle \mathbf {C} _{\bar {\mathbf {x} }}=\left(\sum _{i=1}^{n}\mathbf {W} _{i}\right)^{-1},} For example, consider the weighted mean of the point [1 0] with high variance in the second component and [0 1] with high variance in the first component. Then then the weighted mean is: which makes sense: the [1 0] estimate is "compliant" in the second component and the [0 1] estimate is compliant in the first component, so the weighted mean is nearly [1 1]. In the general case, suppose thatX=[x1,…,xn]T{\displaystyle \mathbf {X} =[x_{1},\dots ,x_{n}]^{T}},C{\displaystyle \mathbf {C} }is thecovariance matrixrelating the quantitiesxi{\displaystyle x_{i}},x¯{\displaystyle {\bar {x}}}is the common mean to be estimated, andJ{\displaystyle \mathbf {J} }is adesign matrixequal to avector of ones[1,…,1]T{\displaystyle [1,\dots ,1]^{T}}(of lengthn{\displaystyle n}). TheGauss–Markov theoremstates that the estimate of the mean having minimum variance is given by: and where: Consider the time series of an independent variablex{\displaystyle x}and a dependent variabley{\displaystyle y}, withn{\displaystyle n}observations sampled at discrete timesti{\displaystyle t_{i}}. In many common situations, the value ofy{\displaystyle y}at timeti{\displaystyle t_{i}}depends not only onxi{\displaystyle x_{i}}but also on its past values. Commonly, the strength of this dependence decreases as the separation of observations in time increases. To model this situation, one may replace the independent variable by its sliding meanz{\displaystyle z}for a window sizem{\displaystyle m}. In the scenario described in the previous section, most frequently the decrease in interaction strength obeys a negative exponential law. If the observations are sampled at equidistant times, then exponential decrease is equivalent to decrease by a constant fraction0<Δ<1{\displaystyle 0<\Delta <1}at each time step. Settingw=1−Δ{\displaystyle w=1-\Delta }we can definem{\displaystyle m}normalized weights by whereV1{\displaystyle V_{1}}is the sum of the unnormalized weights. In this caseV1{\displaystyle V_{1}}is simply approachingV1=1/(1−w){\displaystyle V_{1}=1/(1-w)}for large values ofm{\displaystyle m}. The damping constantw{\displaystyle w}must correspond to the actual decrease of interaction strength. If this cannot be determined from theoretical considerations, then the following properties of exponentially decreasing weights are useful in making a suitable choice: at step(1−w)−1{\displaystyle (1-w)^{-1}}, the weight approximately equalse−1(1−w)=0.39(1−w){\displaystyle {e^{-1}}(1-w)=0.39(1-w)}, the tail area the valuee−1{\displaystyle e^{-1}}, the head area1−e−1=0.61{\displaystyle {1-e^{-1}}=0.61}. The tail area at stepn{\displaystyle n}is≤e−n(1−w){\displaystyle \leq {e^{-n(1-w)}}}. 
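Returning to the [1 0] / [0 1] example above, a minimal NumPy sketch of the inverse-covariance weighted mean; the variance values 0.01 and 100 are illustrative stand-ins for "low" and "high".

    import numpy as np

    # Each estimate is precise in one component and uncertain in the other.
    x1, C1 = np.array([1.0, 0.0]), np.diag([0.01, 100.0])
    x2, C2 = np.array([0.0, 1.0]), np.diag([100.0, 0.01])

    W1, W2 = np.linalg.inv(C1), np.linalg.inv(C2)    # weight matrices W_i = C_i^-1
    C_bar = np.linalg.inv(W1 + W2)                   # covariance of the weighted mean
    x_bar = C_bar @ (W1 @ x1 + W2 @ x2)

    print(x_bar)    # close to [1, 1], as argued in the text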
Where primarily the closestn{\displaystyle n}observations matter and the effect of the remaining observations can be ignored safely, then choosew{\displaystyle w}such that the tail area is sufficiently small. The concept of weighted average can be extended to functions.[11]Weighted averages of functions play an important role in the systems of weighted differential and integral calculus.[12] Weighted means are typically used to find the weighted mean of historical data, rather than theoretically generated data. In this case, there will be some error in the variance of each data point. Typically experimental errors may be underestimated due to the experimenter not taking into account all sources of error in calculating the variance of each data point. In this event, the variance in the weighted mean must be corrected to account for the fact thatχ2{\displaystyle \chi ^{2}}is too large. The correction that must be made is whereχν2{\displaystyle \chi _{\nu }^{2}}is thereduced chi-squared: The square rootσ^x¯{\displaystyle {\hat {\sigma }}_{\bar {x}}}can be called thestandard error of the weighted mean (variance weights, scale corrected). When all data variances are equal,σi=σ0{\displaystyle \sigma _{i}=\sigma _{0}}, they cancel out in the weighted mean variance,σx¯2{\displaystyle \sigma _{\bar {x}}^{2}}, which again reduces to thestandard error of the mean(squared),σx¯2=σ2/n{\displaystyle \sigma _{\bar {x}}^{2}=\sigma ^{2}/n}, formulated in terms of thesample standard deviation(squared),
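A sketch of the scale correction just described, assuming the usual form of the reduced chi-squared, χ²_ν = Σ(x_i − x̄)²/σ_i² divided by (n − 1); the measurements and quoted uncertainties are illustrative.

    import math

    x     = [10.2, 9.8, 10.6, 9.5]
    sigma = [0.2, 0.2, 0.3, 0.2]      # possibly underestimated uncertainties

    w = [1.0 / s**2 for s in sigma]
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    var_xbar = 1.0 / sum(w)

    # reduced chi-squared of the fit to a constant mean
    chi2_nu = sum((xi - xbar) ** 2 / s**2 for xi, s in zip(x, sigma)) / (len(x) - 1)
    se_corrected = math.sqrt(var_xbar * chi2_nu)
    print(xbar, math.sqrt(var_xbar), se_corrected)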
https://en.wikipedia.org/wiki/Weighted_mean
In mathematics, afunctionon thereal numbersis called astep functionif it can be written as afinitelinear combinationofindicator functionsofintervals. Informally speaking, a step function is apiecewiseconstant functionhaving only finitely many pieces. A functionf:R→R{\displaystyle f\colon \mathbb {R} \rightarrow \mathbb {R} }is called astep functionif it can be written as[citation needed] wheren≥0{\displaystyle n\geq 0},αi{\displaystyle \alpha _{i}}are real numbers,Ai{\displaystyle A_{i}}are intervals, andχA{\displaystyle \chi _{A}}is theindicator functionofA{\displaystyle A}: In this definition, the intervalsAi{\displaystyle A_{i}}can be assumed to have the following two properties: Indeed, if that is not the case to start with, a different set of intervals can be picked for which these assumptions hold. For example, the step function can be written as Sometimes, the intervals are required to be right-open[1]or allowed to be singleton.[2]The condition that the collection of intervals must be finite is often dropped, especially in school mathematics,[3][4][5]though it must still belocally finite, resulting in the definition of piecewise constant functions.
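A step function can be evaluated directly from its definition as a finite linear combination of interval indicator functions; the coefficients and intervals in the sketch below are arbitrary illustrations.

    # f(x) = sum_i alpha_i * chi_{A_i}(x) for finitely many intervals A_i.
    def indicator(a, b):
        """Indicator of the half-open interval [a, b)."""
        return lambda x: 1.0 if a <= x < b else 0.0

    pieces = [(4.0, indicator(0.0, 1.0)),     # (alpha_i, chi_{A_i})
              (-2.0, indicator(1.0, 3.0)),
              (1.5, indicator(3.0, 5.0))]

    def f(x):
        return sum(alpha * chi(x) for alpha, chi in pieces)

    print([f(v) for v in (0.5, 2.0, 4.0, 6.0)])   # [4.0, -2.0, 1.5, 0.0]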
https://en.wikipedia.org/wiki/Step_function
TheHeaviside step function, or theunit step function, usually denoted byHorθ(but sometimesu,1or𝟙), is astep functionnamed afterOliver Heaviside, the value of which iszerofor negative arguments andonefor positive arguments. Different conventions concerning the valueH(0)are in use. It is an example of the general class of step functions, all of which can be represented aslinear combinationsof translations of this one. The function was originally developed inoperational calculusfor the solution ofdifferential equations, where it represents a signal that switches on at a specified time and stays switched on indefinitely. Heaviside developed the operational calculus as a tool in the analysis of telegraphic communications and represented the function as1. Taking the convention thatH(0) = 1, the Heaviside function may be defined as: For the alternative convention thatH(0) =⁠1/2⁠, it may be expressed as: Other definitions which are undefined atH(0)include: H(x)=x+|x|2x{\displaystyle H(x)={\frac {x+|x|}{2x}}} TheDirac delta functionis theweak derivativeof the Heaviside function:δ(x)=ddxH(x).{\displaystyle \delta (x)={\frac {d}{dx}}H(x).}Hence the Heaviside function can be considered to be theintegralof the Dirac delta function. This is sometimes written asH(x):=∫−∞xδ(s)ds{\displaystyle H(x):=\int _{-\infty }^{x}\delta (s)\,ds}although this expansion may not hold (or even make sense) forx= 0, depending on which formalism one uses to give meaning to integrals involvingδ. In this context, the Heaviside function is thecumulative distribution functionof arandom variablewhich isalmost surely0. (SeeConstant random variable.) Approximations to the Heaviside step function are of use inbiochemistryandneuroscience, wherelogisticapproximations of step functions (such as theHilland theMichaelis–Menten equations) may be used to approximate binary cellular switches in response to chemical signals. For asmoothapproximation to the step function, one can use thelogistic functionH(x)≈12+12tanh⁡kx=11+e−2kx,{\displaystyle H(x)\approx {\tfrac {1}{2}}+{\tfrac {1}{2}}\tanh kx={\frac {1}{1+e^{-2kx}}},} where a largerkcorresponds to a sharper transition atx= 0. If we takeH(0) =⁠1/2⁠, equality holds in the limit:H(x)=limk→∞12(1+tanh⁡kx)=limk→∞11+e−2kx.{\displaystyle H(x)=\lim _{k\to \infty }{\tfrac {1}{2}}(1+\tanh kx)=\lim _{k\to \infty }{\frac {1}{1+e^{-2kx}}}.} There aremany other smooth, analytic approximationsto the step function.[1]Among the possibilities are:H(x)=limk→∞(12+1πarctan⁡kx)H(x)=limk→∞(12+12erf⁡kx){\displaystyle {\begin{aligned}H(x)&=\lim _{k\to \infty }\left({\tfrac {1}{2}}+{\tfrac {1}{\pi }}\arctan kx\right)\\H(x)&=\lim _{k\to \infty }\left({\tfrac {1}{2}}+{\tfrac {1}{2}}\operatorname {erf} kx\right)\end{aligned}}} These limits holdpointwiseand in the sense ofdistributions. In general, however, pointwise convergence need not imply distributional convergence, and vice versa distributional convergence need not imply pointwise convergence. (However, if all members of a pointwise convergent sequence of functions are uniformly bounded by some "nice" function, thenconvergence holds in the sense of distributions too.) In general, anycumulative distribution functionof acontinuousprobability distributionthat is peaked around zero and has a parameter that controls forvariancecan serve as an approximation, in the limit as the variance approaches zero. 
For example, all three of the above approximations arecumulative distribution functionsof common probability distributions: thelogistic,Cauchyandnormaldistributions, respectively. Approximations to the Heaviside step function could be made throughSmooth transition functionlike1≤m→∞{\displaystyle 1\leq m\to \infty }:f(x)={12(1+tanh⁡(m2x1−x2)),|x|<11,x≥10,x≤−1{\displaystyle {\begin{aligned}f(x)&={\begin{cases}{\displaystyle {\frac {1}{2}}\left(1+\tanh \left(m{\frac {2x}{1-x^{2}}}\right)\right)},&|x|<1\\\\1,&x\geq 1\\0,&x\leq -1\end{cases}}\end{aligned}}} Often anintegralrepresentation of the Heaviside step function is useful:H(x)=limε→0+−12πi∫−∞∞1τ+iεe−ixτdτ=limε→0+12πi∫−∞∞1τ−iεeixτdτ.{\displaystyle {\begin{aligned}H(x)&=\lim _{\varepsilon \to 0^{+}}-{\frac {1}{2\pi i}}\int _{-\infty }^{\infty }{\frac {1}{\tau +i\varepsilon }}e^{-ix\tau }d\tau \\&=\lim _{\varepsilon \to 0^{+}}{\frac {1}{2\pi i}}\int _{-\infty }^{\infty }{\frac {1}{\tau -i\varepsilon }}e^{ix\tau }d\tau .\end{aligned}}} where the second representation is easy to deduce from the first, given that the step function is real and thus is its own complex conjugate. SinceHis usually used in integration, and the value of a function at a single point does not affect its integral, it rarely matters what particular value is chosen ofH(0). Indeed whenHis considered as adistributionor an element ofL∞(seeLpspace) it does not even make sense to talk of a value at zero, since such objects are only definedalmost everywhere. If using some analytic approximation (as in theexamples above) then often whatever happens to be the relevant limit at zero is used. There exist various reasons for choosing a particular value. Also, H(x) + H(-x) = 1 for all x. An alternative form of the unit step, defined instead as a functionH:Z→R{\displaystyle H:\mathbb {Z} \rightarrow \mathbb {R} }(that is, taking in a discrete variablen), is: H[n]={0,n<0,1,n≥0,{\displaystyle H[n]={\begin{cases}0,&n<0,\\1,&n\geq 0,\end{cases}}} or using the half-maximum convention:[2] H[n]={0,n<0,12,n=0,1,n>0,{\displaystyle H[n]={\begin{cases}0,&n<0,\\{\tfrac {1}{2}},&n=0,\\1,&n>0,\end{cases}}} wherenis aninteger. Ifnis an integer, thenn< 0must imply thatn≤ −1, whilen> 0must imply that the function attains unity atn= 1. Therefore the "step function" exhibits ramp-like behavior over the domain of[−1, 1], and cannot authentically be a step function, using the half-maximum convention. Unlike the continuous case, the definition ofH[0]is significant. The discrete-time unit impulse is the first difference of the discrete-time step δ[n]=H[n]−H[n−1].{\displaystyle \delta [n]=H[n]-H[n-1].} This function is the cumulative summation of theKronecker delta: H[n]=∑k=−∞nδ[k]{\displaystyle H[n]=\sum _{k=-\infty }^{n}\delta [k]} where δ[k]=δk,0{\displaystyle \delta [k]=\delta _{k,0}} is thediscrete unit impulse function. Theramp functionis anantiderivativeof the Heaviside step function:∫−∞xH(ξ)dξ=xH(x)=max{0,x}.{\displaystyle \int _{-\infty }^{x}H(\xi )\,d\xi =xH(x)=\max\{0,x\}\,.} Thedistributional derivativeof the Heaviside step function is theDirac delta function:dH(x)dx=δ(x).{\displaystyle {\frac {dH(x)}{dx}}=\delta (x)\,.} TheFourier transformof the Heaviside step function is a distribution. 
Using one choice of constants for the definition of the Fourier transform we haveH^(s)=limN→∞∫−NNe−2πixsH(x)dx=12(δ(s)−iπp.v.⁡1s).{\displaystyle {\hat {H}}(s)=\lim _{N\to \infty }\int _{-N}^{N}e^{-2\pi ixs}H(x)\,dx={\frac {1}{2}}\left(\delta (s)-{\frac {i}{\pi }}\operatorname {p.v.} {\frac {1}{s}}\right).} Herep.v.⁠1/s⁠is thedistributionthat takes a test functionφto theCauchy principal valueof∫−∞∞φ(s)sds{\displaystyle \textstyle \int _{-\infty }^{\infty }{\frac {\varphi (s)}{s}}\,ds}. The limit appearing in the integral is also taken in the sense of (tempered) distributions. TheLaplace transformof the Heaviside step function is ameromorphic function. Using the unilateral Laplace transform we have:H^(s)=limN→∞∫0Ne−sxH(x)dx=limN→∞∫0Ne−sxdx=1s{\displaystyle {\begin{aligned}{\hat {H}}(s)&=\lim _{N\to \infty }\int _{0}^{N}e^{-sx}H(x)\,dx\\&=\lim _{N\to \infty }\int _{0}^{N}e^{-sx}\,dx\\&={\frac {1}{s}}\end{aligned}}} When the bilateral transform is used, the integral can be split in two parts and the result will be the same.
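A minimal sketch of the Heaviside step with the half-maximum convention H(0) = 1/2, together with the logistic approximation 1/(1 + e^(−2kx)) given earlier; larger k gives a sharper transition at x = 0.

    import math

    def heaviside(x):
        return 0.0 if x < 0 else (0.5 if x == 0 else 1.0)

    def heaviside_logistic(x, k=10.0):
        return 1.0 / (1.0 + math.exp(-2.0 * k * x))

    for v in (-1.0, -0.01, 0.0, 0.01, 1.0):
        print(v, heaviside(v), round(heaviside_logistic(v), 4))
    # In the limit of large k the approximation matches H with H(0) = 1/2.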
https://en.wikipedia.org/wiki/Heaviside_step_function
In the context ofartificial neural networks, therectifierorReLU (rectified linear unit) activation function[1][2]is anactivation functiondefined as the non-negative part of its argument, i.e., theramp function: wherex{\displaystyle x}is the input to aneuron. This is analogous tohalf-wave rectificationinelectrical engineering. ReLU is one of the most popular activation functions for artificial neural networks,[3]and finds application incomputer vision[4]andspeech recognition[5][6]usingdeep neural netsandcomputational neuroscience.[7][8] The ReLU was first used byAlston Householderin 1941 as a mathematical abstraction of biological neural networks.[9] Kunihiko Fukushimain 1969 used ReLU in the context of visual feature extraction in hierarchical neural networks.[10][11]30 years later, Hahnloser et al. argued that ReLU approximates the biological relationship between neural firing rates and input current, in addition to enabling recurrent neural network dynamics to stabilise under weaker criteria.[12][13] Prior to 2010, most activation functions used were thelogistic sigmoid(which is inspired byprobability theory; seelogistic regression) and its more numerically efficient[14]counterpart, thehyperbolic tangent. Around 2010, the use of ReLU became common again. Jarrett et al. (2009) noted that rectification by eitherabsoluteor ReLU (which they called "positive part") was critical for object recognition in convolutional neural networks (CNNs), specifically because it allowsaverage poolingwithout neighboring filter outputs cancelling each other out. They hypothesized that the use of sigmoid or tanh was responsible for poor performance in previous CNNs.[15] Nair and Hinton (2010) made a theoretical argument that thesoftplusactivation function should be used, in that the softplus function numerically approximates the sum of an exponential number of linear models that share parameters. They then proposed ReLU as a good approximation to it. Specifically, they began by considering a single binary neuron in aBoltzmann machinethat takesx{\displaystyle x}as input, and produces 1 as output with probabilityσ(x)=11+e−x{\displaystyle \sigma (x)={\frac {1}{1+e^{-x}}}}. They then considered extending its range of output by making infinitely many copies of itX1,X2,X3,…{\displaystyle X_{1},X_{2},X_{3},\dots }, that all take the same input, offset by an amount0.5,1.5,2.5,…{\displaystyle 0.5,1.5,2.5,\dots }, then their outputs are added together as∑i=1∞Xi{\displaystyle \sum _{i=1}^{\infty }X_{i}}. They then demonstrated that∑i=1∞Xi{\displaystyle \sum _{i=1}^{\infty }X_{i}}is approximately equal toN(log⁡(1+ex),σ(x)){\displaystyle {\mathcal {N}}(\log(1+e^{x}),\sigma (x))}, which is also approximately equal toReLU⁡(N(x,σ(x))){\displaystyle \operatorname {ReLU} ({\mathcal {N}}(x,\sigma (x)))}, whereN{\displaystyle {\mathcal {N}}}stands for thegaussian distribution. They also argued for another reason for using ReLU: that it allows "intensity equivariance" in image recognition. That is, multiplying input image by a constantk{\displaystyle k}multiplies the output also. In contrast, this is false for other activation functions like sigmoid or tanh. They found that ReLU activation allowed good empirical performance inrestricted Boltzmann machines.[16] Glorot et al (2011) argued that ReLU has the following advantages over sigmoid or tanh. ReLU is more similar to biological neurons' responses in their main operating regime. ReLU avoids vanishing gradients. ReLU is cheaper to compute. 
ReLU creates sparse representation naturally, because many hidden units output exactly zero for a given input. They also found empirically that deep networks trained with ReLU can achieve strong performancewithoutunsupervised pre-training, especially on large, purely supervised tasks.[4] Advantages of ReLU include: Possible downsides can include: Leaky ReLUallows a small, positive gradient when the unit is inactive,[6]helping to mitigate the vanishing gradient problem. This gradient is defined by a parameterα{\displaystyle \alpha }, typically set to 0.01–0.3.[17][18] The same function can also be expressed without the piecewise notation as: Parametric ReLU (PReLU)takes this idea further by makingα{\displaystyle \alpha }a learnable parameter along with the other network parameters.[19] Note that forα≤1{\displaystyle \alpha \leq 1}, this is equivalent to and thus has a relation to "maxout" networks.[19] Concatenated ReLU (CReLU)preserves positive and negative phase information:[20] ExtendeD Exponential Linear Unit (DELU) is an activation function which is smoother within the neighborhood of zero and sharper for bigger values, allowing better allocation of neurons in the learning process for higher performance. Thanks to its unique design, it has been shown that DELU may obtain higher classification accuracy than ReLU and ELU.[21] In these formulas,a{\displaystyle a},b{\displaystyle b}andxc{\displaystyle x_{c}}arehyperparametervalues which could be set as default constraintsa=1{\displaystyle a=1},b=2{\displaystyle b=2}andxc=1.25643{\displaystyle x_{c}=1.25643}, as done in the original work. GELU is a smooth approximation to the rectifier: whereΦ(x)=P(X⩽x){\displaystyle \Phi (x)=P(X\leqslant x)}is thecumulative distribution functionof the standardnormal distribution. This activation function is illustrated in the figure at the start of this article. It has a "bump" to the left ofx< 0 and serves as the default activation for models such asBERT.[22] The SiLU (sigmoid linear unit) orswish function[23]is another smooth approximation which uses thesigmoid function, first introduced in the GELU paper:[22] A smooth approximation to the rectifier is theanalytic function which is called thesoftplus[24][4]orSmoothReLUfunction.[25]For large negativex{\displaystyle x}it is roughlyln⁡1{\displaystyle \ln 1}, so just above 0, while for large positivex{\displaystyle x}it is roughlyln⁡(ex){\displaystyle \ln(e^{x})}, so just abovex{\displaystyle x}. This function can be approximated as: By making the change of variablesx=yln⁡(2){\displaystyle x=y\ln(2)}, this is equivalent to A sharpness parameterk{\displaystyle k}may be included: The derivative of softplus is thelogistic function. The logisticsigmoid functionis a smooth approximation of the derivative of the rectifier, theHeaviside step function. The multivariable generalization of single-variable softplus is theLogSumExpwith the first argument set to zero: The LogSumExp function is and its gradient is thesoftmax; the softmax with the first argument set to zero is the multivariable generalization of the logistic function. Both LogSumExp and softmax are used in machine learning. Exponential linear units try to make the mean activations closer to zero, which speeds up learning. It has been shown that ELUs can obtain higher classification accuracy than ReLUs.[26] In these formulas,α{\displaystyle \alpha }is ahyperparameterto be tuned with the constraintα≥0{\displaystyle \alpha \geq 0}. 
Given the same interpretation ofα{\displaystyle \alpha }, ELU can be viewed as a smoothed version of a shifted ReLU (SReLU), which has the formf(x)=max(−α,x){\displaystyle f(x)=\max(-\alpha ,x)}. The mish function can also be used as a smooth approximation of the rectifier.[23]It is defined as wheretanh⁡(x){\displaystyle \tanh(x)}is thehyperbolic tangent, andsoftplus⁡(x){\displaystyle \operatorname {softplus} (x)}is thesoftplusfunction. Mish is non-monotonicandself-gated.[27]It was inspired bySwish, itself a variant ofReLU.[27] Squareplus[28]is the function whereb≥0{\displaystyle b\geq 0}is a hyperparameter that determines the "size" of the curved region nearx=0{\displaystyle x=0}. (For example, lettingb=0{\displaystyle b=0}yields ReLU, and lettingb=4{\displaystyle b=4}yields themetallic meanfunction.) Squareplus shares many properties with softplus: It ismonotonic, strictlypositive, approaches 0 asx→−∞{\displaystyle x\to -\infty }, approaches the identity asx→+∞{\displaystyle x\to +\infty }, and isC∞{\displaystyle C^{\infty }}smooth. However, squareplus can be computed using onlyalgebraic functions, making it well-suited for settings where computational resources or instruction sets are limited. Additionally, squareplus requires no special consideration to ensure numerical stability whenx{\displaystyle x}is large.
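The rectifier and several of the smooth variants discussed above can be written in a few lines; the sketch below uses a numerically stable form of softplus and the erf-based form of GELU, with illustrative inputs.

    import math

    def relu(x):               return max(0.0, x)
    def leaky_relu(x, a=0.01): return x if x >= 0 else a * x
    def softplus(x):           # log(1 + e^x), written to avoid overflow
        return math.log1p(math.exp(-abs(x))) + max(x, 0.0)
    def gelu(x):               # x * Phi(x) with the standard normal CDF
        return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
    def elu(x, alpha=1.0):     return x if x >= 0 else alpha * (math.exp(x) - 1.0)

    for v in (-2.0, -0.5, 0.0, 0.5, 2.0):
        print(v, relu(v), round(leaky_relu(v), 3), round(softplus(v), 3),
              round(gelu(v), 3), round(elu(v), 3))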
https://en.wikipedia.org/wiki/Softplus_function
TheSoboleva modified hyperbolic tangent, also known as(parametric) Soboleva modified hyperbolic tangent activation function([P]SMHTAF),[nb 1]is a specialS-shapedfunctionbased on thehyperbolic tangent, given by This function was originally proposed as "modified hyperbolic tangent"[nb 1]byUkrainianscientist Elena V. Soboleva (Елена В. Соболева) as a utility function formulti-objective optimizationandchoice modellingindecision-making.[1][2][3] The function has since been introduced intoneural networktheory and practice.[4] It was also used in economics for modelling consumption and investment,[5]to approximate current-voltage characteristics offield-effect transistorsandlight-emitting diodes,[6]to designantenna feeders,[7][predatory publisher]and analyzeplasmatemperatures and densities in thedivertorregion offusion reactors.[8] Derivative of the function is defined by the formula: smht′⁡(x)≐aeax+be−bxecx+e−dx−smht⁡(x)cecx−de−dxecx+e−dx{\displaystyle \operatorname {smht} '(x)\doteq {\frac {ae^{ax}+be^{-bx}}{e^{cx}+e^{-dx}}}-\operatorname {smht} (x){\frac {ce^{cx}-de^{-dx}}{e^{cx}+e^{-dx}}}} The following conditions are keeping the function limited ony-axes:a≤c,b≤d. A family of recurrence-generated parametric Soboleva modified hyperbolic tangent activation functions (NPSMHTAF, FPSMHTAF) was studied with parametersa=candb=d.[9]It is worth noting that in this case, the function is not sensitive to flipping the left and right-sides parameters: The function is sensitive to ratio of the denominator coefficients and often is used without coefficients in the numerator: Extremum estimates:xmin=12ln⁡β−1β+1;{\displaystyle x_{\min }={\frac {1}{2}}\ln {\frac {\beta -1}{\beta +1}};}xmax=12ln⁡α+1α−1.{\displaystyle x_{\max }={\frac {1}{2}}\ln {\frac {\alpha +1}{\alpha -1}}.} With parametersa=b=c=d= 1 the modified hyperbolic tangent function reduces to the conventionaltanh(x) function, whereas fora=b= 1 andc=d= 0, the term becomes equal tosinh(x).
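A sketch of the function, reconstructed from the derivative and the limiting cases quoted above (a = b = c = d = 1 recovers tanh, and c = d = 0 recovers sinh): smht(x) = (e^(ax) − e^(−bx)) / (e^(cx) + e^(−dx)).

    import math

    def smht(x, a=1.0, b=1.0, c=1.0, d=1.0):
        # Soboleva modified hyperbolic tangent with four shape parameters.
        return (math.exp(a * x) - math.exp(-b * x)) / (math.exp(c * x) + math.exp(-d * x))

    print(smht(0.7), math.tanh(0.7))                 # a=b=c=d=1 recovers tanh
    print(smht(0.7, 1, 1, 0, 0), math.sinh(0.7))     # c=d=0 recovers sinh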
https://en.wikipedia.org/wiki/Soboleva_modified_hyperbolic_tangent
Theswish functionis a family ofmathematical functiondefined as follows: whereβ{\displaystyle \beta }can be constant (usually set to 1) ortrainable. The swish family was designed to smoothlyinterpolatebetween a linear function and the ReLU function. When considering positive values, Swish is a particular case of doubly parameterized sigmoid shrinkage function defined in[2]: Eq 3. Variants of the swish function includeMish.[3] For β = 0, the function is linear: f(x) =x/2. For β = 1, the function is theSigmoid Linear Unit(SiLU). With β → ∞, the function converges toReLU. Thus, the swish family smoothlyinterpolatesbetween a linear function and the ReLU function.[1] Sinceswishβ⁡(x)=swish1⁡(βx)/β{\displaystyle \operatorname {swish} _{\beta }(x)=\operatorname {swish} _{1}(\beta x)/\beta }, all instances of swish have the same shape as the defaultswish1{\displaystyle \operatorname {swish} _{1}}, zoomed byβ{\displaystyle \beta }. One usually setsβ>0{\displaystyle \beta >0}. Whenβ{\displaystyle \beta }is trainable, this constraint can be enforced byβ=eb{\displaystyle \beta =e^{b}}, whereb{\displaystyle b}is trainable. swish1⁡(x)=x2+x24−x448+x6480+O(x8){\displaystyle \operatorname {swish} _{1}(x)={\frac {x}{2}}+{\frac {x^{2}}{4}}-{\frac {x^{4}}{48}}+{\frac {x^{6}}{480}}+O\left(x^{8}\right)} swish1⁡(x)=x2tanh⁡(x2)+x2swish1⁡(x)+swish−1⁡(x)=xtanh⁡(x2)swish1⁡(x)−swish−1⁡(x)=x{\displaystyle {\begin{aligned}\operatorname {swish} _{1}(x)&={\frac {x}{2}}\tanh \left({\frac {x}{2}}\right)+{\frac {x}{2}}\\\operatorname {swish} _{1}(x)+\operatorname {swish} _{-1}(x)&=x\tanh \left({\frac {x}{2}}\right)\\\operatorname {swish} _{1}(x)-\operatorname {swish} _{-1}(x)&=x\end{aligned}}} Becauseswishβ⁡(x)=swish1⁡(βx)/β{\displaystyle \operatorname {swish} _{\beta }(x)=\operatorname {swish} _{1}(\beta x)/\beta }, it suffices to calculate its derivatives for the default case.swish1′⁡(x)=x+sinh⁡(x)4cosh2⁡(x2)+12{\displaystyle \operatorname {swish} _{1}'(x)={\frac {x+\sinh(x)}{4\cosh ^{2}\left({\frac {x}{2}}\right)}}+{\frac {1}{2}}}soswish1′⁡(x)−12{\displaystyle \operatorname {swish} _{1}'(x)-{\frac {1}{2}}}is odd.swish1″⁡(x)=1−x2tanh⁡(x2)2cosh2⁡(x2){\displaystyle \operatorname {swish} _{1}''(x)={\frac {1-{\frac {x}{2}}\tanh \left({\frac {x}{2}}\right)}{2\cosh ^{2}\left({\frac {x}{2}}\right)}}}soswish1″⁡(x){\displaystyle \operatorname {swish} _{1}''(x)}is even. SiLU was first proposed alongside theGELUin 2016,[4]then again proposed in 2017 as theSigmoid-weighted Linear Unit(SiL) inreinforcement learning.[5][1]The SiLU/SiL was then again proposed as the SWISH over a year after its initial discovery, originally proposed without the learnable parameter β, so that β implicitly equaled 1. The swish paper was then updated to propose the activation with the learnable parameter β. In 2017, after performing analysis onImageNetdata, researchers fromGoogleindicated that using this function as anactivation functioninartificial neural networksimproves the performance, compared to ReLU and sigmoid functions.[1]It is believed that one reason for the improvement is that the swish function helps alleviate thevanishing gradient problemduringbackpropagation.[6]
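A minimal sketch of the swish family, swish_β(x) = x · sigmoid(βx), consistent with the special cases listed above (β = 0 gives x/2, β = 1 gives the SiLU, large β approaches ReLU):

    import math

    def swish(x, beta=1.0):
        return x / (1.0 + math.exp(-beta * x))

    for v in (-2.0, -0.5, 0.0, 0.5, 2.0):
        # beta = 0 (linear), beta = 1 (SiLU), large beta (close to ReLU)
        print(v, round(swish(v, 0.0), 3), round(swish(v, 1.0), 3), round(swish(v, 50.0), 3))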
https://en.wikipedia.org/wiki/Swish_function
Inprobability theoryandstatistics, theWeibull distribution/ˈwaɪbʊl/is a continuousprobability distribution. It models a broad range of random variables, largely in the nature of a time to failure or time between events. Examples are maximum one-day rainfalls and the time a user spends on a web page. The distribution is named after Swedish mathematicianWaloddi Weibull, who described it in detail in 1939,[1][2]although it was first identified byRené Maurice Fréchetand first applied byRosin & Rammler (1933)to describe aparticle size distribution. Theprobability density functionof a Weibullrandom variableis[3][4] wherek> 0 is theshape parameterand λ > 0 is thescale parameterof the distribution. Itscomplementary cumulative distribution functionis astretched exponential function. The Weibull distribution is related to a number of other probability distributions; in particular, itinterpolatesbetween theexponential distribution(k= 1) and theRayleigh distribution(k= 2 andλ=2σ{\displaystyle \lambda ={\sqrt {2}}\sigma }).[5] If the quantity,x,is a "time-to-failure", the Weibull distribution gives a distribution for which thefailure rateis proportional to a power of time. Theshapeparameter,k, is that power plus one, and so this parameter can be interpreted directly as follows:[6] In the field ofmaterials science, the shape parameterkof a distribution of strengths is known as theWeibull modulus. In the context ofdiffusion of innovations, the Weibull distribution is a "pure" imitation/rejection model. Applications inmedical statisticsandeconometricsoften adopt a different parameterization.[8][9]The shape parameterkis the same as above, while the scale parameter isb=λ−k{\displaystyle b=\lambda ^{-k}}. In this case, forx≥ 0, the probability density function is the cumulative distribution function is the quantile function is the hazard function is and the mean is A second alternative parameterization can also be found.[10][11]The shape parameterkis the same as in the standard case, while the scale parameterλis replaced with a rate parameterβ= 1/λ. Then, forx≥ 0, the probability density function is the cumulative distribution function is the quantile function is and the hazard function is In all three parameterizations, the hazard is decreasing for k < 1, increasing for k > 1 and constant for k = 1, in which case the Weibull distribution reduces to an exponential distribution. The form of the density function of the Weibull distribution changes drastically with the value ofk. For 0 <k< 1, the density function tends to ∞ asxapproaches zero from above and is strictly decreasing. Fork= 1, the density function tends to 1/λasxapproaches zero from above and is strictly decreasing. Fork> 1, the density function tends to zero asxapproaches zero from above, increases until its mode and decreases after it. The density function has infinite negative slope atx= 0 if 0 <k< 1, infinite positive slope atx= 0 if 1 <k< 2 and null slope atx= 0 ifk> 2. Fork= 1 the density has a finite negative slope atx= 0. Fork= 2 the density has a finite positive slope atx= 0. Askgoes to infinity, the Weibull distribution converges to aDirac delta distributioncentered atx= λ. Moreover, the skewness and coefficient of variation depend only on the shape parameter. A generalization of the Weibull distribution is thehyperbolastic distribution of type III. Thecumulative distribution functionfor the Weibull distribution is forx≥ 0, andF(x;k; λ) = 0 forx< 0. Ifx= λ thenF(x;k; λ) = 1 −e−1≈ 0.632 for all values ofk. 
Vice versa: atF(x;k;λ) = 0.632 the value ofx≈λ. The quantile (inverse cumulative distribution) function for the Weibull distribution is for 0 ≤p< 1. Thefailure rateh(or hazard function) is given by TheMean time between failuresMTBFis Themoment generating functionof thelogarithmof a Weibull distributedrandom variableis given by[12] whereΓis thegamma function. Similarly, thecharacteristic functionof logXis given by In particular, thenthraw momentofXis given by Themeanandvarianceof a Weibullrandom variablecan be expressed as and The skewness is given by whereΓi=Γ(1+i/k){\displaystyle \Gamma _{i}=\Gamma (1+i/k)}, which may also be written as where the mean is denoted byμand the standard deviation is denoted byσ. The excesskurtosisis given by whereΓi=Γ(1+i/k){\displaystyle \Gamma _{i}=\Gamma (1+i/k)}. The kurtosis excess may also be written as: A variety of expressions are available for the moment generating function ofXitself. As apower series, since the raw moments are already known, one has Alternatively, one can attempt to deal directly with the integral If the parameterkis assumed to be a rational number, expressed ask=p/qwherepandqare integers, then this integral can be evaluated analytically.[a]Withtreplaced by −t, one finds whereGis theMeijer G-function. Thecharacteristic functionhas also been obtained byMuraleedharan et al. (2007). Thecharacteristic functionandmoment generating functionof 3-parameter Weibull distribution have also been derived byMuraleedharan & Soares (2014)by a direct approach. LetX1,X2,…,Xn{\displaystyle X_{1},X_{2},\ldots ,X_{n}}be independent and identically distributed Weibull random variables with scale parameterλ{\displaystyle \lambda }and shape parameterk{\displaystyle k}. If the minimum of thesen{\displaystyle n}random variables isZ=min(X1,X2,…,Xn){\displaystyle Z=\min(X_{1},X_{2},\ldots ,X_{n})}, then the cumulative probability distribution ofZ{\displaystyle Z}is given by That is,Z{\displaystyle Z}will also be Weibull distributed with scale parametern−1/kλ{\displaystyle n^{-1/k}\lambda }and with shape parameterk{\displaystyle k}. Fix someα>0{\displaystyle \alpha >0}. Let(π1,...,πn){\displaystyle (\pi _{1},...,\pi _{n})}be nonnegative, and not all zero, and letg1,...,gn{\displaystyle g_{1},...,g_{n}}be independent samples ofWeibull(1,α−1){\displaystyle {\text{Weibull}}(1,\alpha ^{-1})}, then[13] Theinformation entropyis given by[14] whereγ{\displaystyle \gamma }is theEuler–Mascheroni constant. The Weibull distribution is themaximum entropy distributionfor a non-negative real random variate with a fixedexpected valueofxkequal toλkand a fixed expected value of ln(xk) equal to ln(λk) −γ{\displaystyle \gamma }. TheKullback–Leibler divergencebetween two Weibull distributions is given by[15] The fit of a Weibull distribution to data can be visually assessed using a Weibull plot.[16]The Weibull plot is a plot of theempirical cumulative distribution functionF^(x){\displaystyle {\widehat {F}}(x)}of data on special axes in a type ofQ–Q plot. The axes areln⁡(−ln⁡(1−F^(x))){\displaystyle \ln(-\ln(1-{\widehat {F}}(x)))}versusln⁡(x){\displaystyle \ln(x)}. The reason for this change of variables is the cumulative distribution function can be linearized: which can be seen to be in the standard form of a straight line. Therefore, if the data came from a Weibull distribution then a straight line is expected on a Weibull plot. There are various approaches to obtaining the empirical distribution function from data. 
One method is to obtain the vertical coordinate for each point using wherei{\displaystyle i}is the rank of the data point andn{\displaystyle n}is the number of data points.[17][18]Another common estimator[19]is Linear regression can also be used to numerically assess goodness of fit and estimate the parameters of the Weibull distribution. The gradient informs one directly about the shape parameterk{\displaystyle k}and the scale parameterλ{\displaystyle \lambda }can also be inferred. Thecoefficient of variationof Weibull distribution depends only on the shape parameter:[20] Equating the sample quantitiess2/x¯2{\displaystyle s^{2}/{\bar {x}}^{2}}toσ2/μ2{\displaystyle \sigma ^{2}/\mu ^{2}}, the moment estimate of the shape parameterk{\displaystyle k}can be read off either from a look up table or a graph ofCV2{\displaystyle CV^{2}}versusk{\displaystyle k}. A more accurate estimate ofk^{\displaystyle {\hat {k}}}can be found using a root finding algorithm to solve The moment estimate of the scale parameter can then be found using the first moment equation as Themaximum likelihood estimatorfor theλ{\displaystyle \lambda }parameter givenk{\displaystyle k}is[20] The maximum likelihood estimator fork{\displaystyle k}is the solution forkof the following equation[21] This equation definesk^{\displaystyle {\widehat {k}}}only implicitly, one must generally solve fork{\displaystyle k}by numerical means. Whenx1>x2>⋯>xN{\displaystyle x_{1}>x_{2}>\cdots >x_{N}}are theN{\displaystyle N}largest observed samples from a dataset of more thanN{\displaystyle N}samples, then the maximum likelihood estimator for theλ{\displaystyle \lambda }parameter givenk{\displaystyle k}is[21] Also given that condition, the maximum likelihood estimator fork{\displaystyle k}is[citation needed] Again, this being an implicit function, one must generally solve fork{\displaystyle k}by numerical means. The Weibull distribution is used[citation needed] f(x;k,λ,θ)=kλ(x−θλ)k−1e−(x−θλ)k{\displaystyle f(x;k,\lambda ,\theta )={k \over \lambda }\left({x-\theta \over \lambda }\right)^{k-1}e^{-\left({x-\theta \over \lambda }\right)^{k}}\,} X=(Wλ)k{\displaystyle X=\left({\frac {W}{\lambda }}\right)^{k}} fFrechet(x;k,λ)=kλ(xλ)−1−ke−(x/λ)−k=fWeibull(x;−k,λ).{\displaystyle f_{\rm {Frechet}}(x;k,\lambda )={\frac {k}{\lambda }}\left({\frac {x}{\lambda }}\right)^{-1-k}e^{-(x/\lambda )^{-k}}=f_{\rm {Weibull}}(x;-k,\lambda ).} f(x;P80,m)={1−eln⁡(0.2)(xP80)mx≥0,0x<0,{\displaystyle f(x;P_{\rm {80}},m)={\begin{cases}1-e^{\ln \left(0.2\right)\left({\frac {x}{P_{\rm {80}}}}\right)^{m}}&x\geq 0,\\0&x<0,\end{cases}}} F(x;k,λ)={∫0∞1νF(x;1,λν)(Γ(1k+1)Nk(ν))dν,1≥k>0;or∫0∞1sF(x;2,2λs)(2πΓ(1k+1)Vk(s))ds,2≥k>0;{\displaystyle F(x;k,\lambda )={\begin{cases}\displaystyle \int _{0}^{\infty }{\frac {1}{\nu }}\,F(x;1,\lambda \nu )\left(\Gamma \left({\frac {1}{k}}+1\right){\mathfrak {N}}_{k}(\nu )\right)\,d\nu ,&1\geq k>0;{\text{or }}\\\displaystyle \int _{0}^{\infty }{\frac {1}{s}}\,F(x;2,{\sqrt {2}}\lambda s)\left({\sqrt {\frac {2}{\pi }}}\,\Gamma \left({\frac {1}{k}}+1\right)V_{k}(s)\right)\,ds,&2\geq k>0;\end{cases}}}
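A short sketch of the density, distribution function and quantile function in the (k, λ) parameterization used above, checking the F(λ) = 1 − e⁻¹ ≈ 0.632 property; the parameter values are illustrative.

    import math

    def weibull_pdf(x, k, lam):
        return 0.0 if x < 0 else (k / lam) * (x / lam) ** (k - 1) * math.exp(-(x / lam) ** k)

    def weibull_cdf(x, k, lam):
        return 0.0 if x < 0 else 1.0 - math.exp(-(x / lam) ** k)

    def weibull_quantile(p, k, lam):
        return lam * (-math.log(1.0 - p)) ** (1.0 / k)

    k, lam = 1.5, 2.0
    print(weibull_cdf(lam, k, lam))            # 1 - exp(-1) ~ 0.632 for any k
    print(weibull_quantile(0.632, k, lam))     # ~ lambda, the "vice versa" remark
    print(weibull_pdf(1.0, k, lam))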
https://en.wikipedia.org/wiki/Weibull_distribution
Ineconometrics, thetruncated normal hurdle modelis a variant of theTobit modeland was first proposed by Cragg in 1971.[1] A standard Tobit model is represented asy=(xβ+u)1[xβ+u>0]{\displaystyle y=(x\beta +u)1[x\beta +u>0]}, whereu|x∼N(0,σ2){\displaystyle u|x\sim N(0,\sigma ^{2})}. This model construction implicitly imposes two first-order assumptions:[2] However, these two implicit assumptions are too strong and inconsistent with many contexts ineconomics. For instance, when we need to decide whether toinvestand build a factory, the constructioncostmight be more influential than the productprice; but once we have already built the factory, the product price is definitely more influential to therevenue. Hence, the implicit assumption (2) does not match this context.[4]The essence of this issue is that the standard Tobit implicitly models a very strong link between the participation decision(y=0{\displaystyle (y=0}ory>0){\displaystyle y>0)}and the amount decision (the magnitude ofy{\displaystyle y}wheny>0{\displaystyle y>0}). If a corner solution model is represented in a general form:y=s⋅w,{\displaystyle y=s\centerdot w,}wheres{\displaystyle s}is the participation decision andw{\displaystyle w}is the amount decision, the standard Tobit model assumes: To make the model compatible with more contexts, a natural improvement is to assume: w=xβ+e,{\displaystyle w=x\beta +e,}where the error term (e{\displaystyle e}) is distributed as a truncated normal distribution with a density asφ(⋅)/Φ(xβσ)/σ;{\displaystyle \varphi (\cdot )/\Phi \left({\frac {x\beta }{\sigma }}\right)/\sigma ;} s{\displaystyle s}andw{\displaystyle w}are independent conditional onx{\displaystyle x}. This is called the Truncated Normal Hurdle Model, which was proposed in Cragg (1971).[1]By adding one more parameter and detaching the amount decision from the participation decision, the model can fit more contexts. Under this model setup, thedensityof they{\displaystyle y}givenx{\displaystyle x}can be written as: From this density representation, it is clear that the model degenerates to the standard Tobit model whenγ=β/σ.{\displaystyle \gamma =\beta /\sigma .}This also shows that the Truncated Normal Hurdle Model is more general than the standard Tobit model. The Truncated Normal Hurdle Model is usually estimated through MLE. The log-likelihood function can be written as: From the log-likelihood function,γ{\displaystyle \gamma }can be estimated by aprobit modeland(β,σ){\displaystyle (\beta ,\sigma )}can be estimated by a truncated normal regression model.[5]Based on these estimates, consistent estimates of the Average Partial Effect can be obtained correspondingly.
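As a rough illustration of the two-part estimation just described (a sketch of my own with synthetic data and illustrative variable names; NumPy and SciPy assumed), γ can be estimated by a probit on the participation indicator and (β, σ) by a truncated normal regression on the positive observations:

```python
import numpy as np
from scipy import optimize
from scipy.stats import norm, truncnorm

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
gamma0 = np.array([0.3, 0.8])            # participation parameters (true values)
beta0, sigma0 = np.array([1.0, 0.5]), 1.0

# Participation decision s and, independently given x, a positive amount w
# whose error follows a normal distribution truncated at zero (Cragg's setup).
s = (X @ gamma0 + rng.normal(size=n) > 0).astype(float)
mu = X @ beta0
a = (0.0 - mu) / sigma0                  # standardized lower truncation bound
w = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma0, random_state=rng)
y = s * w                                # observed outcome

# Step 1: gamma from a probit log-likelihood on the participation indicator.
def probit_nll(g):
    p = np.clip(norm.cdf(X @ g), 1e-12, 1 - 1e-12)
    return -np.sum(s * np.log(p) + (1 - s) * np.log(1 - p))

gamma_hat = optimize.minimize(probit_nll, np.zeros(2), method="BFGS").x

# Step 2: (beta, sigma) from a truncated normal regression on y > 0 only.
Xp, yp = X[y > 0], y[y > 0]
def trunc_nll(theta):
    b, sig = theta[:-1], np.exp(theta[-1])
    m = Xp @ b
    # log density of y | x, y > 0:  log phi((y-m)/sig) - log sig - log Phi(m/sig)
    return -np.sum(norm.logpdf(yp, m, sig) - norm.logcdf(m / sig))

res = optimize.minimize(trunc_nll, np.zeros(3), method="BFGS")
beta_hat, sigma_hat = res.x[:-1], np.exp(res.x[-1])
print("gamma:", gamma_hat, "beta:", beta_hat, "sigma:", sigma_hat)
```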
https://en.wikipedia.org/wiki/Truncated_normal_hurdle_model
Alimited dependent variableis a variable whose range of possible values is "restricted in some important way."[1]Ineconometrics, the term is often used when estimation of the relationship between thelimiteddependent variableof interest and other variables requires methods that take this restriction into account. For example, this may arise when the variable of interest is constrained to lie between zero and one, as in the case of aprobability, or is constrained to be positive, as in the case of wages or hours worked. Limited dependent variablemodels include:[2]
https://en.wikipedia.org/wiki/Limited_dependent_variable
Truncated regression modelsare a class ofmodelsin which thesamplehas beentruncatedfor certain ranges of thedependent variable. That means observations with values in the dependent variable below or above certain thresholds are systematically excluded from the sample. Therefore, whole observations are missing, so that neither the dependent nor the independent variable is known. This is in contrast tocensored regression modelswhere only the value of the dependent variable is clustered at a lower threshold, an upper threshold, or both, while the value forindependent variablesis available.[1] Sample truncation is a pervasive issue in quantitative social sciences when usingobservational data, and consequently the development of suitable estimation techniques has long been of interest ineconometricsand related disciplines.[2]In the 1970s,James Heckmannoted the similarity between truncated and otherwise non-randomly selected samples, and developed theHeckman correction.[3][4] Estimation of truncated regression models is usually done via the parametric maximum likelihood method. More recently, various semi-parametric and non-parametric generalisations were proposed in the literature, e.g., based on the local least squares approach[5]or the local maximum likelihood approach,[6]which are kernel based methods.
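A minimal sketch of the parametric maximum likelihood approach, under the assumption of a left-truncated linear model with normal errors (the threshold, names, and data below are illustrative, not from the article):

```python
import numpy as np
from scipy import optimize
from scipy.stats import norm

rng = np.random.default_rng(2)
c = 0.0                                  # truncation threshold (illustrative)
n = 5000
X_all = np.column_stack([np.ones(n), rng.normal(size=n)])
y_all = X_all @ np.array([0.5, 1.0]) + rng.normal(size=n)

keep = y_all > c                         # whole observations below c are lost
X, y = X_all[keep], y_all[keep]

def nll(theta):
    b, s = theta[:-1], np.exp(theta[-1])
    m = X @ b
    # density of y given x and y > c:  phi((y-m)/s) / (s * P(Y > c | x))
    return -np.sum(norm.logpdf(y, m, s) - norm.logsf(c, m, s))

res = optimize.minimize(nll, np.zeros(3), method="BFGS")
print("beta:", res.x[:-1], "sigma:", np.exp(res.x[-1]))
# An ordinary least squares fit on the truncated sample would be biased here,
# which is what the likelihood correction above addresses.
```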
https://en.wikipedia.org/wiki/Truncated_regression_model
Adynamic unobserved effects modelis astatistical modelused ineconometricsforpanel analysis. It is characterized by the influence of previous values of thedependent variableon its present value, and by the presence of unobservableexplanatory variables. The term “dynamic” here means the dependence of the dependent variable on its past history; this is usually used to model the “state dependence” in economics. For instance, for a person who cannot find a job this year, it will be harder to find a job next year because her present lack of a job will be a negative signal for the potential employers. “Unobserved effects” means that one or some of the explanatory variables are unobservable: for example, consumption choice of one flavor of ice cream over another is a function of personal preference, but preference is unobservable. In a panel datatobit model,[1][2]if the outcomeYi,t{\displaystyle Y_{i,t}}partially depends on the previous outcome historyYi,0,…,Yt−1{\displaystyle Y_{i,0},\ldots ,Y_{t-1}}this tobit model is called "dynamic". For instance, taking a person who finds a job with a high salary this year, it will be easier for her to find a job with a high salary next year because the fact that she has a high-wage job this year will be a very positive signal for the potential employers. The essence of this type of dynamic effect is the state dependence of the outcome. The "unobservable effects" here refers to the factor which partially determines the outcome of individual but cannot be observed in the data. For instance, the ability of a person is very important in job-hunting, but it is not observable for researchers. A typical dynamic unobserved effects tobit model can be represented as In this specific model,ρyi,t−1{\displaystyle \rho y_{i,t-1}}is the dynamic effect part andci{\displaystyle c_{i}}is the unobserved effect part whose distribution is determined by the initial outcome of individualiand some exogenous features of individuali. Based on this setup, the likelihood function conditional on{yi,0}i−1N{\displaystyle \{y_{i,0}\}_{i-1}^{N}}can be given as For the initial values{yi,0}i−1N{\displaystyle \{y_{i,0}\}_{i-1}^{N}}, there are two different ways to treat them in the construction of the likelihood function: treating them as constant or imposing a distribution on them and calculate out the unconditional likelihood function. But whichever way is chosen to treat the initial values in the likelihood function, we cannot get rid of the integration inside the likelihood function when estimating the model by maximum likelihood estimation (MLE). Expectation Maximum (EM) algorithm is usually a good solution for this computation issue.[3]Based on the consistent point estimates from MLE, Average Partial Effect (APE)[4]can be calculated correspondingly.[5] A typical dynamic unobserved effects model with abinarydependent variable is represented[6]as: where ciis an unobservable explanatory variable, zitare explanatory variables which are exogenous conditional on the ci, and G(∙) is acumulative distribution function. In this type of model, economists have a special interest in ρ, which is used to characterize the state dependence. 
For example,yi,tcan be a woman's choice whether to work or not,zitincludes thei-th individual's age, education level, number of children, and other factors.cican be some individual specific characteristic which cannot be observed by economists.[7]It is a reasonable conjecture that one's labor choice in periodtshould depend on his or her choice in periodt− 1 due to habit formation or other reasons. This dependence is characterized by parameterρ. There are severalMLE-based approaches to estimateδandρconsistently. The simplest way is to treatyi,0as non-stochastic and assumeciisindependentwithzi. Then by integratingP(yi,t, yi,t-1, … , yi,1| yi,0, zi, ci)against the density ofci, we can obtain the conditional density P(yi,t, yi,t-1, ... , yi,1|yi,0, zi). The objective function for the conditional MLE can be represented as:∑i=1N{\displaystyle \sum _{i=1}^{N}}log (P (yi,t, yi,t-1, … , yi,1| yi,0, zi)). Treatingyi,0as non-stochastic implicitly assumes the independence ofyi,0onzi. But in most cases in reality,yi,0depends onciandcialso depends onzi. An improvement on the approach above is to assume a density ofyi,0conditional on (ci, zi) and conditional likelihoodP(yi,t, yi,t-1, … , yt,1,yi,0| ci, zi)can be obtained. By integrating this likelihood against the density ofciconditional onzi, we can obtain the conditional densityP(yi,t, yi,t-1, … , yi,1, yi,0| zi). The objective function for theconditional MLE[8]is∑i=1N{\displaystyle \sum _{i=1}^{N}}log (P (yi,t, yi,t-1, … , yi,1| yi,0, zi)). Based on the estimates for (δ, ρ) and the corresponding variance, values of the coefficients can be tested[9]and the average partial effect can be calculated.[10]
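The following sketch is my own construction, not code from the cited literature: it implements the simplest variant described above, with yi,0 treated as non-stochastic, ci assumed N(0, σc²) and independent of zi, and the integral over ci approximated by Gauss–Hermite quadrature.

```python
import numpy as np
from scipy import optimize
from scipy.stats import norm

rng = np.random.default_rng(3)
N, T = 500, 6
z = rng.normal(size=(N, T))                       # one exogenous regressor
delta0, rho0, sigma_c0 = 1.0, 0.5, 0.8            # true values (illustrative)
c = sigma_c0 * rng.normal(size=N)                 # unobserved effect

y = np.zeros((N, T + 1))
y[:, 0] = (rng.normal(size=N) > 0).astype(float)  # initial condition, treated as fixed
for t in range(1, T + 1):
    idx = delta0 * z[:, t - 1] + rho0 * y[:, t - 1] + c + rng.normal(size=N)
    y[:, t] = (idx > 0).astype(float)

nodes, weights = np.polynomial.hermite.hermgauss(15)   # Gauss-Hermite rule

def nll(theta):
    delta, rho, sc = theta[0], theta[1], np.exp(theta[2])
    lik = np.zeros(N)
    # integrate c_i out:  E_c[ prod_t Phi( sign_it * (delta*z_it + rho*y_i,t-1 + c) ) ]
    for node, wq in zip(nodes, weights):
        ci = np.sqrt(2.0) * sc * node
        index = delta * z + rho * y[:, :-1] + ci        # N x T array of indices
        sign = 2.0 * y[:, 1:] - 1.0
        lik += (wq / np.sqrt(np.pi)) * np.prod(norm.cdf(sign * index), axis=1)
    return -np.sum(np.log(np.clip(lik, 1e-300, None)))

res = optimize.minimize(nll, np.array([0.5, 0.0, 0.0]), method="Nelder-Mead")
print("delta, rho, sigma_c:", res.x[0], res.x[1], np.exp(res.x[2]))
```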
https://en.wikipedia.org/wiki/Dynamic_unobserved_effects_model#Censored_dependent_variable
Instatistics, aprobit modelis a type ofregressionwhere thedependent variablecan take only two values, for example married or not married. The word is aportmanteau, coming fromprobability+unit.[1]The purpose of the model is to estimate the probability that an observation with particular characteristics will fall into a specific one of the categories; moreover, classifying observations based on their predicted probabilities is a type ofbinary classificationmodel. Aprobitmodel is a popular specification for abinary response model. As such it treats the same set of problems as doeslogistic regressionusing similar techniques. When viewed in thegeneralized linear modelframework, the probit model employs aprobitlink function.[2]It is most often estimated using themaximum likelihoodprocedure,[3]such an estimation being called aprobit regression. Suppose a response variableYisbinary, that is it can have onlytwo possible outcomeswhich we will denote as 1 and 0. For example,Ymay represent presence/absence of a certain condition, success/failure of some device, answer yes/no on a survey, etc. We also have a vector ofregressorsX, which are assumed to influence the outcomeY. Specifically, we assume that the model takes the form wherePis theprobabilityandΦ{\displaystyle \Phi }is the cumulative distribution function (CDF) of thestandard normal distribution. The parametersβare typically estimated bymaximum likelihood. It is possible to motivate the probit model as alatent variable model. Suppose there exists an auxiliary random variable whereε~N(0, 1). ThenYcan be viewed as an indicator for whether this latent variable is positive: The use of the standard normal distribution causes noloss of generalitycompared with the use of a normal distribution with an arbitrary mean and standard deviation, because adding a fixed amount to the mean can be compensated by subtracting the same amount from the intercept, and multiplying the standard deviation by a fixed amount can be compensated by multiplying the weights by the same amount. To see that the two models are equivalent, note that Suppose data set{yi,xi}i=1n{\displaystyle \{y_{i},x_{i}\}_{i=1}^{n}}containsnindependentstatistical unitscorresponding to the model above. For the single observation, conditional on the vector of inputs of that observation, we have: wherexi{\displaystyle x_{i}}is a vector ofK×1{\displaystyle K\times 1}inputs, andβ{\displaystyle \beta }is aK×1{\displaystyle K\times 1}vector of coefficients. The likelihood of a single observation(yi,xi){\displaystyle (y_{i},x_{i})}is then In fact, ifyi=1{\displaystyle y_{i}=1}, thenL(β;yi,xi)=Φ(xiTβ){\displaystyle {\mathcal {L}}(\beta ;y_{i},x_{i})=\Phi (x_{i}^{\operatorname {T} }\beta )}, and ifyi=0{\displaystyle y_{i}=0}, thenL(β;yi,xi)=1−Φ(xiTβ){\displaystyle {\mathcal {L}}(\beta ;y_{i},x_{i})=1-\Phi (x_{i}^{\operatorname {T} }\beta )}. Since the observations are independent and identically distributed, then the likelihood of the entire sample, or thejoint likelihood, will be equal to the product of the likelihoods of the single observations: The joint log-likelihood function is thus The estimatorβ^{\displaystyle {\hat {\beta }}}which maximizes this function will beconsistent, asymptotically normal andefficientprovided thatE⁡[XXT]{\displaystyle \operatorname {E} [XX^{\operatorname {T} }]}exists and is not singular. 
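As a concrete, hypothetical illustration, the joint log-likelihood above can be maximized numerically; the sketch below uses synthetic data and scipy.optimize, writing 1 − Φ(x'β) as Φ(−x'β) for numerical stability:

```python
import numpy as np
from scipy import optimize
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([0.2, 1.0, -0.5])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def negloglik(b):
    xb = X @ b
    # sum_i [ y_i log Phi(x_i'b) + (1 - y_i) log Phi(-x_i'b) ], with logcdf to avoid underflow
    return -np.sum(y * norm.logcdf(xb) + (1 - y) * norm.logcdf(-xb))

res = optimize.minimize(negloglik, np.zeros(3), method="BFGS")
print("beta_hat:", res.x)
```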
It can be shown that this log-likelihood function is globallyconcaveinβ{\displaystyle \beta }, and therefore standard numerical algorithms for optimization will converge rapidly to the unique maximum. Asymptotic distributionforβ^{\displaystyle {\hat {\beta }}}is given by where andφ=Φ′{\displaystyle \varphi =\Phi '}is the Probability Density Function (PDF) of standard normal distribution. Semi-parametric and non-parametric maximum likelihood methods for probit-type and other related models are also available.[4] This method can be applied only when there are many observations of response variableyi{\displaystyle y_{i}}having the same value of the vector of regressorsxi{\displaystyle x_{i}}(such situation may be referred to as "many observations per cell"). More specifically, the model can be formulated as follows. Suppose amongnobservations{yi,xi}i=1n{\displaystyle \{y_{i},x_{i}\}_{i=1}^{n}}there are onlyTdistinct values of the regressors, which can be denoted as{x(1),…,x(T)}{\displaystyle \{x_{(1)},\ldots ,x_{(T)}\}}. Letnt{\displaystyle n_{t}}be the number of observations withxi=x(t),{\displaystyle x_{i}=x_{(t)},}andrt{\displaystyle r_{t}}the number of such observations withyi=1{\displaystyle y_{i}=1}. We assume that there are indeed "many" observations per each "cell": for eacht,limn→∞nt/n=ct>0{\displaystyle t,\lim _{n\rightarrow \infty }n_{t}/n=c_{t}>0}. Denote ThenBerkson's minimum chi-squareestimator is ageneralized least squaresestimator in a regression ofΦ−1(p^t){\displaystyle \Phi ^{-1}({\hat {p}}_{t})}onx(t){\displaystyle x_{(t)}}with weightsσ^t−2{\displaystyle {\hat {\sigma }}_{t}^{-2}}: It can be shown that this estimator is consistent (asn→∞ andTfixed), asymptotically normal and efficient.[citation needed]Its advantage is the presence of a closed-form formula for the estimator. However, it is only meaningful to carry out this analysis when individual observations are not available, only their aggregated countsrt{\displaystyle r_{t}},nt{\displaystyle n_{t}}, andx(t){\displaystyle x_{(t)}}(for example in the analysis of voting behavior). Gibbs samplingof a probit model is possible with the introduction of normally distributed latent variablesz, which are observed as 1 if positive and 0 otherwise. This approach was introduced in Albert and Chib (1993),[5]which demonstrated how Gibbs sampling could be applied to binary and polychotomous response models within a Bayesian framework. Under a multivariate normalprior distributionover the weights, the model can be described as From this, Albert and Chib (1993)[5]derive the following full conditional distributions in the Gibbs sampling algorithm: The result forβ{\displaystyle {\boldsymbol {\beta }}}is given in the article onBayesian linear regression, although specified with different notation, while the conditional posterior distributions of the latent variables follow atruncated normal distributionwithin the given ranges. The notation[zi<0]{\displaystyle [z_{i}<0]}is theIverson bracket, sometimes writtenI(zi<0){\displaystyle {\mathcal {I}}(z_{i}<0)}or similar. Thus, knowledge of the observed outcomes serves to restrict the support of the latent variables. Sampling of the weightsβ{\displaystyle {\boldsymbol {\beta }}}given the latent vectorz{\displaystyle \mathbf {z} }from the multinormal distribution is standard. 
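One possible rendering of these full conditionals, sketched in Python/NumPy/SciPy rather than the R implementation referred to in the next paragraph, and assuming a N(0, τ²I) prior on β for simplicity:

```python
import numpy as np
from scipy.stats import norm

def gibbs_probit(X, y, n_iter=2000, tau2=100.0, seed=0):
    rng = np.random.default_rng(seed)
    n, k = X.shape
    beta = np.zeros(k)
    B = np.linalg.inv(np.eye(k) / tau2 + X.T @ X)   # posterior covariance of beta | z
    L = np.linalg.cholesky(B)
    draws = np.empty((n_iter, k))
    for it in range(n_iter):
        m = X @ beta
        u = rng.uniform(size=n)
        p0 = norm.cdf(-m)                           # P(z_i < 0) before truncation
        # inverse-CDF draws from the truncated normals: z_i > 0 if y_i = 1, z_i < 0 if y_i = 0
        q = np.where(y == 1, p0 + u * (1.0 - p0), u * p0)
        z = m + norm.ppf(np.clip(q, 1e-12, 1 - 1e-12))
        # multivariate normal full conditional for beta given z
        beta = B @ (X.T @ z) + L @ rng.normal(size=k)
        draws[it] = beta
    return draws

# usage sketch:
# rng = np.random.default_rng(1); X = np.column_stack([np.ones(500), rng.normal(size=500)])
# y = (X @ np.array([0.3, 1.0]) + rng.normal(size=500) > 0).astype(int)
# print(gibbs_probit(X, y)[1000:].mean(axis=0))
```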
For sampling the latent variables from the truncated normal posterior distributions, one can take advantage of the inverse-cdf method, implemented in the followingRvectorized function, making it straightforward to implement the method. The suitability of an estimated binary model can be evaluated by counting the number of true observations equaling 1, and the number equaling zero, for which the model assigns a correct predicted classification by treating any estimated probability above 1/2 (or, below 1/2), as an assignment of a prediction of 1 (or, of 0). SeeLogistic regression § Modelfor details. Consider the latent variable model formulation of the probit model. When thevarianceofε{\displaystyle \varepsilon }conditional onx{\displaystyle x}is not constant but dependent onx{\displaystyle x}, then theheteroscedasticityissue arises. For example, supposey∗=β0+B1x1+ε{\displaystyle y^{*}=\beta _{0}+B_{1}x_{1}+\varepsilon }andε∣x∼N(0,x12){\displaystyle \varepsilon \mid x\sim N(0,x_{1}^{2})}wherex1{\displaystyle x_{1}}is a continuous positive explanatory variable. Under heteroskedasticity, the probit estimator forβ{\displaystyle \beta }is usually inconsistent, and most of the tests about the coefficients are invalid. More importantly, the estimator forP(y=1∣x){\displaystyle P(y=1\mid x)}becomes inconsistent, too. To deal with this problem, the original model needs to be transformed to be homoskedastic. For instance, in the same example,1[β0+β1x1+ε>0]{\displaystyle 1[\beta _{0}+\beta _{1}x_{1}+\varepsilon >0]}can be rewritten as1[β0/x1+β1+ε/x1>0]{\displaystyle 1[\beta _{0}/x_{1}+\beta _{1}+\varepsilon /x_{1}>0]}, whereε/x1∣x∼N(0,1){\displaystyle \varepsilon /x_{1}\mid x\sim N(0,1)}. Therefore,P(y=1∣x)=Φ(β1+β0/x1){\displaystyle P(y=1\mid x)=\Phi (\beta _{1}+\beta _{0}/x_{1})}and running probit on(1,1/x1){\displaystyle (1,1/x_{1})}generates a consistent estimator for theconditional probabilityP(y=1∣x).{\displaystyle P(y=1\mid x).} When the assumption thatε{\displaystyle \varepsilon }is normally distributed fails to hold, then a functional formmisspecificationissue arises: if the model is still estimated as a probit model, the estimators of the coefficientsβ{\displaystyle \beta }are inconsistent. For instance, ifε{\displaystyle \varepsilon }follows alogistic distributionin the true model, but the model is estimated by probit, the estimates will be generally smaller than the true value. However, the inconsistency of the coefficient estimates is practically irrelevant because the estimates for thepartial effects,∂P(y=1∣x)/∂xi′{\displaystyle \partial P(y=1\mid x)/\partial x_{i'}}, will be close to the estimates given by the true logit model.[6] To avoid the issue of distribution misspecification, one may adopt a general distribution assumption for the error term, such that many different types of distribution can be included in the model. 
The cost is heavier computation and lower accuracy for the increase of the number of parameter.[7]In most of the cases in practice where the distribution form is misspecified, the estimators for the coefficients are inconsistent, but estimators for the conditional probability and the partial effects are still very good.[citation needed] One can also take semi-parametric or non-parametric approaches, e.g., via local-likelihood or nonparametricquasi-likelihoodmethods, which avoid assumptions on a parametric form for the index function and is robust to the choice of the link function (e.g., probit or logit).[4] The probit model is usually credited toChester Bliss, who coined the term "probit" in 1934,[8]and toJohn Gaddum(1933), who systematized earlier work.[9]However, the basic model dates to theWeber–Fechner lawbyGustav Fechner, published inFechner (1860), and was repeatedly rediscovered until the 1930s; seeFinney (1971, Chapter 3.6) andAitchison & Brown (1957, Chapter 1.2).[9] A fast method for computingmaximum likelihoodestimates for the probit model was proposed byRonald Fisheras an appendix to Bliss' work in 1935.[10]
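Returning to the heteroskedasticity example above (ε ∣ x ∼ N(0, x1²)), the rescaling argument can be checked on synthetic data; the sketch below (my own setup, not from the article) compares a probit on (1, x1) with a probit on (1, 1/x1), the latter recovering (β1, β0):

```python
import numpy as np
from scipy import optimize
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 20000
x1 = rng.uniform(0.2, 2.0, size=n)                 # continuous, strictly positive regressor
b0, b1 = 0.5, 1.0
y = (b0 + b1 * x1 + x1 * rng.normal(size=n) > 0).astype(float)   # heteroskedastic latent error

def fit_probit(X, y):
    nll = lambda b: -np.sum(y * norm.logcdf(X @ b) + (1 - y) * norm.logcdf(-(X @ b)))
    return optimize.minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

naive = fit_probit(np.column_stack([np.ones(n), x1]), y)         # misspecified, inconsistent here
scaled = fit_probit(np.column_stack([np.ones(n), 1.0 / x1]), y)  # consistent: estimates (b1, b0)
print("naive:", naive, " rescaled (b1, b0):", scaled)
```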
https://en.wikipedia.org/wiki/Probit_model
Deep learningis a subset ofmachine learningthat focuses on utilizing multilayeredneural networksto perform tasks such asclassification,regression, andrepresentation learning. The field takes inspiration frombiological neuroscienceand is centered around stackingartificial neuronsinto layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network. Methods used can be eithersupervised,semi-supervisedorunsupervised.[2] Some common deep learning network architectures includefully connected networks,deep belief networks,recurrent neural networks,convolutional neural networks,generative adversarial networks,transformers, andneural radiance fields. These architectures have been applied to fields includingcomputer vision,speech recognition,natural language processing,machine translation,bioinformatics,drug design,medical image analysis,climate science, material inspection andboard gameprograms, where they have produced results comparable to and in some cases surpassing human expert performance.[3][4][5] Early forms of neural networks were inspired by information processing and distributed communication nodes inbiological systems, particularly thehuman brain. However, current neural networks do not intend to model the brain function of organisms, and are generally seen as low-quality models for that purpose.[6] Most modern deep learning models are based on multi-layeredneural networkssuch asconvolutional neural networksandtransformers, although they can also includepropositional formulasor latent variables organized layer-wise in deepgenerative modelssuch as the nodes indeep belief networksand deepBoltzmann machines.[7] Fundamentally, deep learning refers to a class ofmachine learningalgorithmsin which a hierarchy of layers is used to transform input data into a progressively more abstract and composite representation. For example, in animage recognitionmodel, the raw input may be animage(represented as atensorofpixels). The first representational layer may attempt to identify basic shapes such as lines and circles, the second layer may compose and encode arrangements of edges, the third layer may encode a nose and eyes, and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place at which levelon its own. Prior to deep learning, machine learning techniques often involved hand-craftedfeature engineeringto transform the data into a more suitable representation for a classification algorithm to operate on. In the deep learning approach, features are not hand-crafted and the modeldiscoversuseful feature representations from the data automatically. This does not eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.[8][2] The word "deep" in "deep learning" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantialcredit assignment path(CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For afeedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized). 
Forrecurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited.[9]No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth higher than two. CAP of depth two has been shown to be a universal approximator in the sense that it can emulate any function.[10]Beyond that, more layers do not add to the function approximator ability of the network. Deep models (CAP > two) are able to extract better features than shallow models and hence, extra layers help in learning the features effectively. Deep learning architectures can be constructed with agreedylayer-by-layer method.[11]Deep learning helps to disentangle these abstractions and pick out which features improve performance.[8] Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data is more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner aredeep belief networks.[8][12] The termDeep Learningwas introduced to the machine learning community byRina Dechterin 1986,[13]and to artificial neural networks by Igor Aizenberg and colleagues in 2000, in the context ofBooleanthreshold neurons.[14][15]Although the history of its appearance is apparently more complicated.[16] Deep neural networks are generally interpreted in terms of theuniversal approximation theorem[17][18][19][20][21]orprobabilistic inference.[22][23][8][9][24] The classic universal approximation theorem concerns the capacity offeedforward neural networkswith a single hidden layer of finite size to approximatecontinuous functions.[17][18][19][20]In 1989, the first proof was published byGeorge Cybenkoforsigmoidactivation functions[17]and was generalised to feed-forward multi-layer architectures in 1991 by Kurt Hornik.[18]Recent work also showed that universal approximation also holds for non-bounded activation functions such asKunihiko Fukushima'srectified linear unit.[25][26] The universal approximation theorem fordeep neural networksconcerns the capacity of networks with bounded width but the depth is allowed to grow. Lu et al.[21]proved that if the width of a deep neural network withReLUactivation is strictly larger than the input dimension, then the network can approximate anyLebesgue integrable function; if the width is smaller or equal to the input dimension, then a deep neural network is not a universal approximator. Theprobabilisticinterpretation[24]derives from the field ofmachine learning. It features inference,[23][7][8][9][12][24]as well as theoptimizationconcepts oftrainingandtesting, related to fitting andgeneralization, respectively. More specifically, the probabilistic interpretation considers the activation nonlinearity as acumulative distribution function.[24]The probabilistic interpretation led to the introduction ofdropoutasregularizerin neural networks. The probabilistic interpretation was introduced by researchers includingHopfield,WidrowandNarendraand popularized in surveys such as the one byBishop.[27] There are twotypesof artificial neural network (ANN):feedforward neural network(FNN) ormultilayer perceptron(MLP) andrecurrent neural networks(RNN). RNNs have cycles in their connectivity structure, FNNs don't. In the 1920s,Wilhelm LenzandErnst Isingcreated theIsing model[28][29]which is essentially a non-learning RNN architecture consisting of neuron-like threshold elements. 
In 1972,Shun'ichi Amarimade this architecture adaptive.[30][31]His learning RNN was republished byJohn Hopfieldin 1982.[32]Other earlyrecurrent neural networkswere published by Kaoru Nakano in 1971.[33][34]Already in 1948,Alan Turingproduced work on "Intelligent Machinery" that was not published in his lifetime,[35]containing "ideas related to artificial evolution and learning RNNs".[31] Frank Rosenblatt(1958)[36]proposed the perceptron, an MLP with 3 layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer. He later published a 1962 book that also introduced variants and computer experiments, including a version with four-layer perceptrons "with adaptive preterminal networks" where the last two layers have learned weights (here he credits H. D. Block and B. W. Knight).[37]: section 16The book cites an earlier network by R. D. Joseph (1960)[38]"functionally equivalent to a variation of" this four-layer system (the book mentions Joseph over 30 times). Should Joseph therefore be considered the originator of proper adaptivemultilayer perceptronswith learning hidden units? Unfortunately, the learning algorithm was not a functional one, and fell into oblivion. The first working deep learning algorithm was theGroup method of data handling, a method to train arbitrarily deep neural networks, published byAlexey Ivakhnenkoand Lapa in 1965. They regarded it as a form of polynomial regression,[39]or a generalization of Rosenblatt's perceptron.[40]A 1971 paper described a deep network with eight layers trained by this method,[41]which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates".[31] The first deep learningmultilayer perceptrontrained bystochastic gradient descent[42]was published in 1967 byShun'ichi Amari.[43]In computer experiments conducted by Amari's student Saito, a five layer MLP with two modifiable layers learnedinternal representationsto classify non-linearily separable pattern classes.[31]Subsequent developments in hardware and hyperparameter tunings have made end-to-endstochastic gradient descentthe currently dominant training technique. In 1969,Kunihiko Fukushimaintroduced theReLU(rectified linear unit)activation function.[25][31]The rectifier has become the most popular activation function for deep learning.[44] Deep learning architectures forconvolutional neural networks(CNNs) with convolutional layers and downsampling layers began with theNeocognitronintroduced byKunihiko Fukushimain 1979, though not trained by backpropagation.[45][46] Backpropagationis an efficient application of thechain rulederived byGottfried Wilhelm Leibnizin 1673[47]to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt,[37]but he did not know how to implement this, althoughHenry J. Kelleyhad a continuous precursor of backpropagation in 1960 in the context ofcontrol theory.[48]The modern form of backpropagation was first published inSeppo Linnainmaa's master thesis (1970).[49][50][31]G.M. Ostrovski et al. republished it in 1971.[51][52]Paul Werbosapplied backpropagation to neural networks in 1982[53](his 1974 PhD thesis, reprinted in a 1994 book,[54]did not yet describe the algorithm[52]). In 1986,David E. Rumelhartet al. 
popularised backpropagation but did not cite the original work.[55][56] Thetime delay neural network(TDNN) was introduced in 1987 byAlex Waibelto apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation.[57][58]In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.[59]In 1989,Yann LeCunet al. created a CNN calledLeNetforrecognizing handwritten ZIP codeson mail. Training required 3 days.[60]In 1990, Wei Zhang implemented a CNN onoptical computinghardware.[61]In 1991, a CNN was applied to medical image object segmentation[62]and breast cancer detection in mammograms.[63]LeNet-5 (1998), a 7-level CNN byYann LeCunet al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images.[64] Recurrent neural networks(RNN)[28][30]were further developed in the 1980s. Recurrence is used for sequence processing, and when a recurrent network is unrolled, it mathematically resembles a deep feedforward layer. Consequently, they have similar properties and issues, and their developments had mutual influences. In RNN, two early influential works were theJordan network(1986)[65]and theElman network(1990),[66]which applied RNN to study problems incognitive psychology. In the 1980s, backpropagation did not work well for deep learning with long credit assignment paths. To overcome this problem, in 1991,Jürgen Schmidhuberproposed a hierarchy of RNNs pre-trained one level at a time byself-supervised learningwhere each RNN tries to predict its own next input, which is the next unexpected input of the RNN below.[67][68]This "neural history compressor" usespredictive codingto learninternal representationsat multiple self-organizing time scales. This can substantially facilitate downstream deep learning. The RNN hierarchy can becollapsedinto a single RNN, bydistillinga higher levelchunkernetwork into a lower levelautomatizernetwork.[67][68][31]In 1993, a neural history compressor solved a "Very Deep Learning" task that required more than 1000 subsequentlayersin an RNN unfolded in time.[69]The "P" inChatGPTrefers to such pre-training. Sepp Hochreiter's diploma thesis (1991)[70]implemented the neural history compressor,[67]and identified and analyzed thevanishing gradient problem.[70][71]Hochreiter proposed recurrentresidualconnections to solve the vanishing gradient problem. This led to thelong short-term memory(LSTM), published in 1995.[72]LSTM can learn "very deep learning" tasks[9]with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. That LSTM was not yet the modern architecture, which required a "forget gate", introduced in 1999,[73]which became the standard RNN architecture. In 1991,Jürgen Schmidhuberalso published adversarial neural networks that contest with each other in the form of azero-sum game, where one network's gain is the other network's loss.[74][75]The first network is agenerative modelthat models aprobability distributionover output patterns. The second network learns bygradient descentto predict the reactions of the environment to these patterns. This was called "artificial curiosity". 
In 2014, this principle was used ingenerative adversarial networks(GANs).[76] During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed byTerry Sejnowski,Peter Dayan,Geoffrey Hinton, etc., including theBoltzmann machine,[77]restricted Boltzmann machine,[78]Helmholtz machine,[79]and thewake-sleep algorithm.[80]These were designed for unsupervised learning of deep generative models. However, those were more computationally expensive compared to backpropagation. Boltzmann machine learning algorithm, published in 1985, was briefly popular before being eclipsed by the backpropagation algorithm in 1986. (p. 112[81]). A 1988 network became state of the art inprotein structure prediction, an early application of deep learning to bioinformatics.[82] Both shallow and deep learning (e.g., recurrent nets) of ANNs forspeech recognitionhave been explored for many years.[83][84][85]These methods never outperformed non-uniform internal-handcrafting Gaussianmixture model/Hidden Markov model(GMM-HMM) technology based on generative models of speech trained discriminatively.[86]Key difficulties have been analyzed, including gradient diminishing[70]and weak temporal correlation structure in neural predictive models.[87][88]Additional difficulties were the lack of training data and limited computing power. Mostspeech recognitionresearchers moved away from neural nets to pursue generative modeling. An exception was atSRI Internationalin the late 1990s. Funded by the US government'sNSAandDARPA, SRI researched in speech andspeaker recognition. The speaker recognition team led byLarry Heckreported significant success with deep neural networks in speech processing in the 1998NISTSpeaker Recognition benchmark.[89][90]It was deployed in the Nuance Verifier, representing the first major industrial application of deep learning.[91] The principle of elevating "raw" features over hand-crafted optimization was first explored successfully in the architecture of deep autoencoder on the "raw" spectrogram or linearfilter-bankfeatures in the late 1990s,[90]showing its superiority over theMel-Cepstralfeatures that contain stages of fixed transformation from spectrograms. The raw features of speech,waveforms, later produced excellent larger-scale results.[92] Neural networks entered a lull, and simpler models that use task-specific handcrafted features such asGabor filtersandsupport vector machines(SVMs) became the preferred choices in the 1990s and 2000s, because of artificial neural networks' computational cost and a lack of understanding of how the brain wires its biological networks.[citation needed] In 2003, LSTM became competitive with traditional speech recognizers on certain tasks.[93]In 2006,Alex Graves, Santiago Fernández, Faustino Gomez, and Schmidhuber combined it withconnectionist temporal classification(CTC)[94]in stacks of LSTMs.[95]In 2009, it became the first RNN to win apattern recognitioncontest, in connectedhandwriting recognition.[96][9] In 2006, publications byGeoff Hinton,Ruslan Salakhutdinov, Osindero andTeh[97][98]deep belief networkswere developed for generative modeling. 
They are trained by training one restricted Boltzmann machine, then freezing it and training another one on top of the first one, and so on, then optionallyfine-tunedusing supervised backpropagation.[99]They could model high-dimensional probability distributions, such as the distribution ofMNIST images, but convergence was slow.[100][101][102] The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun.[103]Industrial applications of deep learning to large-scale speech recognition started around 2010. The 2009 NIPS Workshop on Deep Learning for Speech Recognition was motivated by the limitations of deep generative models of speech, and the possibility that given more capable hardware and large-scale data sets that deep neural nets might become practical. It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation when using DNNs with large, context-dependent output layers produced error rates dramatically lower than then-state-of-the-art Gaussian mixture model (GMM)/Hidden Markov Model (HMM) and also than more-advanced generative model-based systems.[104]The nature of the recognition errors produced by the two types of systems was characteristically different,[105]offering technical insights into how to integrate deep learning into the existing highly efficient, run-time speech decoding system deployed by all major speech recognition systems.[23][106][107]Analysis around 2009–2010, contrasting the GMM (and other generative speech models) vs. DNN models, stimulated early industrial investment in deep learning for speech recognition.[105]That analysis was done with comparable performance (less than 1.5% in error rate) between discriminative DNNs and generative models.[104][105][108]In 2010, researchers extended deep learning fromTIMITto large vocabulary speech recognition, by adopting large output layers of the DNN based on context-dependent HMM states constructed bydecision trees.[109][110][111][106] The deep learning revolution started around CNN- and GPU-based computer vision. Although CNNs trained by backpropagation had been around for decades and GPU implementations of NNs for years,[112]including CNNs,[113]faster implementations of CNNs on GPUs were needed to progress on computer vision. Later, as deep learning becomes widespread, specialized hardware and algorithm optimizations were developed specifically for deep learning.[114] A key advance for the deep learning revolution was hardware advances, especially GPU. Some early work dated back to 2004.[112][113]In 2009, Raina, Madhavan, andAndrew Ngreported a 100M deep belief network trained on 30 NvidiaGeForce GTX 280GPUs, an early demonstration of GPU-based deep learning. 
They reported up to 70 times faster training.[115] In 2011, a CNN namedDanNet[116][117]by Dan Ciresan, Ueli Meier, Jonathan Masci,Luca Maria Gambardella, andJürgen Schmidhuberachieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3.[9]It then won more contests.[118][119]They also showed howmax-poolingCNNs on GPU improved performance significantly.[3] In 2012,Andrew NgandJeff Deancreated an FNN that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images taken fromYouTubevideos.[120] In October 2012,AlexNetbyAlex Krizhevsky,Ilya Sutskever, andGeoffrey Hinton[4]won the large-scaleImageNet competitionby a significant margin over shallow machine learning methods. Further incremental improvements included theVGG-16network byKaren SimonyanandAndrew Zisserman[121]and Google'sInceptionv3.[122] The success in image classification was then extended to the more challenging task ofgenerating descriptions(captions) for images, often as a combination of CNNs and LSTMs.[123][124][125] In 2014, the state of the art was training “very deep neural network” with 20 to 30 layers.[126]Stacking too many layers led to a steep reduction intrainingaccuracy,[127]known as the "degradation" problem.[128]In 2015, two techniques were developed to train very deep networks: the Highway Network was published in May 2015, and theresidual neural network(ResNet)[129]in Dec 2015. ResNet behaves like an open-gated Highway Net. Around the same time, deep learning started impacting the field of art. Early examples includedGoogle DeepDream(2015), andneural style transfer(2015),[130]both of which were based on pretrained image classification neural networks, such asVGG-19. Generative adversarial network(GAN) by (Ian Goodfellowet al., 2014)[131](based onJürgen Schmidhuber's principle of artificial curiosity[74][76]) became state of the art in generative modeling during 2014-2018 period. Excellent image quality is achieved byNvidia'sStyleGAN(2018)[132]based on the Progressive GAN by Tero Karras et al.[133]Here the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GAN reached popular success, and provoked discussions concerningdeepfakes.[134]Diffusion models(2015)[135]eclipsed GANs in generative modeling since then, with systems such asDALL·E 2(2022) andStable Diffusion(2022). In 2015, Google's speech recognition improved by 49% by an LSTM-based model, which they made available throughGoogle Voice Searchonsmartphone.[136][137] Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision andautomatic speech recognition(ASR). Results on commonly used evaluation sets such asTIMIT(ASR) andMNIST(image classification), as well as a range of large-vocabulary speech recognition tasks have steadily improved.[104][138]Convolutional neural networks were superseded for ASR byLSTM.[137][139][140][141]but are more successful in computer vision. Yoshua Bengio,Geoffrey HintonandYann LeCunwere awarded the 2018Turing Awardfor "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing".[142] Artificial neural networks(ANNs) orconnectionistsystemsare computing systems inspired by thebiological neural networksthat constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. 
For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manuallylabeledas "cat" or "no cat" and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm usingrule-based programming. An ANN is based on a collection of connected units calledartificial neurons, (analogous to biologicalneuronsin abiological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented byreal numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal that it sends downstream. Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input), to the last (output) layer, possibly after traversing the layers multiple times. The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such asbackpropagation, or passing information in the reverse direction and adjusting the network to reflect that information. Neural networks have been used on a variety of tasks, including computer vision,speech recognition,machine translation,social networkfiltering,playing board and video gamesand medical diagnosis. As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several order of magnitude less than the number of neurons on a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, or playing "Go"[144]). A deep neural network (DNN) is an artificial neural network with multiple layers between the input and output layers.[7][9]There are different types of neural networks but they always consist of the same components: neurons, synapses, weights, biases, and functions.[145]These components as a whole function in a way that mimics functions of the human brain, and can be trained like any other ML algorithm.[citation needed] For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is a certain breed. The user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer,[146]and complex DNN have many layers, hence the name "deep" networks. DNNs can model complex non-linear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition ofprimitives.[147]The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network.[7]For instance, it was proved that sparsemultivariate polynomialsare exponentially easier to approximate with DNNs than with shallow networks.[148] Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains. 
It is not always possible to compare the performance of multiple architectures, unless they have been evaluated on the same data sets.[146] DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back. At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to connections between them. The weights and inputs are multiplied and return an output between 0 and 1. If the network did not accurately recognize a particular pattern, an algorithm would adjust the weights.[149]That way the algorithm can make certain parameters more influential, until it determines the correct mathematical manipulation to fully process the data. Recurrent neural networks, in which data can flow in any direction, are used for applications such aslanguage modeling.[150][151][152][153][154]Long short-term memory is particularly effective for this use.[155][156] Convolutional neural networks(CNNs) are used in computer vision.[157]CNNs also have been applied toacoustic modelingfor automatic speech recognition (ASR).[158] As with ANNs, many issues can arise with naively trained DNNs. Two common issues areoverfittingand computation time. DNNs are prone to overfitting because of the added layers of abstraction, which allow them to model rare dependencies in the training data.Regularizationmethods such as Ivakhnenko's unit pruning[41]orweight decay(ℓ2{\displaystyle \ell _{2}}-regularization) orsparsity(ℓ1{\displaystyle \ell _{1}}-regularization) can be applied during training to combat overfitting.[159]Alternativelydropoutregularization randomly omits units from the hidden layers during training. This helps to exclude rare dependencies.[160]Another interesting recent development is research into models of just enough complexity through an estimation of the intrinsic complexity of the task being modelled. This approach has been successfully applied for multivariate time series prediction tasks such as traffic prediction.[161]Finally, data can be augmented via methods such as cropping and rotating such that smaller training sets can be increased in size to reduce the chances of overfitting.[162] DNNs must consider many training parameters, such as the size (number of layers and number of units per layer), thelearning rate, and initial weights.Sweeping through the parameter spacefor optimal parameters may not be feasible due to the cost in time and computational resources. Various tricks, such asbatching(computing the gradient on several training examples at once rather than individual examples)[163]speed up computation. Large processing capabilities of many-core architectures (such as GPUs or the Intel Xeon Phi) have produced significant speedups in training, because of the suitability of such processing architectures for the matrix and vector computations.[164][165] Alternatively, engineers may look for other types of neural networks with more straightforward and convergent training algorithms. CMAC (cerebellar model articulation controller) is one such kind of neural network. It doesn't require learning rates or randomized initial weights. 
The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved.[166][167] Since the 2010s, advances in both machine learning algorithms andcomputer hardwarehave led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[168]By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method for training large-scale commercial cloud AI .[169]OpenAIestimated the hardware computation used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017) and found a 300,000-fold increase in the amount of computation required, with a doubling-time trendline of 3.4 months.[170][171] Specialelectronic circuitscalleddeep learning processorswere designed to speed up deep learning algorithms. Deep learning processors include neural processing units (NPUs) inHuaweicellphones[172]andcloud computingservers such astensor processing units(TPU) in theGoogle Cloud Platform.[173]Cerebras Systemshas also built a dedicated system to handle large deep learning models, the CS-2, based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2).[174][175] Atomically thinsemiconductorsare considered promising for energy-efficient deep learning hardware where the same basic device structure is used for both logic operations and data storage. In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based onfloating-gatefield-effect transistors(FGFETs).[176] In 2021, J. Feldmann et al. proposed an integratedphotonichardware acceleratorfor parallel convolutional processing.[177]The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer throughwavelengthdivisionmultiplexingin conjunction withfrequency combs, and (2) extremely high data modulation speeds.[177]Their system can execute trillions of multiply-accumulate operations per second, indicating the potential ofintegratedphotonicsin data-heavy AI applications.[177] Large-scale automatic speech recognition is the first and most convincing successful case of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks[9]that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. LSTM with forget gates[156]is competitive with traditional speech recognizers on certain tasks.[93] The initial success in speech recognition was based on small-scale recognition tasks based on TIMIT. The data set contains 630 speakers from eight majordialectsofAmerican English, where each speaker reads 10 sentences.[178]Its small size lets many configurations be tried. More importantly, the TIMIT task concernsphone-sequence recognition, which, unlike word-sequence recognition, allows weak phonebigramlanguage models. This lets the strength of the acoustic modeling aspects of speech recognition be more easily analyzed. The error rates listed below, including these early results and measured as percent phone error rates (PER), have been summarized since 1991. 
The debut of DNNs for speaker recognition in the late 1990s and speech recognition around 2009-2011 and of LSTM around 2003–2007, accelerated progress in eight major areas:[23][108][106] All major commercial speech recognition systems (e.g., MicrosoftCortana,Xbox,Skype Translator,Amazon Alexa,Google Now,Apple Siri,BaiduandiFlyTekvoice search, and a range ofNuancespeech products, etc.) are based on deep learning.[23][183][184] A common evaluation set for image classification is theMNIST databasedata set. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size lets users test multiple configurations. A comprehensive list of results on this set is available.[185] Deep learning-based image recognition has become "superhuman", producing more accurate results than human contestants. This first occurred in 2011 in recognition of traffic signs, and in 2014, with recognition of human faces.[186][187] Deep learning-trained vehicles now interpret 360° camera views.[188]Another example is Facial Dysmorphology Novel Analysis (FDNA) used to analyze cases of human malformation connected to a large database of genetic syndromes. Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of Neural networks have been used for implementing language models since the early 2000s.[150]LSTM helped to improve machine translation and language modeling.[151][152][153] Other key techniques in this field are negative sampling[191]andword embedding. Word embedding, such asword2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in avector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of asprobabilistic context free grammar(PCFG) implemented by an RNN.[192]Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing.[192]Deep neural architectures provide the best results for constituency parsing,[193]sentiment analysis,[194]information retrieval,[195][196]spoken language understanding,[197]machine translation,[151][198]contextual entity linking,[198]writing style recognition,[199]named-entity recognition(token classification),[200]text classification, and others.[201] Recent developments generalizeword embeddingtosentence embedding. Google Translate(GT) uses a large end-to-endlong short-term memory(LSTM) network.[202][203][204][205]Google Neural Machine Translation (GNMT)uses anexample-based machine translationmethod in which the system "learns from millions of examples".[203]It translates "whole sentences at a time, rather than pieces". Google Translate supports over one hundred languages.[203]The network encodes the "semantics of the sentence rather than simply memorizing phrase-to-phrase translations".[203][206]GT uses English as an intermediate between most language pairs.[206] A large percentage of candidate drugs fail to win regulatory approval. 
These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipatedtoxic effects.[207][208]Research has explored use of deep learning to predict thebiomolecular targets,[209][210]off-targets, andtoxic effectsof environmental chemicals in nutrients, household products and drugs.[211][212][213] AtomNet is a deep learning system for structure-basedrational drug design.[214]AtomNet was used to predict novel candidate biomolecules for disease targets such as theEbola virus[215]andmultiple sclerosis.[216][215] In 2017graph neural networkswere used for the first time to predict various properties of molecules in a large toxicology data set.[217]In 2019, generative neural networks were used to produce molecules that were validated experimentally all the way into mice.[218][219] Deep reinforcement learninghas been used to approximate the value of possibledirect marketingactions, defined in terms ofRFMvariables. The estimated value function was shown to have a natural interpretation ascustomer lifetime value.[220] Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations.[221][222]Multi-view deep learning has been applied for learning user preferences from multiple domains.[223]The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks. AnautoencoderANN was used inbioinformatics, to predictgene ontologyannotations and gene-function relationships.[224] In medical informatics, deep learning was used to predict sleep quality based on data from wearables[225]and predictions of health complications fromelectronic health recorddata.[226] Deep neural networks have shown unparalleled performance inpredicting protein structure, according to the sequence of the amino acids that make it up. In 2020,AlphaFold, a deep-learning based system, achieved a level of accuracy significantly higher than all previous computational methods.[227][228] Deep neural networks can be used to estimate the entropy of astochastic processand called Neural Joint Entropy Estimator (NJEE).[229]Such an estimation provides insights on the effects of inputrandom variableson an independentrandom variable. Practically, the DNN is trained as aclassifierthat maps an inputvectorormatrixX to an outputprobability distributionover the possible classes of random variable Y, given input X. For example, inimage classificationtasks, the NJEE maps a vector ofpixels' color values to probabilities over possible image classes. In practice, the probability distribution of Y is obtained by aSoftmaxlayer with number of nodes that is equal to thealphabetsize of Y. NJEE uses continuously differentiableactivation functions, such that the conditions for theuniversal approximation theoremholds. 
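As a rough illustration of the classifier-based entropy estimation described above (a simplified sketch, not the published NJEE implementation), the conditional entropy H(Y|X) can be approximated by the average negative log-probability that a trained softmax classifier assigns to the observed values of Y; here the predicted distributions are simply hard-coded toy numbers.

```python
import numpy as np

def conditional_entropy_estimate(pred_probs, y_true):
    """Estimate H(Y|X) in bits as the mean negative log2-probability
    assigned by a classifier to the observed labels (cross-entropy on held-out data)."""
    eps = 1e-12  # guard against log(0)
    p = pred_probs[np.arange(len(y_true)), y_true]
    return float(np.mean(-np.log2(p + eps)))

# Toy example: 4 samples, alphabet size 3, hypothetical softmax outputs.
pred_probs = np.array([
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
    [0.6, 0.3, 0.1],
])
y_true = np.array([0, 1, 2, 0])

print(f"Estimated H(Y|X) ≈ {conditional_entropy_estimate(pred_probs, y_true):.3f} bits")
```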
It is shown that this method provides a stronglyconsistent estimatorand outperforms other methods in case of large alphabet sizes.[229] Deep learning has been shown to produce competitive results in medical application such as cancer cell classification, lesion detection, organ segmentation and image enhancement.[230][231]Modern deep learning tools demonstrate the high accuracy of detecting various diseases and the helpfulness of their use by specialists to improve the diagnosis efficiency.[232][233] Finding the appropriate mobile audience formobile advertisingis always challenging, since many data points must be considered and analyzed before a target segment can be created and used in ad serving by any ad server.[234]Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection. Deep learning has been successfully applied toinverse problemssuch asdenoising,super-resolution,inpainting, andfilm colorization.[235]These applications include learning methods such as "Shrinkage Fields for Effective Image Restoration"[236]which trains on an image dataset, andDeep Image Prior, which trains on the image that needs restoration. Deep learning is being successfully applied to financialfraud detection, tax evasion detection,[237]and anti-money laundering.[238] In November 2023, researchers atGoogle DeepMindandLawrence Berkeley National Laboratoryannounced that they had developed an AI system known as GNoME. This system has contributed tomaterials scienceby discovering over 2 million new materials within a relatively short timeframe. GNoME employs deep learning techniques to efficiently explore potential material structures, achieving a significant increase in the identification of stable inorganiccrystal structures. The system's predictions were validated through autonomous robotic experiments, demonstrating a noteworthy success rate of 71%. The data of newly discovered materials is publicly available through theMaterials Projectdatabase, offering researchers the opportunity to identify materials with desired properties for various applications. This development has implications for the future of scientific discovery and the integration of AI in material science research, potentially expediting material innovation and reducing costs in product development. The use of AI and deep learning suggests the possibility of minimizing or eliminating manual lab experiments and allowing scientists to focus more on the design and analysis of unique compounds.[239][240][241] The United States Department of Defense applied deep learning to train robots in new tasks through observation.[242] Physics informed neural networks have been used to solvepartial differential equationsin both forward and inverse problems in a data driven manner.[243]One example is the reconstructing fluid flow governed by theNavier-Stokes equations. Using physics informed neural networks does not require the often expensive mesh generation that conventionalCFDmethods rely on.[244][245] Deep backward stochastic differential equation methodis a numerical method that combines deep learning withBackward stochastic differential equation(BSDE). This method is particularly useful for solving high-dimensional problems in financial mathematics. 
By leveraging the powerful function approximation capabilities ofdeep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods in high-dimensional settings. Specifically, traditional methods like finite difference methods or Monte Carlo simulations often struggle with the curse of dimensionality, where computational cost increases exponentially with the number of dimensions. Deep BSDE methods, however, employ deep neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden.[246] In addition, the integration ofPhysics-informed neural networks(PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws directly into the neural network architecture. This ensures that the solutions not only fit the data but also adhere to the governing stochastic differential equations. PINNs leverage the power of deep learning while respecting the constraints imposed by the physical models, resulting in more accurate and reliable solutions for financial mathematics problems. Image reconstruction is the reconstruction of the underlying images from the image-related measurements. Several works showed the better and superior performance of the deep learning methods compared to analytical methods for various applications, e.g., spectral imaging[247]and ultrasound imaging.[248] Traditional weather prediction systems solve a very complex system of partial differential equations. GraphCast is a deep learning based model, trained on a long history of weather data to predict how weather patterns change over time. It is able to predict weather conditions for up to 10 days globally, at a very detailed level, and in under a minute, with precision similar to state of the art systems.[249][250] An epigenetic clock is abiochemical testthat can be used to measure age. Galkin et al. used deep neural networks to train an epigenetic aging clock of unprecedented accuracy using >6,000 blood samples.[251]The clock uses information from 1000CpG sitesand predicts people with certain conditions older than healthy controls:IBD,frontotemporal dementia,ovarian cancer,obesity. The aging clock was planned to be released for public use in 2021 by anInsilico Medicinespinoff company Deep Longevity. Deep learning is closely related to a class of theories ofbrain development(specifically, neocortical development) proposed bycognitive neuroscientistsin the early 1990s.[252][253][254][255]These developmental theories were instantiated in computational models, making them predecessors of deep learning systems. These developmental models share the property that various proposed learning dynamics in the brain (e.g., a wave ofnerve growth factor) support theself-organizationsomewhat analogous to the neural networks utilized in deep learning models. Like theneocortex, neural networks employ a hierarchy of layered filters in which each layer considers information from a prior layer (or the operating environment), and then passes its output (and possibly the original input), to other layers. This process yields a self-organizing stack oftransducers, well-tuned to their operating environment. A 1995 description stated, "...the infant's brain seems to organize itself under the influence of waves of so-called trophic-factors ... 
different regions of the brain become connected sequentially, with one layer of tissue maturing before another and so on until the whole brain is mature".[256] A variety of approaches have been used to investigate the plausibility of deep learning models from a neurobiological perspective. On the one hand, several variants of thebackpropagationalgorithm have been proposed in order to increase its processing realism.[257][258]Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchicalgenerative modelsanddeep belief networks, may be closer to biological reality.[259][260]In this respect, generative neural network models have been related to neurobiological evidence about sampling-based processing in the cerebral cortex.[261] Although a systematic comparison between the human brain organization and the neuronal encoding in deep networks has not yet been established, several analogies have been reported. For example, the computations performed by deep learning units could be similar to those of actual neurons[262]and neural populations.[263]Similarly, the representations developed by deep learning models are similar to those measured in the primate visual system[264]both at the single-unit[265]and at the population[266]levels. Facebook's AI lab performs tasks such asautomatically tagging uploaded pictureswith the names of the people in them.[267] Google'sDeepMind Technologiesdeveloped a system capable of learning how to playAtarivideo games using only pixels as data input. In 2015 they demonstrated theirAlphaGosystem, which learned the game ofGowell enough to beat a professional Go player.[268][269][270]Google Translateuses a neural network to translate between more than 100 languages. In 2017, Covariant.ai was launched, which focuses on integrating deep learning into factories.[271] As of 2008,[272]researchers atThe University of Texas at Austin(UT) developed a machine learning framework called Training an Agent Manually via Evaluative Reinforcement, or TAMER, which proposed new methods for robots or computer programs to learn how to perform tasks by interacting with a human instructor.[242]First developed as TAMER, a new algorithm called Deep TAMER was later introduced in 2018 during a collaboration betweenU.S. Army Research Laboratory(ARL) and UT researchers. Deep TAMER used deep learning to provide a robot with the ability to learn new tasks through observation.[242]Using Deep TAMER, a robot learned a task with a human trainer, watching video streams or observing a human perform a task in-person. The robot later practiced the task with the help of some coaching from the trainer, who provided feedback such as "good job" and "bad job".[273] Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science. A main criticism concerns the lack of theory surrounding some methods.[274]Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence is less clear.[citation needed](e.g., Does it converge? If so, how fast? What is it approximating?) 
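For comparison with the less well-understood cases, the following is a minimal sketch of the well-understood gradient-descent learning mentioned above, applied to a single linear neuron with a squared-error loss; the data are synthetic and the learning rate is chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # synthetic inputs
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)   # noisy targets

w = np.zeros(3)
lr = 0.1
for step in range(200):
    err = X @ w - y                           # prediction error
    grad = 2 * X.T @ err / len(y)             # gradient of the mean squared error
    w -= lr * grad                            # gradient-descent update

print("learned weights:", np.round(w, 2))     # close to [1.5, -2.0, 0.5]
```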
Deep learning methods are often looked at as ablack box, with most confirmations done empirically, rather than theoretically.[275] In further reference to the idea that artistic sensitivity might be inherent in relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20-30 layers) neural networks attempting to discern within essentially random data the images on which they were trained[276]demonstrate a visual appeal: the original research notice received well over 1,000 comments, and was the subject of what was for a time the most frequently accessed article onThe Guardian's[277]website. Some deep learning architectures display problematic behaviors,[278]such as confidently classifying unrecognizable images as belonging to a familiar category of ordinary images (2014)[279]and misclassifying minuscule perturbations of correctly classified images (2013).[280]Goertzelhypothesized that these behaviors are due to limitations in their internal representations and that these limitations would inhibit integration into heterogeneous multi-componentartificial general intelligence(AGI) architectures.[278]These issues may possibly be addressed by deep learning architectures that internally form states homologous to image-grammar[281]decompositions of observed entities and events.[278]Learning a grammar(visual or linguistic) from training data would be equivalent to restricting the system tocommonsense reasoningthat operates on concepts in terms of grammaticalproduction rulesand is a basic goal of both human language acquisition[282]andartificial intelligence(AI).[283] As deep learning moves from the lab into the world, research and experience show that artificial neural networks are vulnerable to hacks and deception.[284]By identifying patterns that these systems use to function, attackers can modify inputs to ANNs in such a way that the ANN finds a match that human observers would not recognize. For example, an attacker can make subtle changes to an image such that the ANN finds a match even though the image looks to a human nothing like the search target. Such manipulation is termed an "adversarial attack".[285] In 2016 researchers used one ANN to doctor images in trial and error fashion, identify another's focal points, and thereby generate images that deceived it. The modified images looked no different to human eyes. Another group showed that printouts of doctored images then photographed successfully tricked an image classification system.[286]One defense is reverse image search, in which a possible fake image is submitted to a site such asTinEyethat can then find other instances of it. A refinement is to search using only parts of the image, to identify images from which that piece may have been taken.[287] Another group showed that certainpsychedelicspectacles could fool afacial recognition systeminto thinking ordinary people were celebrities, potentially allowing one person to impersonate another. In 2017 researchers added stickers tostop signsand caused an ANN to misclassify them.[286] ANNs can however be further trained to detect attempts atdeception, potentially leading attackers and defenders into an arms race similar to the kind that already defines themalwaredefense industry. 
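A minimal sketch of the kind of gradient-based input perturbation described above (in the spirit of the fast gradient sign method) is given below; it uses a tiny, untrained PyTorch classifier purely to show the mechanics, whereas real attacks target trained models and carefully chosen inputs.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(784, 32), nn.ReLU(), nn.Linear(32, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 784)          # stand-in for a flattened 28x28 image
label = torch.tensor([3])       # its (assumed) correct class
x_adv = x.clone().requires_grad_(True)

loss = loss_fn(model(x_adv), label)
loss.backward()                 # gradient of the loss w.r.t. the input pixels

epsilon = 0.05                  # small perturbation budget
perturbed = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# With a trained model and a suitable epsilon, the perturbed prediction typically
# changes even though the two inputs look essentially identical to a human.
print("original prediction: ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(perturbed).argmax(dim=1).item())
```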
ANNs have been trained to defeat ANN-based anti-malwaresoftware by repeatedly attacking a defense with malware that was continually altered by agenetic algorithmuntil it tricked the anti-malware while retaining its ability to damage the target.[286] In 2016, another group demonstrated that certain sounds could make theGoogle Nowvoice command system open a particular web address, and hypothesized that this could "serve as a stepping stone for further attacks (e.g., opening a web page hosting drive-by malware)".[286] In "data poisoning", false data is continually smuggled into a machine learning system's training set to prevent it from achieving mastery.[286] The deep learning systems that are trained using supervised learning often rely on data that is created or annotated by humans, or both.[288]It has been argued that not only low-paidclickwork(such as onAmazon Mechanical Turk) is regularly deployed for this purpose, but also implicit forms of humanmicroworkthat are often not recognized as such.[289]The philosopherRainer Mühlhoffdistinguishes five types of "machinic capture" of human microwork to generate training data: (1)gamification(the embedding of annotation or computation tasks in the flow of a game), (2) "trapping and tracking" (e.g.CAPTCHAsfor image recognition or click-tracking on Googlesearch results pages), (3) exploitation of social motivations (e.g.tagging facesonFacebookto obtain labeled facial images), (4)information mining(e.g. by leveragingquantified-selfdevices such asactivity trackers) and (5)clickwork.[289]
https://en.wikipedia.org/wiki/Deep_Learning
Theneocortex, also called theneopallium,isocortex, or thesix-layered cortex, is a set of layers of themammaliancerebral cortexinvolved in higher-order brain functions such assensory perception, cognition, generation ofmotor commands,[1]spatial reasoning, andlanguage.[2]The neocortex is further subdivided into thetrue isocortexand theproisocortex.[3] In thehuman brain, thecerebral cortexconsists of the larger neocortex and the smallerallocortex, respectively taking up 90% and 10%.[4]The neocortex is made up ofsix layers, labelled from the outermost inwards, I to VI. The term is fromcortex,Latin, "bark" or "rind", combined withneo-,Greek, "new".Neopalliumis a similar hybrid, from Latinpallium, "cloak".Isocortexandallocortexare hybrids with Greekisos, "same", andallos, "other". The neocortex is the most developed in its organisation and number of layers, of the cerebral tissues.[5]The neocortex consists of thegrey matter, or neuronal cell bodies andunmyelinatedfibers, surrounding the deeperwhite matter(myelinatedaxons) in thecerebrum. This is a very thin layer though, about 2–4 mm thick.[6]There are two types of cortex in the neocortex, theproisocortexand the true isocortex. The pro-isocortex is a transitional area between the true isocortex and theperiallocortex(part of theallocortex). It is found in thecingulate cortex(part of thelimbic system), inBrodmann's areas24,25,30and32, theinsulaand theparahippocampal gyrus. Of all the mammals studied to date (including humans), a species ofoceanic dolphinknown as thelong-finned pilot whalehas been found to have the most neocortical neurons.[7] The neocortex is smooth inrodentsand other small mammals, whereas inelephants,dolphinsandprimatesand other larger mammals it has deep grooves (sulci) and ridges (gyri). These folds allow the surface area of the neocortex to be greatly increased. All human brains have the same overall pattern of main gyri and sulci, although they differ in detail from one person to another.[8]The mechanism by which the gyri form during embryogenesis is not entirely clear, and there are several competing hypotheses that explain gyrification, such as axonal tension,[9]cortical buckling[10]or differences in cellular proliferation rates in different areas of the cortex.[11] The neocortex contains both excitatory (~80%) and inhibitory (~20%)neurons, named for their effect on other neurons.[12]The human neocortex consists of hundreds of different types of cells.[13]The structure of the neocortex is relatively uniform (hence the alternative names "iso-" and "homotypic" cortex), consisting of six horizontal layers segregated principally bycelltype andneuronalconnections.[14]However, there are many exceptions to this uniformity; for example, layer IV is small or missing in theprimary motor cortex. There is some canonical circuitry within the cortex; for example,pyramidal neuronsin the upper layers II and III project theiraxonsto other areas of neocortex, while those in the deeper layers V and VI often project out of the cortex, e.g. to thethalamus,brainstem, andspinal cord. Neurons in layer IV receive the majority of thesynaptic connectionsfrom outside the cortex (mostly from thalamus), and themselves make short-range, local connections to other cortical layers.[12]Thus, layer IV is the main recipient of incoming sensory information and distributes it to the other layers for further processing. 
The neocortex is often described as being arranged in vertical structures calledcortical columns, patches of neocortex with a diameter of roughly 0.5 mm (and a depth of 2 mm, i.e., spanning all six layers). These columns are often thought of as the basic repeating functional units of the neocortex, but their many definitions, in terms of anatomy, size, or function, are generally not consistent with each other, leading to a lack of consensus regarding their structure or function or even whether it makes sense to try to understand the neocortex in terms of columns.[15] The neocortex is derived embryonically from the dorsaltelencephalon, which is therostralpart of theforebrain. The neocortex is divided into regions demarcated by the cranial sutures in the skull above, intofrontal,parietal,occipital, andtemporallobes, which perform different functions. For example, the occipital lobe contains theprimary visual cortex, and the temporal lobe contains theprimary auditory cortex. Further subdivisions or areas of neocortex are responsible for more specific cognitive processes. In humans, thefrontal lobecontains areas devoted to abilities that are enhanced in or unique to our species, such as complex language processing localized to theventrolateral prefrontal cortex(Broca's area).[12]In humans and other primates, social and emotional processing is localized to theorbitofrontal cortex. The neocortex has also been shown to play an influential role in sleep, memory and learning processes.Semantic memoriesappear to be stored in the neocortex, specifically the anterolateraltemporal lobeof the neocortex.[16]It is also involved ininstrumental conditioning; responsible for transmitting sensory information and information about plans for movement to thebasal ganglia.[16]The firing rate of neurons in the neocortex also has an effect onslow-wave sleep. When the neurons are at rest and arehyperpolarizing, a period of inhibition occurs during a slowoscillation, called the down state. When the neurons of the neocortex are in the excitatorydepolarizingphase and are firing briefly at a high rate, a period of excitation occurs during a slow oscillation, called the up state.[16] Lesions that develop inneurodegenerative disorders, such asAlzheimer's disease, interrupt the transfer of information from the sensory neocortex to the prefrontal neocortex. This disruption of sensory information contributes to the progressive symptoms seen in neurodegenerative disorders such as changes in personality, decline in cognitive abilities, anddementia.[17]Damage to the neocortex of the anterolateral temporal lobe results insemantic dementia, which is the loss of memory of factual information (semantic memories). These symptoms can also be replicated bytranscranial magnetic stimulationof this area. If damage is sustained to this area, patients do not developanterograde amnesiaand are able to recallepisodic information.[18] The neocortex is the newest part of thecerebral cortexto evolve (hence the prefixneomeaning new); the other part of the cerebral cortex is theallocortex. The cellular organization of the allocortex is different from the six-layered neocortex. In humans, 90% of the cerebral cortex and 76% of the entire brain is neocortex.[12] For a species to develop a larger neocortex, the brain must evolve in size so that it is large enough to support the region. 
Body size, basal metabolic rate and life history are factors affecting brain evolution and the coevolution of neocortex size and group size.[19] The neocortex increased in size in response to pressures for greater cooperation and competition in early ancestors. With the size increase, there was greater voluntary inhibitory control of social behaviors, resulting in increased social harmony.[20] The six-layer cortex appears to be a distinguishing feature of mammals; it has been found in the brains of all mammals, but not in any other animals.[2] There is some debate,[21][22] however, as to the cross-species nomenclature for neocortex. In avians, for instance, there are clear examples of cognitive processes that are thought to be neocortical in nature, despite the lack of the distinctive six-layer neocortical structure.[23] Evidence suggests the avian pallium to be broadly equivalent to the mammalian neocortex.[24][25][26] In a similar manner, reptiles, such as turtles, have primary sensory cortices. A consistent alternative name has yet to be agreed upon. The neocortex ratio of a species is the ratio of the size of the neocortex to the rest of the brain. A high neocortex ratio is thought to correlate with a number of social variables, such as group size and the complexity of social mating behaviors.[27] Humans have a large neocortex as a percentage of total brain matter when compared with other mammals. For example, there is only a 30:1 ratio of neocortical gray matter to the size of the medulla oblongata in the brainstem of chimpanzees, while the ratio is 60:1 in humans.[28]
https://en.wikipedia.org/wiki/Neocortex#Layers
In biochemistry and pharmacology, the Hill equation refers to two closely related equations that reflect the binding of ligands to macromolecules, as a function of the ligand concentration. A ligand is "a substance that forms a complex with a biomolecule to serve a biological purpose", and a macromolecule is a very large molecule, such as a protein, with a complex structure of components. Protein–ligand binding typically changes the structure of the target protein, thereby changing its function in a cell. The distinction between the two Hill equations is whether they measure occupancy or response. The Hill–Langmuir equation reflects the occupancy of macromolecules: the fraction that is saturated or bound by the ligand.[1][2][nb 1] This equation is formally equivalent to the Langmuir isotherm.[3] Conversely, the Hill equation proper reflects the cellular or tissue response to the ligand: the physiological output of the system, such as muscle contraction. The Hill equation was originally formulated by Archibald Hill in 1910 to describe the sigmoidal O2 binding curve of hemoglobin.[4] The binding of a ligand to a macromolecule is often enhanced if there are already other ligands present on the same macromolecule (this is known as cooperative binding). The Hill equation is useful for determining the degree of cooperativity of the ligand(s) binding to the enzyme or receptor. The Hill coefficient provides a way to quantify the degree of interaction between ligand binding sites.[5] The Hill equation (for response) is important in the construction of dose–response curves. The Hill–Langmuir equation is commonly expressed in the following ways:[2][7][8] θ = [L]^n / (K_d + [L]^n) = [L]^n / ((K_A)^n + [L]^n) = 1 / (1 + (K_A/[L])^n), where θ is the fraction of the receptor sites bound by the ligand, [L] is the free (unbound) ligand concentration, K_d is the apparent dissociation constant, K_A is the ligand concentration producing half occupation, and n is the Hill coefficient. The special case where n = 1 is a Monod equation. In pharmacology, θ is often written as p_AR, where A is the ligand, equivalent to L, and R is the receptor. θ can be expressed in terms of the total amount of receptor and ligand-bound receptor concentrations: θ = [LR]/[R_total]. K_d is equal to the ratio of the dissociation rate of the ligand–receptor complex to its association rate (K_d = k_d/k_a).[8] K_d is the equilibrium constant for dissociation. K_A is defined so that (K_A)^n = K_d = k_d/k_a; this is also known as the microscopic dissociation constant and is the ligand concentration occupying half of the binding sites. In recent literature, this constant is sometimes referred to as K_D.[8] The Gaddum equation is a further generalisation of the Hill equation, incorporating the presence of a reversible competitive antagonist.[1] The Gaddum equation is derived similarly to the Hill equation but with two equilibria: both the ligand with the receptor and the antagonist with the receptor. Hence, the Gaddum equation has two constants: the equilibrium constants of the ligand and of the antagonist. The Hill plot is the rearrangement of the Hill equation into a straight line. Taking the reciprocal of both sides of the Hill equation, rearranging, and inverting again yields: θ/(1 − θ) = [L]^n/K_d = [L]^n/(K_A)^n.
Taking the logarithm of both sides of the equation leads to an alternative formulation of the Hill–Langmuir equation: log(θ/(1 − θ)) = n·log[L] − n·log K_A. This last form of the Hill equation is advantageous because a plot of log(θ/(1 − θ)) versus log[L] yields a linear plot, which is called a Hill plot.[7][8] Because the slope of a Hill plot is equal to the Hill coefficient for the biochemical interaction, the slope is denoted by n_H. A slope greater than one thus indicates positively cooperative binding between the receptor and the ligand, while a slope less than one indicates negatively cooperative binding. Transformations of equations into linear forms such as this were very useful before the widespread use of computers, as they allowed researchers to determine parameters by fitting lines to data. However, these transformations affect error propagation, and this may result in undue weight being given to errors in data points near 0 or 1.[nb 2] This impacts the parameters of linear regression lines fitted to the data. Furthermore, the use of computers enables more robust analysis involving nonlinear regression. A distinction should be made between quantification of drugs binding to receptors and drugs producing responses. There may not necessarily be a linear relationship between the two values. In contrast to this article's previous definition of the Hill equation, the IUPHAR defines the Hill equation in terms of the tissue response (E), as[1] E/E_max = [A]^n / (EC50^n + [A]^n) = 1 / (1 + (EC50/[A])^n), where [A] is the drug concentration, n is the Hill coefficient, and EC50 is the drug concentration that produces a 50% maximal response. Dissociation constants (in the previous section) relate to ligand binding, while EC50 reflects tissue response. This form of the equation can reflect tissue/cell/population responses to drugs and can be used to generate dose–response curves. The relationship between K_d and EC50 may be quite complex, as a biological response will be the sum of myriad factors; a drug will have a different biological effect if more receptors are present, regardless of its affinity. The del Castillo–Katz model is used to relate the Hill equation to receptor activation by including a second equilibrium of the ligand-bound receptor to an activated form of the ligand-bound receptor. Statistical analysis of response as a function of stimulus may be performed by regression methods such as the probit model or logit model, or other methods such as the Spearman–Kärber method.[9] Empirical models based on nonlinear regression are usually preferred over the use of some transformation of the data that linearizes the dose–response relationship.[10] The Hill coefficient is a measure of ultrasensitivity (i.e. how steep the response curve is). The Hill coefficient, n or n_H, may describe cooperativity (or possibly other biochemical properties, depending on the context in which the Hill equation is being used).
When appropriate, the value of the Hill coefficient describes the cooperativity of ligand binding in the following way: a coefficient greater than one indicates positively cooperative binding, a coefficient less than one indicates negatively cooperative binding, and a coefficient equal to one indicates noncooperative (completely independent) binding. The Hill coefficient can be calculated approximately in terms of the cooperativity index of Taketa and Pogell[12] as follows:[13] n_H = log(81) / log(EC90/EC10), where EC90 and EC10 are the input values needed to produce 90% and 10% of the maximal response, respectively. The most common form of the Hill equation is its irreversible form. However, when building computational models a reversible form is often required in order to model product inhibition. For this reason, Hofmeyr and Cornish-Bowden devised the reversible Hill equation.[14] The Hill coefficient is also intimately connected to the elasticity coefficient, and can be shown to equal: n = ε_s^v · 1/(1 − θ), where θ is the fractional saturation, ES/E_t, and ε_s^v is the elasticity coefficient. This is derived by taking the slope of the Hill equation, n = d log(θ/(1 − θ)) / d log s, and expanding the slope using the quotient rule. The result shows that the elasticity can never exceed n, since the equation above can be rearranged to: ε_s^v = n(1 − θ). The Hill equation is used extensively in pharmacology to quantify the functional parameters of a drug and is also used in other areas of biochemistry. The Hill equation can be used to describe dose–response relationships, for example ion channel open-probability (P-open) vs. ligand concentration.[15] The Hill equation can be applied in modelling the rate at which a gene product is produced when its parent gene is being regulated by transcription factors (e.g., activators and/or repressors).[11] Doing so is appropriate when a gene is regulated by multiple binding sites for transcription factors, in which case the transcription factors may bind the DNA in a cooperative fashion.[16] If the production of protein from gene X is up-regulated (activated) by a transcription factor Y, then the rate of production of protein X can be modeled as a differential equation in terms of the concentration of activated Y protein: d[X]/dt = k·[Y]^n / (K_A^n + [Y]^n), where k is the maximal transcription rate of gene X. Likewise, if the production of protein from gene Y is down-regulated (repressed) by a transcription factor Z, then the rate of production of protein Y can be modeled as a differential equation in terms of the concentration of activated Z protein: d[Y]/dt = k·K_A^n / (K_A^n + [Z]^n), where k is the maximal transcription rate of gene Y. Because of its assumption that ligand molecules bind to a receptor simultaneously, the Hill equation has been criticized as a physically unrealistic model.[5] Moreover, the Hill coefficient should not be considered a reliable approximation of the number of cooperative ligand binding sites on a receptor[5][17] except when the binding of the first and subsequent ligands results in extreme positive cooperativity.[5] Unlike more complex models, the relatively simple Hill equation provides little insight into underlying physiological mechanisms of protein–ligand interactions.
This simplicity, however, is what makes the Hill equation a useful empirical model, since its use requires little a priori knowledge about the properties of either the protein or ligand being studied.[2] Nevertheless, other, more complex models of cooperative binding have been proposed.[7] For more information and examples of such models, see Cooperative binding. Global sensitivity measures such as the Hill coefficient do not characterise the local behaviour of s-shaped curves. Instead, such local features are well captured by the response coefficient measure.[18] There is, however, a link between the Hill coefficient and the response coefficient: Altszyler et al. (2017) have shown that these ultrasensitivity measures can be related to one another.[13]
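A short numerical sketch of the response form of the Hill equation, the Hill-plot slope, and the EC90/EC10 approximation of the Hill coefficient is given below; the EC50 and n values are invented for illustration.

```python
import numpy as np

def hill_response(a, ec50, n):
    """Fractional response E/Emax = a**n / (ec50**n + a**n)."""
    return a**n / (ec50**n + a**n)

ec50, n_true = 10.0, 2.0                      # hypothetical drug parameters
a = np.logspace(-1, 3, 200)                   # concentration range
theta = hill_response(a, ec50, n_true)

# Hill plot: log(theta/(1-theta)) vs log(a) has slope n_H.
slope, _ = np.polyfit(np.log10(a), np.log10(theta / (1 - theta)), 1)

# Approximation from the cooperativity index: n_H ≈ log(81) / log(EC90/EC10).
ec90 = ec50 * 9**(1 / n_true)
ec10 = ec50 * (1 / 9)**(1 / n_true)
n_approx = np.log(81) / np.log(ec90 / ec10)

print(f"slope of Hill plot:      {slope:.2f}")     # ≈ 2.0
print(f"EC90/EC10 approximation: {n_approx:.2f}")  # ≈ 2.0
```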
https://en.wikipedia.org/wiki/Hill_equation_(biochemistry)
The Hubbert curve is an approximation of the production rate of a resource over time. It is a symmetric logistic distribution curve,[1] often confused with the "normal" gaussian function. It first appeared in "Nuclear Energy and the Fossil Fuels," geologist M. King Hubbert's 1956 presentation to the American Petroleum Institute, as an idealized symmetric curve, during his tenure at the Shell Oil Company.[1] It has gained a high degree of popularity in the scientific community for predicting the depletion of various natural resources. The curve is the main component of Hubbert peak theory, which has led to the rise of peak oil concerns. Basing his calculations on the peak of oil well discovery in 1948, Hubbert used his model in 1956 to create a curve which predicted that oil production in the contiguous United States would peak around 1970.[1] The prototypical Hubbert curve is a probability density function of a logistic distribution curve. It is not a gaussian function (which is used to plot normal distributions), but the two have a similar appearance. The density of a Hubbert curve approaches zero more slowly than a gaussian function, whose tails fall off much faster (roughly as exp(−t²/2) rather than exp(−|t|)). The graph of a Hubbert curve consists of three key elements: a gradual rise from zero production, a single peak, and a subsequent decline back toward zero. The actual shape of a graph of real-world production trends is determined by various factors, such as development of enhanced production techniques, availability of competing resources, and government regulations on production or consumption. Because of such factors, real-world Hubbert curves are often not symmetrical. Using the curve, Hubbert modeled the rate of petroleum production for several regions, determined by the rate of new oil well discovery, and extrapolated a world production curve.[1] The relative steepness of decline in this projection is the main concern in peak oil discussions. This is because a steep drop in production implies that global oil production will decline so rapidly that the world will not have enough time to develop sources of energy to replace the energy now used from oil, possibly leading to drastic social and economic impacts. Hubbert models have been used to predict the production trends of various resources, such as natural gas (Hubbert's attempt in the late 1970s resulted in an inaccurate prediction that natural gas production would fall dramatically in the 1980s), coal, fissionable materials, helium, transition metals (such as copper), and water. At least one researcher has attempted to create a Hubbert curve for the whaling industry and caviar,[2] while another applied it to cod.[3] After the predicted early-1970s peak of oil production in the U.S., production declined over the following 35 years in a pattern closely matching the Hubbert curve. However, new extraction methods began reversing this trend in the mid-2000s, with production reaching 10.07 million b/d in November 2017 – the highest monthly level of crude oil production in U.S. history. As such, the Hubbert curve has to be calculated separately for different oil provinces, whose exploration started at different times, and for oil extracted by new techniques, sometimes called unconventional oil, resulting in individual Hubbert cycles.[4] The Hubbert curve for US oil production is generally measured in years.
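A small sketch of one common parameterization of the Hubbert curve, written as the time derivative of a logistic cumulative-production curve, is shown below; the parameter values (ultimate recovery, steepness, peak year) are illustrative rather than fitted to real production data.

```python
import numpy as np

def hubbert_rate(t, q_max, b, t_peak):
    """Production rate: derivative of a logistic cumulative-production curve.
    q_max = ultimate recoverable resource, b = steepness, t_peak = year of peak."""
    u = np.exp(-b * (t - t_peak))
    return q_max * b * u / (1.0 + u) ** 2

years = np.arange(1900, 2051)
rate = hubbert_rate(years, q_max=200.0, b=0.07, t_peak=1970)  # hypothetical units

peak_year = years[np.argmax(rate)]
print(f"peak production year: {peak_year}")                     # 1970 by construction
print(f"peak rate: {rate.max():.2f} (q_max*b/4 = {200.0*0.07/4:.2f})")
```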
https://en.wikipedia.org/wiki/Hubbert_curve
In statistics, Smooth Transition Autoregressive (STAR) models are typically applied to time series data as an extension of autoregressive models, in order to allow for a higher degree of flexibility in model parameters through a smooth transition. Given a time series of data xt, the STAR model is a tool for understanding and, perhaps, predicting future values in this series, assuming that the behaviour of the series changes depending on the value of the transition variable. The transition might depend on the past values of the x series (similar to the SETAR models), or on exogenous variables. The model consists of two autoregressive (AR) parts linked by the transition function. The model is usually referred to as a STAR(p) model, preceded by a letter describing the transition function (see below), where p is the order of the autoregressive part. The most popular transition functions include the exponential function and the first- and second-order logistic functions. These give rise to the logistic STAR (LSTAR) and exponential STAR (ESTAR) models. Consider a simple AR(p) model for a time series yt: y_t = φ_0 + φ_1·y_{t−1} + … + φ_p·y_{t−p} + ε_t, which can be written in vector form as y_t = φ′x_t + ε_t, where x_t = (1, y_{t−1}, …, y_{t−p})′ is the vector of a constant and the lagged values, φ is the vector of parameters, and ε_t is a white-noise error term. STAR models were introduced and comprehensively developed by Kung-sik Chan and Howell Tong in 1986 (esp. p. 187), in which the same acronym was used. Originally the acronym stood for Smooth Threshold AutoRegressive. For some background history, see Tong (2011, 2012). The models can be thought of as an extension of the autoregressive models discussed above, allowing for changes in the model parameters according to the value of a transition variable zt. Chan and Tong (1986) rigorously proved that the family of STAR models includes the SETAR model as a limiting case by showing the uniform boundedness and equicontinuity with respect to the switching parameter. Without this proof, to say that STAR models nest the SETAR model lacks justification. Unfortunately, whether one should use a SETAR model or a STAR model for one's data has been a matter of subjective judgement, taste and inclination in much of the literature. Fortunately, a test procedure, based on David Cox's test of separate families of hypotheses and developed by Gao, Ling and Tong (2018, Statistica Sinica, volume 28, 2857–2883), is now available to address this issue. Such a test is important before adopting a STAR model because, among other issues, estimation of the parameter controlling its rate of switching is notoriously data-hungry. Defined in this way, the STAR model combines two such autoregressive parts, weighted by 1 − G(z_t) and G(z_t) respectively, where G is the transition function, bounded between 0 and 1, and z_t is the transition variable. The model can be understood as a two-regime SETAR model with a smooth transition between regimes, or as a continuum of regimes. In both cases the presence of the transition function is the defining feature of the model, as it allows for changes in the values of the parameters. Three basic transition functions and the names of the resulting models are: the first-order logistic function (logistic STAR, LSTAR), the exponential function (exponential STAR, ESTAR), and the second-order logistic function.
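A minimal simulation of a two-regime LSTAR(1) model with the lagged value as the transition variable is sketched below; the regime coefficients, the smoothness parameter and the threshold are invented for illustration, and the function and variable names are this sketch's own rather than standard notation.

```python
import numpy as np

def logistic_transition(z, gamma, c):
    """First-order logistic transition function G(z; gamma, c) in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-gamma * (z - c)))

rng = np.random.default_rng(1)
T = 500
y = np.zeros(T)

# Hypothetical AR(1) coefficients (intercept, slope) for the two regimes.
phi1 = (0.2, 0.8)    # applies when G is near 0
phi2 = (-0.5, 0.3)   # applies when G is near 1
gamma, c = 5.0, 0.0  # smoothness and location of the transition

for t in range(1, T):
    g = logistic_transition(y[t - 1], gamma, c)        # transition depends on the lag
    mean = (phi1[0] + phi1[1] * y[t - 1]) * (1 - g) \
         + (phi2[0] + phi2[1] * y[t - 1]) * g
    y[t] = mean + rng.normal(scale=0.5)

print("sample mean:", round(y.mean(), 3), "sample std:", round(y.std(), 3))
```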
https://en.wikipedia.org/wiki/STAR_model
In biochemistry, Michaelis–Menten kinetics, named after Leonor Michaelis and Maud Menten, is the simplest case of enzyme kinetics, applied to enzyme-catalysed reactions involving the transformation of one substrate into one product. It takes the form of a differential equation describing the reaction rate v (rate of formation of product P, with concentration p) as a function of a, the concentration of the substrate A (using the symbols recommended by the IUBMB).[1][2][3][4] Its formula is given by the Michaelis–Menten equation: v = V·a / (K_m + a). V, which is often written as V_max,[5] represents the limiting rate approached by the system at saturating substrate concentration for a given enzyme concentration. The Michaelis constant K_m has units of concentration, and for a given reaction is equal to the concentration of substrate at which the reaction rate is half of V.[6] Biochemical reactions involving a single substrate are often assumed to follow Michaelis–Menten kinetics, without regard to the model's underlying assumptions. Only a small proportion of enzyme-catalysed reactions have just one substrate, but the equation still often applies if only one substrate concentration is varied. The plot of v against a has often been called a "Michaelis–Menten plot", even recently,[7][8][9] but this is misleading, because Michaelis and Menten did not use such a plot. Instead, they plotted v against log a, which has some advantages over the usual ways of plotting Michaelis–Menten data. It has v as the dependent variable, and thus does not distort the experimental errors in v. Michaelis and Menten did not attempt to estimate V directly from the limit approached at high log a, something difficult to do accurately with data obtained with modern techniques, and almost impossible with their data. Instead they took advantage of the fact that the curve is almost straight in the middle range and has a maximum slope of 0.576·V, i.e. 0.25·ln10·V. With an accurate value of V it was easy to determine log K_m from the point on the curve corresponding to 0.5·V. This plot is virtually never used today for estimating V and K_m, but it remains of major interest because it has another valuable property: it allows the properties of isoenzymes catalysing the same reaction, but active in very different ranges of substrate concentration, to be compared on a single plot. For example, the four mammalian isoenzymes of hexokinase are half-saturated by glucose at concentrations ranging from about 0.02 mM for hexokinase A (brain hexokinase) to about 50 mM for hexokinase D ("glucokinase", liver hexokinase), more than a 2000-fold range.
It would be impossible to show a kinetic comparison between the four isoenzymes on one of the usual plots, but it is easily done on a semi-logarithmic plot.[10] A decade beforeMichaelisandMenten,Victor Henrifound that enzyme reactions could be explained by assuming a binding interaction between the enzyme and the substrate.[11]His work was taken up by Michaelis and Menten, who investigated thekineticsofinvertase, an enzyme that catalyzes thehydrolysisofsucroseintoglucoseandfructose.[12]In 1913 they proposed a mathematical model of the reaction.[13]It involves anenzymeE binding to a substrate A to form acomplexEA that releases aproductP regenerating the original form of the enzyme.[6]This may be represented schematically as wherek+1{\displaystyle k_{\mathrm {+1} }}(forward rate constant),k−1{\displaystyle k_{\mathrm {-1} }}(reverse rate constant), andkcat{\displaystyle k_{\mathrm {cat} }}(catalytic rate constant) denote therate constants,[14]the double arrows between A (substrate) and EA (enzyme-substrate complex) represent the fact that enzyme-substrate binding is areversibleprocess, and the single forward arrow represents the formation of P (product). Under certainassumptions– such as the enzyme concentration being much less than the substrate concentration – the rate of product formation is given by in whiche0{\displaystyle e_{0}}is the initial enzyme concentration. Thereaction orderdepends on the relative size of the two terms in the denominator. At low substrate concentrationa≪Km{\displaystyle a\ll K_{\mathrm {m} }}, so that the ratev=kcate0aKm{\displaystyle v={\frac {k_{\mathrm {cat} }e_{0}a}{K_{\mathrm {m} }}}}varies linearly with substrate concentrationa{\displaystyle a}(first-order kineticsina{\displaystyle a}).[15]However at highera{\displaystyle a}, witha≫Km{\displaystyle a\gg K_{\mathrm {m} }}, the reaction approaches independence ofa{\displaystyle a}(zero-order kinetics ina{\displaystyle a}),[15]asymptoticallyapproaching the limiting rateVmax=kcate0{\displaystyle V_{\mathrm {max} }=k_{\mathrm {cat} }e_{0}}. This rate, which is never attained, refers to the hypothetical case in which all enzyme molecules are bound to substrate.kcat{\displaystyle k_{\mathrm {cat} }}, known as theturnover numberorcatalytic constant, normally expressed in s–1, is the limiting number of substrate molecules converted to product per enzyme molecule per unit of time. Further addition of substrate would not increase the rate, and the enzyme is said to be saturated. The Michaelis constantKm{\displaystyle K_{\mathrm {m} }}is not affected by the concentration or purity of an enzyme.[16]Its value depends both on the identity of the enzyme and that of the substrate, as well as conditions such as temperature and pH. The model is used in a variety of biochemical situations other than enzyme-substrate interaction, includingantigen–antibody binding,DNA–DNA hybridization, andprotein–protein interaction.[17][18]It can be used to characterize a generic biochemical reaction, in the same way that theLangmuir equationcan be used to model genericadsorptionof biomolecular species.[18]When an empirical equation of this form is applied to microbial growth, it is sometimes called aMonod equation. 
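A small numerical sketch of the rate law described above, using the article's symbols (v, a, V, K_m), is given below; the parameter values are invented, and the printout illustrates the first-order behaviour at low substrate concentration and the approach to the limiting rate at high concentration.

```python
def mm_rate(a, V, Km):
    """Michaelis–Menten rate: v = V*a / (Km + a)."""
    return V * a / (Km + a)

V, Km = 1.0, 5.0   # hypothetical limiting rate and Michaelis constant
for a in [0.05, 0.5, 5.0, 50.0, 500.0]:
    v = mm_rate(a, V, Km)
    # At a << Km the ratio v / ((V/Km)*a) is close to 1 (first-order regime);
    # at a = Km the rate is V/2; at a >> Km, v approaches V (zero-order regime).
    print(f"a = {a:7.2f}   v = {v:.4f}   v / ((V/Km)*a) = {v / (V * a / Km):.3f}")
```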
Michaelis–Mentenkinetics have also been applied to a variety of topics outside of biochemical reactions,[14]includingalveolarclearance of dusts,[19]therichness of speciespools,[20]clearance ofblood alcohol,[21]thephotosynthesis-irradiancerelationship, and bacterialphageinfection.[22] The equation can also be used to describe the relationship betweenion channelconductivityandligandconcentration,[23]and also, for example, to limiting nutrients and phytoplankton growth in the global ocean.[24] Thespecificity constantkcat/Km{\displaystyle k_{\text{cat}}/K_{\mathrm {m} }}(also known as thecatalytic efficiency) is a measure of how efficiently an enzyme converts a substrate into product. Although it is the ratio ofkcat{\displaystyle k_{\text{cat}}}andKm{\displaystyle K_{\mathrm {m} }}it is a parameter in its own right, more fundamental thanKm{\displaystyle K_{\mathrm {m} }}.Diffusion limited enzymes, such asfumarase, work at the theoretical upper limit of108– 1010M−1s−1, limited by diffusion of substrate into theactive site.[25] If we symbolize the specificity constant for a particular substrate A askA=kcat/Km{\displaystyle k_{\mathrm {A} }=k_{\text{cat}}/K_{\mathrm {m} }}the Michaelis–Menten equation can be written in terms ofkA{\displaystyle k_{\mathrm {A} }}andKm{\displaystyle K_{\mathrm {m} }}as follows: At small values of the substrate concentration this approximates to a first-order dependence of the rate on the substrate concentration: Conversely it approaches a zero-order dependence ona{\displaystyle a}when the substrate concentration is high: The capacity of an enzyme to distinguish between two competing substrates that both follow Michaelis–Menten kinetics depends only on the specificity constant, and not on eitherkcat{\displaystyle k_{\text{cat}}}orKm{\displaystyle K_{\mathrm {m} }}alone. PuttingkA{\displaystyle k_{\mathrm {A} }}for substrateA{\displaystyle \mathrm {A} }andkA′{\displaystyle k_{\mathrm {A'} }}for a competing substrateA′{\displaystyle \mathrm {A'} }, then the two rates when both are present simultaneously are as follows: Although both denominators contain the Michaelis constants they are the same, and thus cancel when one equation is divided by the other: and so the ratio of rates depends only on the concentrations of the two substrates and their specificity constants. As the equation originated withHenri, not withMichaelisandMenten, it is more accurate to call it the Henri–Michaelis–Menten equation,[26]though it was Michaelis and Menten who realized that analysing reactions in terms of initial rates would be simpler, and as a result more productive, than analysing the time course of reaction, as Henri had attempted. Although Henri derived the equation he made no attempt to apply it. In addition, Michaelis and Menten understood the need for buffers to control the pH, but Henri did not. Parameter values vary widely between enzymes. Some examples are as follows:[27] In their analysis, Michaelis and Menten (and also Henri) assumed that the substrate is in instantaneouschemical equilibriumwith the complex, which implies[13][28] in whicheis the concentration offreeenzyme (not the total concentration) andxis the concentration of enzyme-substrate complex EA. Conservation of enzyme requires that[28] wheree0{\displaystyle e_{0}}is now thetotalenzyme concentration. 
After combining the two expressions some straightforward algebra leads to the following expression for the concentration of the enzyme-substrate complex: whereKdiss=k−1/k+1{\displaystyle K_{\mathrm {diss} }=k_{-1}/k_{+1}}is thedissociation constantof the enzyme-substrate complex. Hence the rate equation is the Michaelis–Menten equation,[28] wherek+2{\displaystyle k_{+2}}corresponds to the catalytic constantkcat{\displaystyle k_{\mathrm {cat} }}and the limiting rate isVmax=k+2e0=kcate0{\displaystyle V_{\mathrm {max} }=k_{+2}e_{0}=k_{\mathrm {cat} }e_{0}}. Likewise with the assumption of equilibrium the Michaelis constantKm=Kdiss{\displaystyle K_{\mathrm {m} }=K_{\mathrm {diss} }}. When studyingureaseat about the same time as Michaelis and Menten were studying invertase,Donald Van Slykeand G. E. Cullen[29]made essentially the opposite assumption, treating the first step not as an equilibrium but as an irreversible second-order reaction with rate constantk+1{\displaystyle k_{+1}}. As their approach is never used today it is sufficient to give their final rate equation: and to note that it is functionally indistinguishable from the Henri–Michaelis–Menten equation. One cannot tell from inspection of the kinetic behaviour whetherKm{\displaystyle K_{\mathrm {m} }}is equal tok+2/k+1{\displaystyle k_{+2}/k_{+1}}or tok−1/k+1{\displaystyle k_{-1}/k_{+1}}or to something else. G. E. BriggsandJ. B. S. Haldaneundertook an analysis that harmonized the approaches of Michaelis and Menten and of Van Slyke and Cullen,[30][31]and is taken as the basic approach to enzyme kinetics today. They assumed that the concentration of the intermediate complex does not change on the time scale over which product formation is measured.[32]This assumption means thatk+1ea=k−1x+kcatx=(k−1+kcat)x{\displaystyle k_{+1}ea=k_{-1}x+k_{\mathrm {cat} }x=(k_{-1}+k_{\mathrm {cat} })x}. The resulting rate equation is as follows: where This is the generalized definition of the Michaelis constant.[33] All of the derivations given treat the initial binding step in terms of thelaw of mass action, which assumes freediffusionthrough the solution. However, in the environment of a living cell where there is a high concentration ofproteins, thecytoplasmoften behaves more like a viscousgelthan a free-flowing liquid, limiting molecular movements bydiffusionand altering reaction rates.[34]Note, however that although this gel-like structure severely restricts large molecules like proteins its effect on small molecules, like many of the metabolites that participate in central metabolism, is very much smaller.[35]In practice, therefore, treating the movement of substrates in terms of diffusion is not likely to produce major errors. Nonetheless, Schnell and Turner consider it more appropriate to model the cytoplasm as afractal, in order to capture its limited-mobility kinetics.[36] Determining the parameters of the Michaelis–Menten equation typically involves running a series ofenzyme assaysat varying substrate concentrationsa{\displaystyle a}, and measuring the initial reaction ratesv{\displaystyle v}, i.e. 
the reaction rates are measured after a time period short enough for it to be assumed that the enzyme-substrate complex has formed, but that the substrate concentration remains almost constant, and so the equilibrium or quasi-steady-state approximation remain valid.[37]By plotting reaction rate against concentration, and usingnonlinear regressionof the Michaelis–Menten equation with correct weighting based on known error distribution properties of the rates, the parameters may be obtained. Before computing facilities to perform nonlinear regression became available, graphical methods involving linearisation of the equation were used. A number of these were proposed, including theEadie–Hofstee plotofv{\displaystyle v}againstv/a{\displaystyle v/a},[38][39]theHanes plotofa/v{\displaystyle a/v}againsta{\displaystyle a},[40]and theLineweaver–Burk plot(also known as thedouble-reciprocal plot) of1/v{\displaystyle 1/v}against1/a{\displaystyle 1/a}.[41]Of these,[42]the Hanes plot is the most accurate whenv{\displaystyle v}is subject to errors with uniform standard deviation.[43]From the point of view of visualizaing the data the Eadie–Hofstee plot has an important property: the entire possible range ofv{\displaystyle v}values from0{\displaystyle 0}toV{\displaystyle V}occupies a finite range of ordinate scale, making it impossible to choose axes that conceal a poor experimental design. However, while useful for visualization, all three linear plots distort the error structure of the data and provide less precise estimates ofv{\displaystyle v}andKm{\displaystyle K_{\mathrm {m} }}than correctly weighted non-linear regression. Assuming an errorε(v){\displaystyle \varepsilon (v)}onv{\displaystyle v}, an inverse representation leads to an error ofε(v)/v2{\displaystyle \varepsilon (v)/v^{2}}on1/v{\displaystyle 1/v}(Propagation of uncertainty), implying that linear regression of the double-reciprocal plot should include weights ofv4{\displaystyle v^{4}}. This was well understood by Lineweaver and Burk,[41]who had consulted the eminent statisticianW. Edwards Demingbefore analysing their data.[44]Unlike nearly all workers since, Burk made an experimental study of the error distribution, finding it consistent with a uniform standard error inv{\displaystyle v}, before deciding on the appropriate weights.[45]This aspect of the work of Lineweaver and Burk received virtually no attention at the time, and was subsequently forgotten. Thedirect linear plotis a graphical method in which the observations are represented by straight lines in parameter space, with axesKm{\displaystyle K_{\mathrm {m} }}andV{\displaystyle V}: each line is drawn with an intercept of−a{\displaystyle -a}on theKm{\displaystyle K_{\mathrm {m} }}axis andv{\displaystyle v}on theV{\displaystyle V}axis. The point of intersection of the lines for different observations yields the values ofKm{\displaystyle K_{\mathrm {m} }}andV{\displaystyle V}.[46] Many authors, for example Greco and Hakala,[47]have claimed that non-linear regression is always superior to regression of the linear forms of the Michaelis–Menten equation. However, that is correct only if the appropriate weighting scheme is used, preferably on the basis of experimental investigation, something that is almost never done. As noted above, Burk[45]carried out the appropriate investigation, and found that the error structure of his data was consistent with a uniform standard deviation inv{\displaystyle v}. 
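A sketch of the correctly weighted non-linear fit discussed above is given below, using scipy.optimize.curve_fit on synthetic rates; simulating the measurements with a uniform standard deviation in v makes unweighted least squares the appropriate choice, which is itself one of the assumptions debated in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def mm_rate(a, V, Km):
    return V * a / (Km + a)

rng = np.random.default_rng(2)
a = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])   # substrate concentrations (arbitrary units)
v_true = mm_rate(a, V=1.0, Km=5.0)
v_obs = v_true + rng.normal(scale=0.02, size=a.size)   # uniform standard deviation in v

# Unweighted least squares corresponds to assuming a uniform standard deviation in v.
popt, pcov = curve_fit(mm_rate, a, v_obs, p0=[1.0, 1.0])
V_fit, Km_fit = popt
print(f"V  = {V_fit:.3f} ± {np.sqrt(pcov[0, 0]):.3f}")
print(f"Km = {Km_fit:.3f} ± {np.sqrt(pcov[1, 1]):.3f}")
```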
More recent studies found that a uniform coefficient of variation (standard deviation expressed as a percentage) was closer to the truth with the techniques in use in the 1970s.[48][49]However, this truth may be more complicated than any dependence onv{\displaystyle v}alone can represent.[50] Uniform standard deviation of1/v{\displaystyle 1/v}. If the rates are considered to have a uniform standard deviation the appropriate weight for everyv{\displaystyle v}value for non-linear regression is 1. If the double-reciprocal plot is used each value of1/v{\displaystyle 1/v}should have a weight ofv4{\displaystyle v^{4}}, whereas if the Hanes plot is used each value ofa/v{\displaystyle a/v}should have a weight ofv4/a2{\displaystyle v^{4}/a^{2}}. Uniform coefficient variation of1/v{\displaystyle 1/v}. If the rates are considered to have a uniform coefficient variation the appropriate weight for everyv{\displaystyle v}value for non-linear regression isv2{\displaystyle v^{2}}. If the double-reciprocal plot is used each value of1/v{\displaystyle 1/v}should have a weight ofv2{\displaystyle v^{2}}, whereas if the Hanes plot is used each value ofa/v{\displaystyle a/v}should have a weight ofv2/a2{\displaystyle v^{2}/a^{2}}. Ideally thev{\displaystyle v}in each of these cases should be the true value, but that is always unknown. However, after a preliminary estimation one can use the calculated valuesv^{\displaystyle {\hat {v}}}for refining the estimation. In practice the error structure of enzyme kinetic data is very rarely investigated experimentally, therefore almost never known, but simply assumed. It is, however, possible to form an impression of the error structure from internal evidence in the data.[51]This is tedious to do by hand, but can readily be done in the computer. Santiago Schnelland Claudio Mendoza suggested a closed form solution for the time course kinetics analysis of the Michaelis–Menten kinetics based on the solution of theLambert W function.[52]Namely, whereWis the Lambert W function and The above equation, known nowadays as the Schnell-Mendoza equation,[53]has been used to estimateV{\displaystyle V}andKm{\displaystyle K_{\mathrm {m} }}from time course data.[54][55] Only a small minority of enzyme-catalysed reactions have just one substrate, and even if the number is increased by treating two-substrate reactions in which one substrate is water as one-substrate reactions the number is still small. One might accordingly suppose that the Michaelis–Menten equation, normally written with just one substrate, is of limited usefulness. This supposition is misleading, however. One of the common equations for a two-substrate reaction can be written as follows to expressv{\displaystyle v}in terms of two substrate concentrationsa{\displaystyle a}andb{\displaystyle b}: the other symbols represent kinetic constants. Suppose now thata{\displaystyle a}is varied withb{\displaystyle b}held constant. Then it is convenient to reorganize the equation as follows: This has exactly the form of the Michaelis–Menten equation withapparent valuesVapp{\displaystyle V^{\mathrm {app} }}andKmapp{\displaystyle K_{\mathrm {m} }^{\mathrm {app} }}defined as follows: The linear (simple) types of inhibition can be classified in terms of the general equation formixed inhibitionat an inhibitor concentrationi{\displaystyle i}: in whichKic{\displaystyle K_{\mathrm {ic} }}is thecompetitive inhibition constantandKiu{\displaystyle K_{\mathrm {iu} }}is theuncompetitive inhibition constant. 
This equation includes the other types of inhibition as special cases: Pure non-competitive inhibition is very rare, being mainly confined to effects of protons and some metal ions. Cleland recognized this, and he redefinednoncompetitiveto meanmixed.[57]Some authors have followed him in this respect, but not all, so when reading any publication one needs to check what definition the authors are using. In all cases the kinetic equations have the form of the Michaelis–Menten equation with apparent constants, as can be seen by writing the equation above as follows: with apparent valuesVapp{\displaystyle V^{\mathrm {app} }}andKmapp{\displaystyle K_{\mathrm {m} }^{\mathrm {app} }}defined as follows:
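To make the apparent-constant formulation concrete, the sketch below uses the usual textbook expressions for linear mixed inhibition in terms of the competitive and uncompetitive inhibition constants. These particular expressions, and all numerical values, are assumptions introduced here for illustration rather than quotations from the text above; they are consistent with the symbols Kic and Kiu used there.

```python
import numpy as np

# Assumed textbook form of linear mixed inhibition:
# v = V*a / (Km*(1 + i/Kic) + a*(1 + i/Kiu))
def rate_mixed(a, i, V, Km, Kic, Kiu):
    return V * a / (Km * (1 + i / Kic) + a * (1 + i / Kiu))

def apparent_values(i, V, Km, Kic, Kiu):
    # Apparent constants obtained by dividing numerator and denominator by (1 + i/Kiu)
    V_app = V / (1 + i / Kiu)
    Km_app = Km * (1 + i / Kic) / (1 + i / Kiu)
    return V_app, Km_app

V, Km, Kic, Kiu, i = 10.0, 2.0, 1.5, 4.0, 0.8          # illustrative constants
V_app, Km_app = apparent_values(i, V, Km, Kic, Kiu)

a = np.linspace(0.1, 20, 5)
# The inhibited rate has exactly the Michaelis–Menten form with the apparent constants:
print(np.allclose(rate_mixed(a, i, V, Km, Kic, Kiu), V_app * a / (Km_app + a)))   # True
```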
https://en.wikipedia.org/wiki/Michaelis%E2%80%93Menten_kinetics
Inecology,r/Kselection theoryrelates to theselectionof combinations oftraitsin an organism that trade off between quantity and quality of offspring. The focus on either an increased quantity of offspring at the expense of reduced individualparental investmentofr-strategists, or on a reduced quantity of offspring with a corresponding increased parental investment ofK-strategists, varies widely, seemingly to promote success in particular environments. The concepts of quantity or quality offspring are sometimes referred to as "cheap" or "expensive", a comment on the expendable nature of the offspring and parental commitment made.[1]The stability of the environment can predict if many expendable offspring are made or if fewer offspring of higher quality would lead to higher reproductive success. An unstable environment would encourage the parent to make many offspring, because the likelihood of all (or the majority) of them surviving to adulthood is slim. In contrast, more stable environments allow parents to confidently invest in one offspring because they are more likely to survive to adulthood. The terminology ofr/K-selection was coined by the ecologistsRobert MacArthurandE. O. Wilsonin 1967[2]based on their work onisland biogeography;[3]although the concept of the evolution of life history strategies has a longer history[4](see e.g.plant strategies). The theory was popular in the 1970s and 1980s, when it was used as aheuristicdevice, but lost importance in the early 1990s, when it was criticized by several empirical studies.[5][6]Alife-historyparadigm has replaced ther/Kselection paradigm, but continues to incorporate its important themes as a subset of life history theory.[7]Some scientists now prefer to use the termsfastversusslowlife history as a replacement for, respectively,rversusKreproductive strategy.[8] Inr/Kselection theory, selective pressures arehypothesisedto driveevolutionin one of two generalized directions:r- orK-selection.[2]These terms,randK, are drawn from standard ecologicalformulaas illustrated in the simplifiedVerhulst modelofpopulation dynamics:[9] whereNis thepopulation,ris the maximumgrowth rate,Kis thecarrying capacityof the local environment, and⁠dN/dt⁠(thederivativeof population sizeNwith respect to timet) is the rate of change in population with time. Thus, the equation relates the growth rate of the populationNto the current population size, incorporating the effect of the two constant parametersrandK. (Note that when the population size is greater than the carrying capacity then 1 - N/K is negative, which indicates a population decline or negative growth.) The choice of the letterKcame from theGermanKapazitätsgrenze(capacity limit), whilercame fromrate. r-selected species are those that emphasize high growth rates, typically exploit less-crowdedecological niches, and produce manyoffspring, each of which has a relatively low probability of surviving to adulthood (i.e., highr, lowK).[10]A typicalrspecies is the dandelion (genusTaraxacum). In unstable or unpredictable environments,r-selection predominates due to the ability toreproducerapidly. There is little advantage in adaptations that permit successful competition with other organisms, because the environment is likely to change again. Among the traits that are thought to characterizer-selection are highfecundity, smallbody size, early maturity onset, shortgeneration time, and the ability todisperseoffspring widely. 
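The behaviour of the Verhulst model quoted above can be checked numerically. The following minimal sketch, with illustrative values of r and K, integrates dN/dt = rN(1 − N/K) from starting populations below and above the carrying capacity, showing growth toward K in the first case and decline toward K in the second.

```python
import numpy as np
from scipy.integrate import solve_ivp

r, K = 0.5, 1000.0          # illustrative growth rate and carrying capacity

def verhulst(t, N):
    # dN/dt = r*N*(1 - N/K): positive below K, negative above K
    return r * N * (1.0 - N / K)

t_eval = np.linspace(0.0, 30.0, 7)
below = solve_ivp(verhulst, (0.0, 30.0), [10.0], t_eval=t_eval)     # starts far below K
above = solve_ivp(verhulst, (0.0, 30.0), [1500.0], t_eval=t_eval)   # starts above K

print(np.round(below.y[0], 1))   # rises toward K
print(np.round(above.y[0], 1))   # declines toward K
```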
Organisms whose life history is subject tor-selection are often referred to asr-strategists orr-selected. Groups of organisms known for exhibitingr-selected traits arebacteria,diatoms,insects,grasses,cephalopods,fowl, androdents. By contrast,K-selected species display traits associated with living at densities close to carrying capacity and typically are strong competitors in such crowded niches, thatinvestmore heavily in fewer offspring, each of which has a relatively high probability of surviving to adulthood (i.e., lowr, highK). Inscientific literature,r-selected species are occasionally referred to as "opportunistic" whereasK-selected species are described as "equilibrium".[10] In stable or predictable environments,K-selection predominates as the ability tocompetesuccessfully for limited resources is crucial and populations ofK-selected organisms typically are very constant in number and close to the maximum that the environment can bear (unliker-selected populations, where population sizes can change much more rapidly). Traits that are thought to be characteristic ofK-selection include large body size, longlife expectancy, and the production of fewer offspring, which often requireextensive parental careuntil they mature. Organisms whose life history is subject toK-selection are often referred to asK-strategists orK-selected.[11]Organisms withK-selected traits include large organisms such aselephants,sharks,humans, andwhales, but also smaller long-lived organisms such asArctic terns,[12]parrots, andeagles. Although some organisms are identified as primarilyr- orK-strategists, the majority of organisms do not follow this pattern. For instance, trees have traits such as longevity and strong competitiveness that characterise them asK-strategists. In reproduction, however, trees typically produce thousands of offspring and disperse them widely, traits characteristic ofr-strategists.[13] Similarly,reptilessuch assea turtlesdisplay bothr- andK-traits: Although sea turtles are large organisms with long lifespans (provided they reach adulthood), they produce large numbers of unnurtured offspring. Ther/Kdichotomy can be re-expressed as a continuous spectrum using the economic concept ofdiscounted future returns, withr-selection corresponding to large discount rates andK-selection corresponding to small discount rates.[14] In areas of major ecological disruption or sterilisation (such as after a majorvolcaniceruption, as atKrakatoaorMount St. Helens),r- andK-strategists play distinct roles in theecological successionthat regenerates theecosystem. Because of their higher reproductive rates and ecological opportunism, primary colonisers typically arer-strategists and they are followed by a succession of increasingly competitivefloraandfauna. The ability of an environment to increase energetic content, through photosynthetic capture of solar energy, increases with the increase in complexbiodiversityasrspecies proliferate to reach a peak possible withKstrategies.[15] Eventually a new equilibrium is approached (sometimes referred to as aclimax community), withr-strategists gradually being replaced byK-strategists which are more competitive and better adapted to the emerging micro-environmental characteristics of thelandscape. 
Traditionally, biodiversity was considered maximized at this stage, with introductions of new species resulting in the replacement andlocal extinctionofendemicspecies.[16]However, theintermediate disturbance hypothesisposits that intermediate levels of disturbance in a landscape create patches at different levels of succession, promoting coexistence of colonizers and competitors at the regional scale. While usually applied at the level of species,r/Kselection theory is also useful in studying the evolution of ecological andlife historydifferences between subspecies, for instance the African honey bee,A. m. scutellata, and the Italian bee,A. m. ligustica.[17]At the other end of the scale, it has also been used to study theevolutionary ecologyof whole groups of organisms, such asbacteriophages.[18]Other researchers have proposed that the evolution of humaninflammatory responsesis related tor/Kselection.[19] Some researchers, such asLee Ellis,J. Philippe Rushton, andAurelio José Figueredo, have attempted to applyr/Kselection theory to various human behaviors, includingcrime,[20]sexual promiscuity, fertility,IQ, and other traits related tolife history theory.[21][22]Rushton developed "differentialKtheory" to attempt to explain variations in behavior acrosshuman races.[22][23]DifferentialKtheory has been debunked as being devoid of empirical basis, and has also been described as a key example ofscientific racism.[24][25][26] Althoughr/Kselection theory became widely used during the 1970s,[27][28][29][30]it also began to attract more critical attention.[31][32][33][34]In particular, a review in 1977 by the ecologistStephen C. Stearnsdrew attention to gaps in the theory, and to ambiguities in the interpretation of empirical data for testing it.[35] In 1981, a review of ther/Kselection literature by Parry demonstrated that there was no agreement among researchers using the theory about the definition ofr- andK-selection, which led him to question whether the assumption of a relation between reproductive expenditure and packaging of offspring was justified.[36]A 1982 study by Templeton and Johnson showed that in a population ofDrosophila mercatorumunderK-selection the population actually produced a higher frequency of traits typically associated withr-selection.[37]Several other studies contradicting the predictions ofr/Kselection theory were also published between 1977 and 1994.[38][39][40][41] When Stearns reviewed the status of the theory again in 1992,[42]he noted that from 1977 to 1982 there was an average of 42 references to the theory per year in the BIOSIS literature search service, but from 1984 to 1989 the average dropped to 16 per year and continued to decline. He concluded thatr/Ktheory was a once useful heuristic that no longer serves a purpose in life history theory.[43] More recently, thepanarchytheories ofadaptive capacityandresiliencepromoted byC. S. Hollingand Lance Gunderson have revived interest in the theory, and use it as a way of integrating social systems, economics, and ecology.[44] Writing in 2002, Reznick and colleagues reviewed the controversy regardingr/Kselection theory and concluded that: The distinguishing feature of ther- andK-selection paradigm was the focus on density-dependent selection as the important agent of selection on organisms' life histories. This paradigm was challenged as it became clear that other factors, such as age-specific mortality, could provide a more mechanistic causative link between an environment and an optimal life history (Wilbur et al. 
1974;[31]Stearns 1976,[45]1977[35]). Ther- andK-selection paradigm was replaced by a new paradigm that focused on age-specific mortality (Stearns, 1976;[45]Charlesworth, 1980[46]). This new life-history paradigm has matured into one that uses age-structured models as a framework to incorporate many of the themes important to ther–Kparadigm. Alternative approaches are now available both for studying life history evolution (e.g.Leslie matrixfor an age-structured population) and for density-dependent selection (e.g. variable densitylottery model[47]).
https://en.wikipedia.org/wiki/R/K_selection_theory
Theshifted Gompertz distributionis the distribution of the larger of two independentrandom variablesone of which has anexponential distributionwith parameterb{\displaystyle b}and the other has aGumbel distributionwith parametersη{\displaystyle \eta }andb{\displaystyle b}. In its original formulation the distribution was expressed referring to the Gompertz distribution instead of the Gumbel distribution but, since the Gompertz distribution is a reverted Gumbel distribution, the labelling can be considered as accurate. It has been used as a model ofadoption of innovations. It was proposed by Bemmaor (1994).[1]Some of its statistical properties have been studied further by Jiménez and Jodrá (2009)[2]and Jiménez Torres (2014).[3] It has been used to predict the growth and decline of social networks and on-line services and shown to be superior to theBass modelandWeibull distribution(Bauckhage andKersting2014).[4] Theprobability density functionof the shifted Gompertz distribution is: whereb≥0{\displaystyle b\geq 0}is ascale parameterandη≥0{\displaystyle \eta \geq 0}is ashape parameter. In the context of diffusion of innovations,b{\displaystyle b}can be interpreted as the overall appeal of the innovation andη{\displaystyle \eta }is the propensity to adopt in the propensity-to-adopt paradigm. The largerb{\displaystyle b}is, the stronger the appeal and the largerη{\displaystyle \eta }is, the smaller the propensity to adopt. The distribution can be reparametrized according to the external versus internal influence paradigm withp=f(0;b,η)=be−η{\displaystyle p=f(0;b,\eta )=be^{-\eta }}as the coefficient of external influence andq=b−p{\displaystyle q=b-p}as the coefficient of internal influence. Hence: Whenq=0{\displaystyle q=0}, the shifted Gompertz distribution reduces to an exponential distribution. Whenp=0{\displaystyle p=0}, the proportion of adopters is nil: the innovation is a complete failure. The shape parameter of the probability density function is equal toq/p{\displaystyle q/p}. Similar to the Bass model, the hazard ratez(x;p,q){\displaystyle z(x;p,q)}is equal top{\displaystyle p}whenx{\displaystyle x}is equal to0{\displaystyle 0}; it approachesp+q{\displaystyle p+q}asx{\displaystyle x}gets close to∞{\displaystyle \infty }. See Bemmaor and Zheng[5]for further analysis. Thecumulative distribution functionof the shifted Gompertz distribution is: Equivalently, The shifted Gompertz distribution is right-skewed for all values ofη{\displaystyle \eta }. It is more flexible than theGumbel distribution. The hazard rate is a concave function ofF(x;b,η){\displaystyle F(x;b,\eta )}which increases frombe−η{\displaystyle be^{-\eta }}tob{\displaystyle b}: its curvature is all the steeper asη{\displaystyle \eta }is large. In the context of the diffusion of innovations, the effect of word of mouth (i.e., the previous adopters) on the likelihood to adopt decreases as the proportion of adopters increases. (For comparison, in the Bass model, the effect remains the same over time). The parameterq=b(1−e−η){\displaystyle q=b(1-e^{-\eta })}captures the growth of the hazard rate whenx{\displaystyle x}varies from0{\displaystyle 0}to∞{\displaystyle \infty }. The mean of the distribution is(−1/b){E[ln(X)]−ln(η)}{\displaystyle (-1/b)\{\mathrm {E} [\ln(X)]-\ln(\eta )\}\,}and its variance is(1/b2)(E{[ln(X)]2}−(E[ln(X)])2){\displaystyle (1/b^{2})(\mathrm {E} \{[\ln(X)]^{2}\}-(\mathrm {E} [\ln(X)])^{2})\,}, whereX=ηe−bx{\displaystyle X=\eta e^{-bx}\,}.
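The characterization above, as the larger of an exponential and a Gumbel random variable, can be verified by simulation. The sketch below draws the two variables, takes their maximum, and compares the empirical distribution with the product of the two cumulative distribution functions, (1 − e^(−bx))·exp(−η·e^(−bx)); the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
b, eta = 0.4, 2.0            # illustrative scale and shape parameters
n = 200_000

# Larger of an Exponential(b) variable and a Gumbel variable with location ln(eta)/b, scale 1/b
expo = rng.exponential(scale=1.0 / b, size=n)
gumb = rng.gumbel(loc=np.log(eta) / b, scale=1.0 / b, size=n)
x = np.maximum(expo, gumb)

# The cdf of the maximum of independent variables is the product of their cdfs:
# F(x) = (1 - exp(-b*x)) * exp(-eta * exp(-b*x))
def cdf(x, b, eta):
    return (1.0 - np.exp(-b * x)) * np.exp(-eta * np.exp(-b * x))

for q in (1.0, 3.0, 6.0, 10.0):
    print(q, round(np.mean(x <= q), 4), round(cdf(q, b, eta), 4))
```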
The shifted Gompertz density function can take on different shapes depending on the values of the shape parameterη{\displaystyle \eta }: Whenη{\displaystyle \eta }varies according to agamma distributionwith shape parameterα{\displaystyle \alpha }and scale parameterβ{\displaystyle \beta }(mean =αβ{\displaystyle \alpha \beta }), the distribution ofx{\displaystyle x}is Gamma/Shifted Gompertz (G/SG). Whenα{\displaystyle \alpha }is equal to one, the G/SG reduces to theBass model(Bemmaor 1994). The three-parameter G/SG has been applied by Dover, Goldenberg and Shapira (2009)[6]and Van den Bulte and Stremersch (2004)[7]among others in the context of the diffusion of innovations. The model is discussed in Chandrasekaran and Tellis (2007).[8]Similar to the shifted Gompertz distribution, the G/SG can either be represented according to the propensity-to-adopt paradigm or according to the innovation-imitation paradigm. In the latter case, it includes three parameters:p,q{\displaystyle p,q}andα{\displaystyle \alpha }withp=f(0;b,β,α)=b/(1+β)α{\displaystyle p=f(0;b,\beta ,\alpha )=b/(1+\beta )^{\alpha }}andq=b−p{\displaystyle q=b-p}. The parameterα{\displaystyle \alpha }modifies the curvature of the hazard rate as expressed as a function ofF(x;p,q,α){\displaystyle F(x;p,q,\alpha )}: whenα{\displaystyle \alpha }is less than 0.5, it decreases to a minimum prior to increasing at an increasing rate asF(x;p,q,α<1/2){\displaystyle F(x;p,q,\alpha <1/2)}increases, it is convex whenα{\displaystyle \alpha }is less than one and larger or equal to 0.5, linear whenα{\displaystyle \alpha }is equal to one, and concave whenα{\displaystyle \alpha }is larger than one. Here are some special cases of the G/SG distribution in the case of homogeneity (across the population) with respect to the likelihood to adopt at a given time: with: One can compare the parametersp{\displaystyle p}andq{\displaystyle q}across the values ofα{\displaystyle \alpha }as they capture the same notions. In all the cases, the hazard rate is either constant or a monotonically increasing function ofF(x;p,q,α){\displaystyle F(x;p,q,\alpha )}(positive word of mouth). As the diffusion curve is all the more skewed asα{\displaystyle \alpha }becomes large, we expectq{\displaystyle q}to decrease as the level of right-skew increases.
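The reparametrizations described above amount to a simple change of variables; a minimal sketch with illustrative parameter values follows.

```python
import numpy as np

b, eta = 0.4, 2.0                     # illustrative appeal and propensity-to-adopt parameters
p = b * np.exp(-eta)                  # coefficient of external influence, p = f(0; b, eta) = b*exp(-eta)
q = b - p                             # coefficient of internal influence
print(round(p, 4), round(q, 4))

# Gamma/Shifted Gompertz (G/SG) version: eta varies as a Gamma(alpha, beta) variable
alpha, beta = 2.0, 1.5                # illustrative gamma shape and scale parameters
p_gsg = b / (1.0 + beta) ** alpha     # p = b / (1 + beta)^alpha
q_gsg = b - p_gsg
print(round(p_gsg, 4), round(q_gsg, 4))
```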
https://en.wikipedia.org/wiki/Shifted_Gompertz_distribution
1800s:Martineau·Tocqueville·Marx·Spencer·Le Bon·Ward·Pareto·Tönnies·Veblen·Simmel·Durkheim·Addams·Mead·Weber·Du Bois·Mannheim·Elias Insociology, atipping pointis a point in time when a group—or many group members—rapidly and dramatically changes its behavior by widely adopting a previously rare practice.[1] The phrase was first used in sociology byMorton Grodzinswhen he adopted the phrase fromphysicswhere it referred to the adding a small amount of weight to a balanced object until the additional weight caused the object to suddenly and completely topple, or tip. Grodzins studied integrating American neighborhoods in the early 1960s. He discovered that most of thewhitefamilies remained in the neighborhood as long as the comparative number ofblackfamilies remained very small. But, at a certain point, when "one too many" black families arrived, the remaining white families would move outen massein a process known aswhite flight. He called that moment the "tipping point". The idea was expanded and built upon byNobel Prize-winnerThomas Schellingin 1971.[2]A similar idea underliesMark Granovetter'sthreshold modelof collective behavior. The phrase has extended beyond its original meaning and been applied to any process in which, beyond a certain point, the rate of the process increases dramatically. It has been applied in many fields, fromeconomicstohuman ecology[3]toepidemiology. It can also be compared tophase transitioninphysicsor the propagation of populations in an unbalancedecosystem. Journalists and academics have applied the phrase to dramatic changes in governments, such as during theArab Spring.[4]The concept of a tipping point is described in an article in anacademic journal, theJournal of Democracy, titled "China at the Tipping Point? Foreseeing the unforeseeable": Regime transitions belong to that paradoxical class of events which are inevitable but not predictable. Other examples arebank runs, currency inflations, strikes, migrations, riots, and revolutions. In retrospect, such events are explainable, even overdetermined. In prospect, however, their timing and character are impossible to anticipate. Such events seem to come closer and closer but do not occur, even when all the conditions are ripe—until suddenly they do.[5] American journalists atNPRhave used it to describean influx of sexual assault allegations, saying that a tipping point has been passed regarding societal tolerance ofsexual harassmentandfeminism.[6] Mathematically, the angle of repose may be seen as abifurcation. Incontrol theory, the concept ofpositive feedbackdescribes the same phenomenon, with the problem of balancing aninverted pendulumbeing the classic embodiment. The concept has also been applied to the popular acceptance of new technologies, for example being used to explain the success ofVHSoverBetamax.[citation needed] The concept of social tipping points has been applied to analyze globaldecarbonizationpathways and the ability to activate contagious and fast-spreading processes of social and technological change that would accelerate carbon emission reductions needed to achieve the goals of theParis Agreement.[7] A study suggests, "path dependencies, increasing returns to scale and learning-by-doing cost reductions can produce sudden, tipping-point-like transitions that cannot be extrapolated from past system behaviour", and that "historically, technological innovation and government policies often motivated by energy security concerns have also, in notable cases, spurred rapidshifts in energy systems". 
Moreover, "social norms that shape individual behaviour and preferences can exhibit similar tipping-point style dynamics", which could affect "the regulatory and market conditions in which energy technologies compete". When social norms of sustainability are costly – or at least detrimental rather than beneficial – for individuals to violate, this may substantially increase the probability that an individual engages in pro-environmental behaviour.[8] Removing all subsidies from fossil fuels could intervene thetipping pointsoccur. And researchers also stress the importance of building carbon-neutral cities,[9]which could educate the general public and drive consumer interest in emerging clean technologies. The term was popularized in application to daily life byMalcolm Gladwell's 2000 bestselling bookThe Tipping Point: How Little Things Can Make a Big Difference.
https://en.wikipedia.org/wiki/Tipping_point_(sociology)
Ingeometry, thegeometric medianof adiscrete point setin aEuclidean spaceis the point minimizing the sum of distances to the sample points. This generalizes themedian, which has the property of minimizing the sum of distances or absolute differences for one-dimensional data. It is also known as thespatial median,[1]Euclidean minisum point,[1]Torricelli point,[2]or1-median. It provides a measure ofcentral tendencyin higher dimensions and it is a standard problem infacility location, i.e., locating a facility to minimize the cost of transportation.[3] The geometric median is an importantestimatoroflocationin statistics,[4]because it minimizes the sum of theL2distancesof the samples.[5]It is to be compared to the mean, which minimizes the sum of thesquaredL2distances; and to the coordinate-wise median which minimizes the sum of theL1distances. The more generalk-median problemasks for the location ofkcluster centers minimizing the sum ofL2distances from each sample point to its nearest center. The special case of the problem for three points in the plane (that is,m= 3 andn= 2 in the definition below) is sometimes also known asFermat's problem; it arises in the construction of minimalSteiner trees, and was originally posed as a problem byPierre de Fermatand solved byEvangelista Torricelli.[6]Its solution is now known as theFermat pointof the triangle formed by the three sample points.[7]The geometric median may in turn be generalized to the problem of minimizing the sum ofweighteddistances, known as theWeber problemafterAlfred Weber's discussion of the problem in his 1909 book on facility location.[1]Some sources instead call Weber's problem theFermat–Weber problem,[8]but others use this name for the unweighted geometric median problem.[9] Wesolowsky (1993)provides a survey of the geometric median problem. SeeFekete, Mitchell & Beurer (2005)for generalizations of the problem to non-discrete point sets. Formally, for a given set ofmpointsXm=x1,x2,…,xm{\displaystyle \mathbb {X} ^{m}=x_{1},x_{2},\dots ,x_{m}\,}with eachxi∈Rn{\displaystyle x_{i}\in \mathbb {R} ^{n}}, the geometric median is defined as the sum of theL2distances minimizer Here,arg minmeans the value of the argumenty{\displaystyle y}which minimizes the sum. In this case, it is the pointy{\displaystyle y}inn-dimensional Euclidean space from where the sum of allEuclidean distancesto thexi{\displaystyle x_{i}}'s is minimum. Despite the geometric median's being an easy-to-understand concept, computing it poses a challenge. Thecentroidorcenter of mass, defined similarly to the geometric median as minimizing the sum of thesquaresof the distances to each point, can be found by a simple formula — its coordinates are the averages of the coordinates of the points — but it has been shown that noexplicit formula, nor an exact algorithm involving only arithmetic operations andkth roots, can exist in general for the geometric median. Therefore, only numerical or symbolic approximations to the solution of this problem are possible under thismodel of computation.[15] However, it is straightforward to calculate an approximation to the geometric median using an iterative procedure in which each step produces a more accurate approximation. Procedures of this type can be derived from the fact that the sum of distances to the sample points is aconvex function, since the distance to each sample point is convex and the sum of convex functions remains convex. 
Therefore, procedures that decrease the sum of distances at each step cannot get trapped in alocal optimum. One common approach of this type, calledWeiszfeld's algorithmafter the work ofEndre Weiszfeld,[16]is a form ofiteratively re-weighted least squares. This algorithm defines a set of weights that are inversely proportional to the distances from the current estimate to the sample points, and creates a new estimate that is the weighted average of the sample according to these weights. That is, This method converges for almost all initial positions, but may fail to converge when one of its estimates falls on one of the given points. It can be modified to handle these cases so that it converges for all initial points.[12] Bose, Maheshwari & Morin (2003)describe more sophisticated geometric optimization procedures for finding approximately optimal solutions to this problem.Cohen et al. (2016)show how to compute the geometric median to arbitrary precision in nearlylinear time. Note also that the problem can be formulated as thesecond-order cone program which can be solved in polynomial time usingcommon optimization solvers. Ifyis distinct from all the given points,xi, thenyis the geometric median if and only if it satisfies: This is equivalent to: which is closely related to Weiszfeld's algorithm. In general,yis the geometric median if and only if there are vectorsuisuch that: where forxi≠y, and forxi=y, An equivalent formulation of this condition is It can be seen as a generalization of the median property, in the sense that any partition of the points, in particular as induced by any hyperplane throughy, has the same and opposite sum of positivedirectionsfromyon each side. In the one dimensional case, the hyperplane is the pointyitself, and the sum of directions simplifies to the (directed) counting measure. The geometric median can be generalized from Euclidean spaces to generalRiemannian manifolds(and evenmetric spaces) using the same idea which is used to define theFréchet meanon a Riemannian manifold.[17][18]LetM{\displaystyle M}be a Riemannian manifold with corresponding distance functiond(⋅,⋅){\displaystyle d(\cdot ,\cdot )}, letw1,…,wn{\displaystyle w_{1},\ldots ,w_{n}}ben{\displaystyle n}weights summing to 1, and letx1,…,xn{\displaystyle x_{1},\ldots ,x_{n}}ben{\displaystyle n}observations fromM{\displaystyle M}. Then we define the weighted geometric medianm{\displaystyle m}(or weighted Fréchet median) of the data points as If all the weights are equal, we say simply thatm{\displaystyle m}is the geometric median.
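A minimal sketch of Weiszfeld's iteratively re-weighted averaging, as described above, follows. The starting point, stopping tolerance, and the simplified handling of an estimate that lands on a sample point are choices made here for illustration.

```python
import numpy as np

def geometric_median(points, tol=1e-9, max_iter=1000):
    """Weiszfeld's algorithm: iteratively re-weighted average of the sample points."""
    pts = np.asarray(points, dtype=float)
    y = pts.mean(axis=0)                      # start from the centroid
    for _ in range(max_iter):
        d = np.linalg.norm(pts - y, axis=1)
        if np.any(d < 1e-12):                 # estimate landed on a sample point; stop (simplified handling)
            return y
        w = 1.0 / d                           # weights inversely proportional to the distances
        y_new = (w[:, None] * pts).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (10.0, 10.0)]
print(geometric_median(pts))   # pulled far less toward the outlying point than the centroid np.mean(pts, axis=0)
```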
https://en.wikipedia.org/wiki/Geometric_median
Quantile regressionis a type ofregression analysisused in statistics and econometrics. Whereas themethod of least squaresestimates the conditionalmeanof the response variable across values of the predictor variables, quantile regression estimates the conditionalmedian(or otherquantiles) of the response variable. [There is also a method for predicting the conditionalgeometric meanof the response variable,[1].] Quantile regression is an extension of linear regression used when the conditions of linear regression are not met. One advantage of quantile regression relative to ordinary least squares regression is that the quantile regression estimates are more robust against outliers in the response measurements. However, the main attraction of quantile regression goes beyond this and is advantageous when conditional quantile functions are of interest. Different measures ofcentral tendencyandstatistical dispersioncan be used to more comprehensively analyze the relationship between variables.[2] Inecology, quantile regression has been proposed and used as a way to discover more useful predictive relationships between variables in cases where there is no relationship or only a weak relationship between the means of such variables. The need for and success of quantile regression in ecology has been attributed to thecomplexityof interactions between different factors leading todatawith unequal variation of one variable for different ranges of another variable.[3] Another application of quantile regression is in the areas of growth charts, where percentile curves are commonly used to screen for abnormal growth.[4][5] The idea of estimating a median regression slope, a major theorem about minimizing sum of the absolute deviances and a geometrical algorithm for constructing median regression was proposed in 1760 byRuđer Josip Bošković, aJesuit Catholicpriest from Dubrovnik.[2]: 4[6]He was interested in the ellipticity of the earth, building on Isaac Newton's suggestion that its rotation could cause it to bulge at theequatorwith a corresponding flattening at the poles.[7]He finally produced the first geometric procedure for determining theequatorof a rotatingplanetfrom threeobservationsof a surface feature. More importantly for quantile regression, he was able to develop the first evidence of the least absolute criterion and preceded the least squares introduced byLegendrein 1805 by fifty years.[8] Other thinkers began building upon Bošković's idea such asPierre-Simon Laplace, who developed the so-called "methode de situation." This led toFrancis Edgeworth's plural median[9]- a geometric approach to median regression - and is recognized as the precursor of thesimplex method.[8]The works of Bošković, Laplace, and Edgeworth were recognized as a prelude toRoger Koenker's contributions to quantile regression. Median regression computations for larger data sets are quite tedious compared to the least squares method, for which reason it has historically generated a lack of popularity among statisticians, until the widespread adoption of computers in the latter part of the 20th century. Quantile regression expresses the conditional quantiles of a dependent variable as a linear function of the explanatory variables. Crucial to the practicality of quantile regression is that the quantiles can be expressed as the solution of a minimization problem, as we will show in this section before discussing conditional quantiles in the next section. 
LetY{\displaystyle Y}be a real-valued random variable withcumulative distribution functionFY(y)=P(Y≤y){\displaystyle F_{Y}(y)=P(Y\leq y)}. Theτ{\displaystyle \tau }th quantile of Y is given by whereτ∈(0,1).{\displaystyle \tau \in (0,1).} Define theloss functionasρτ(m)=m(τ−I(m<0)){\displaystyle \rho _{\tau }(m)=m(\tau -\mathbb {I} _{(m<0)})}, whereI{\displaystyle \mathbb {I} }is anindicator function. A specific quantile can be found by minimizing the expected loss ofY−u{\displaystyle Y-u}with respect tou{\displaystyle u}:[2](pp. 5–6): This can be shown by computing the derivative of the expected loss with respect tou{\displaystyle u}via an application of theLeibniz integral rule, setting it to 0, and lettingqτ{\displaystyle q_{\tau }}be the solution of This equation reduces to and then to If the solutionqτ{\displaystyle q_{\tau }}is not unique, then we have to take the smallest such solution to obtain theτ{\displaystyle \tau }th quantile of the random variableY. LetY{\displaystyle Y}be a discrete random variable that takes valuesyi=i{\displaystyle y_{i}=i}withi=1,2,…,9{\displaystyle i=1,2,\dots ,9}with equal probabilities. The task is to find the median of Y, and hence the valueτ=0.5{\displaystyle \tau =0.5}is chosen. Then the expected loss ofY−u{\displaystyle Y-u}is Since0.5/9{\displaystyle {0.5/9}}is a constant, it can be taken out of the expected loss function (this is only true ifτ=0.5{\displaystyle \tau =0.5}). Then, atu=3, Suppose thatuis increased by 1 unit. Then the expected loss will be changed by(3)−(6)=−3{\displaystyle (3)-(6)=-3}on changinguto 4. If,u=5, the expected loss is and any change inuwill increase the expected loss. Thusu=5 is the median. The Table below shows the expected loss (divided by0.5/9{\displaystyle {0.5/9}}) for different values ofu. Considerτ=0.5{\displaystyle \tau =0.5}and letqbe an initial guess forqτ{\displaystyle q_{\tau }}. The expected loss evaluated atqis In order to minimize the expected loss, we move the value ofqa little bit to see whether the expected loss will rise or fall. Suppose we increaseqby 1 unit. Then the change of expected loss would be The first term of the equation isFY(q){\displaystyle F_{Y}(q)}and second term of the equation is1−FY(q){\displaystyle 1-F_{Y}(q)}. Therefore, the change of expected loss function is negative if and only ifFY(q)<0.5{\displaystyle F_{Y}(q)<0.5}, that is if and only ifqis smaller than the median. Similarly, if we reduceqby 1 unit, the change of expected loss function is negative if and only ifqis larger than the median. In order to minimize the expected loss function, we would increase (decrease)qifqis smaller (larger) than the median, untilqreaches the median. The idea behind the minimization is to count the number of points (weighted with the density) that are larger or smaller thanqand then moveqto a point whereqis larger than100τ{\displaystyle 100\tau }% of the points. Theτ{\displaystyle \tau }sample quantile can be obtained by using animportance samplingestimate and solving the following minimization problem where the functionρτ{\displaystyle \rho _{\tau }}is the tilted absolute value function. The intuition is the same as for the population quantile. Theτ{\displaystyle \tau }th conditional quantile ofY{\displaystyle Y}givenX{\displaystyle X}is theτ{\displaystyle \tau }th quantile of theConditional probability distributionofY{\displaystyle Y}givenX{\displaystyle X}, We use a capitalQ{\displaystyle Q}to denote the conditional quantile to indicate that it is a random variable. 
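The discrete example above (Y uniform on 1, ..., 9 with τ = 0.5) can be reproduced by evaluating the expected tilted-absolute-value loss on a grid of candidate values u; a minimal sketch follows, with the helper name pinball chosen here for convenience.

```python
import numpy as np

def pinball(m, tau):
    # rho_tau(m) = m * (tau - 1{m < 0})
    return m * (tau - (m < 0))

y = np.arange(1, 10)                 # Y takes the values 1..9 with equal probability
tau = 0.5

u_grid = np.arange(0, 11)
expected_loss = [pinball(y - u, tau).mean() for u in u_grid]
u_star = u_grid[int(np.argmin(expected_loss))]
print(u_star)                        # 5, the median of Y
```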
In quantile regression for theτ{\displaystyle \tau }th quantile we make the assumption that theτ{\displaystyle \tau }th conditional quantile is given as a linear function of the explanatory variables: Given the distribution function ofY{\displaystyle Y},βτ{\displaystyle \beta _{\tau }}can be obtained by solving Solving the sample analog gives the estimator ofβ{\displaystyle \beta }. Note that whenτ=0.5{\displaystyle \tau =0.5}, the loss functionρτ{\displaystyle \rho _{\tau }}is proportional to the absolute value function, and thus median regression is the same as linear regression byleast absolute deviations. The mathematical forms arising from quantile regression are distinct from those arising in themethod of least squares. The method of least squares leads to a consideration of problems in aninner product space, involvingprojectiononto subspaces, and thus the problem of minimizing the squared errors can be reduced to a problem innumerical linear algebra. Quantile regression does not have this structure, and instead the minimization problem can be reformulated as alinear programmingproblem where Simplex methods[2]: 181orinterior point methods[2]: 190can be applied to solve the linear programming problem. Forτ∈(0,1){\displaystyle \tau \in (0,1)}, under some regularity conditions,β^τ{\displaystyle {\hat {\beta }}_{\tau }}isasymptotically normal: where Direct estimation of the asymptotic variance-covariance matrix is not always satisfactory. Inference for quantile regression parameters can be made with the regression rank-score tests or with the bootstrap methods.[10] Seeinvariant estimatorfor background on invariance or seeequivariance. For anya>0{\displaystyle a>0}andτ∈[0,1]{\displaystyle \tau \in [0,1]} For anyγ∈Rk{\displaystyle \gamma \in R^{k}}andτ∈[0,1]{\displaystyle \tau \in [0,1]} LetA{\displaystyle A}be anyp×p{\displaystyle p\times p}nonsingular matrix andτ∈[0,1]{\displaystyle \tau \in [0,1]} Ifh{\displaystyle h}is a nondecreasing function onR{\displaystyle \mathbb {R} }, the followinginvarianceproperty applies: Example (1): IfW=exp⁡(Y){\displaystyle W=\exp(Y)}andQY|X(τ)=Xβτ{\displaystyle Q_{Y|X}(\tau )=X\beta _{\tau }}, thenQW|X(τ)=exp⁡(Xβτ){\displaystyle Q_{W|X}(\tau )=\exp(X\beta _{\tau })}. The mean regression does not have the same property sinceE⁡(ln⁡(Y))≠ln⁡(E⁡(Y)).{\displaystyle \operatorname {E} (\ln(Y))\neq \ln(\operatorname {E} (Y)).} The linear modelQY|X(τ)=Xβτ{\displaystyle Q_{Y|X}(\tau )=X\beta _{\tau }}mis-specifies the true systematic relationQY|X(τ)=f(X,τ){\displaystyle Q_{Y|X}(\tau )=f(X,\tau )}whenf(⋅,τ){\displaystyle f(\cdot ,\tau )}is nonlinear. However,QY|X(τ)=Xβτ{\displaystyle Q_{Y|X}(\tau )=X\beta _{\tau }}minimizes a weighted distanced tof(X,τ){\displaystyle f(X,\tau )}among linear models.[11]Furthermore, the slope parametersβτ{\displaystyle \beta _{\tau }}of the linear model can be interpreted as weighted averages of the derivatives∇f(X,τ){\displaystyle \nabla f(X,\tau )}so thatβτ{\displaystyle \beta _{\tau }}can be used for causal inference.[12]Specifically, the hypothesisH0:∇f(x,τ)=0{\displaystyle H_{0}:\nabla f(x,\tau )=0}for allx{\displaystyle x}implies the hypothesisH0:βτ=0{\displaystyle H_{0}:\beta _{\tau }=0}, which can be tested using the estimatorβτ^{\displaystyle {\hat {\beta _{\tau }}}}and its limit distribution. 
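The linear-programming reformulation mentioned above can be written out explicitly by splitting the residuals into their positive and negative parts. The sketch below solves the resulting programme with a generic LP solver on synthetic heteroscedastic data; the data-generating process and the choice τ = 0.75 are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, tau = 200, 0.75
x = rng.uniform(0, 10, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=1.0 + 0.3 * x)   # heteroscedastic noise
X = np.column_stack([np.ones(n), x])                  # intercept plus one regressor
p = X.shape[1]

# Variables: beta (p, free), u_plus (n, >= 0), u_minus (n, >= 0)
# minimize  tau * 1'u_plus + (1 - tau) * 1'u_minus
# subject to X beta + u_plus - u_minus = y
c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
bounds = [(None, None)] * p + [(0, None)] * (2 * n)

res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
beta_tau = res.x[:p]
print(beta_tau)   # estimated intercept and slope of the 0.75 conditional quantile
```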
Thegoodness of fitfor quantile regression for theτ{\displaystyle \tau }quantile can be defined as:[13]R1(τ)=1−V^τV~τ,{\displaystyle R^{1}(\tau )=1-{\frac {{\hat {V}}_{\tau }}{{\tilde {V}}_{\tau }}},}whereV^τ{\displaystyle {\hat {V}}_{\tau }}is the minimized expected loss function under the full model, whileV~τ{\displaystyle {\tilde {V}}_{\tau }}is the expected loss function under the intercept-only model. Because quantile regression does not normally assume a parametric likelihood for the conditional distributions of Y|X, the Bayesian methods work with a working likelihood. A convenient choice is the asymmetric Laplacian likelihood,[14]because the mode of the resulting posterior under a flat prior is the usual quantile regression estimates. The posterior inference, however, must be interpreted with care. Yang, Wang and He[15]provided a posterior variance adjustment for valid inference. In addition, Yang and He[16]showed that one can have asymptotically valid posterior inference if the working likelihood is chosen to be the empirical likelihood. Beyondsimple linear regression, there are several machine learning methods that can be extended to quantile regression. A switch from the squared error to the tilted absolute value loss function (a.k.a. thepinball loss[17]) allows gradient descent-based learning algorithms to learn a specified quantile instead of the mean. It means that we can apply allneural networkanddeep learningalgorithms to quantile regression,[18][19]which is then referred to asnonparametricquantile regression.[20]Tree-based learning algorithms are also available for quantile regression (see, e.g., Quantile Regression Forests,[21]as a simple generalization ofRandom Forests). If the response variable is subject to censoring, the conditional mean is not identifiable without additional distributional assumptions, but the conditional quantile is often identifiable. For recent work on censored quantile regression, see: Portnoy[22]and Wang and Wang[23] Example (2): LetYc=max(0,Y){\displaystyle Y^{c}=\max(0,Y)}andQY|X=Xβτ{\displaystyle Q_{Y|X}=X\beta _{\tau }}. ThenQYc|X(τ)=max(0,Xβτ){\displaystyle Q_{Y^{c}|X}(\tau )=\max(0,X\beta _{\tau })}. This is the censored quantile regression model: estimated values can be obtained without making any distributional assumptions, but at the cost of computational difficulty,[24]some of which can be avoided by using a simple three step censored quantile regression procedure as an approximation.[25] For random censoring on the response variables, the censored quantile regression of Portnoy (2003)[22]provides consistent estimates of all identifiable quantile functions based on reweighting each censored point appropriately. Censored quantile regression has close links tosurvival analysis. The quantile regression loss needs to be adapted in the presence of heteroscedastic errors in order to beefficient.[26] Numerous statistical software packages include implementations of quantile regression:
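As a small illustration of the goodness-of-fit measure defined above, the sketch below computes R1(τ) from a vector of fitted conditional-quantile values and from the unconditional sample quantile, which plays the role of the intercept-only model. The "fitted" values used here are placeholders rather than the output of a real quantile regression.

```python
import numpy as np

def pinball_loss(y, y_hat, tau):
    u = y - y_hat
    return np.mean(u * (tau - (u < 0)))

def r1(y, y_hat_full, tau):
    """Goodness of fit for the tau-quantile: 1 - V_hat / V_tilde."""
    v_hat = pinball_loss(y, y_hat_full, tau)                 # full model
    v_tilde = pinball_loss(y, np.quantile(y, tau), tau)      # intercept-only model
    return 1.0 - v_hat / v_tilde

# Illustrative use with synthetic data and placeholder fitted 0.9-quantile values
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2 * x + rng.normal(size=100)
y_hat_full = 2 * x + 1.2          # placeholder fitted values, not a real fit
print(round(r1(y, y_hat_full, 0.9), 3))
```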
https://en.wikipedia.org/wiki/Quantile_regression
Instatistics,linear regressionis amodelthat estimates the relationship between ascalarresponse (dependent variable) and one or more explanatory variables (regressororindependent variable). A model with exactly one explanatory variable is asimple linear regression; a model with two or more explanatory variables is amultiple linear regression.[1]This term is distinct frommultivariate linear regression, which predicts multiplecorrelateddependent variables rather than a single dependent variable.[2] In linear regression, the relationships are modeled usinglinear predictor functionswhose unknown modelparametersareestimatedfrom thedata. Most commonly, theconditional meanof the response given the values of the explanatory variables (or predictors) is assumed to be anaffine functionof those values; less commonly, the conditionalmedianor some otherquantileis used. Like all forms ofregression analysis, linear regression focuses on theconditional probability distributionof the response given the values of the predictors, rather than on thejoint probability distributionof all of these variables, which is the domain ofmultivariate analysis. Linear regression is also a type ofmachine learningalgorithm, more specifically asupervisedalgorithm, that learns from the labelled datasets and maps the data points to the most optimized linear functions that can be used for prediction on new datasets.[3] Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications.[4]This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters and because the statistical properties of the resulting estimators are easier to determine. Linear regression has many practical uses. Most applications fall into one of the following two broad categories: Linear regression models are often fitted using theleast squaresapproach, but they may also be fitted in other ways, such as by minimizing the "lack of fit" in some othernorm(as withleast absolute deviationsregression), or by minimizing a penalized version of the least squarescost functionas inridge regression(L2-norm penalty) andlasso(L1-norm penalty). Use of theMean Squared Error(MSE) as the cost on a dataset that has many large outliers, can result in a model that fits the outliers more than the true data due to the higher importance assigned by MSE to large errors. So, cost functions that are robust to outliers should be used if the dataset has many largeoutliers. Conversely, the least squares approach can be used to fit models that are not linear models. Thus, although the terms "least squares" and "linear model" are closely linked, they are not synonymous. Given adata set{yi,xi1,…,xip}i=1n{\displaystyle \{y_{i},\,x_{i1},\ldots ,x_{ip}\}_{i=1}^{n}}ofnstatistical units, a linear regression model assumes that the relationship between the dependent variableyand the vector of regressorsxislinear. This relationship is modeled through adisturbance termorerror variableε—an unobservedrandom variablethat adds "noise" to the linear relationship between the dependent variable and regressors. Thus the model takes the formyi=β0+β1xi1+⋯+βpxip+εi=xiTβ+εi,i=1,…,n,{\displaystyle y_{i}=\beta _{0}+\beta _{1}x_{i1}+\cdots +\beta _{p}x_{ip}+\varepsilon _{i}=\mathbf {x} _{i}^{\mathsf {T}}{\boldsymbol {\beta }}+\varepsilon _{i},\qquad i=1,\ldots ,n,}whereTdenotes thetranspose, so thatxiTβis theinner productbetweenvectorsxiandβ. 
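A minimal sketch of the model just written down: synthetic data are generated from y_i = x_i'β + ε_i and the coefficients are recovered by minimizing the sum of squared errors. The coefficient values and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])   # intercept column plus p regressors
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=n)            # y_i = x_i' beta + eps_i

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)             # minimizes the sum of squared errors
print(beta_hat)                                              # close to beta_true
```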
Often thesenequations are stacked together and written inmatrix notationas where Fitting a linear model to a given data set usually requires estimating the regression coefficientsβ{\displaystyle {\boldsymbol {\beta }}}such that the error termε=y−Xβ{\displaystyle {\boldsymbol {\varepsilon }}=\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }}}is minimized. For example, it is common to use the sum of squared errors‖ε‖22{\displaystyle \|{\boldsymbol {\varepsilon }}\|_{2}^{2}}as a measure ofε{\displaystyle {\boldsymbol {\varepsilon }}}for minimization. Consider a situation where a small ball is being tossed up in the air and then we measure its heights of ascenthiat various moments in timeti. Physics tells us that, ignoring thedrag, the relationship can be modeled as whereβ1determines the initial velocity of the ball,β2is proportional to thestandard gravity, andεiis due to measurement errors. Linear regression can be used to estimate the values ofβ1andβ2from the measured data. This model is non-linear in the time variable, but it is linear in the parametersβ1andβ2; if we take regressorsxi= (xi1,xi2)  = (ti,ti2), the model takes on the standard form Standard linear regression models with standard estimation techniques make a number of assumptions about the predictor variables, the response variable and their relationship. Numerous extensions have been developed that allow each of these assumptions to be relaxed (i.e. reduced to a weaker form), and in some cases eliminated entirely. Generally these extensions make the estimation procedure more complex and time-consuming, and may also require more data in order to produce an equally precise model.[citation needed] The following are the major assumptions made by standard linear regression models with standard estimation techniques (e.g.ordinary least squares): Violations of these assumptions can result in biased estimations ofβ, biased standard errors, untrustworthy confidence intervals and significance tests. Beyond these assumptions, several other statistical properties of the data strongly influence the performance of different estimation methods: A fitted linear regression model can be used to identify the relationship between a single predictor variablexjand the response variableywhen all the other predictor variables in the model are "held fixed". Specifically, the interpretation ofβjis theexpectedchange inyfor a one-unit change inxjwhen the other covariates are held fixed—that is, the expected value of thepartial derivativeofywith respect toxj. This is sometimes called theunique effectofxjony. In contrast, themarginal effectofxjonycan be assessed using acorrelation coefficientorsimple linear regressionmodel relating onlyxjtoy; this effect is thetotal derivativeofywith respect toxj. Care must be taken when interpreting regression results, as some of the regressors may not allow for marginal changes (such asdummy variables, or the intercept term), while others cannot be held fixed (recall the example from the introduction: it would be impossible to "holdtifixed" and at the same time change the value ofti2). It is possible that the unique effect be nearly zero even when the marginal effect is large. This may imply that some other covariate captures all the information inxj, so that once that variable is in the model, there is no contribution ofxjto the variation iny. Conversely, the unique effect ofxjcan be large while its marginal effect is nearly zero. 
This would happen if the other covariates explained a great deal of the variation ofy, but they mainly explain variation in a way that is complementary to what is captured byxj. In this case, including the other variables in the model reduces the part of the variability ofythat is unrelated toxj, thereby strengthening the apparent relationship withxj. The meaning of the expression "held fixed" may depend on how the values of the predictor variables arise. If the experimenter directly sets the values of the predictor variables according to a study design, the comparisons of interest may literally correspond to comparisons among units whose predictor variables have been "held fixed" by the experimenter. Alternatively, the expression "held fixed" can refer to a selection that takes place in the context of data analysis. In this case, we "hold a variable fixed" by restricting our attention to the subsets of the data that happen to have a common value for the given predictor variable. This is the only interpretation of "held fixed" that can be used in anobservational study. The notion of a "unique effect" is appealing when studying acomplex systemwhere multiple interrelated components influence the response variable. In some cases, it can literally be interpreted as the causal effect of an intervention that is linked to the value of a predictor variable. However, it has been argued that in many cases multiple regression analysis fails to clarify the relationships between the predictor variables and the response variable when the predictors are correlated with each other and are not assigned following a study design.[9] Numerous extensions of linear regression have been developed, which allow some or all of the assumptions underlying the basic model to be relaxed. The simplest case of a singlescalarpredictor variablexand a single scalar response variableyis known assimple linear regression. The extension to multiple and/orvector-valued predictor variables (denoted with a capitalX) is known asmultiple linear regression, also known asmultivariable linear regression(not to be confused withmultivariate linear regression).[10] Multiple linear regression is a generalization ofsimple linear regressionto the case of more than one independent variable, and aspecial caseof general linear models, restricted to one dependent variable. The basic model for multiple linear regression is for each observationi=1,…,n{\textstyle i=1,\ldots ,n}. In the formula above we considernobservations of one dependent variable andpindependent variables. Thus,Yiis theithobservation of the dependent variable,Xijisithobservation of thejthindependent variable,j= 1, 2, ...,p. The valuesβjrepresent parameters to be estimated, andεiis theithindependent identically distributed normal error. In the more general multivariate linear regression, there is one equation of the above form for each ofm> 1 dependent variables that share the same set of explanatory variables and hence are estimated simultaneously with each other: for all observations indexed asi= 1, ... ,nand for all dependent variables indexed asj = 1, ... ,m. Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variableyis still a scalar. Another term,multivariate linear regression, refers to cases whereyis a vector, i.e., the same asgeneral linear regression. Model Assumptions to Check: 1. 
Linearity: the relationship between each predictor and the outcome must be linear; 2. Normality of residuals: residuals should follow a normal distribution; 3. Homoscedasticity: constant variance of residuals across predicted values; 4. Independence: observations should be independent (not repeated measures). In SPSS, these assumptions can be checked with partial plots, histograms, P-P plots, and residual-versus-predicted plots. Thegeneral linear modelconsiders the situation when the response variable is not a scalar (for each observation) but a vector,yi. Conditional linearity ofE(y∣xi)=xiTB{\displaystyle E(\mathbf {y} \mid \mathbf {x} _{i})=\mathbf {x} _{i}^{\mathsf {T}}B}is still assumed, with a matrixBreplacing the vectorβof the classical linear regression model. Multivariate analogues ofordinary least squares(OLS) andgeneralized least squares(GLS) have been developed. "General linear models" are also called "multivariate linear models". These are not the same as multivariable linear models (also called "multiple linear models"). Various models have been created that allow forheteroscedasticity, i.e. the errors for different response variables may have differentvariances. For example,weighted least squaresis a method for estimating linear regression models when the response variables may have different error variances, possibly with correlated errors. (See alsoWeighted linear least squares, andGeneralized least squares.)Heteroscedasticity-consistent standard errorsis an improved method for use with uncorrelated but potentially heteroscedastic errors. TheGeneralized linear model(GLM) is a framework for modeling response variables that are bounded or discrete. This is used, for example: Generalized linear models allow for an arbitrarylink function,g, that relates themeanof the response variable(s) to the predictors:E(Y)=g−1(XB){\displaystyle E(Y)=g^{-1}(XB)}. The link function is often related to the distribution of the response, and in particular it typically has the effect of transforming between the(−∞,∞){\displaystyle (-\infty ,\infty )}range of the linear predictor and the range of the response variable. Some common examples of GLMs are: Single index models[clarification needed]allow some degree of nonlinearity in the relationship betweenxandy, while preserving the central role of the linear predictorβ′xas in the classical linear regression model. Under certain conditions, simply applying OLS to data from a single-index model will consistently estimateβup to a proportionality constant.[11] Hierarchical linear models(ormultilevel regression) organize the data into a hierarchy of regressions, for example whereAis regressed onB, andBis regressed onC. They are often used where the variables of interest have a natural hierarchical structure such as in educational statistics, where students are nested in classrooms, classrooms are nested in schools, and schools are nested in some administrative grouping, such as a school district. The response variable might be a measure of student achievement such as a test score, and different covariates would be collected at the classroom, school, and school district levels. Errors-in-variables models(or "measurement error models") extend the traditional linear regression model to allow the predictor variablesXto be observed with error. This error causes standard estimators ofβto become biased. Generally, the form of bias is an attenuation, meaning that the effects are biased toward zero.
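The attenuation just described is easy to reproduce in simulation: adding independent measurement error to a predictor pulls the estimated slope toward zero by roughly the ratio of the true predictor variance to the total observed variance. A minimal sketch with illustrative variances follows.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x_true = rng.normal(size=n)
y = 3.0 * x_true + rng.normal(scale=0.5, size=n)      # true slope 3
x_obs = x_true + rng.normal(scale=1.0, size=n)        # predictor observed with error

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(ols_slope(x_true, y))   # close to 3
print(ols_slope(x_obs, y))    # attenuated toward zero, roughly 3 * var(x)/(var(x) + var(error)) = 1.5
```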
In a multiple linear regression model, the parameter β_j of predictor variable x_j represents the individual effect of x_j. It has an interpretation as the expected change in the response variable y when x_j increases by one unit with the other predictor variables held constant. When x_j is strongly correlated with other predictor variables, it is improbable that x_j can increase by one unit with the other variables held constant. In this case, the interpretation of β_j becomes problematic, as it is based on an improbable condition, and the effect of x_j cannot be evaluated in isolation.

For a group of predictor variables, say {x_1, x_2, ..., x_q}, a group effect ξ(w) is defined as a linear combination of their parameters,

{\displaystyle \xi (\mathbf {w} )=w_{1}\beta _{1}+w_{2}\beta _{2}+\dots +w_{q}\beta _{q},}

where w = (w_1, w_2, ..., w_q)^⊺ is a weight vector satisfying {\textstyle \sum _{j=1}^{q}|w_{j}|=1}. Because of the constraint on the w_j, ξ(w) is also referred to as a normalized group effect. A group effect ξ(w) has an interpretation as the expected change in y when the variables in the group x_1, x_2, ..., x_q change by the amounts w_1, w_2, ..., w_q, respectively, at the same time, with the variables not in the group held constant. It generalizes the individual effect of a variable to a group of variables in that (i) if q = 1, the group effect reduces to an individual effect, and (ii) if w_i = 1 and w_j = 0 for j ≠ i, the group effect also reduces to an individual effect. A group effect ξ(w) is said to be meaningful if the underlying simultaneous changes of the q variables (x_1, x_2, ..., x_q)^⊺ are probable.

Group effects provide a means to study the collective impact of strongly correlated predictor variables in linear regression models. Individual effects of such variables are not well-defined, as their parameters do not have good interpretations. Furthermore, when the sample size is not large, none of their parameters can be accurately estimated by least squares regression due to the multicollinearity problem. Nevertheless, there are meaningful group effects that have good interpretations and can be accurately estimated by least squares regression. A simple way to identify these meaningful group effects is to use an all positive correlations (APC) arrangement of the strongly correlated variables, under which pairwise correlations among these variables are all positive, and to standardize all p predictor variables in the model so that they all have mean zero and length one. To illustrate this, suppose that {x_1, x_2, ..., x_q} is a group of strongly correlated variables in an APC arrangement and that they are not strongly correlated with predictor variables outside the group. Let y′ be the centred y and x_j′ be the standardized x_j.
Then, the standardized linear regression model is

{\displaystyle y'=\beta _{1}'x_{1}'+\dots +\beta _{p}'x_{p}'+\varepsilon .}

Parameters β_j in the original model, including β_0, are simple functions of the β_j′ in the standardized model. The standardization of variables does not change their correlations, so {x_1′, x_2′, ..., x_q′} is a group of strongly correlated variables in an APC arrangement, and they are not strongly correlated with other predictor variables in the standardized model. A group effect of {x_1′, x_2′, ..., x_q′} is

{\displaystyle \xi '(\mathbf {w} )=w_{1}\beta _{1}'+w_{2}\beta _{2}'+\dots +w_{q}\beta _{q}',}

and its minimum-variance unbiased linear estimator is

{\displaystyle {\hat {\xi }}'(\mathbf {w} )=w_{1}{\hat {\beta }}_{1}'+w_{2}{\hat {\beta }}_{2}'+\dots +w_{q}{\hat {\beta }}_{q}',}

where β̂_j′ is the least squares estimator of β_j′. In particular, the average group effect of the q standardized variables is

{\displaystyle \xi _{A}={\frac {1}{q}}(\beta _{1}'+\beta _{2}'+\dots +\beta _{q}'),}

which has an interpretation as the expected change in y′ when all x_j′ in the strongly correlated group increase by (1/q)th of a unit at the same time, with variables outside the group held constant. With strong positive correlations and in standardized units, variables in the group are approximately equal, so they are likely to increase at the same time and by similar amounts. Thus, the average group effect ξ_A is a meaningful effect. It can be accurately estimated by its minimum-variance unbiased linear estimator {\textstyle {\hat {\xi }}_{A}={\frac {1}{q}}({\hat {\beta }}_{1}'+{\hat {\beta }}_{2}'+\dots +{\hat {\beta }}_{q}')}, even when individually none of the β_j′ can be accurately estimated by β̂_j′.

Not all group effects are meaningful or can be accurately estimated. For example, β_1′ is a special group effect with weights w_1 = 1 and w_j = 0 for j ≠ 1, but it cannot be accurately estimated by β̂_1′. It is also not a meaningful effect. In general, for a group of q strongly correlated predictor variables in an APC arrangement in the standardized model, group effects whose weight vectors w are at or near the centre of the simplex {\textstyle \sum _{j=1}^{q}w_{j}=1} (w_j ≥ 0) are meaningful and can be accurately estimated by their minimum-variance unbiased linear estimators. Effects with weight vectors far away from the centre are not meaningful, as such weight vectors represent simultaneous changes of the variables that violate the strong positive correlations of the standardized variables in an APC arrangement. As such, they are not probable. These effects also cannot be accurately estimated.

Applications of the group effects include (1) estimation and inference for meaningful group effects on the response variable, (2) testing for "group significance" of the q variables via testing H_0: ξ_A = 0 versus H_1: ξ_A ≠ 0, and (3) characterizing the region of the predictor variable space over which predictions by the least squares estimated model are accurate. A group effect of the original variables {x_1, x_2, ..., x_q} can be expressed as a constant times a group effect of the standardized variables {x_1′, x_2′, ..., x_q′}. The former is meaningful when the latter is. Thus meaningful group effects of the original variables can be found through meaningful group effects of the standardized variables.[12]
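The following simulation sketches the point made above: with two strongly correlated, standardized predictors, the individual least squares coefficients are highly variable, while their average (an estimate of the average group effect) is stable. All data are synthetic, and this setup is only one simple way to illustrate the idea.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_sims = 50, 2000


def standardize(v):
    # Mean zero and unit length, as in the APC arrangement described above.
    v = v - v.mean()
    return v / np.linalg.norm(v)


indiv, group = [], []
for _ in range(n_sims):
    # Two strongly positively correlated predictors.
    z = rng.normal(size=n)
    x1 = z + 0.1 * rng.normal(size=n)
    x2 = z + 0.1 * rng.normal(size=n)
    y = x1 + x2 + rng.normal(size=n)

    X = np.column_stack([standardize(x1), standardize(x2)])
    yc = y - y.mean()

    beta_hat, *_ = np.linalg.lstsq(X, yc, rcond=None)
    indiv.append(beta_hat[0])      # individual effect estimate (unstable)
    group.append(beta_hat.mean())  # average group effect estimate (stable)

print("std of individual coefficient:", np.std(indiv))
print("std of average group effect:  ", np.std(group))
```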
In Dempster–Shafer theory, or a linear belief function in particular, a linear regression model may be represented as a partially swept matrix, which can be combined with similar matrices representing observations and other assumed normal distributions and state equations. The combination of swept or unswept matrices provides an alternative method for estimating linear regression models.

A large number of procedures have been developed for parameter estimation and inference in linear regression. These methods differ in the computational simplicity of the algorithms, the presence of a closed-form solution, robustness with respect to heavy-tailed distributions, and the theoretical assumptions needed to validate desirable statistical properties such as consistency and asymptotic efficiency. Some of the more common estimation techniques for linear regression are summarized below.

Assuming that the independent variables are {\displaystyle {\vec {x_{i}}}=\left[x_{1}^{i},x_{2}^{i},\ldots ,x_{m}^{i}\right]} and the model's parameters are {\displaystyle {\vec {\beta }}=\left[\beta _{0},\beta _{1},\ldots ,\beta _{m}\right]}, the model's prediction would be

{\displaystyle y_{i}\approx \beta _{0}+\sum _{j=1}^{m}\beta _{j}x_{j}^{i}.}

If {\displaystyle {\vec {x_{i}}}} is extended to {\displaystyle {\vec {x_{i}}}=\left[1,x_{1}^{i},x_{2}^{i},\ldots ,x_{m}^{i}\right]}, then y_i becomes a dot product of the parameter vector and the extended independent vector, i.e.

{\displaystyle y_{i}\approx \sum _{j=0}^{m}\beta _{j}x_{j}^{i}={\vec {\beta }}\cdot {\vec {x_{i}}}.}

In the least-squares setting, the optimum parameter vector is defined as the one that minimizes the sum of squared losses:

{\displaystyle {\vec {\hat {\beta }}}={\underset {\vec {\beta }}{\mbox{arg min}}}\,L\left(D,{\vec {\beta }}\right)={\underset {\vec {\beta }}{\mbox{arg min}}}\sum _{i=1}^{n}\left({\vec {\beta }}\cdot {\vec {x_{i}}}-y_{i}\right)^{2}.}

Now putting the independent and dependent variables in matrices X and Y respectively, the loss function can be rewritten as:

{\displaystyle {\begin{aligned}L\left(D,{\vec {\beta }}\right)&=\|X{\vec {\beta }}-Y\|^{2}\\&=\left(X{\vec {\beta }}-Y\right)^{\mathsf {T}}\left(X{\vec {\beta }}-Y\right)\\&=Y^{\mathsf {T}}Y-Y^{\mathsf {T}}X{\vec {\beta }}-{\vec {\beta }}^{\mathsf {T}}X^{\mathsf {T}}Y+{\vec {\beta }}^{\mathsf {T}}X^{\mathsf {T}}X{\vec {\beta }}\end{aligned}}}

As the loss function is convex, the optimum solution lies at gradient zero. The gradient of the loss function is (using the denominator layout convention):

{\displaystyle {\frac {\partial L\left(D,{\vec {\beta }}\right)}{\partial {\vec {\beta }}}}=-2X^{\mathsf {T}}Y+2X^{\mathsf {T}}X{\vec {\beta }}}

Setting the gradient to zero produces the optimum parameter:

{\displaystyle {\vec {\hat {\beta }}}=\left(X^{\mathsf {T}}X\right)^{-1}X^{\mathsf {T}}Y}

Note: to confirm that the β̂ obtained is indeed a minimum, one needs to differentiate once more to obtain the Hessian matrix and show that it is positive definite. This is provided by the Gauss–Markov theorem.

Linear least squares methods include mainly ordinary least squares, weighted least squares, and generalized least squares.

Maximum likelihood estimation can be performed when the distribution of the error terms is known to belong to a certain parametric family f_θ of probability distributions.[15] When f_θ is a normal distribution with zero mean and variance θ, the resulting estimate is identical to the OLS estimate. GLS estimates are maximum likelihood estimates when ε follows a multivariate normal distribution with a known covariance matrix.
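The closed-form least squares solution derived above, β̂ = (XᵀX)⁻¹XᵀY, can be checked numerically with a short sketch. The data here are synthetic, and in practice one solves the normal equations with a linear solver (or a QR/SVD-based routine such as numpy.linalg.lstsq) rather than forming an explicit inverse, which is numerically safer.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: y = 1 + 2*x1 - 3*x2 + noise (coefficients invented for illustration).
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
true_beta = np.array([1.0, 2.0, -3.0])
y = X @ true_beta + 0.5 * rng.normal(size=n)

# Normal equations: (X^T X) beta = X^T y, solved without forming an explicit inverse.
beta_normal_eq = np.linalg.solve(X.T @ X, X.T @ y)

# Equivalent (and more numerically robust) least-squares routine.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta_normal_eq)
print(beta_lstsq)  # the two estimates agree up to floating-point error
```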
Let us denote each data point by ({\vec {x_{i}}}, y_i), the regression parameters by {\vec {\beta }}, the set of all data by D, and the cost function by {\displaystyle L(D,{\vec {\beta }})=\sum _{i}(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}})^{2}}. As shown below, the same optimal parameter that minimizes L(D, β) also achieves maximum likelihood.[16] Here the assumption is that the dependent variable y is a random variable that follows a Gaussian distribution, where the standard deviation is fixed and the mean is a linear combination of {\vec {x}}:

{\displaystyle {\begin{aligned}H(D,{\vec {\beta }})&=\prod _{i=1}^{n}Pr(y_{i}|{\vec {x_{i}}}\,\,;{\vec {\beta }},\sigma )\\&=\prod _{i=1}^{n}{\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}}{2\sigma ^{2}}}\right)\end{aligned}}}

Now, we need to look for a parameter that maximizes this likelihood function. Since the logarithmic function is strictly increasing, instead of maximizing this function directly, we can maximize its logarithm and find the optimal parameter that way.[16]

{\displaystyle {\begin{aligned}I(D,{\vec {\beta }})&=\log \prod _{i=1}^{n}Pr(y_{i}|{\vec {x_{i}}}\,\,;{\vec {\beta }},\sigma )\\&=\log \prod _{i=1}^{n}{\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}}{2\sigma ^{2}}}\right)\\&=n\log {\frac {1}{{\sqrt {2\pi }}\sigma }}-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}\end{aligned}}}

The optimal parameter is thus equal to:[16]

{\displaystyle {\begin{aligned}{\underset {\vec {\beta }}{\mbox{arg max}}}\,I(D,{\vec {\beta }})&={\underset {\vec {\beta }}{\mbox{arg max}}}\left(n\log {\frac {1}{{\sqrt {2\pi }}\sigma }}-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}\right)\\&={\underset {\vec {\beta }}{\mbox{arg min}}}\sum _{i=1}^{n}\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}\\&={\underset {\vec {\beta }}{\mbox{arg min}}}\,L(D,{\vec {\beta }})\\&={\vec {\hat {\beta }}}\end{aligned}}}

In this way, the parameter that maximizes H(D, β) is the same as the one that minimizes L(D, β). This means that in linear regression, the result of the least squares method is the same as the result of the maximum likelihood estimation method.[16]

Ridge regression[17][18][19] and other forms of penalized estimation, such as lasso regression,[5] deliberately introduce bias into the estimation of β in order to reduce the variability of the estimate. The resulting estimates generally have lower mean squared error than the OLS estimates, particularly when multicollinearity is present or when overfitting is a problem. They are generally used when the goal is to predict the value of the response variable y for values of the predictors x that have not yet been observed. These methods are not as commonly used when the goal is inference, since it is difficult to account for the bias.
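The ridge estimator mentioned above has a closed form analogous to the OLS solution, β̂_ridge = (XᵀX + λI)⁻¹Xᵀy; the sketch below contrasts it with OLS on deliberately collinear synthetic data. The penalty value and the data are arbitrary choices for illustration; in practice the intercept is usually left unpenalized and λ is chosen by cross-validation.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60

# Two highly correlated predictors make the OLS estimate unstable (multicollinearity).
z = rng.normal(size=n)
X = np.column_stack([z + 0.05 * rng.normal(size=n), z + 0.05 * rng.normal(size=n)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)

lam = 10.0  # penalty strength, chosen arbitrarily for illustration
p = X.shape[1]

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print("OLS:  ", beta_ols)    # individual coefficients can land far from (1, 1)
print("ridge:", beta_ridge)  # shrunk toward zero, but typically with much lower variance
```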
Least absolute deviation (LAD) regression is a robust estimation technique in that it is less sensitive to the presence of outliers than OLS (but is less efficient than OLS when no outliers are present). It is equivalent to maximum likelihood estimation under a Laplace distribution model for ε.[20]

If we assume that the error terms are independent of the regressors, {\displaystyle \varepsilon _{i}\perp \mathbf {x} _{i}}, then the optimal estimator is the 2-step MLE, where the first step is used to non-parametrically estimate the distribution of the error term.[21]

Linear regression is widely used in biological, behavioral and social sciences to describe possible relationships between variables. It ranks as one of the most important tools used in these disciplines.

A trend line represents a trend, the long-term movement in time series data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) has increased or decreased over a period of time. A trend line could simply be drawn by eye through a set of data points, but more properly its position and slope are calculated using statistical techniques like linear regression. Trend lines typically are straight lines, although some variations use higher-degree polynomials depending on the degree of curvature desired in the line.

Trend lines are sometimes used in business analytics to show changes in data over time. This has the advantage of being simple. Trend lines are often used to argue that a particular action or event (such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data.

Early evidence relating tobacco smoking to mortality and morbidity came from observational studies employing regression analysis. In order to reduce spurious correlations when analyzing observational data, researchers usually include several variables in their regression models in addition to the variable of primary interest. For example, in a regression model in which cigarette smoking is the independent variable of primary interest and the dependent variable is lifespan measured in years, researchers might include education and income as additional independent variables, to ensure that any observed effect of smoking on lifespan is not due to those other socio-economic factors. However, it is never possible to include all possible confounding variables in an empirical analysis. For example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason, randomized controlled trials are often able to generate more compelling evidence of causal relationships than can be obtained using regression analyses of observational data. When controlled experiments are not feasible, variants of regression analysis such as instrumental variables regression may be used to attempt to estimate causal relationships from observational data.

The capital asset pricing model uses linear regression as well as the concept of beta for analyzing and quantifying the systematic risk of an investment. This comes directly from the beta coefficient of the linear regression model that relates the return on the investment to the return on all risky assets.
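To connect the capital asset pricing model discussion above to the estimation machinery described earlier, the following sketch estimates an asset's beta by regressing its excess returns on the market's excess returns. The return series are invented for illustration; a real analysis would use observed returns and a risk-free rate.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical monthly excess returns (values invented for illustration).
market_excess = rng.normal(0.01, 0.04, size=60)
asset_excess = 0.002 + 1.3 * market_excess + rng.normal(0.0, 0.02, size=60)

# Regress asset excess returns on market excess returns: slope = beta, intercept = alpha.
X = np.column_stack([np.ones_like(market_excess), market_excess])
coef, *_ = np.linalg.lstsq(X, asset_excess, rcond=None)
alpha, beta = coef

print(f"alpha = {alpha:.4f}, beta = {beta:.2f}")  # beta close to the simulated value 1.3
```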
Linear regression is the predominant empirical tool in economics. For example, it is used to predict consumption spending,[24] fixed investment spending, inventory investment, purchases of a country's exports,[25] spending on imports,[25] the demand to hold liquid assets,[26] labor demand,[27] and labor supply.[27]

Linear regression finds application in a wide range of environmental science settings, such as land use,[28] infectious diseases,[29] and air pollution.[30] For example, linear regression can be used to predict the changing effects of car pollution.[31] One notable example of this application in infectious diseases is the flattening the curve strategy emphasized early in the COVID-19 pandemic, where public health officials dealt with sparse data on infected individuals and sophisticated models of disease transmission to characterize the spread of COVID-19.[32]

Linear regression is commonly used in building science field studies to derive characteristics of building occupants. In a thermal comfort field study, building scientists usually ask for occupants' thermal sensation votes, which range from −3 (feeling cold) through 0 (neutral) to +3 (feeling hot), and measure occupants' surrounding temperature data. A neutral or comfort temperature can then be calculated from a linear regression between the thermal sensation vote and indoor temperature by setting the thermal sensation vote to zero. However, there has been a debate on the regression direction: regressing thermal sensation votes (y-axis) against indoor temperature (x-axis), or the opposite, regressing indoor temperature (y-axis) against thermal sensation votes (x-axis).[33]

Linear regression plays an important role in the subfield of artificial intelligence known as machine learning. The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties.[34]

Isaac Newton is credited with inventing "a certain technique known today as linear regression analysis" in his work on equinoxes in 1700, and wrote down the first of the two normal equations of the ordinary least squares method.[35][36] Least-squares linear regression, as a means of finding a good rough linear fit to a set of points, was performed by Legendre (1805) and Gauss (1809) for the prediction of planetary movement. Quetelet was responsible for making the procedure well-known and for using it extensively in the social sciences.[37]
https://en.wikipedia.org/wiki/Linear_regression_model