In information retrieval, tf–idf (also TF*IDF, TFIDF, TF–IDF, or Tf–idf), short for term frequency–inverse document frequency, is a measure of the importance of a word to a document in a collection or corpus, adjusted for the fact that some words appear more frequently in general.[1] Like the bag-of-words model, it models a document as a multiset of words, without word order. It is a refinement over the simple bag-of-words model in that it allows the weight of a word to depend on the rest of the corpus. It has often been used as a weighting factor in searches in information retrieval, text mining, and user modeling. A survey conducted in 2015 showed that 83% of text-based recommender systems in digital libraries used tf–idf.[2] Variations of the tf–idf weighting scheme were often used by search engines as a central tool in scoring and ranking a document's relevance given a user query. One of the simplest ranking functions is computed by summing the tf–idf for each query term; many more sophisticated ranking functions are variants of this simple model.

Karen Spärck Jones (1972) conceived a statistical interpretation of term specificity called inverse document frequency (idf), which became a cornerstone of term weighting:[3] the specificity of a term can be quantified as an inverse function of the number of documents in which it occurs. For example, looking at the df (document frequency) and idf for some words in Shakespeare's 37 plays,[4] we see that "Romeo", "Falstaff", and "salad" appear in very few plays, so seeing one of these words gives a good idea of which play it might be. In contrast, "good" and "sweet" appear in every play and are completely uninformative as to which play it is.

Term frequency, tf(t, d), is the relative frequency of term t within document d:

\mathrm{tf}(t,d) = \frac{f_{t,d}}{\sum_{t' \in d} f_{t',d}},

where f_{t,d} is the raw count of the term in the document, i.e., the number of times that term t occurs in document d. Note the denominator is simply the total number of terms in document d (counting each occurrence of the same term separately). There are various other ways to define term frequency.[5]: 128

The inverse document frequency is a measure of how much information the word provides, i.e., how common or rare it is across all documents. It is the logarithmically scaled inverse fraction of the documents that contain the word (obtained by dividing the total number of documents by the number of documents containing the term, and then taking the logarithm of that quotient):

\mathrm{idf}(t,D) = \log \frac{N}{|\{d \in D : t \in d\}|},

with N the total number of documents in the corpus D and |\{d \in D : t \in d\}| the number of documents in which the term t appears. Then tf–idf is calculated as

\mathrm{tfidf}(t,d,D) = \mathrm{tf}(t,d) \cdot \mathrm{idf}(t,D).

A high weight in tf–idf is reached by a high term frequency (in the given document) and a low document frequency of the term in the whole collection of documents; the weights hence tend to filter out common terms. Since the ratio inside the idf's log function is always greater than or equal to 1, the value of idf (and tf–idf) is greater than or equal to 0. As a term appears in more documents, the ratio inside the logarithm approaches 1, bringing the idf and tf–idf closer to 0.

Idf was introduced as "term specificity" by Karen Spärck Jones in a 1972 paper.
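As a minimal illustration of these definitions (the toy corpus, tokenization, and natural logarithm below are my own choices, not taken from the article), the following Python sketch computes tf, idf, and their product for individual terms:

```python
import math
from collections import Counter

def tf(term, doc_tokens):
    # relative frequency of the term within one document
    return Counter(doc_tokens)[term] / len(doc_tokens)

def idf(term, corpus_tokens):
    # log of (total documents / documents containing the term)
    n_docs = len(corpus_tokens)
    df = sum(1 for doc in corpus_tokens if term in doc)
    return math.log(n_docs / df) if df else 0.0

def tf_idf(term, doc_tokens, corpus_tokens):
    return tf(term, doc_tokens) * idf(term, corpus_tokens)

# hypothetical toy corpus
corpus = [doc.split() for doc in [
    "the cat sat on the mat",
    "the dog chased the cat",
    "a salad recipe",
]]
print(tf_idf("cat", corpus[0], corpus))    # in 2 of 3 documents: idf = log(3/2)
print(tf_idf("salad", corpus[2], corpus))  # in 1 of 3 documents: idf = log(3), highest weight
print(tf_idf("the", corpus[0], corpus))    # frequent within documents but also widespread
```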
Although it has worked well as a heuristic, its theoretical foundations have been troublesome for at least three decades afterward, with many researchers trying to find information-theoretic justifications for it.[7]

Spärck Jones's own explanation did not propose much theory, aside from a connection to Zipf's law.[7] Attempts have been made to put idf on a probabilistic footing,[8] by estimating the probability that a given document d contains a term t as the relative document frequency,

P(t) = \frac{|\{d \in D : t \in d\}|}{N},

so that we can define idf as

\mathrm{idf}(t) = -\log P(t) = \log \frac{N}{|\{d \in D : t \in d\}|}.

Namely, the inverse document frequency is the logarithm of the "inverse" relative document frequency. This probabilistic interpretation in turn takes the same form as that of self-information. However, applying such information-theoretic notions to problems in information retrieval leads to difficulties when trying to define the appropriate event spaces for the required probability distributions: not only documents, but also queries and terms, need to be taken into account.[7]

Both term frequency and inverse document frequency can be formulated in terms of information theory; this helps to explain why their product has a meaning in terms of the joint informational content of a document. A characteristic assumption about the distribution p(d, t) is that a document containing the term is drawn uniformly at random from the documents that contain it:

p(d \mid t) = \frac{1}{|\{d \in D : t \in d\}|}.

This assumption and its implications, according to Aizawa, "represent the heuristic that tf–idf employs."[9]

The conditional entropy of a "randomly chosen" document in the corpus D, conditional on it containing a specific term t (and assuming that all documents have equal probability of being chosen), is:

H(\mathcal{D} \mid \mathcal{T} = t) = \log |\{d \in D : t \in d\}| = \log \frac{|\{d \in D : t \in d\}|}{N} + \log N = -\mathrm{idf}(t) + \log N.

In this notation, \mathcal{D} and \mathcal{T} are the "random variables" corresponding to drawing a document or a term, respectively. The mutual information can be expressed as

M(\mathcal{T}; \mathcal{D}) = H(\mathcal{D}) - H(\mathcal{D} \mid \mathcal{T}) = \sum_t p_t \bigl( H(\mathcal{D}) - H(\mathcal{D} \mid \mathcal{T} = t) \bigr) = \sum_t p_t \cdot \mathrm{idf}(t).

The last step is to expand p_t, the unconditional probability of drawing a term, with respect to the (random) choice of a document, to obtain:

M(\mathcal{T}; \mathcal{D}) = \sum_{t,d} p(t \mid d)\, p(d)\, \mathrm{idf}(t) = \frac{1}{N} \sum_{t,d} \mathrm{tf}(t,d)\, \mathrm{idf}(t).

This expression shows that summing the tf–idf of all possible terms and documents recovers the mutual information between documents and terms, taking into account all the specificities of their joint distribution.[9] Each tf–idf hence carries the "bit of information" attached to a term–document pair.

Suppose that we have term count tables of a corpus consisting of only two documents, as listed on the right. The calculation of tf–idf for the term "this" is performed as follows. In its raw frequency form, tf is just the count of "this" in each document. In each document, the word "this" appears once; but as document 2 has more words, its relative frequency is smaller. The idf is constant per corpus and accounts for the ratio of documents that include the word "this". In this case, we have a corpus of two documents and all of them include the word "this", so idf("this") = log(2/2) = 0. Hence tf–idf is zero for the word "this", which implies that the word is not very informative, as it appears in all documents. The word "example" is more interesting: it occurs three times, but only in the second document, so idf("example") = log(2/1) ≈ 0.301 (using the base-10 logarithm), and the tf–idf of "example" in document 2 is its relative frequency there multiplied by this idf value.

The idea behind tf–idf also applies to entities other than terms. In 1998, the concept of idf was applied to citations.[10] The authors argued that "if a very uncommon citation is shared by two documents, this should be weighted more highly than a citation made by a large number of documents".
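The two-document walkthrough can be reproduced in a few lines. Since the article's term-count table is not shown above, the documents used here are hypothetical stand-ins chosen so that "this" appears in both documents and "example" appears three times in the second one:

```python
import math
from collections import Counter

# Hypothetical two-document corpus standing in for the term-count table.
docs = ["this is a sample".split(),
        "this is another another example example example".split()]

def tf(term, doc):
    return Counter(doc)[term] / len(doc)

def idf(term, docs, base=10):
    df = sum(1 for d in docs if term in d)
    return math.log(len(docs) / df, base)

for term in ("this", "example"):
    for i, d in enumerate(docs, start=1):
        print(term, f"doc{i}", round(tf(term, d) * idf(term, docs), 3))
# "this" scores 0 in both documents (it occurs in every document),
# while "example" gets a positive weight only in document 2.
```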
In addition, tf–idf was applied to "visual words" with the purpose of conducting object matching in videos,[11] and to entire sentences.[12] However, the concept of tf–idf did not prove to be more effective in all cases than a plain tf scheme (without idf). When tf–idf was applied to citations, researchers could find no improvement over a simple citation-count weight that had no idf component.[13]

A number of term-weighting schemes have been derived from tf–idf. One of them is TF–PDF (term frequency * proportional document frequency).[14] TF–PDF was introduced in 2001 in the context of identifying emerging topics in the media. The PDF component measures the difference of how often a term occurs in different domains. Another derivative is TF–IDuF.[15] In TF–IDuF, idf is not calculated based on the document corpus that is to be searched or recommended; instead, idf is calculated on users' personal document collections. The authors report that TF–IDuF was equally effective as tf–idf but could also be applied in situations when, e.g., a user modeling system has no access to a global document corpus.
https://en.wikipedia.org/wiki/Term_frequency–inverse_document_frequency
A syntactic category is a syntactic unit that theories of syntax assume.[1] Word classes, largely corresponding to traditional parts of speech (e.g. noun, verb, preposition, etc.), are syntactic categories. In phrase structure grammars, the phrasal categories (e.g. noun phrase, verb phrase, prepositional phrase, etc.) are also syntactic categories. Dependency grammars, however, do not acknowledge phrasal categories (at least not in the traditional sense).[2]

Word classes considered as syntactic categories may be called lexical categories, as distinct from phrasal categories. The terminology is somewhat inconsistent between the theoretical models of different linguists.[2] However, many grammars also draw a distinction between lexical categories (which tend to consist of content words, or phrases headed by them) and functional categories (which tend to consist of function words or abstract functional elements, or phrases headed by them). The term lexical category therefore has two distinct meanings. Moreover, syntactic categories should not be confused with grammatical categories (also known as grammatical features), which are properties such as tense, gender, etc.

At least three criteria are used in defining syntactic categories: the meaning a unit expresses, the form (inflection) it takes, and its distribution. For instance, many nouns in English denote concrete entities, they are pluralized with the suffix -s, and they occur as subjects and objects in clauses. Many verbs denote actions or states, they are conjugated with agreement suffixes (e.g. -s of the third person singular in English), and in English they tend to show up in medial positions of the clauses in which they appear. The third criterion is also known as distribution. The distribution of a given syntactic unit determines the syntactic category to which it belongs. The distributional behavior of syntactic units is identified by substitution:[3] like syntactic units can be substituted for each other. Additionally, there are informal criteria one can use in order to determine syntactic categories. For example, one informal means of determining whether an item is lexical, as opposed to functional, is to see if it is left behind in "telegraphic speech" (that is, the way a telegram would be written; e.g., Pants fire. Bring water, need help.)[4]

The traditional parts of speech are lexical categories, in one meaning of that term.[5] Traditional grammars tend to acknowledge approximately eight to twelve lexical categories. The lexical categories that a given grammar assumes will likely vary from this list. Certainly numerous subcategories can be acknowledged. For instance, one can view pronouns as a subtype of noun, and verbs can be divided into finite verbs and non-finite verbs (e.g. gerund, infinitive, participle, etc.). The central lexical categories give rise to corresponding phrasal categories.[6] In terms of phrase structure rules, phrasal categories can occur to the left of the arrow while lexical categories cannot, e.g. NP → D N. Traditionally, a phrasal category should consist of two or more words, although conventions vary in this area. X-bar theory, for instance, often sees individual words corresponding to phrasal categories. In tree diagrams illustrating phrasal categories, the lexical and phrasal categories are identified according to the node labels, phrasal categories receiving the "P" designation.
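As an informal illustration of phrase structure rules and the lexical versus phrasal labels just described, the following sketch uses a toy grammar of my own (via the NLTK toolkit, which the article does not mention) to parse a short sentence; phrasal categories (S, NP, VP) appear to the left of the arrows, while lexical categories (D, N, V) rewrite directly to words:

```python
import nltk

# Toy phrase structure grammar: phrasal categories on the left of the arrow,
# lexical categories rewriting to individual words.
grammar = nltk.CFG.fromstring("""
    S  -> NP VP
    NP -> D N
    VP -> V NP
    D  -> 'the'
    N  -> 'dog' | 'ball'
    V  -> 'chased'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the dog chased the ball".split()):
    tree.pretty_print()   # prints the constituency tree with NP/VP node labels
```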
Dependency grammars do not acknowledge phrasal categories in the way that phrase structure grammars do.[2] What this means is that the interaction between lexical and phrasal categories disappears, the result being that only the lexical categories are acknowledged.[7] The tree representations are simpler because the number of nodes and categories is reduced. The distinction between lexical and phrasal categories is absent here; the number of nodes is reduced by removing all nodes marked with "P". Note, however, that phrases can still be acknowledged insofar as any subtree that contains two or more words will qualify as a phrase.

Many grammars draw a distinction between lexical categories and functional categories.[8] This distinction is orthogonal to the distinction between lexical categories and phrasal categories. In this context, the term lexical category applies only to those parts of speech and their phrasal counterparts that form open classes and have full semantic content. The parts of speech that form closed classes and have mainly just functional content are called functional categories. There is disagreement in certain areas, for instance concerning the status of prepositions. The distinction between lexical and functional categories plays a big role in Chomskyan grammars (Transformational Grammar, Government and Binding Theory, Minimalist Program), where the role of the functional categories is large. Many phrasal categories are assumed that do not correspond directly to a specific part of speech, e.g. inflection phrase (IP), tense phrase (TP), agreement phrase (AgrP), focus phrase (FP), etc. (see also Phrase → Functional categories). In order to acknowledge such functional categories, one has to assume that the constellation is a primitive of the theory and that it exists separately from the words that appear. As a consequence, many grammar frameworks do not acknowledge such functional categories, e.g. Head-Driven Phrase Structure Grammar, Dependency Grammar, etc.

Early research suggested shifting away from the use of labelling, as labels were considered non-optimal for the analysis of syntactic structure and should therefore be eliminated.[9] Collins (2002) argued that, although labels such as Noun, Pronoun, Adjective and the like were unavoidable and undoubtedly useful for categorizing syntactic items, providing labels for the projections of those items was not useful and was, in fact, detrimental to structural analysis, since there were disagreements and discussions about how exactly to label these projections. The labelling of projections such as Noun Phrases (NP), Verb Phrases (VP), and others has since been a topic of discussion amongst syntacticians, who have been working on labelling algorithms to solve the very problem brought up by Collins. In line with both Phrase Structure Rules and X-bar theory, syntactic labelling plays an important role within Chomsky's Minimalist Program (MP). Chomsky first developed the MP by creating a theoretical framework for generative grammar that could be applied universally among all languages. In contrast to Phrase Structure Rules and X-bar theory, much of the research and many of the proposed theories concerning labels are fairly recent and still ongoing.
https://en.wikipedia.org/wiki/Syntactic_category
In information retrieval, Okapi BM25 (BM is an abbreviation of best matching) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Spärck Jones, and others.

The name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London's City University[1] in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval.[2]

BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document, regardless of their proximity within the document. It is a family of scoring functions with slightly different components and parameters. One of the most prominent instantiations of the function is as follows. Given a query Q containing keywords q_1, ..., q_n, the BM25 score of a document D is:

\text{score}(D,Q) = \sum_{i=1}^{n} \text{IDF}(q_i) \cdot \frac{f(q_i,D)\,(k_1+1)}{f(q_i,D) + k_1 \left(1 - b + b \cdot \frac{|D|}{\text{avgdl}}\right)},

where f(q_i, D) is the number of times that the keyword q_i occurs in the document D, |D| is the length of the document D in words, and avgdl is the average document length in the text collection from which documents are drawn. k_1 and b are free parameters, usually chosen, in absence of an advanced optimization, as k_1 ∈ [1.2, 2.0] and b = 0.75.[3] IDF(q_i) is the IDF (inverse document frequency) weight of the query term q_i. It is usually computed as:

\text{IDF}(q_i) = \ln\!\left(\frac{N - n(q_i) + 0.5}{n(q_i) + 0.5} + 1\right),

where N is the total number of documents in the collection, and n(q_i) is the number of documents containing q_i. There are several interpretations for IDF and slight variations on its formula. In the original BM25 derivation, the IDF component is derived from the Binary Independence Model.

Here is an interpretation from information theory. Suppose a query term q appears in n(q) documents. Then a randomly picked document D will contain the term with probability n(q)/N (where N is again the cardinality of the set of documents in the collection). Therefore, the information content of the message "D contains q" is:

-\log \frac{n(q)}{N} = \log \frac{N}{n(q)}.

Now suppose we have two query terms q_1 and q_2. If the two terms occur in documents entirely independently of each other, then the probability of seeing both q_1 and q_2 in a randomly picked document D is:

\frac{n(q_1)}{N} \cdot \frac{n(q_2)}{N},

and the information content of such an event is:

\log \frac{N}{n(q_1)} + \log \frac{N}{n(q_2)}.

With a small variation, this is exactly what is expressed by the IDF component of BM25.
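A minimal sketch of the scoring function above is shown below. The corpus, the naive whitespace tokenization, and the choice k1 = 1.5, b = 0.75 are my own assumptions for illustration, not part of any particular BM25 implementation:

```python
import math
from collections import Counter

class BM25:
    """Minimal BM25 scorer following the formulas above (illustrative sketch)."""
    def __init__(self, docs, k1=1.5, b=0.75):
        self.docs = [doc.split() for doc in docs]   # naive whitespace tokenization
        self.N = len(self.docs)
        self.avgdl = sum(len(d) for d in self.docs) / self.N
        self.df = Counter(term for d in self.docs for term in set(d))
        self.k1, self.b = k1, b

    def idf(self, term):
        n = self.df.get(term, 0)
        return math.log((self.N - n + 0.5) / (n + 0.5) + 1)

    def score(self, query, index):
        doc = self.docs[index]
        freqs = Counter(doc)
        total = 0.0
        for term in query.split():
            f = freqs.get(term, 0)
            if f == 0:
                continue
            denom = f + self.k1 * (1 - self.b + self.b * len(doc) / self.avgdl)
            total += self.idf(term) * f * (self.k1 + 1) / denom
        return total

# hypothetical corpus and query
corpus = ["the quick brown fox", "the lazy dog",
          "quick quick fox jumps over the lazy dog"]
bm25 = BM25(corpus)
ranking = sorted(range(len(corpus)), key=lambda i: -bm25.score("quick fox", i))
print(ranking)   # document indices ordered by descending BM25 score
```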
https://en.wikipedia.org/wiki/Okapi_BM25
Natural language processing (NLP) is a subfield of computer science and especially artificial intelligence. It is primarily concerned with providing computers with the ability to process data encoded in natural language and is thus closely related to information retrieval, knowledge representation and computational linguistics, a subfield of linguistics. Major tasks in natural language processing are speech recognition, text classification, natural-language understanding, and natural-language generation.

Natural language processing has its roots in the 1950s.[1] Already in 1950, Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence, though at the time that was not articulated as a problem separate from artificial intelligence. The proposed test includes a task that involves the automated interpretation and generation of natural language.

The premise of symbolic NLP is well summarized by John Searle's Chinese room experiment: given a collection of rules (e.g., a Chinese phrasebook, with questions and matching answers), the computer emulates natural language understanding (or other NLP tasks) by applying those rules to the data it confronts. Up until the 1980s, most natural language processing systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in natural language processing with the introduction of machine learning algorithms for language processing. This was due to both the steady increase in computational power (see Moore's law) and the gradual lessening of the dominance of Chomskyan theories of linguistics (e.g. transformational grammar), whose theoretical underpinnings discouraged the sort of corpus linguistics that underlies the machine-learning approach to language processing.[8]

The symbolic approach, i.e., the hand-coding of a set of rules for manipulating symbols, coupled with a dictionary lookup, was historically the first approach used both by AI in general and by NLP in particular,[18][19] for example by writing grammars or devising heuristic rules for stemming. Machine learning approaches, which include both statistical methods and neural networks, have many advantages over the symbolic approach, although rule-based systems are still commonly used in certain settings.

In the late 1980s and mid-1990s, the statistical approach ended a period of AI winter, which was caused by the inefficiencies of the rule-based approaches.[20][21] The earliest decision trees, producing systems of hard if–then rules, were still very similar to the old rule-based approaches. Only the introduction of hidden Markov models, applied to part-of-speech tagging, announced the end of the old rule-based approach. A major drawback of statistical methods is that they require elaborate feature engineering. Since 2015,[22] the statistical approach has been replaced by the neural networks approach, using semantic networks[23] and word embeddings to capture semantic properties of words. Intermediate tasks (e.g., part-of-speech tagging and dependency parsing) are no longer needed. Neural machine translation, based on then-newly invented sequence-to-sequence transformations, made obsolete the intermediate steps, such as word alignment, previously necessary for statistical machine translation.

The following is a list of some of the most commonly researched tasks in natural language processing.
Some of these tasks have direct real-world applications, while others more commonly serve as subtasks that are used to aid in solving larger tasks. Though natural language processing tasks are closely intertwined, they can be subdivided into categories for convenience. A coarse division is given below.

Based on long-standing trends in the field, it is possible to extrapolate future directions of NLP. As of 2020, three trends among the topics of the long-standing series of CoNLL Shared Tasks can be observed.[46] Most higher-level NLP applications involve aspects that emulate intelligent behaviour and apparent comprehension of natural language. More broadly speaking, the technical operationalization of increasingly advanced aspects of cognitive behaviour represents one of the developmental trajectories of NLP (see trends among CoNLL shared tasks above).

Cognition refers to "the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses."[47] Cognitive science is the interdisciplinary, scientific study of the mind and its processes.[48] Cognitive linguistics is an interdisciplinary branch of linguistics, combining knowledge and research from both psychology and linguistics.[49] Especially during the age of symbolic NLP, the area of computational linguistics maintained strong ties with cognitive studies. As an example, George Lakoff offers a methodology to build natural language processing (NLP) algorithms through the perspective of cognitive science, along with the findings of cognitive linguistics,[50] with two defining aspects. Ties with cognitive linguistics are part of the historical heritage of NLP, but they have been less frequently addressed since the statistical turn during the 1990s. Nevertheless, approaches to developing cognitive models towards technically operationalizable frameworks have been pursued in the context of various frameworks, e.g., cognitive grammar,[53] functional grammar,[54] construction grammar,[55] computational psycholinguistics and cognitive neuroscience (e.g., ACT-R), however with limited uptake in mainstream NLP (as measured by presence at major conferences[56] of the ACL). More recently, ideas of cognitive NLP have been revived as an approach to achieve explainability, e.g., under the notion of "cognitive AI".[57] Likewise, ideas of cognitive NLP are inherent to neural models of multimodal NLP (although rarely made explicit)[58] and to developments in artificial intelligence, specifically tools and technologies using large language model approaches[59] and new directions in artificial general intelligence based on the free energy principle[60] by British neuroscientist and theoretician at University College London Karl J. Friston.
https://en.wikipedia.org/wiki/Natural_language_processing
The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman when considering problems in dynamic programming.[1][2] The curse generally refers to issues that arise when the number of data points is small (in a suitably defined sense) relative to the intrinsic dimension of the data.

Dimensionally cursed phenomena occur in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining and databases. The common theme of these problems is that when the dimensionality increases, the volume of the space increases so fast that the available data become sparse. In order to obtain a reliable result, the amount of data needed often grows exponentially with the dimensionality. Also, organizing and searching data often relies on detecting areas where objects form groups with similar properties; in high-dimensional data, however, all objects appear to be sparse and dissimilar in many ways, which prevents common data organization strategies from being efficient.

In some problems, each variable can take one of several discrete values, or the range of possible values is divided to give a finite number of possibilities. Taking the variables together, a huge number of combinations of values must be considered. This effect is also known as the combinatorial explosion. Even in the simplest case of d binary variables, the number of possible combinations is already 2^d, exponential in the dimensionality. Naively, each additional dimension doubles the effort needed to try all combinations.

There is an exponential increase in volume associated with adding extra dimensions to a mathematical space. For example, 10^2 = 100 evenly spaced sample points suffice to sample a unit interval (a "1-dimensional cube") with no more than 10^-2 = 0.01 distance between points; an equivalent sampling of a 10-dimensional unit hypercube with a lattice that has a spacing of 10^-2 = 0.01 between adjacent points would require 10^20 = (10^2)^10 sample points. In general, with a spacing distance of 10^-n, the 10-dimensional hypercube appears to be a factor of 10^{n(10-1)} = (10^n)^{10} / (10^n) "larger" than the 1-dimensional hypercube, which is the unit interval. In the above example n = 2: when using a sampling distance of 0.01, the 10-dimensional hypercube appears to be 10^18 times "larger" than the unit interval. This effect is a combination of the combinatorics problems above and the distance function problems explained below.

When solving dynamic optimization problems by numerical backward induction, the objective function must be computed for each combination of values. This is a significant obstacle when the dimension of the "state variable" is large.[3]

In machine learning problems that involve learning a "state of nature" from a finite number of data samples in a high-dimensional feature space, with each feature having a range of possible values, typically an enormous amount of training data is required to ensure that there are several samples with each combination of values.
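A quick numerical sketch of the sampling arithmetic above: the number of grid points needed to cover the unit hypercube at a fixed spacing grows as (points per axis) raised to the dimension. The spacing of 0.01 mirrors the example in the text; the chosen dimensions are arbitrary:

```python
# Number of lattice points needed to sample [0, 1]^d with spacing 0.01.
spacing = 0.01
points_per_axis = int(1 / spacing)          # 100 points cover the unit interval
for d in (1, 2, 5, 10):
    print(d, points_per_axis ** d)          # 10**2, 10**4, 10**10, 10**20
```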
In an abstract sense, as the number of features or dimensions grows, the amount of data we need to generalize accurately grows exponentially.[4] A typical rule of thumb is that there should be at least 5 training examples for each dimension in the representation.[5] In machine learning, and insofar as predictive performance is concerned, the curse of dimensionality is used interchangeably with the peaking phenomenon,[5] which is also known as the Hughes phenomenon.[6] This phenomenon states that with a fixed number of training samples, the average (expected) predictive power of a classifier or regressor first increases as the number of dimensions or features used is increased, but beyond a certain dimensionality it starts deteriorating instead of improving steadily.[7][8][9]

Nevertheless, in the context of a simple classifier (e.g., linear discriminant analysis in the multivariate Gaussian model under the assumption of a common known covariance matrix), Zollanvari et al. showed both analytically and empirically that as long as the relative cumulative efficacy of an additional feature set (with respect to features that are already part of the classifier) is greater (or less) than the size of this additional feature set, the expected error of the classifier constructed using these additional features will be less (or greater) than the expected error of the classifier constructed without them. In other words, both the size of additional features and their (relative) cumulative discriminatory effect are important in observing a decrease or increase in the average predictive power.[10]

In metric learning, higher dimensions can sometimes allow a model to achieve better performance. After normalizing embeddings to the surface of a hypersphere, FaceNet achieves the best performance using 128 dimensions as opposed to 64, 256, or 512 dimensions in one ablation study.[11] A loss function for unitary-invariant dissimilarity between word embeddings was found to be minimized in high dimensions.[12]

In data mining, the curse of dimensionality refers to a data set with too many features. Consider a data set of 200 individuals and 2000 genes (features), with a 1 or 0 denoting whether or not each individual has a genetic mutation in that gene. A data mining application to this data set may be finding the correlation between specific genetic mutations and creating a classification algorithm such as a decision tree to determine whether an individual has cancer or not. A common practice of data mining in this domain would be to create association rules between genetic mutations that lead to the development of cancers. To do this, one would have to loop through each genetic mutation of each individual and find other genetic mutations that occur over a desired threshold and create pairs. They would start with pairs of two, then three, then four, until they result in an empty set of pairs. The complexity of this algorithm can lead to calculating all permutations of gene pairs for each individual or row. Given that the formula for the number of permutations of n items taken r at a time is n!/(n-r)!, calculating the number of three-gene permutations for any given individual gives 2000!/(2000-3)! = 7,988,004,000 permutations to evaluate per individual. The number of permutations grows factorially as the group size increases, as the sketch below shows.
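As a rough illustration of that growth (assuming the 2000-gene setting described above), the following snippet uses Python's math.perm to count the ordered selections for group sizes 2 through 5:

```python
import math

# Number of ordered selections (permutations) of r genes out of 2000,
# i.e. n! / (n - r)! for n = 2000, as discussed above for r = 3.
n_genes = 2000
for r in range(2, 6):
    print(r, math.perm(n_genes, r))
# r = 3 gives 7,988,004,000, matching the figure in the text, and each further
# step multiplies the count by roughly another factor of 2000.
```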
As this factorial growth illustrates, one of the major problems data miners face regarding the curse of dimensionality is that the space of possible parameter values grows exponentially or factorially as the number of features in the data set grows. This problem critically affects both the computational time and the space required when searching for associations or optimal features.

Another problem data miners may face when dealing with too many features is that the number of false predictions or classifications tends to increase as the number of features grows in the data set. In terms of the classification problem discussed above, keeping every data point could lead to a higher number of false positives and false negatives in the model. This may seem counterintuitive, but consider the genetic mutation data described above, recording all genetic mutations for each individual. Each genetic mutation, whether or not it correlates with cancer, will have some input or weight in the model that guides the decision-making process of the algorithm. There may be mutations that are outliers or that dominate the overall distribution of genetic mutations when in fact they do not correlate with cancer. These features may work against one's model, making it more difficult to obtain optimal results.

This problem is up to the data miner to solve, and there is no universal solution. The first step any data miner should take is to explore the data, in an attempt to gain an understanding of how it can be used to solve the problem. One must first understand what the data means, and what they are trying to discover, before they can decide whether anything must be removed from the data set. Then they can create or use a feature selection or dimensionality reduction algorithm to remove samples or features from the data set if they deem it necessary. One example of such methods is the interquartile range method, used to remove outliers in a data set by measuring the spread of a feature or occurrence.

When a measure such as a Euclidean distance is defined using many coordinates, there is little difference in the distances between different pairs of points. One way to illustrate the "vastness" of high-dimensional Euclidean space is to compare the proportion of an inscribed hypersphere with radius r and dimension d to that of a hypercube with edges of length 2r. The volume of such a sphere is \frac{2 r^d \pi^{d/2}}{d\,\Gamma(d/2)}, where \Gamma is the gamma function, while the volume of the cube is (2r)^d. As the dimension d of the space increases, the hypersphere becomes an insignificant volume relative to that of the hypercube. This can clearly be seen by comparing the proportions as the dimension d goes to infinity:

\frac{V_{\text{sphere}}}{V_{\text{cube}}} = \frac{\pi^{d/2}}{d\, 2^{d-1}\, \Gamma(d/2)} \to 0 \quad \text{as } d \to \infty.

Furthermore, the distance between the center and the corners is r\sqrt{d}, which increases without bound for fixed r. In this sense, when points are uniformly generated in a high-dimensional hypercube, almost all points are much farther than r units away from the centre. In high dimensions, the volume of the d-dimensional unit hypercube (with vertex coordinates ±1) is concentrated near a sphere of radius \sqrt{d}/\sqrt{3} for large dimension d; both effects are illustrated numerically in the sketch below.
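The following minimal Python sketch (my own illustration, not from the article) evaluates the sphere-to-cube volume ratio from the formulas above and, by sampling uniform points in the ±1 cube, checks that the squared norm divided by d concentrates near 1/3:

```python
import math
import random

def sphere_to_cube_ratio(d, r=1.0):
    # V_sphere / V_cube = [2 r^d pi^(d/2) / (d * Gamma(d/2))] / (2r)^d
    v_sphere = 2 * r**d * math.pi**(d / 2) / (d * math.gamma(d / 2))
    v_cube = (2 * r)**d
    return v_sphere / v_cube

def mean_sq_norm_over_d(d, n_samples=2000):
    # average of ||x||^2 / d for x drawn uniformly from the cube [-1, 1]^d
    total = 0.0
    for _ in range(n_samples):
        total += sum(random.uniform(-1, 1)**2 for _ in range(d)) / d
    return total / n_samples

for d in (2, 10, 50, 100):
    print(d, sphere_to_cube_ratio(d), round(mean_sq_norm_over_d(d), 3))
# The volume ratio collapses toward 0, while ||x||^2 / d stays near 1/3,
# i.e. the sampled points concentrate near radius sqrt(d/3).
```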
Indeed, for each coordinate x_i, the average value of x_i^2 in the cube is[13]

\langle x_i^2 \rangle = \frac{1}{2}\int_{-1}^{1} x^2 \, dx = \frac{1}{3}.

The variance of x_i^2 for the uniform distribution in the cube is

\frac{1}{2}\int_{-1}^{1} x^4 \, dx - \frac{1}{9} = \frac{1}{5} - \frac{1}{9} = \frac{4}{45}.

Therefore, the squared distance from the origin, r^2 = \sum_i x_i^2, has average value d/3 and variance 4d/45. For large d, the distribution of r^2/d is close to the normal distribution with mean 1/3 and standard deviation 2/\sqrt{45d}, according to the central limit theorem. Thus, when uniformly generating points in high dimensions, both the "middle" of the hypercube and the corners are empty, and all the volume is concentrated near the surface of a sphere of "intermediate" radius \sqrt{d/3}.

This also helps to understand the chi-squared distribution. Indeed, the (non-central) chi-squared distribution associated with a random point in the interval [-1, 1] is the same as the distribution of the squared length of a random point in the d-cube. By the law of large numbers, this distribution concentrates itself in a narrow band around d times the variance σ^2 of the original distribution. This illuminates the chi-squared distribution and also illustrates that most of the volume of the d-cube concentrates near the boundary of a sphere of radius σ\sqrt{d}.

A further development of this phenomenon is as follows. Any fixed distribution on the real numbers induces a product distribution on points in R^d. For any fixed n, it turns out that the difference between the minimum and the maximum distance between a random reference point Q and a list of n random data points P_1, ..., P_n becomes indiscernible compared to the minimum distance:[14]

\lim_{d \to \infty} E\!\left[\frac{\operatorname{dist}_{\max}(d) - \operatorname{dist}_{\min}(d)}{\operatorname{dist}_{\min}(d)}\right] = 0.

This is often cited as distance functions losing their usefulness (for the nearest-neighbor criterion in feature-comparison algorithms, for example) in high dimensions. However, recent research has shown this to hold only in the artificial scenario when the one-dimensional distributions are independent and identically distributed.[15] When attributes are correlated, data can become easier and provide higher distance contrast, and the signal-to-noise ratio was found to play an important role; feature selection should therefore be used.[15]

More recently, it has been suggested that there may be a conceptual flaw in the argument that contrast-loss creates a curse in high dimensions. Machine learning can be understood as the problem of assigning instances to their respective generative process of origin, with class labels acting as symbolic representations of individual generative processes. The curse's derivation assumes all instances are independent, identical outcomes of a single high-dimensional generative process. If there is only one generative process, there would exist only one (naturally occurring) class and machine learning would be conceptually ill-defined in both high and low dimensions. Thus, the traditional argument that contrast-loss creates a curse may be fundamentally inappropriate. In addition, it has been shown that when the generative model is modified to accommodate multiple generative processes, contrast-loss can morph from a curse to a blessing, as it ensures that the nearest neighbor of an instance is almost surely its most closely related instance.
From this perspective, contrast-loss makes high-dimensional distances especially meaningful, rather than especially non-meaningful as is often argued.[16]

The effect complicates nearest neighbor search in high-dimensional space. It is not possible to quickly reject candidates by using the difference in one coordinate as a lower bound for a distance based on all the dimensions.[17][18] However, it has recently been observed that the mere number of dimensions does not necessarily result in difficulties,[19] since relevant additional dimensions can also increase the contrast. In addition, for the resulting ranking it remains useful to discern close and far neighbors. Irrelevant ("noise") dimensions, however, reduce the contrast in the manner described above. In time series analysis, where the data are inherently high-dimensional, distance functions also work reliably as long as the signal-to-noise ratio is high enough.[20]

Another effect of high dimensionality on distance functions concerns k-nearest neighbor (k-NN) graphs constructed from a data set using a distance function. As the dimension increases, the indegree distribution of the k-NN digraph becomes skewed with a peak on the right because of the emergence of a disproportionate number of hubs, that is, data points that appear in many more k-NN lists of other data points than the average.[21] This phenomenon can have a considerable impact on various techniques for classification (including the k-NN classifier), semi-supervised learning, and clustering,[22] and it also affects information retrieval.[23]

In a 2012 survey, Zimek et al. identified a number of problems that arise when searching for anomalies in high-dimensional data.[15] Many of the analyzed specialized methods tackle one or another of these problems, but many open research questions remain.

Surprisingly, and despite the expected "curse of dimensionality" difficulties, common-sense heuristics based on the most straightforward methods "can yield results which are almost surely optimal" for high-dimensional problems.[24] The term "blessing of dimensionality" was introduced in the late 1990s.[24] Donoho in his "Millennium manifesto" clearly explained why the "blessing of dimensionality" will form a basis of future data mining.[25] The effects of the blessing of dimensionality were discovered in many applications and found their foundation in the concentration of measure phenomena.[26] One example of the blessing of dimensionality phenomenon is the linear separability of a random point from a large finite random set with high probability, even if this set is exponentially large: the number of elements in this random set can grow exponentially with dimension. Moreover, this linear functional can be selected in the form of the simplest linear Fisher discriminant. This separability theorem was proven for a wide class of probability distributions: general uniformly log-concave distributions, product distributions in a cube, and many other families (reviewed recently in [26]).

"The blessing of dimensionality and the curse of dimensionality are two sides of the same coin."[27] For example, the typical property of essentially high-dimensional probability distributions in a high-dimensional space is that the squared distance of random points to a selected point is, with high probability, close to the average (or median) squared distance, as the following sketch illustrates.
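A small simulation (my own illustration; the dimensions and sample sizes are arbitrary) makes this concentration visible: as d grows, the squared distances from a fixed reference point to random points cluster tightly around their mean, and the relative gap between the nearest and farthest point shrinks.

```python
import random

def squared_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def concentration(d, n_points=500):
    # reference point and data points drawn uniformly from the unit cube [0, 1]^d
    ref = [random.random() for _ in range(d)]
    dists = [squared_dist(ref, [random.random() for _ in range(d)])
             for _ in range(n_points)]
    mean = sum(dists) / len(dists)
    relative_gap = (max(dists) - min(dists)) / min(dists)
    return mean / d, relative_gap

for d in (2, 10, 100, 1000):
    mean_per_dim, gap = concentration(d)
    print(d, round(mean_per_dim, 3), round(gap, 3))
# The per-dimension mean squared distance stabilizes (around 1/6 for this
# uniform-cube setup), while the relative gap (max - min) / min collapses
# as the dimension grows.
```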
This property significantly simplifies the expected geometry of data and the indexing of high-dimensional data (blessing),[28] but, at the same time, it makes similarity search in high dimensions difficult and even useless (curse).[29]

Zimek et al.[15] noted that while the typical formalizations of the curse of dimensionality affect i.i.d. data, having data that is separated in each attribute becomes easier even in high dimensions, and argued that the signal-to-noise ratio matters: data becomes easier with each attribute that adds signal, and harder with attributes that only add noise (irrelevant error) to the data. In particular, for unsupervised data analysis this effect is known as swamping.
https://en.wikipedia.org/wiki/Curse_of_dimensionality
The distinction between subjectivity and objectivity is a basic idea of philosophy, particularly epistemology and metaphysics. Various understandings of this distinction have evolved through the work of countless philosophers over centuries. One basic distinction contrasts what is dependent on a mind (the subjective) with what is independent of any mind (the objective). Both ideas have been given various and ambiguous definitions by differing sources, as the distinction is often a given but not the specific focal point of philosophical discourse.[2] The two words are usually regarded as opposites, though complications regarding the two have been explored in philosophy: for example, the view of particular thinkers that objectivity is an illusion and does not exist at all, or that a spectrum joins subjectivity and objectivity with a gray area in between, or that the problem of other minds is best viewed through the concept of intersubjectivity, developing since the 20th century.

The distinction between subjectivity and objectivity is often related to discussions of consciousness, agency, personhood, philosophy of mind, philosophy of language, reality, truth, and communication (for example in narrative communication and journalism). The roots of the words subjectivity and objectivity are subject and object, philosophical terms that mean, respectively, an observer and a thing being observed. The word subjectivity comes from subject in a philosophical sense, meaning an individual who possesses unique conscious experiences, such as perspectives, feelings, beliefs, and desires,[1][3] or who (consciously) acts upon or wields power over some other entity (an object).[4]

Aristotle's teacher Plato considered geometry to be a condition of his idealist philosophy concerned with universal truth.[clarification needed] In Plato's Republic, Socrates opposes the sophist Thrasymachus's relativistic account of justice, and argues that justice is mathematical in its conceptual structure, and that ethics was therefore a precise and objective enterprise with impartial standards for truth and correctness, like geometry.[5] The rigorous mathematical treatment Plato gave to moral concepts set the tone for the western tradition of moral objectivism that came after him.[6][citation needed] His contrast between objectivity and opinion became the basis for philosophies intent on resolving the questions of reality, truth, and existence. He saw opinions as belonging to the shifting sphere of sensibilities, as opposed to a fixed, eternal and knowable incorporeality. Where Plato distinguished between how we know things and their ontological status, subjectivism such as George Berkeley's depends on perception.[7] In Platonic terms, a criticism of subjectivism is that it is difficult to distinguish between knowledge, opinions, and subjective knowledge.[8]

Platonic idealism is a form of metaphysical objectivism, holding that the ideas exist independently from the individual. Berkeley's empirical idealism, on the other hand, holds that things only exist as they are perceived. Both approaches boast an attempt at objectivity.
Plato's definition of objectivity can be found in his epistemology, which is based on mathematics, and his metaphysics, where knowledge of the ontological status of objects and ideas is resistant to change.[7]

In Western philosophy, the idea of subjectivity is thought to have its roots in the works of the European Enlightenment thinkers Descartes and Kant, though it could also stem as far back as the Ancient Greek philosopher Aristotle's work relating to the soul.[9][2] The idea of subjectivity is often seen as peripheral to other philosophical concepts, namely skepticism, individuals and individuality, and existentialism.[2][9] The questions surrounding subjectivity have to do with whether people can escape the subjectivity of their own human existence and whether there is an obligation to try to do so.[1] Important thinkers who focused on this area of study include Descartes, Locke, Kant, Hegel, Kierkegaard, Husserl, Foucault, Derrida, Nagel, and Sartre.[1]

Subjectivity was rejected by Foucault and Derrida in favor of constructionism,[1] but Sartre embraced and continued Descartes' work on the subject by emphasizing subjectivity in phenomenology.[1][10] Sartre believed that, even within the material force of human society, the ego was an essentially transcendent being, posited, for instance, in his opus Being and Nothingness through his arguments about the 'being-for-others' and the 'for-itself' (i.e., an objective and subjective human being).[10]

The innermost core of subjectivity resides in a unique act of what Fichte called "self-positing", where each subject is a point of absolute autonomy, which means that it cannot be reduced to a moment in the network of causes and effects.[11]

One way that subjectivity has been conceptualized by philosophers such as Kierkegaard is in the context of religion.[1] Religious beliefs can vary quite extremely from person to person, but people often think that whatever they believe is the truth.
Subjectivity, as seen by Descartes and Sartre, was a matter of what is dependent on consciousness; because religious beliefs require the presence of a consciousness that can believe, they must be subjective.[1] This is in contrast to what has been proven by pure logic or the hard sciences, which does not depend on the perception of people and is therefore considered objective.[1] Subjectivity is what relies on personal perception regardless of what is proven or objective.[1]

Many philosophical arguments within this area of study have to do with moving from subjective thoughts to objective thoughts, with many different methods employed to get from one to the other, along with a variety of conclusions reached.[1] This is exemplified by Descartes' deductions, which move from reliance on subjectivity to somewhat of a reliance on God for objectivity.[1][12] Foucault and Derrida denied the idea of subjectivity in favor of their ideas of constructs in order to account for differences in human thought.[1] Instead of focusing on the idea of consciousness and self-consciousness shaping the way humans perceive the world, these thinkers would argue that it is instead the world that shapes humans, so they would see religion less as a belief and more as a cultural construction.[1]

Others like Husserl and Sartre followed the phenomenological approach.[1] This approach focused on the distinct separation of the human mind and the physical world, where the mind is subjective because it can take liberties like imagination and self-awareness, whereas religion might be examined regardless of any kind of subjectivity.[10] The philosophical conversation around subjectivity remains one that struggles with the epistemological question of what is real, what is made up, and what it would mean to be separated completely from subjectivity.[1]

In opposition to the philosopher René Descartes' method of personal deduction,[clarification needed] the natural philosopher Isaac Newton applied the relatively objective scientific method to look for evidence before forming a hypothesis.[13] Partially in response to Kant's rationalism, the logician Gottlob Frege applied objectivity to his epistemological and metaphysical philosophies. If reality exists independently of consciousness, then it would logically include a plurality of indescribable forms. Objectivity requires a definition of truth formed by propositions with truth value. An attempt at forming an objective construct incorporates ontological commitments to the reality of objects.[14]

The importance of perception in evaluating and understanding objective reality is debated in the observer effect of quantum mechanics. Direct or naïve realists rely on perception as key in observing objective reality, while instrumentalists hold that observations are useful in predicting objective reality. The concepts that encompass these ideas are important in the philosophy of science. Philosophies of mind explore whether objectivity relies on perceptual constancy.[15]

History as a discipline has wrestled with notions of objectivity from its very beginning. While its object of study is commonly thought to be the past, the only things historians have to work with are different versions of stories based on individual perceptions of reality and memory.
Several streams of historiography developed to devise ways to solve this dilemma. Historians like Leopold von Ranke (19th century) have advocated for the use of extensive evidence, especially archived physical paper documents, to recover the bygone past, claiming that, as opposed to people's memories, objects remain stable in what they say about the era they witnessed and therefore represent a better insight into objective reality.[16] In the 20th century, the Annales School emphasized the importance of shifting focus away from the perspectives of influential men, usually politicians around whose actions narratives of the past were shaped, and putting it on the voices of ordinary people.[17] Postcolonial streams of history challenge the colonial-postcolonial dichotomy and critique Eurocentric academic practices, such as the demand for historians from colonized regions to anchor their local narratives to events happening in the territories of their colonizers to earn credibility.[18] All the streams explained above try to uncover whose voice is more or less truth-bearing and how historians can stitch together versions of it to best explain what "actually happened."

The anthropologist Michel-Rolph Trouillot developed the concepts of historicity 1 and historicity 2 to explain the difference between the materiality of socio-historical processes (H1) and the narratives that are told about the materiality of socio-historical processes (H2).[19] This distinction hints that H1 would be understood as the factual reality that elapses and is captured with the concept of "objective truth", and that H2 is the collection of subjectivities that humanity has stitched together to grasp the past. Debates about positivism, relativism, and postmodernism are relevant to evaluating these concepts' importance and the distinction between them.

In his book Silencing the Past, Trouillot wrote about the power dynamics at play in history-making, outlining four possible moments in which historical silences can be created: (1) the making of sources (who gets to know how to write, or to have possessions that are later examined as historical evidence), (2) the making of archives (what documents are deemed important to save and which are not, how to classify materials, and how to order them within physical or digital archives), (3) the making of narratives (which accounts of history are consulted, which voices are given credibility), and (4) the making of history (the retrospective construction of what The Past is).[19]

Because history (official, public, familial, personal) informs current perceptions and how we make sense of the present, whose voice gets to be included in it, and how, has direct consequences in material socio-historical processes. Thinking of current historical narratives as impartial depictions of the totality of events unfolded in the past, by labeling them as "objective", risks sealing off historical understanding. Acknowledging that history is never objective and always incomplete offers a meaningful opportunity to support social justice efforts. Under this notion, voices that have been silenced are placed on an equal footing with the grand and popular narratives of the world, appreciated for their unique insight into reality through their subjective lens.

Subjectivity is an inherently social mode that comes about through innumerable interactions within society. As much as subjectivity is a process of individuation, it is equally a process of socialization, the individual never being isolated in a self-contained environment, but endlessly engaging in interaction with the surrounding world.
Culture is a living totality of the subjectivity of any given society, constantly undergoing transformation.[20] Subjectivity is both shaped by culture and shapes it in turn, and it is also shaped by other things like the economy, political institutions, communities, and the natural world. Though the boundaries of societies and their cultures are indefinable and arbitrary, the subjectivity inherent in each one is palpable and can be recognized as distinct from others. Subjectivity is in part a particular experience or organization of reality, which includes how one views and interacts with humanity, objects, consciousness, and nature, so the difference between cultures brings about an alternate experience of existence that forms life in a different manner. A common effect on an individual of this disjunction between subjectivities is culture shock, where the subjectivity of the other culture is considered alien and possibly incomprehensible or even hostile.

Political subjectivity is an emerging concept in the social sciences and humanities.[4] Political subjectivity is a reference to the deep embeddedness of subjectivity in the socially intertwined systems of power and meaning. "Politicality", writes Sadeq Rahimi in Meaning, Madness and Political Subjectivity, "is not an added aspect of the subject, but indeed the mode of being of the subject, that is, precisely what the subject is."[21]

Scientific objectivity is the practice of science while intentionally reducing partiality, biases, or external influences. Moral objectivity is the concept of moral or ethical codes being compared to one another through a set of universal facts or a universal perspective rather than through differing, conflicting perspectives.[22] Journalistic objectivity is the reporting of facts and news with minimal personal bias, or in an impartial or politically neutral manner.
https://en.wikipedia.org/wiki/Subjectivity
In statistics, a power law is a functional relationship between two quantities, where a relative change in one quantity results in a relative change in the other quantity proportional to the change raised to a constant exponent: one quantity varies as a power of another. The change is independent of the initial size of those quantities. For instance, the area of a square has a power-law relationship with the length of its side: if the length is doubled, the area is multiplied by 2^2, if the length is tripled, the area is multiplied by 3^2, and so on.[1]

The distributions of a wide variety of physical, biological, and human-made phenomena approximately follow a power law over a wide range of magnitudes: these include the sizes of craters on the moon and of solar flares,[2] cloud sizes,[3] the foraging pattern of various species,[4] the sizes of activity patterns of neuronal populations,[5] the frequencies of words in most languages, frequencies of family names, the species richness in clades of organisms,[6] the sizes of power outages, volcanic eruptions,[7] human judgments of stimulus intensity,[8][9] and many other quantities.[10] Empirical distributions can only fit a power law for a limited range of values, because a pure power law would allow for arbitrarily large or small values. Acoustic attenuation follows frequency power laws within wide frequency bands for many complex media. Allometric scaling laws for relationships between biological variables are among the best-known power-law functions in nature. The power-law model does not obey the treasured paradigm of statistical completeness. In particular, probability bounds, the suspected cause of the typical bending and/or flattening phenomena in the high- and low-frequency graphical segments, are parametrically absent in the standard model.[11]

One attribute of power laws is their scale invariance. Given a relation f(x) = a x^{-k}, scaling the argument x by a constant factor c causes only a proportionate scaling of the function itself. That is,

f(cx) = a (cx)^{-k} = c^{-k} f(x) \propto f(x),

where ∝ denotes direct proportionality. That is, scaling by a constant c simply multiplies the original power-law relation by the constant c^{-k}. Thus, it follows that all power laws with a particular scaling exponent are equivalent up to constant factors, since each is simply a scaled version of the others. This behavior is what produces the linear relationship when logarithms are taken of both f(x) and x, and the straight line on the log–log plot is often called the signature of a power law. With real data, such straightness is a necessary, but not sufficient, condition for the data following a power-law relation. In fact, there are many ways to generate finite amounts of data that mimic this signature behavior, but, in their asymptotic limit, are not true power laws.[citation needed] Thus, accurately fitting and validating power-law models is an active area of research in statistics; see below.

A power law x^{-k} has a well-defined mean over x ∈ [1, ∞) only if k > 2, and it has a finite variance only if k > 3; most identified power laws in nature have exponents such that the mean is well-defined but the variance is not, implying they are capable of black swan behavior.[2] This can be seen in the following thought experiment:[12] imagine a room with your friends and estimate the average monthly income in the room.
Now imagine theworld's richest personentering the room, with a monthly income of about 1billionUS$. What happens to the average income in the room? Income is distributed according to a power-law known as thePareto distribution(for example, the net worth of Americans is distributed according to a power law with an exponent of 2). On the one hand, this makes it incorrect to apply traditional statistics that are based onvarianceandstandard deviation(such asregression analysis).[13]On the other hand, this also allows for cost-efficient interventions.[12]For example, given that car exhaust is distributed according to a power-law among cars (very few cars contribute to most contamination) it would be sufficient to eliminate those very few cars from the road to reduce total exhaust substantially.[14] The median does exist, however: for a power lawx–k, with exponent⁠k>1{\displaystyle k>1}⁠, it takes the value 21/(k– 1)xmin, wherexminis the minimum value for which the power law holds.[2] The equivalence of power laws with a particular scaling exponent can have a deeper origin in the dynamical processes that generate the power-law relation. In physics, for example,phase transitionsin thermodynamic systems are associated with the emergence of power-law distributions of certain quantities, whose exponents are referred to as thecritical exponentsof the system. Diverse systems with the same critical exponents—that is, which display identical scaling behaviour as they approachcriticality—can be shown, viarenormalization grouptheory, to share the same fundamental dynamics. For instance, the behavior of water and CO2at their boiling points fall in the same universality class because they have identical critical exponents.[citation needed][clarification needed]In fact, almost all material phase transitions are described by a small set of universality classes. Similar observations have been made, though not as comprehensively, for variousself-organized criticalsystems, where the critical point of the system is anattractor. Formally, this sharing of dynamics is referred to asuniversality, and systems with precisely the same critical exponents are said to belong to the sameuniversality class. Scientific interest in power-law relations stems partly from the ease with which certain general classes of mechanisms generate them.[15]The demonstration of a power-law relation in some data can point to specific kinds of mechanisms that might underlie the natural phenomenon in question, and can indicate a deep connection with other, seemingly unrelated systems;[16]see alsouniversalityabove. The ubiquity of power-law relations in physics is partly due todimensional constraints, while incomplex systems, power laws are often thought to be signatures of hierarchy or of specificstochastic processes. A few notable examples of power laws arePareto's lawof income distribution, structural self-similarity offractals,scaling laws in biological systems, andscaling laws in cities. Research on the origins of power-law relations, and efforts to observe and validate them in the real world, is an active topic of research in many fields of science, includingphysics,computer science,linguistics,geophysics,neuroscience,systematics,sociology,economicsand more. However, much of the recent interest in power laws comes from the study ofprobability distributions: The distributions of a wide variety of quantities seem to follow the power-law form, at least in their upper tail (large events). 
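Both of the basic facts above, the scale invariance of f(x) = a·x^(−k) and the instability of sample means when the variance diverges, are easy to check numerically. The following is a minimal sketch in Python; all numeric values (a, k, c, the exponent 2.5 and x_min = 1) are illustrative assumptions, not values from any real data set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Part 1: scale invariance of f(x) = a * x**(-k) and its log-log signature.
a, k, c = 2.5, 1.7, 10.0

def f(x):
    return a * x ** (-k)

x = np.logspace(0, 4, 50)
assert np.allclose(f(c * x), c ** (-k) * f(x))         # f(cx) = c**(-k) * f(x)
slope, intercept = np.polyfit(np.log(x), np.log(f(x)), 1)
print(slope, np.exp(intercept))                         # ~ -1.7 and ~ 2.5: recovers -k and a

# Part 2: the "income in the room" thought experiment. For a power-law density
# p(x) ~ x**(-k) with 2 < k < 3 (here k = 2.5, x_min = 1), the mean exists but the
# variance does not, so the sample mean converges slowly and is dominated by the
# largest draws, while the median sits near 2**(1/(k-1)) * x_min.
k2, x_min = 2.5, 1.0

def draw(n):
    # Inverse-CDF sampling: F(x) = 1 - (x_min / x)**(k2 - 1) for x >= x_min.
    return x_min * rng.random(n) ** (-1.0 / (k2 - 1.0))

for n in (10**2, 10**4, 10**6):
    sample = draw(n)
    print(n, sample.mean(), np.median(sample))
print("predicted median:", 2 ** (1.0 / (k2 - 1.0)) * x_min)   # ~ 1.587
```

With an exponent between 2 and 3 the printed medians settle quickly near the predicted value, whereas the sample means fluctuate strongly from run to run; this erratic dominance by the largest observations is the black swan behavior referred to above.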
The behavior of these large events connects these quantities to the study oftheory of large deviations(also calledextreme value theory), which considers the frequency of extremely rare events likestock market crashesand largenatural disasters. It is primarily in the study of statistical distributions that the name "power law" is used. In empirical contexts, an approximation to a power-lawo(xk){\displaystyle o(x^{k})}often includes a deviation termε{\displaystyle \varepsilon }, which can represent uncertainty in the observed values (perhaps measurement or sampling errors) or provide a simple way for observations to deviate from the power-law function (perhaps for stochastic reasons): Mathematically, a strict power law cannot be a probability distribution, but a distribution that is a truncatedpower functionis possible:p(x)=Cx−α{\displaystyle p(x)=Cx^{-\alpha }}forx>xmin{\displaystyle x>x_{\text{min}}}where the exponentα{\displaystyle \alpha }(Greek letteralpha, not to be confused with scaling factora{\displaystyle a}used above) is greater than 1 (otherwise the tail has infinite area), the minimum valuexmin{\displaystyle x_{\text{min}}}is needed otherwise the distribution has infinite area asxapproaches 0, and the constantCis a scaling factor to ensure that the total area is 1, as required by a probability distribution. More often one uses an asymptotic power law – one that is only true in the limit; seepower-law probability distributionsbelow for details. Typically the exponent falls in the range2<α<3{\displaystyle 2<\alpha <3}, though not always.[10] More than a hundred power-law distributions have been identified in physics (e.g. sandpile avalanches), biology (e.g. species extinction and body mass), and the social sciences (e.g. city sizes and income).[17]Among them are: A broken power law is apiecewise function, consisting of two or more power laws, combined with a threshold. For example, with two power laws:[49] The pieces of a broken power law can be smoothly spliced together to construct a smoothly broken power law. There are different possible ways to splice together power laws. One example is the following:[50]ln⁡(yy0+a)=c0ln⁡(xx0)+∑i=1nci−ci−1filn⁡(1+(xxi)fi){\displaystyle \ln \left({\frac {y}{y_{0}}}+a\right)=c_{0}\ln \left({\frac {x}{x_{0}}}\right)+\sum _{i=1}^{n}{\frac {c_{i}-c_{i-1}}{f_{i}}}\ln \left(1+\left({\frac {x}{x_{i}}}\right)^{f_{i}}\right)}where0<x0<x1<⋯<xn{\displaystyle 0<x_{0}<x_{1}<\cdots <x_{n}}. When the function is plotted as alog-log plotwith horizontal axis beingln⁡x{\displaystyle \ln x}and vertical axis beingln⁡(y/y0+a){\displaystyle \ln(y/y_{0}+a)}, the plot is composed ofn+1{\displaystyle n+1}linear segments with slopesc0,c1,...,cn{\displaystyle c_{0},c_{1},...,c_{n}}, separated atx=x1,...,xn{\displaystyle x=x_{1},...,x_{n}}, smoothly spliced together. The size offi{\displaystyle f_{i}}determines the sharpness of splicing between segmentsi−1,i{\displaystyle i-1,i}. A power law with an exponential cutoff is simply a power law multiplied by an exponential function:[10] In a looser sense, a power-lawprobability distributionis a distribution whose density function (or mass function in the discrete case) has the form, for large values ofx{\displaystyle x},[52] whereα>1{\displaystyle \alpha >1}, andL(x){\displaystyle L(x)}is aslowly varying function, which is any function that satisfieslimx→∞L(rx)/L(x)=1{\displaystyle \lim _{x\rightarrow \infty }L(r\,x)/L(x)=1}for any positive factorr{\displaystyle r}. 
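The smoothly broken power law quoted above can be implemented directly from the formula. The sketch below does so; all numeric values (y0, a, x0, the break points x_i, the slopes c_i and the sharpness parameters f_i) are made-up examples rather than values taken from the text.

```python
import numpy as np

# ln(y/y0 + a) = c0*ln(x/x0) + sum_i ((c_i - c_{i-1})/f_i) * ln(1 + (x/x_i)**f_i)
def smoothly_broken_power_law(x, y0, a, x0, c, x_breaks, f):
    """c = [c_0, ..., c_n]; x_breaks = [x_1, ..., x_n]; f = [f_1, ..., f_n]."""
    log_term = c[0] * np.log(x / x0)
    for i in range(1, len(c)):
        log_term += (c[i] - c[i - 1]) / f[i - 1] * np.log(1.0 + (x / x_breaks[i - 1]) ** f[i - 1])
    return y0 * (np.exp(log_term) - a)

x = np.logspace(-1, 3, 200)
# Two segments (n = 1): slope -1 below x = 10, slope -2.5 above, spliced with sharpness 3.
y = smoothly_broken_power_law(x, y0=1.0, a=0.0, x0=1.0,
                              c=[-1.0, -2.5], x_breaks=[10.0], f=[3.0])
```

On a log–log plot the resulting curve consists of two nearly linear segments with slopes c_0 and c_1, joined smoothly around x_1, with f_1 controlling how sharp the transition is.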
This property ofL(x){\displaystyle L(x)}follows directly from the requirement thatp(x){\displaystyle p(x)}be asymptotically scale invariant; thus, the form ofL(x){\displaystyle L(x)}only controls the shape and finite extent of the lower tail. For instance, ifL(x){\displaystyle L(x)}is the constant function, then we have a power law that holds for all values ofx{\displaystyle x}. In many cases, it is convenient to assume a lower boundxmin{\displaystyle x_{\mathrm {min} }}from which the law holds. Combining these two cases, and wherex{\displaystyle x}is a continuous variable, the power law has the form of thePareto distribution where the pre-factor toα−1xmin{\displaystyle {\frac {\alpha -1}{x_{\min }}}}is thenormalizing constant. We can now consider several properties of this distribution. For instance, itsmomentsare given by which is only well defined form<α−1{\displaystyle m<\alpha -1}. That is, all momentsm≥α−1{\displaystyle m\geq \alpha -1}diverge: whenα≤2{\displaystyle \alpha \leq 2}, the average and all higher-order moments are infinite; when2<α<3{\displaystyle 2<\alpha <3}, the mean exists, but the variance and higher-order moments are infinite, etc. For finite-size samples drawn from such distribution, this behavior implies that thecentral momentestimators (like the mean and the variance) for diverging moments will never converge – as more data is accumulated, they continue to grow. These power-law probability distributions are also called Pareto-type distributions, distributions with Pareto tails, or distributions with regularly varying tails. A modification, which does not satisfy the general form above, with an exponential cutoff,[10]is In this distribution, the exponential decay terme−λx{\displaystyle \mathrm {e} ^{-\lambda x}}eventually overwhelms the power-law behavior at very large values ofx{\displaystyle x}. This distribution does not scale[further explanation needed]and is thus not asymptotically as a power law; however, it does approximately scale over a finite region before the cutoff. The pure form above is a subset of this family, withλ=0{\displaystyle \lambda =0}. This distribution is a common alternative to the asymptotic power-law distribution because it naturally captures finite-size effects. TheTweedie distributionsare a family of statistical models characterized byclosureunder additive and reproductive convolution as well as under scale transformation. Consequently, these models all express a power-law relationship between the variance and the mean. These models have a fundamental role as foci of mathematicalconvergencesimilar to the role that thenormal distributionhas as a focus in thecentral limit theorem. This convergence effect explains why the variance-to-mean power law manifests so widely in natural processes, as withTaylor's lawin ecology and with fluctuation scaling[53]in physics. It can also be shown that this variance-to-mean power law, when demonstrated by themethod of expanding bins, implies the presence of 1/fnoise and that 1/fnoise can arise as a consequence of this Tweedie convergence effect.[54] Although more sophisticated and robust methods have been proposed, the most frequently used graphical methods of identifying power-law probability distributions using random samples are Pareto quantile-quantile plots (or ParetoQ–Q plots),[citation needed]mean residual life plots[55][56]andlog–log plots. 
Another, more robust graphical method uses bundles of residual quantile functions.[57](Please keep in mind that power-law distributions are also called Pareto-type distributions.) It is assumed here that a random sample is obtained from a probability distribution, and that we want to know if the tail of the distribution follows a power law (in other words, we want to know if the distribution has a "Pareto tail"). Here, the random sample is called "the data". Pareto Q–Q plots compare thequantilesof the log-transformed data to the corresponding quantiles of an exponential distribution with mean 1 (or to the quantiles of a standard Pareto distribution) by plotting the former versus the latter. If the resultant scatterplot suggests that the plotted pointsasymptotically convergeto a straight line, then a power-law distribution should be suspected. A limitation of Pareto Q–Q plots is that they behave poorly when the tail indexα{\displaystyle \alpha }(also called Pareto index) is close to 0, because Pareto Q–Q plots are not designed to identify distributions with slowly varying tails.[57] On the other hand, in its version for identifying power-law probability distributions, the mean residual life plot consists of first log-transforming the data, and then plotting the average of those log-transformed data that are higher than thei-th order statistic versus thei-th order statistic, fori= 1, ...,n, where n is the size of the random sample. If the resultant scatterplot suggests that the plotted points tend to stabilize about a horizontal straight line, then a power-law distribution should be suspected. Since the mean residual life plot is very sensitive to outliers (it is not robust), it usually produces plots that are difficult to interpret; for this reason, such plots are usually called Hill horror plots.[58] Log–log plotsare an alternative way of graphically examining the tail of a distribution using a random sample. Taking the logarithm of a power law of the formf(x)=axk{\displaystyle f(x)=ax^{k}}results in:[59] which forms a straight line with slopek{\displaystyle k}on a log-log scale. Caution has to be exercised however as a log–log plot is necessary but insufficient evidence for a power law relationship, as many non power-law distributions will appear as straight lines on a log–log plot.[10][60]This method consists of plotting the logarithm of an estimator of the probability that a particular number of the distribution occurs versus the logarithm of that particular number. Usually, this estimator is the proportion of times that the number occurs in the data set. If the points in the plot tend to converge to a straight line for large numbers in the x axis, then the researcher concludes that the distribution has a power-law tail. Examples of the application of these types of plot have been published.[61]A disadvantage of these plots is that, in order for them to provide reliable results, they require huge amounts of data. In addition, they are appropriate only for discrete (or grouped) data. Another graphical method for the identification of power-law probability distributions using random samples has been proposed.[57]This methodology consists of plotting abundle for the log-transformed sample. 
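The Pareto Q–Q construction described earlier in this section is simple enough to sketch in a few lines: compare the sorted, log-transformed data with the quantiles of a unit-mean exponential distribution. The sample below is synthetic, with an assumed power-law tail, since the point is only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(2)
data = 1.0 * rng.random(5000) ** (-1.0 / 1.5)        # assumed power-law-tailed sample

log_x = np.sort(np.log(data / data.min()))           # empirical quantiles of the log data
p = (np.arange(1, log_x.size + 1) - 0.5) / log_x.size
exp_q = -np.log1p(-p)                                # quantiles of an Exponential(mean 1)

# If the tail is Pareto, log_x versus exp_q is roughly linear; the slope estimates
# the reciprocal of the tail (Pareto) index.
slope, intercept = np.polyfit(exp_q, log_x, 1)
print("approximate tail index:", 1.0 / slope)
```

A roughly linear scatter of the empirical log quantiles against the exponential quantiles is the pattern that suggests a Pareto tail; in a real analysis one would inspect the plot rather than report only a fitted slope.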
Originally proposed as a tool to explore the existence of moments and the moment generation function using random samples, the bundle methodology is based on residualquantile functions(RQFs), also called residual percentile functions,[62][63][64][65][66][67][68]which provide a full characterization of the tail behavior of many well-known probability distributions, including power-law distributions, distributions with other types of heavy tails, and even non-heavy-tailed distributions. Bundle plots do not have the disadvantages of Pareto Q–Q plots, mean residual life plots and log–log plots mentioned above (they are robust to outliers, allow visually identifying power laws with small values ofα{\displaystyle \alpha }, and do not demand the collection of much data).[citation needed]In addition, other types of tail behavior can be identified using bundle plots. In general, power-law distributions are plotted ondoubly logarithmic axes, which emphasizes the upper tail region. The most convenient way to do this is via the (complementary)cumulative distribution(ccdf) that is, thesurvival function,P(x)=Pr(X>x){\displaystyle P(x)=\mathrm {Pr} (X>x)}, The cdf is also a power-law function, but with a smaller scaling exponent. For data, an equivalent form of the cdf is the rank-frequency approach, in which we first sort then{\displaystyle n}observed values in ascending order, and plot them against the vector[1,n−1n,n−2n,…,1n]{\displaystyle \left[1,{\frac {n-1}{n}},{\frac {n-2}{n}},\dots ,{\frac {1}{n}}\right]}. Although it can be convenient to log-bin the data, or otherwise smooth the probability density (mass) function directly, these methods introduce an implicit bias in the representation of the data, and thus should be avoided.[10][69]The survival function, on the other hand, is more robust to (but not without) such biases in the data and preserves the linear signature on doubly logarithmic axes. Though a survival function representation is favored over that of the pdf while fitting a power law to the data with the linear least square method, it is not devoid of mathematical inaccuracy. Thus, while estimating exponents of a power law distribution, maximum likelihood estimator is recommended. There are many ways of estimating the value of the scaling exponent for a power-law tail, however not all of them yieldunbiased and consistent answers. Some of the most reliable techniques are often based on the method ofmaximum likelihood. Alternative methods are often based on making a linear regression on either the log–log probability, the log–log cumulative distribution function, or on log-binned data, but these approaches should be avoided as they can all lead to highly biased estimates of the scaling exponent.[10] For real-valued,independent and identically distributeddata, we fit a power-law distribution of the form to the datax≥xmin{\displaystyle x\geq x_{\min }}, where the coefficientα−1xmin{\displaystyle {\frac {\alpha -1}{x_{\min }}}}is included to ensure that the distribution isnormalized. Given a choice forxmin{\displaystyle x_{\min }}, the log likelihood function becomes: The maximum of this likelihood is found by differentiating with respect to parameterα{\displaystyle \alpha }, setting the result equal to zero. 
Upon rearrangement, this yields the estimator equation: where{xi}{\displaystyle \{x_{i}\}}are then{\displaystyle n}data pointsxi≥xmin{\displaystyle x_{i}\geq x_{\min }}.[2][70]This estimator exhibits a small finite sample-size bias of orderO(n−1){\displaystyle O(n^{-1})}, which is small whenn> 100. Further, the standard error of the estimate isσ=α^−1n+O(n−1){\displaystyle \sigma ={\frac {{\hat {\alpha }}-1}{\sqrt {n}}}+O(n^{-1})}. This estimator is equivalent to the popular[citation needed]Hill estimatorfromquantitative financeandextreme value theory.[citation needed] For a set ofninteger-valued data points{xi}{\displaystyle \{x_{i}\}}, again where eachxi≥xmin{\displaystyle x_{i}\geq x_{\min }}, the maximum likelihood exponent is the solution to the transcendental equation whereζ(α,xmin){\displaystyle \zeta (\alpha ,x_{\mathrm {min} })}is theincomplete zeta function. The uncertainty in this estimate follows the same formula as for the continuous equation. However, the two equations forα^{\displaystyle {\hat {\alpha }}}are not equivalent, and the continuous version should not be applied to discrete data, nor vice versa. Further, both of these estimators require the choice ofxmin{\displaystyle x_{\min }}. For functions with a non-trivialL(x){\displaystyle L(x)}function, choosingxmin{\displaystyle x_{\min }}too small produces a significant bias inα^{\displaystyle {\hat {\alpha }}}, while choosing it too large increases the uncertainty inα^{\displaystyle {\hat {\alpha }}}, and reduces thestatistical powerof our model. In general, the best choice ofxmin{\displaystyle x_{\min }}depends strongly on the particular form of the lower tail, represented byL(x){\displaystyle L(x)}above. More about these methods, and the conditions under which they can be used, can be found in .[10]Further, this comprehensive review article providesusable code(Matlab, Python, R and C++) for estimation and testing routines for power-law distributions. Another method for the estimation of the power-law exponent, which does not assumeindependent and identically distributed(iid) data, uses the minimization of theKolmogorov–Smirnov statistic,D{\displaystyle D}, between the cumulative distribution functions of the data and the power law: with wherePemp(x){\displaystyle P_{\mathrm {emp} }(x)}andPα(x){\displaystyle P_{\alpha }(x)}denote the cdfs of the data and the power law with exponentα{\displaystyle \alpha }, respectively. As this method does not assume iid data, it provides an alternative way to determine the power-law exponent for data sets in which the temporal correlation can not be ignored.[5] This criterion[71]can be applied for the estimation of power-law exponent in the case of scale-free distributions and provides a more convergent estimate than the maximum likelihood method. It has been applied to study probability distributions of fracture apertures. In some contexts the probability distribution is described, not by thecumulative distribution function, by thecumulative frequencyof a propertyX, defined as the number of elements per meter (or area unit, second etc.) for whichX>xapplies, wherexis a variable real number. As an example,[citation needed]the cumulative distribution of the fracture aperture,X, for a sample ofNelements is defined as 'the number of fractures per meter having aperture greater thanx. Use of cumulative frequency has some advantages, e.g. it allows one to put on the same diagram data gathered from sample lines of different lengths at different scales (e.g. from outcrop and from microscope). 
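Putting the last few points together, the sketch below builds the rank-frequency form of the empirical survival function and then applies the continuous maximum likelihood estimator quoted above, alpha_hat = 1 + n / sum(ln(x_i/x_min)), with standard error (alpha_hat − 1)/sqrt(n). The data are synthetic with an assumed true exponent of 2.5, and x_min is treated as known, which, as noted above, is the difficult choice in practice.

```python
import numpy as np

rng = np.random.default_rng(1)
true_alpha, x_min = 2.5, 1.0
data = x_min * rng.random(50000) ** (-1.0 / (true_alpha - 1.0))   # synthetic power-law sample

# Rank-frequency (survival function) representation: sort ascending and pair the
# sorted values with [1, (n-1)/n, ..., 1/n]; a power-law tail is a straight line
# with slope -(alpha - 1) on doubly logarithmic axes.
x_sorted = np.sort(data)
ccdf = 1.0 - np.arange(data.size) / data.size
loglog_slope, _ = np.polyfit(np.log(x_sorted), np.log(ccdf), 1)
print("log-log CCDF slope:", loglog_slope)            # ~ -(true_alpha - 1) = -1.5

# Continuous maximum likelihood estimate of the exponent, with its standard error.
def fit_alpha_mle(x, x_min):
    tail = np.asarray(x, dtype=float)
    tail = tail[tail >= x_min]
    n = tail.size
    alpha_hat = 1.0 + n / np.sum(np.log(tail / x_min))
    sigma = (alpha_hat - 1.0) / np.sqrt(n)
    return alpha_hat, sigma

print(fit_alpha_mle(data, x_min))                      # ~ (2.5, small standard error)
```

Consistent with the warning above, the least-squares slope on the log–log survival function is shown only for comparison; the maximum likelihood estimate is the one to report.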
Although power-law relations are attractive for many theoretical reasons, demonstrating that data does indeed follow a power-law relation requires more than simply fitting a particular model to the data.[34] This is important for understanding the mechanism that gives rise to the distribution: superficially similar distributions may arise for significantly different reasons, and different models yield different predictions, such as extrapolation. For example, log-normal distributions are often mistaken for power-law distributions:[72] a data set drawn from a lognormal distribution will be approximately linear for large values (corresponding to the upper tail of the lognormal being close to a power law)[clarification needed], but for small values the lognormal will drop off significantly (bowing down), corresponding to the lower tail of the lognormal being small (there are very few small values, rather than many small values in a power law).[citation needed]

For example, Gibrat's law about proportional growth processes produces distributions that are lognormal, although their log–log plots look linear over a limited range. An explanation of this is that although the logarithm of the lognormal density function is quadratic in log(x), yielding a "bowed" shape in a log–log plot, if the quadratic term is small relative to the linear term then the result can appear almost linear, and the lognormal behavior is only visible when the quadratic term dominates, which may require significantly more data. Therefore, a log–log plot that is slightly "bowed" downwards can reflect a log-normal distribution – not a power law.

In general, many alternative functional forms can appear to follow a power-law form to some extent.[73] Stumpf & Porter (2012) proposed plotting the empirical cumulative distribution function in the log-log domain and claimed that a candidate power law should cover at least two orders of magnitude.[74] Also, researchers usually have to face the problem of deciding whether or not a real-world probability distribution follows a power law. As a solution to this problem, Diaz[57] proposed a graphical methodology based on random samples that allows visually discerning between different types of tail behavior. This methodology uses bundles of residual quantile functions, also called percentile residual life functions, which characterize many different types of distribution tails, including both heavy and non-heavy tails. However, Stumpf & Porter (2012) claimed the need for both a statistical and a theoretical background in order to support a power law in the underlying mechanism driving the data-generating process.[74]

One method to validate a power-law relation tests many orthogonal predictions of a particular generative mechanism against data. Simply fitting a power-law relation to a particular kind of data is not considered a rational approach. As such, the validation of power-law claims remains a very active field of research in many areas of modern science.[10]
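The lognormal-versus-power-law confusion discussed above is easy to reproduce. In the sketch below, a lognormal sample with an assumed large sigma produces an upper-tail log–log survival function that is locally close to a straight line; such a local fit is exactly the kind of evidence the text warns is insufficient.

```python
import numpy as np

rng = np.random.default_rng(5)
# Assumed lognormal parameters chosen to make the effect visible; no real data implied.
data = np.sort(rng.lognormal(mean=0.0, sigma=2.5, size=10**6))
ccdf = 1.0 - np.arange(data.size) / data.size          # rank-frequency survival function

# Fit a straight line to the top 1% of the log-log CCDF.
tail = slice(int(0.99 * data.size), None)
slope, _ = np.polyfit(np.log(data[tail]), np.log(ccdf[tail]), 1)
print("apparent log-log slope in the upper tail:", slope)
# The fit looks respectable over this limited range even though the data are
# lognormal, which is why a locally straight log-log plot is not sufficient
# evidence for a power law.
```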
https://en.wikipedia.org/wiki/Power_law
In theoretical linguistics and computational linguistics, probabilistic context-free grammars (PCFGs) extend context-free grammars, similar to how hidden Markov models extend regular grammars. Each production is assigned a probability. The probability of a derivation (parse) is the product of the probabilities of the productions used in that derivation. These probabilities can be viewed as parameters of the model, and for large problems it is convenient to learn these parameters via machine learning. A probabilistic grammar's validity is constrained by the context of its training dataset.

PCFGs originated from grammar theory, and have applications in areas as diverse as natural language processing, the study of the structure of RNA molecules, and the design of programming languages. Designing efficient PCFGs has to weigh factors of scalability and generality. Issues such as grammar ambiguity must be resolved. The design of the grammar affects the accuracy of results. Grammar parsing algorithms have various time and memory requirements.

Derivation: The process of recursive generation of strings from a grammar. Parsing: Finding a valid derivation using an automaton. Parse tree: The alignment of the grammar to a sequence.

An example of a parser for PCFG grammars is the pushdown automaton. The algorithm parses grammar nonterminals from left to right in a stack-like manner. This brute-force approach is not very efficient. In RNA secondary structure prediction, variants of the Cocke–Younger–Kasami (CYK) algorithm provide more efficient alternatives to grammar parsing than pushdown automata.[1] Another example of a PCFG parser is the Stanford Statistical Parser, which has been trained using Treebank.[2]

Similar to a CFG, a probabilistic context-free grammar G can be defined by a quintuple G = (M, T, R, S, P), where M is the set of non-terminal symbols, T is the set of terminal symbols, R is the set of production rules, S is the start symbol, and P is the set of probabilities on the production rules. PCFG models extend context-free grammars the same way as hidden Markov models extend regular grammars.

The Inside-Outside algorithm is an analogue of the Forward-Backward algorithm. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. This is equivalent to the probability of the PCFG generating the sequence, and is intuitively a measure of how consistent the sequence is with the given grammar. The Inside-Outside algorithm is used in model parametrization to estimate prior frequencies observed from training sequences in the case of RNAs. Dynamic programming variants of the CYK algorithm find the Viterbi parse of an RNA sequence for a PCFG model. This parse is the most likely derivation of the sequence by the given PCFG.

Context-free grammars are represented as a set of rules inspired by attempts to model natural languages.[3][4][5] The rules are absolute and have a typical syntax representation known as Backus–Naur form. The production rules consist of terminal ({a,b}{\displaystyle \left\{a,b\right\}}) and non-terminal (S) symbols, and a blank ϵ{\displaystyle \epsilon } may also be used as an end point. In the production rules of CFG and PCFG the left side has only one nonterminal, whereas the right side can be any string of terminals or nonterminals. In PCFG nulls are excluded.[1] An example of a grammar: S → aS, S → bS, S → ϵ. This grammar can be shortened using the '|' ('or') character into S → aS | bS | ϵ. Terminals in a grammar are words, and through the grammar rules a non-terminal symbol is transformed into a string of either terminals and/or non-terminals. The above grammar is read as "beginning from a non-terminal S the emission can generate either a or b or ϵ{\displaystyle \epsilon }".
Its derivation is: Ambiguous grammarmay result in ambiguous parsing if applied onhomographssince the same word sequence can have more than one interpretation.Pun sentencessuch as the newspaper headline "Iraqi Head Seeks Arms" are an example of ambiguous parses. One strategy of dealing with ambiguous parses (originating with grammarians as early asPāṇini) is to add yet more rules, or prioritize them so that one rule takes precedence over others. This, however, has the drawback of proliferating the rules, often to the point where they become difficult to manage. Another difficulty is overgeneration, where unlicensed structures are also generated. Probabilistic grammars circumvent these problems by ranking various productions on frequency weights, resulting in a "most likely" (winner-take-all) interpretation. As usage patterns are altered indiachronicshifts, these probabilistic rules can be re-learned, thus updating the grammar. Assigning probability to production rules makes a PCFG. These probabilities are informed by observing distributions on a training set of similar composition to the language to be modeled. On most samples of broad language, probabilistic grammars where probabilities are estimated from data typically outperform hand-crafted grammars. CFGs when contrasted with PCFGs are not applicable to RNA structure prediction because while they incorporate sequence-structure relationship they lack the scoring metrics that reveal a sequence structural potential[6] Aweighted context-free grammar(WCFG) is a more general category ofcontext-free grammar, where each production has a numeric weight associated with it. The weight of a specificparse treein a WCFG is the product[7](or sum[8]) of all rule weights in the tree. Each rule weight is included as often as the rule is used in the tree. A special case of WCFGs are PCFGs, where the weights are (logarithmsof[9][10])probabilities. An extended version of theCYK algorithmcan be used to find the "lightest" (least-weight) derivation of a string given some WCFG. When the tree weight is the product of the rule weights, WCFGs and PCFGs can express the same set ofprobability distributions.[7] Since the 1990s, PCFG has been applied to modelRNA structures.[11][12][13][14][15] Energy minimization[16][17]and PCFG provide ways of predicting RNA secondary structure with comparable performance.[11][12][1]However structure prediction by PCFGs is scored probabilistically rather than by minimum free energy calculation. PCFG model parameters are directly derived from frequencies of different features observed in databases of RNA structures[6]rather than by experimental determination as is the case with energy minimization methods.[18][19] The types of various structure that can be modeled by a PCFG include long range interactions, pairwise structure and other nested structures. However, pseudoknots can not be modeled.[11][12][1]PCFGs extend CFG by assigning probabilities to each production rule. A maximum probability parse tree from the grammar implies a maximum probability structure. Since RNAs preserve their structures over their primary sequence, RNA structure prediction can be guided by combining evolutionary information from comparative sequence analysis with biophysical knowledge about a structure plausibility based on such probabilities. Also search results for structural homologs using PCFG rules are scored according to PCFG derivations probabilities. 
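Since a PCFG derivation's probability is simply the product of the probabilities of the rules it uses, the bookkeeping is tiny. The sketch below attaches assumed probabilities (0.4, 0.3, 0.3) to the example grammar S → aS | bS | ε from above and multiplies the rule probabilities along one derivation; the numbers are illustrative only.

```python
# A PCFG as a table of productions with probabilities (summing to 1 per
# left-hand side); a derivation's probability is the product of the
# probabilities of the rules applied. Probabilities here are assumed values.
pcfg = {
    "S": {("a", "S"): 0.4,
          ("b", "S"): 0.3,
          ():         0.3},     # S -> epsilon
}

# Each left-hand side's rule probabilities must sum to one.
assert all(abs(sum(rules.values()) - 1.0) < 1e-12 for rules in pcfg.values())

def derivation_probability(rules_used):
    """rules_used: list of (lhs, rhs) pairs in the order the rules are applied."""
    prob = 1.0
    for lhs, rhs in rules_used:
        prob *= pcfg[lhs][rhs]
    return prob

# Derivation of the string "ab":  S => aS => abS => ab
print(derivation_probability([("S", ("a", "S")),
                              ("S", ("b", "S")),
                              ("S", ())]))          # 0.4 * 0.3 * 0.3 = 0.036
```

Summing this quantity over every derivation of a string gives the probability of the string under the grammar, which is what the inside algorithm discussed below computes efficiently.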
Therefore, building grammar to model the behavior of base-pairs and single-stranded regions starts with exploring features of structuralmultiple sequence alignmentof related RNAs.[1] The above grammar generates a string in an outside-in fashion, that is the basepair on the furthest extremes of the terminal is derived first. So a string such asaabaabaa{\displaystyle aabaabaa}is derived by first generating the distala's on both sides before moving inwards: A PCFG model extendibility allows constraining structure prediction by incorporating expectations about different features of an RNA . Such expectation may reflect for example the propensity for assuming a certain structure by an RNA.[6]However incorporation of too much information may increase PCFG space and memory complexity and it is desirable that a PCFG-based model be as simple as possible.[6][20] Every possible stringxa grammar generates is assigned a probability weightP(x|θ){\displaystyle P(x|\theta )}given the PCFG modelθ{\displaystyle \theta }. It follows that the sum of all probabilities to all possible grammar productions is∑xP(x|θ)=1{\displaystyle \sum _{\text{x}}P(x|\theta )=1}. The scores for each paired and unpaired residue explain likelihood for secondary structure formations. Production rules also allow scoring loop lengths as well as the order of base pair stacking hence it is possible to explore the range of all possible generations including suboptimal structures from the grammar and accept or reject structures based on score thresholds.[1][6] RNA secondary structure implementations based on PCFG approaches can be utilized in : Different implementation of these approaches exist. For example, Pfold is used in secondary structure prediction from a group of related RNA sequences,[20]covariance models are used in searching databases for homologous sequences and RNA annotation and classification,[11][24]RNApromo, CMFinder and TEISER are used in finding stable structural motifs in RNAs.[25][26][27] PCFG design impacts the secondary structure prediction accuracy. Any useful structure prediction probabilistic model based on PCFG has to maintain simplicity without much compromise to prediction accuracy. Too complex a model of excellent performance on a single sequence may not scale.[1]A grammar based model should be able to: The resulting of multipleparse treesper grammar denotes grammar ambiguity. This may be useful in revealing all possible base-pair structures for a grammar. However an optimal structure is the one where there is one and only one correspondence between the parse tree and the secondary structure. Two types of ambiguities can be distinguished. Parse tree ambiguity and structural ambiguity. Structural ambiguity does not affect thermodynamic approaches as the optimal structure selection is always on the basis of lowest free energy scores.[6]Parse tree ambiguity concerns the existence of multiple parse trees per sequence. Such an ambiguity can reveal all possible base-paired structures for the sequence by generating all possible parse trees then finding the optimal one.[28][29][30]In the case of structural ambiguity multiple parse trees describe the same secondary structure. This obscures the CYK algorithm decision on finding an optimal structure as the correspondence between the parse tree and the structure is not unique.[31]Grammar ambiguity can be checked for by the conditional-inside algorithm.[1][6] A probabilistic context free grammar consists of terminal and nonterminal variables. 
Each feature to be modeled has a production rule that is assigned a probability estimated from a training set of RNA structures. Production rules are recursively applied until only terminal residues are left. A starting non-terminal S{\displaystyle \mathbf {\mathit {S}} } produces loops. The rest of the grammar proceeds with a parameter L{\displaystyle \mathbf {\mathit {L}} } that decides whether a loop is the start of a stem or a single-stranded region, and a parameter F{\displaystyle \mathbf {\mathit {F}} } that produces paired bases. The formalism of this simple PCFG looks like: The application of PCFGs in predicting structures is a multi-step process. In addition, the PCFG itself can be incorporated into probabilistic models that consider RNA evolutionary history or search homologous sequences in databases. In an evolutionary history context, inclusion of prior distributions of RNA structures of a structural alignment in the production rules of the PCFG facilitates good prediction accuracy.[21] A summary of general steps for utilizing PCFGs in various scenarios: Several algorithms dealing with aspects of PCFG-based probabilistic models in RNA structure prediction exist, for instance the inside-outside algorithm and the CYK algorithm. The inside-outside algorithm is a recursive dynamic programming scoring algorithm that can follow expectation-maximization paradigms. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. The inside part scores the subtrees from a parse tree and therefore subsequence probabilities given a PCFG. The outside part scores the probability of the complete parse tree for a full sequence.[32][33] CYK modifies the inside-outside scoring. Note that the term 'CYK algorithm' describes the CYK variant of the inside algorithm that finds an optimal parse tree for a sequence using a PCFG. It extends the actual CYK algorithm used in non-probabilistic CFGs.[1] The inside algorithm calculates α(i,j,v){\displaystyle \alpha (i,j,v)} probabilities for all i,j,v{\displaystyle i,j,v} of a parse subtree rooted at Wv{\displaystyle W_{v}} for subsequence xi,...,xj{\displaystyle x_{i},...,x_{j}}. The outside algorithm calculates β(i,j,v){\displaystyle \beta (i,j,v)} probabilities of a complete parse tree for sequence x from the root, excluding the calculation of xi,...,xj{\displaystyle x_{i},...,x_{j}}. The variables α and β refine the estimation of the probability parameters of a PCFG. It is possible to re-estimate the PCFG parameters by finding the expected number of times a state is used in a derivation through summing all the products of α and β divided by the probability for a sequence x given the model P(x|θ){\displaystyle P(x|\theta )}. It is also possible to find the expected number of times a production rule is used by an expectation-maximization that utilizes the values of α and β.[32][33] The CYK algorithm calculates γ(i,j,v){\displaystyle \gamma (i,j,v)} to find the most probable parse tree π^{\displaystyle {\hat {\pi }}} and yields log⁡P(x,π^|θ){\displaystyle \log P(x,{\hat {\pi }}|\theta )} (a compact sketch of both recursions on a toy grammar is given below).[1] Memory and time complexity for general PCFG algorithms in RNA structure prediction are O(L2M){\displaystyle O(L^{2}M)} and O(L3M3){\displaystyle O(L^{3}M^{3})} respectively. Restricting a PCFG may alter this requirement, as is the case with database search methods. Covariance models (CMs) are a special type of PCFGs with applications in database searches for homologs, annotation and RNA classification.
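The inside and CYK recursions just described differ only in whether the scores of alternative subtrees are summed or maximized. The sketch below shows both for a toy grammar in Chomsky normal form (an assumption made to keep the chart recursion short; the RNA grammars above are not in this form), filling a chart indexed by (i, j, nonterminal).

```python
import operator
from collections import defaultdict

# Toy CNF grammar, assumed for illustration; probabilities per left-hand side sum to 1.
unary  = {("N", "fish"): 0.6, ("N", "people"): 0.4, ("V", "fish"): 1.0}
binary = {("S", "N", "V"): 0.7, ("S", "N", "N"): 0.3}

def chart_parse(words, accumulate):
    """accumulate(old, new) -> combined score; use operator.add for the inside
    algorithm (total probability of all parses) or max for the CYK/Viterbi
    variant (probability of the single most likely parse)."""
    n = len(words)
    chart = defaultdict(float)                    # (i, j, A) -> accumulated score
    for i, w in enumerate(words):
        for (A, term), p in unary.items():
            if term == w:
                chart[(i, i + 1, A)] = accumulate(chart[(i, i + 1, A)], p)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (A, B, C), p in binary.items():
                    score = p * chart[(i, k, B)] * chart[(k, j, C)]
                    if score > 0.0:
                        chart[(i, j, A)] = accumulate(chart[(i, j, A)], score)
    return chart[(0, n, "S")]

sentence = ["people", "fish"]
print(chart_parse(sentence, operator.add))  # inside: 0.7*0.4*1.0 + 0.3*0.4*0.6 = 0.352
print(chart_parse(sentence, max))           # CYK/Viterbi: best single parse, 0.280
```

Working in log space, with addition replaced by log-sum-exp or max, avoids underflow for long sequences, and extending the chart to remember which rule produced each entry recovers the parse tree itself.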
Through CMs it is possible to build PCFG-based RNA profiles where related RNAs can be represented by a consensus secondary structure.[11][12]The RNA analysis package Infernal uses such profiles in inference of RNA alignments.[34]The Rfam database also uses CMs in classifying RNAs into families based on their structure and sequence information.[24] CMs are designed from a consensus RNA structure. A CM allowsindelsof unlimited length in the alignment. Terminals constitute states in the CM and the transition probabilities between the states is 1 if no indels are considered.[1]Grammars in a CM are as follows: The model has 6 possible states and each state grammar includes different types of secondary structure probabilities of the non-terminals. The states are connected by transitions. Ideally current node states connect to all insert states and subsequent node states connect to non-insert states. In order to allow insertion of more than one base insert states connect to themselves.[1] In order to score a CM model the inside-outside algorithms are used. CMs use a slightly different implementation of CYK. Log-odds emission scores for the optimum parse tree -log⁡e^{\displaystyle \log {\hat {e}}}- are calculated out of the emitting statesP,L,R{\displaystyle P,~L,~R}. Since these scores are a function of sequence length a more discriminative measure to recover an optimum parse tree probability score-log⁡P(x,π^|θ){\displaystyle \log {\text{P}}(x,{\hat {\pi }}|\theta )}- is reached by limiting the maximum length of the sequence to be aligned and calculating the log-odds relative to a null. The computation time of this step is linear to the database size and the algorithm has a memory complexity ofO(MaD+MbD2){\displaystyle O(M_{a}D+M_{b}D^{2})}.[1] The KH-99 algorithm by Knudsen and Hein lays the basis of the Pfold approach to predicting RNA secondary structure.[20]In this approach the parameterization requires evolutionary history information derived from an alignment tree in addition to probabilities of columns and mutations. The grammar probabilities are observed from a training dataset. In a structural alignment the probabilities of the unpaired bases columns and the paired bases columns are independent of other columns. By counting bases in single base positions and paired positions one obtains the frequencies of bases in loops and stems. For basepairXandYan occurrence ofXY{\displaystyle XY}is also counted as an occurrence ofYX{\displaystyle YX}. Identical basepairs such asXX{\displaystyle XX}are counted twice. By pairing sequences in all possible ways overall mutation rates are estimated. In order to recover plausible mutations a sequence identity threshold should be used so that the comparison is between similar sequences. This approach uses 85% identity threshold between pairing sequences. First single base positions differences -except for gapped columns- between sequence pairs are counted such that if the same position in two sequences had different basesX, Ythe count of the difference is incremented for each sequence. 
For unpaired bases a 4 X 4 mutation rate matrix is used that satisfies that the mutation flow from X to Y is reversible:[35] For basepairs a 16 X 16 rate distribution matrix is similarly generated.[36][37]The PCFG is used to predict the prior probability distribution of the structure whereas posterior probabilities are estimated by the inside-outside algorithm and the most likely structure is found by the CYK algorithm.[20] After calculating the column prior probabilities the alignment probability is estimated by summing over all possible secondary structures. Any columnCin a secondary structureσ{\displaystyle \sigma }for a sequenceDof lengthlsuch thatD=(C1,C2,...Cl){\displaystyle D=(C_{1},~C_{2},...C_{l})}can be scored with respect to the alignment treeTand the mutational modelM. The prior distribution given by the PCFG isP(σ|M){\displaystyle P(\sigma |M)}. The phylogenetic tree,Tcan be calculated from the model by maximum likelihood estimation. Note that gaps are treated as unknown bases and the summation can be done throughdynamic programming.[38] Each structure in the grammar is assigned production probabilities devised from the structures of the training dataset. These prior probabilities give weight to predictions accuracy.[21][32][33]The number of times each rule is used depends on the observations from the training dataset for that particular grammar feature. These probabilities are written in parentheses in the grammar formalism and each rule will have a total of 100%.[20]For instance: Given the prior alignment frequencies of the data the most likely structure from the ensemble predicted by the grammar can then be computed by maximizingP(σ|D,T,M){\displaystyle P(\sigma |D,T,M)}through the CYK algorithm. The structure with the highest predicted number of correct predictions is reported as the consensus structure.[20] PCFG based approaches are desired to be scalable and general enough. Compromising speed for accuracy needs to as minimal as possible. Pfold addresses the limitations of the KH-99 algorithm with respect to scalability, gaps, speed and accuracy.[20] Whereas PCFGs have proved powerful tools for predicting RNA secondary structure, usage in the field of protein sequence analysis has been limited. Indeed, the size of theamino acidalphabet and the variety of interactions seen in proteins make grammar inference much more challenging.[39]As a consequence, most applications offormal language theoryto protein analysis have been mainly restricted to the production of grammars of lower expressive power to model simple functional patterns based on local interactions.[40][41]Since protein structures commonly display higher-order dependencies including nested and crossing relationships, they clearly exceed the capabilities of any CFG.[39]Still, development of PCFGs allows expressing some of those dependencies and providing the ability to model a wider range of protein patterns.
https://en.wikipedia.org/wiki/Probabilistic_context-free_grammar
A stochastic grammar (statistical grammar) is a grammar framework with a probabilistic notion of grammaticality: the grammar is realized as a language model, and allowed sentences are stored in a database together with a frequency indicating how common each sentence is.[2] Statistical natural language processing uses stochastic, probabilistic and statistical methods, especially to resolve difficulties that arise because longer sentences are highly ambiguous when processed with realistic grammars, yielding thousands or millions of possible analyses. Methods for disambiguation often involve the use of corpora and Markov models. "A probabilistic model consists of a non-probabilistic model plus some numerical quantities; it is not true that probabilistic models are inherently simpler or less structural than non-probabilistic models."[3] A probabilistic method for rhyme detection was implemented by Hirjee & Brown[4] in their 2013 study to find internal and imperfect rhyme pairs in rap lyrics. The concept is adapted from a sequence alignment technique using BLOSUM (BLOcks SUbstitution Matrix). They were able to detect rhymes undetectable by non-probabilistic models.
https://en.wikipedia.org/wiki/Stochastic_grammar
Inelectrical engineering,statistical computingandbioinformatics, theBaum–Welch algorithmis a special case of theexpectation–maximization algorithmused to find the unknown parameters of ahidden Markov model(HMM). It makes use of theforward-backward algorithmto compute the statistics for the expectation step. The Baum–Welch algorithm, the primary method for inference in hidden Markov models, is numerically unstable due to its recursive calculation of joint probabilities. As the number of variables grows, these joint probabilities become increasingly small, leading to the forward recursions rapidly approaching values below machine precision.[1] The Baum–Welch algorithm was named after its inventorsLeonard E. BaumandLloyd R. Welch. The algorithm and the Hidden Markov models were first described in a series of articles by Baum and his peers at theIDA Center for Communications Research, Princetonin the late 1960s and early 1970s.[2]One of the first major applications of HMMs was to the field ofspeech processing.[3]In the 1980s, HMMs were emerging as a useful tool in the analysis of biological systems and information, and in particulargenetic information.[4]They have since become an important tool in the probabilistic modeling of genomic sequences.[5] Ahidden Markov modeldescribes the joint probability of a collection of "hidden" and observed discrete random variables. It relies on the assumption that thei-th hidden variable given the (i− 1)-th hidden variable is independent of previous hidden variables, and the current observation variables depend only on the current hidden state. The Baum–Welch algorithm uses the well known EM algorithm to find themaximum likelihoodestimate of the parameters of a hidden Markov model given a set of observed feature vectors. LetXt{\displaystyle X_{t}}be a discrete hidden random variable withN{\displaystyle N}possible values (i.e. We assume there areN{\displaystyle N}states in total). We assume theP(Xt∣Xt−1){\displaystyle P(X_{t}\mid X_{t-1})}is independent of timet{\displaystyle t}, which leads to the definition of the time-independent stochastic transition matrix The initial state distribution (i.e. whent=1{\displaystyle t=1}) is given by The observation variablesYt{\displaystyle Y_{t}}can take one ofK{\displaystyle K}possible values. We also assume the observation given the "hidden" state is time independent. The probability of a certain observationyi{\displaystyle y_{i}}at timet{\displaystyle t}for stateXt=j{\displaystyle X_{t}=j}is given by Taking into account all the possible values ofYt{\displaystyle Y_{t}}andXt{\displaystyle X_{t}}, we obtain theN×K{\displaystyle N\times K}matrixB={bj(yi)}{\displaystyle B=\{b_{j}(y_{i})\}}wherebj{\displaystyle b_{j}}belongs to all the possible states andyi{\displaystyle y_{i}}belongs to all the observations. An observation sequence is given byY=(Y1=y1,Y2=y2,…,YT=yT){\displaystyle Y=(Y_{1}=y_{1},Y_{2}=y_{2},\ldots ,Y_{T}=y_{T})}. Thus we can describe a hidden Markov chain byθ=(A,B,π){\displaystyle \theta =(A,B,\pi )}. The Baum–Welch algorithm finds a local maximum forθ∗=argmaxθ⁡P(Y∣θ){\displaystyle \theta ^{*}=\operatorname {arg\,max} _{\theta }P(Y\mid \theta )}(i.e. the HMM parametersθ{\displaystyle \theta }that maximize the probability of the observation).[6] Setθ=(A,B,π){\displaystyle \theta =(A,B,\pi )}with random initial conditions. They can also be set using prior information about the parameters if it is available; this can speed up the algorithm and also steer it toward the desired local maximum. 
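As a concrete illustration of the parameterization θ = (A, B, π), the sketch below sets up a two-state, two-symbol HMM with assumed (not estimated) values and computes P(Y | θ) for a short observation sequence by brute-force summation over hidden state paths; the forward procedure described next computes the same quantity efficiently.

```python
import itertools
import numpy as np

# Assumed example values for a model with N = 2 hidden states and K = 2 symbols.
A  = np.array([[0.7, 0.3],     # A[i, j] = P(X_{t+1} = j | X_t = i)
               [0.4, 0.6]])
B  = np.array([[0.9, 0.1],     # B[j, k] = P(Y_t = k | X_t = j)
               [0.2, 0.8]])
pi = np.array([0.6, 0.4])      # pi[i] = P(X_1 = i)

def likelihood_brute_force(y):
    """P(Y | theta) by summing over all hidden paths (exponential cost; tiny toys only)."""
    N, T = A.shape[0], len(y)
    total = 0.0
    for path in itertools.product(range(N), repeat=T):
        p = pi[path[0]] * B[path[0], y[0]]
        for t in range(1, T):
            p *= A[path[t - 1], path[t]] * B[path[t], y[t]]
        total += p
    return total

print(likelihood_brute_force([0, 1, 0]))   # same value the forward algorithm yields efficiently
```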
Letαi(t)=P(Y1=y1,…,Yt=yt,Xt=i∣θ){\displaystyle \alpha _{i}(t)=P(Y_{1}=y_{1},\ldots ,Y_{t}=y_{t},X_{t}=i\mid \theta )}, the probability of seeing the observationsy1,y2,…,yt{\displaystyle y_{1},y_{2},\ldots ,y_{t}}and being in statei{\displaystyle i}at timet{\displaystyle t}. This is found recursively: Since this series converges exponentially to zero, the algorithm will numerically underflow for longer sequences.[7]However, this can be avoided in a slightly modified algorithm by scalingα{\displaystyle \alpha }in the forward andβ{\displaystyle \beta }in the backward procedure below. Letβi(t)=P(Yt+1=yt+1,…,YT=yT∣Xt=i,θ){\displaystyle \beta _{i}(t)=P(Y_{t+1}=y_{t+1},\ldots ,Y_{T}=y_{T}\mid X_{t}=i,\theta )}that is the probability of the ending partial sequenceyt+1,…,yT{\displaystyle y_{t+1},\ldots ,y_{T}}given starting statei{\displaystyle i}at timet{\displaystyle t}. We calculateβi(t){\displaystyle \beta _{i}(t)}as, We can now calculate the temporary variables, according to Bayes' theorem: which is the probability of being in statei{\displaystyle i}at timet{\displaystyle t}given the observed sequenceY{\displaystyle Y}and the parametersθ{\displaystyle \theta } which is the probability of being in statei{\displaystyle i}andj{\displaystyle j}at timest{\displaystyle t}andt+1{\displaystyle t+1}respectively given the observed sequenceY{\displaystyle Y}and parametersθ{\displaystyle \theta }. The denominators ofγi(t){\displaystyle \gamma _{i}(t)}andξij(t){\displaystyle \xi _{ij}(t)}are the same ; they represent the probability of making the observationY{\displaystyle Y}given the parametersθ{\displaystyle \theta }. The parameters of the hidden Markov modelθ{\displaystyle \theta }can now be updated: which is the expected frequency spent in statei{\displaystyle i}at time1{\displaystyle 1}. which is the expected number of transitions from stateito statejcompared to the expected total number of transitions away from statei. To clarify, the number of transitions away from stateidoes not mean transitions to a different statej, but to any state including itself. This is equivalent to the number of times stateiis observed in the sequence fromt= 1 tot=T− 1. where is an indicator function, andbi∗(vk){\displaystyle b_{i}^{*}(v_{k})}is the expected number of times the output observations have been equal tovk{\displaystyle v_{k}}while in statei{\displaystyle i}over the expected total number of times in statei{\displaystyle i}. These steps are now repeated iteratively until a desired level of convergence. Note:It is possible to over-fit a particular data set. That is,P(Y∣θfinal)>P(Y∣θtrue){\displaystyle P(Y\mid \theta _{\text{final}})>P(Y\mid \theta _{\text{true}})}. The algorithm also doesnotguarantee a global maximum. The algorithm described thus far assumes a single observed sequenceY=y1,…,yT{\displaystyle Y=y_{1},\ldots ,y_{T}}. However, in many situations, there are several sequences observed:Y1,…,YR{\displaystyle Y_{1},\ldots ,Y_{R}}. In this case, the information from all of the observed sequences must be used in the update of the parametersA{\displaystyle A},π{\displaystyle \pi }, andb{\displaystyle b}. Assuming that you have computedγir(t){\displaystyle \gamma _{ir}(t)}andξijr(t){\displaystyle \xi _{ijr}(t)}for each sequencey1,r,…,yNr,r{\displaystyle y_{1,r},\ldots ,y_{N_{r},r}}, the parameters can now be updated: where is an indicator function Suppose we have a chicken from which we collect eggs at noon every day. 
Now whether or not the chicken has laid eggs for collection depends on some unknown factors that are hidden. We can however (for simplicity) assume that the chicken is always in one of two states that influence whether the chicken lays eggs, and that this state only depends on the state on the previous day. Now we don't know the state at the initial starting point, we don't know the transition probabilities between the two states and we don't know the probability that the chicken lays an egg given a particular state.[8][9]To start we first guess the transition and emission matrices. We then take a set of observations (E = eggs, N = no eggs): N, N, N, N, N, E, E, N, N, N This gives us a set of observed transitions between days: NN, NN, NN, NN, NE, EE, EN, NN, NN The next step is to estimate a new transition matrix. For example, the probability of the sequence NN and the state being⁠S1{\displaystyle S_{1}}⁠then⁠S2{\displaystyle S_{2}}⁠is given by the following,P(S1)⋅P(N|S1)⋅P(S1→S2)⋅P(N|S2).{\displaystyle P(S_{1})\cdot P(N|S_{1})\cdot P(S_{1}\rightarrow S_{2})\cdot P(N|S_{2}).} Thus the new estimate for the⁠S1{\displaystyle S_{1}}⁠to⁠S2{\displaystyle S_{2}}⁠transition is now0.222.4234=0.0908{\displaystyle {\frac {0.22}{2.4234}}=0.0908}(referred to as "Pseudo probabilities" in the following tables). We then calculate the⁠S2{\displaystyle S_{2}}⁠to⁠S1{\displaystyle S_{1}}⁠,⁠S2{\displaystyle S_{2}}⁠to⁠S2{\displaystyle S_{2}}⁠and⁠S1{\displaystyle S_{1}}⁠to⁠S1{\displaystyle S_{1}}⁠transition probabilities and normalize so they add to 1. This gives us the updated transition matrix: Next, we want to estimate a new emission matrix, The new estimate for the E coming from⁠S1{\displaystyle S_{1}}⁠emission is now0.23940.2730=0.8769{\displaystyle {\frac {0.2394}{0.2730}}=0.8769}. This allows us to calculate the emission matrix as described above in the algorithm, by adding up the probabilities for the respective observed sequences. We then repeat for if N came from⁠S1{\displaystyle S_{1}}⁠and for if N and E came from⁠S2{\displaystyle S_{2}}⁠and normalize. To estimate the initial probabilities we assume all sequences start with the hidden state⁠S1{\displaystyle S_{1}}⁠and calculate the highest probability and then repeat for⁠S2{\displaystyle S_{2}}⁠. Again we then normalize to give an updated initial vector. Finally we repeat these steps until the resulting probabilities converge satisfactorily. Hidden Markov Models were first applied to speech recognition byJames K. Bakerin 1975.[10]Continuous speech recognition occurs by the following steps, modeled by a HMM. Feature analysis is first undertaken on temporal and/or spectral features of the speech signal. This produces an observation vector. The feature is then compared to all sequences of the speech recognition units. These units could bephonemes, syllables, or whole-word units. A lexicon decoding system is applied to constrain the paths investigated, so only words in the system's lexicon (word dictionary) are investigated. Similar to the lexicon decoding, the system path is further constrained by the rules of grammar and syntax. Finally, semantic analysis is applied and the system outputs the recognized utterance. 
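A compact implementation of the procedure described above, using the scaling trick mentioned for avoiding underflow, might look like the following. It is a sketch for a single observation sequence, not a reference implementation; the variable names follow the text (alpha, beta, gamma, xi).

```python
import numpy as np

def baum_welch(y, A, B, pi, n_iter=100):
    """One-sequence Baum-Welch sketch with scaled forward/backward passes."""
    y = np.asarray(y)
    N, T = A.shape[0], y.shape[0]
    A, B, pi = A.copy(), B.copy(), pi.copy()
    for _ in range(n_iter):
        # Scaled forward pass: alpha[t, i] proportional to P(y_1..y_t, X_t = i).
        alpha = np.zeros((T, N))
        scale = np.zeros(T)
        alpha[0] = pi * B[:, y[0]]
        scale[0] = alpha[0].sum()
        alpha[0] /= scale[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, y[t]]
            scale[t] = alpha[t].sum()
            alpha[t] /= scale[t]

        # Scaled backward pass: beta[t, i] proportional to P(y_{t+1}..y_T | X_t = i).
        beta = np.zeros((T, N))
        beta[T - 1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, y[t + 1]] * beta[t + 1])) / scale[t + 1]

        # gamma[t, i] = P(X_t = i | Y), xi[t, i, j] = P(X_t = i, X_{t+1} = j | Y).
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)
        xi = np.zeros((T - 1, N, N))
        for t in range(T - 1):
            xi[t] = (alpha[t][:, None] * A * B[:, y[t + 1]] * beta[t + 1]) / scale[t + 1]
            xi[t] /= xi[t].sum()

        # Re-estimation step, matching the update equations in the text.
        pi = gamma[0]
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        for k in range(B.shape[1]):
            B[:, k] = gamma[y == k].sum(axis=0) / gamma.sum(axis=0)
    return A, B, pi
```

Applied to the egg observations of the worked example (N mapped to 0, E to 1), with assumed starting matrices since the original initial guesses are not reproduced here, the sketch re-estimates the transition, emission and initial vectors over repeated iterations:

```python
import numpy as np

obs = np.array([0, 0, 0, 0, 0, 1, 1, 0, 0, 0])   # N,N,N,N,N,E,E,N,N,N

A0  = np.array([[0.5, 0.5],                       # assumed initial guesses
                [0.3, 0.7]])
B0  = np.array([[0.3, 0.7],
                [0.8, 0.2]])
pi0 = np.array([0.2, 0.8])

A_new, B_new, pi_new = baum_welch(obs, A0, B0, pi0, n_iter=50)
print(A_new)    # updated transition matrix
print(B_new)    # updated emission matrix
print(pi_new)   # updated initial state vector
```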
A limitation of many HMM applications to speech recognition is that the current state only depends on the state at the previous time-step, which is unrealistic for speech as dependencies are often several time-steps in duration.[11]The Baum–Welch algorithm also has extensive applications in solving HMMs used in the field of speech synthesis.[12] The Baum–Welch algorithm is often used to estimate the parameters of HMMs in deciphering hidden or noisy information and consequently is often used incryptanalysis. In data security an observer would like to extract information from a data stream without knowing all the parameters of the transmission. This can involve reverse engineering achannel encoder.[13]HMMs and as a consequence the Baum–Welch algorithm have also been used to identify spoken phrases in encrypted VoIP calls.[14]In addition HMM cryptanalysis is an important tool for automated investigations of cache-timing data. It allows for the automatic discovery of critical algorithm state, for example key values.[15] TheGLIMMER(Gene Locator and Interpolated Markov ModelER) software was an earlygene-findingprogram used for the identification of coding regions inprokaryoticDNA.[16][17]GLIMMER uses Interpolated Markov Models (IMMs) to identify thecoding regionsand distinguish them from thenoncoding DNA. The latest release (GLIMMER3) has been shown to have increasedspecificityand accuracy compared with its predecessors with regard to predicting translation initiation sites, demonstrating an average 99% accuracy in locating 3' locations compared to confirmed genes in prokaryotes.[18] TheGENSCANwebserver is a gene locator capable of analyzingeukaryoticsequences up to one millionbase-pairs(1 Mbp) long.[19]GENSCAN utilizes a general inhomogeneous, three periodic, fifth order Markov model of DNA coding regions. Additionally, this model accounts for differences in gene density and structure (such as intron lengths) that occur in differentisochores. While most integrated gene-finding software (at the time of GENSCANs release) assumed input sequences contained exactly one gene, GENSCAN solves a general case where partial, complete, or multiple genes (or even no gene at all) is present.[20]GENSCAN was shown to exactly predict exon location with 90% accuracy with 80% specificity compared to an annotated database.[21] Copy-number variations(CNVs) are an abundant form of genome structure variation in humans. A discrete-valued bivariate HMM (dbHMM) was used assigning chromosomal regions to seven distinct states: unaffected regions, deletions, duplications and four transition states. Solving this model using Baum-Welch demonstrated the ability to predict the location of CNV breakpoint to approximately 300 bp frommicro-array experiments.[22]This magnitude of resolution enables more precise correlations between different CNVs andacross populationsthan previously possible, allowing the study of CNV population frequencies. It also demonstrated adirect inheritance pattern for a particular CNV.
https://en.wikipedia.org/wiki/Baum%E2%80%93Welch_algorithm
Accuracy and precisionare two measures ofobservational error.Accuracyis how close a given set ofmeasurements(observationsor readings) are to theirtrue value.Precisionis how close the measurements are to each other. TheInternational Organization for Standardization(ISO) defines a related measure:[1]trueness, "the closeness of agreement between thearithmetic meanof a large number of test results and the true or accepted reference value." Whileprecisionis a description ofrandom errors(a measure ofstatistical variability),accuracyhas two different definitions: In simpler terms, given astatistical sampleor set of data points from repeated measurements of the same quantity, the sample or set can be said to beaccurateif theiraverageis close to the true value of the quantity being measured, while the set can be said to bepreciseif theirstandard deviationis relatively small. In the fields ofscienceandengineering, the accuracy of ameasurementsystem is the degree of closeness of measurements of aquantityto that quantity's truevalue.[3]The precision of a measurement system, related toreproducibilityandrepeatability, is the degree to which repeated measurements under unchanged conditions show the sameresults.[3][4]Although the two words precision and accuracy can besynonymousincolloquialuse, they are deliberately contrasted in the context of thescientific method. The field ofstatistics, where the interpretation of measurements plays a central role, prefers to use the termsbiasandvariabilityinstead of accuracy and precision: bias is the amount of inaccuracy and variability is the amount of imprecision. A measurement system can be accurate but not precise, precise but not accurate, neither, or both. For example, if an experiment contains asystematic error, then increasing thesample sizegenerally increases precision but does not improve accuracy. The result would be a consistent yet inaccurate string of results from the flawed experiment. Eliminating the systematic error improves accuracy but does not change precision. A measurement system is consideredvalidif it is bothaccurateandprecise. Related terms includebias(non-randomor directed effects caused by a factor or factors unrelated to theindependent variable) anderror(random variability). The terminology is also applied to indirect measurements—that is, values obtained by a computational procedure from observed data. In addition to accuracy and precision, measurements may also have ameasurement resolution, which is the smallest change in the underlying physical quantity that produces a response in the measurement. Innumerical analysis, accuracy is also the nearness of a calculation to the true value; while precision is the resolution of the representation, typically defined by the number of decimal or binary digits. In military terms, accuracy refers primarily to the accuracy of fire (justesse de tir), the precision of fire expressed by the closeness of a grouping of shots at and around the centre of the target.[5] A shift in the meaning of these terms appeared with the publication of the ISO 5725 series of standards in 1994, which is also reflected in the 2008 issue of the BIPMInternational Vocabulary of Metrology(VIM), items 2.13 and 2.14.[3] According to ISO 5725-1,[1]the general term "accuracy" is used to describe the closeness of a measurement to the true value. When the term is applied to sets of measurements of the samemeasurand, it involves a component of random error and a component of systematic error. 
In this case trueness is the closeness of the mean of a set of measurement results to the actual (true) value, that is the systematic error, and precision is the closeness of agreement among a set of results, that is the random error. ISO 5725-1 and VIM also avoid the use of the term "bias", previously specified in BS 5497-1,[6] because it has different connotations outside the fields of science and engineering, as in medicine and law. In industrial instrumentation, accuracy is the measurement tolerance, or transmission, of the instrument and defines the limits of the errors made when the instrument is used in normal operating conditions.[7] Ideally a measurement device is both accurate and precise, with measurements all close to and tightly clustered around the true value. The accuracy and precision of a measurement process is usually established by repeatedly measuring some traceable reference standard. Such standards are defined in the International System of Units (abbreviated SI from French: Système international d'unités) and maintained by national standards organizations such as the National Institute of Standards and Technology in the United States. This also applies when measurements are repeated and averaged. In that case, the term standard error is properly applied: the precision of the average is equal to the known standard deviation of the process divided by the square root of the number of measurements averaged. Further, the central limit theorem shows that the probability distribution of the averaged measurements will be closer to a normal distribution than that of individual measurements. With regard to accuracy we can distinguish: A common convention in science and engineering is to express accuracy and/or precision implicitly by means of significant figures. Where not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. For instance, a recording of 843.6 m, or 843.0 m, or 800.0 m would imply a margin of 0.05 m (the last significant place is the tenths place), while a recording of 843 m would imply a margin of error of 0.5 m (the last significant digits are the units). A reading of 8,000 m, with trailing zeros and no decimal point, is ambiguous; the trailing zeros may or may not be intended as significant figures. To avoid this ambiguity, the number could be represented in scientific notation: 8.0 × 10³ m indicates that the first zero is significant (hence a margin of 50 m), while 8.000 × 10³ m indicates that all three zeros are significant, giving a margin of 0.5 m. Similarly, one can use a multiple of the basic measurement unit: 8.0 km is equivalent to 8.0 × 10³ m. It indicates a margin of 0.05 km (50 m). However, reliance on this convention can lead to false precision errors when accepting data from sources that do not obey it. For example, a source reporting a number like 153,753 with precision +/- 5,000 looks like it has precision +/- 0.5. Under the convention it would have been rounded to 150,000. Alternatively, in a scientific context, if it is desired to indicate the margin of error with more precision, one can use a notation such as 7.54398(23) × 10⁻¹⁰ m, meaning a range of between 7.54375 × 10⁻¹⁰ m and 7.54421 × 10⁻¹⁰ m.
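As a concrete illustration of separating trueness (systematic error) from precision (random error), the following Python sketch computes the bias, standard deviation, and standard error of the mean for repeated readings against a known reference value; the numbers are invented for demonstration.

import statistics
import math

# Illustrative sketch: separating trueness (systematic error) from precision
# (random error) for repeated measurements of a known reference value.
# The readings below are made up for demonstration.

true_value = 10.00                      # accepted reference value
readings = [10.21, 10.19, 10.23, 10.18, 10.22, 10.20]

mean = statistics.mean(readings)
bias = mean - true_value                # trueness / systematic error
spread = statistics.stdev(readings)     # precision / random error (sample std dev)
standard_error = spread / math.sqrt(len(readings))  # precision of the average

print(f"mean = {mean:.3f}, bias = {bias:+.3f}")
print(f"standard deviation = {spread:.3f}, standard error of mean = {standard_error:.3f}")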
Precision includes: In engineering, precision is often taken as three times Standard Deviation of measurements taken, representing the range that 99.73% of measurements can occur within.[8]For example, an ergonomist measuring the human body can be confident that 99.73% of their extracted measurements fall within ± 0.7 cm - if using the GRYPHON processing system - or ± 13 cm - if using unprocessed data.[9] Accuracyis also used as a statistical measure of how well abinary classificationtest correctly identifies or excludes a condition. That is, the accuracy is the proportion of correct predictions (bothtrue positivesandtrue negatives) among the total number of cases examined.[10]As such, it compares estimates ofpre- and post-test probability. To make the context clear by the semantics, it is often referred to as the "Rand accuracy" or "Rand index".[11][12][13]It is a parameter of the test. The formula for quantifying binary accuracy is:Accuracy=TP+TNTP+TN+FP+FN{\displaystyle {\text{Accuracy}}={\frac {TP+TN}{TP+TN+FP+FN}}}whereTP = True positive;FP = False positive;TN = True negative;FN = False negative In this context, the concepts of trueness and precision as defined by ISO 5725-1 are not applicable. One reason is that there is not a single “true value” of a quantity, but rather two possible true values for every case, while accuracy is an average across all cases and therefore takes into account both values. However, the termprecisionis used in this context to mean a different metric originating from the field of information retrieval (see below). When computing accuracy in multiclass classification, accuracy is simply the fraction of correct classifications:[14][15]Accuracy=correct classificationsall classifications{\displaystyle {\text{Accuracy}}={\frac {\text{correct classifications}}{\text{all classifications}}}}This is usually expressed as a percentage. For example, if a classifier makes ten predictions and nine of them are correct, the accuracy is 90%. Accuracy is sometimes also viewed as amicro metric, to underline that it tends to be greatly affected by the particular class prevalence in a dataset and the classifier's biases.[14] Furthermore, it is also called top-1 accuracy to distinguish it from top-5 accuracy, common inconvolutional neural networkevaluation. To evaluate top-5 accuracy, the classifier must provide relative likelihoods for each class. When these are sorted, a classification is considered correct if the correct classification falls anywhere within the top 5 predictions made by the network. Top-5 accuracy was popularized by theImageNetchallenge. It is usually higher than top-1 accuracy, as any correct predictions in the 2nd through 5th positions will not improve the top-1 score, but do improve the top-5 score. Inpsychometricsandpsychophysics, the termaccuracyis interchangeably used withvalidityandconstant error.Precisionis a synonym forreliabilityandvariable error. The validity of a measurement instrument or psychological test is established through experiment or correlation with behavior. Reliability is established with a variety of statistical techniques, classically through an internal consistency test likeCronbach's alphato ensure sets of related questions have related responses, and then comparison of those related question between reference and target population.[citation needed] Inlogic simulation, a common mistake in evaluation of accurate models is to compare alogic simulation modelto atransistorcircuit simulation model. 
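The binary accuracy formula and the top-k variant described above can be sketched in Python as follows; the confusion-matrix counts and class scores are made-up illustrative values.

# Sketch of the accuracy measures described above, using made-up counts
# and scores (the values are illustrative assumptions).

def binary_accuracy(tp, tn, fp, fn):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + tn + fp + fn)

def top_k_accuracy(class_scores, true_labels, k=5):
    """Fraction of cases whose true label is among the k highest-scored classes."""
    hits = 0
    for scores, truth in zip(class_scores, true_labels):
        top_k = sorted(scores, key=scores.get, reverse=True)[:k]
        hits += truth in top_k
    return hits / len(true_labels)

print(binary_accuracy(tp=40, tn=45, fp=5, fn=10))          # 0.85

scores = [{"cat": 0.6, "dog": 0.3, "fox": 0.1},
          {"cat": 0.2, "dog": 0.5, "fox": 0.3}]
print(top_k_accuracy(scores, ["cat", "fox"], k=2))          # 1.0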
This is a comparison of differences in precision, not accuracy. Precision is measured with respect to detail and accuracy is measured with respect to reality.[16][17] Information retrieval systems, such as databases and web search engines, are evaluated by many different metrics, some of which are derived from the confusion matrix, which divides results into true positives (documents correctly retrieved), true negatives (documents correctly not retrieved), false positives (documents incorrectly retrieved), and false negatives (documents incorrectly not retrieved). Commonly used metrics include the notions of precision and recall. In this context, precision is defined as the fraction of retrieved documents that are correct (true positives divided by true positives plus false positives), using a set of ground truth relevant results selected by humans. Recall is defined as the fraction of relevant documents that are retrieved (true positives divided by true positives plus false negatives). Less commonly, the metric of accuracy is used, defined as the fraction of all documents that are correctly classified (true positives plus true negatives divided by true positives plus true negatives plus false positives plus false negatives). None of these metrics takes into account the ranking of results. Ranking is very important for web search engines because readers seldom go past the first page of results, and there are too many documents on the web to manually classify all of them as to whether they should be included or excluded from a given search. Adding a cutoff at a particular number of results takes ranking into account to some degree. The measure precision at k, for example, evaluates precision over only the k highest-ranked search results (for instance, the top ten when k = 10). More sophisticated metrics, such as discounted cumulative gain, take into account each individual ranking, and are more commonly used where this is important. In cognitive systems, accuracy and precision are used to characterize and measure the results of a cognitive process performed by biological or artificial entities, where a cognitive process is a transformation of data, information, knowledge, or wisdom to a higher-valued form (see the DIKW pyramid). Sometimes a cognitive process produces exactly the intended or desired output, but sometimes it produces output far from the intended or desired. Furthermore, repetitions of a cognitive process do not always produce the same output. Cognitive accuracy (CA) is the propensity of a cognitive process to produce the intended or desired output. Cognitive precision (CP) is the propensity of a cognitive process to produce the same output.[18][19][20] To measure augmented cognition in human/cog ensembles, where one or more humans work collaboratively with one or more cognitive systems (cogs), increases in cognitive accuracy and cognitive precision assist in measuring the degree of cognitive augmentation.
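A minimal Python sketch of the retrieval metrics described above (precision, recall, and precision at k); the document identifiers and relevance judgements are invented for illustration.

# Sketch of the retrieval metrics described above. The document IDs and
# relevance judgements are invented for illustration.

def precision_recall(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

def precision_at_k(ranked_results, relevant, k):
    """Precision computed over only the top-k ranked results."""
    top_k = ranked_results[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

ranked = ["d3", "d7", "d1", "d9", "d4", "d2"]   # ranked retrieval output
relevant = {"d1", "d2", "d5", "d7"}             # human ground-truth judgements

print(precision_recall(ranked, relevant))        # (0.5, 0.75)
print(precision_at_k(ranked, relevant, k=3))     # ~0.67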
https://en.wikipedia.org/wiki/Accuracy_and_precision
An n-gram is a sequence of n adjacent symbols in a particular order.[1] The symbols may be n adjacent letters (including punctuation marks and blanks), syllables, or rarely whole words found in a language dataset; or adjacent phonemes extracted from a speech-recording dataset, or adjacent base pairs extracted from a genome. They are collected from a text corpus or speech corpus. If Latin numerical prefixes are used, an n-gram of size 1 is called a "unigram", size 2 a "bigram" (or, less commonly, a "digram"), etc. If English cardinal numbers are used instead, they are called "four-gram", "five-gram", etc. Similarly, Greek numerical prefixes such as "monomer", "dimer", "trimer", "tetramer", "pentamer", etc., or English cardinal numbers, "one-mer", "two-mer", "three-mer", etc., are used in computational biology for polymers or oligomers of a known size, called k-mers. When the items are words, n-grams may also be called shingles.[2] In the context of natural language processing (NLP), the use of n-grams allows bag-of-words models to capture information such as word order, which would not be possible in the traditional bag-of-words setting. Shannon (1951)[3] discussed n-gram models of English. For example, Figure 1 shows several example sequences and the corresponding 1-gram, 2-gram and 3-gram sequences. Here are further examples; these are word-level 3-grams and 4-grams (and counts of the number of times they appeared) from the Google n-gram corpus.[4] 3-grams 4-grams
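A short Python sketch of n-gram extraction at both the word and character level; the sample sentence is arbitrary.

# Minimal sketch: extracting character- and word-level n-grams from a string.
# The sample sentence is arbitrary.

def ngrams(items, n):
    """Return the list of n-grams (as tuples) over a sequence of items."""
    return [tuple(items[i:i + n]) for i in range(len(items) - n + 1)]

text = "to be or not to be"

# Word-level bigrams and trigrams
words = text.split()
print(ngrams(words, 2))   # [('to', 'be'), ('be', 'or'), ...]
print(ngrams(words, 3))

# Character-level trigrams (spaces count as symbols)
print(ngrams(list(text), 3)[:5])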
https://en.wikipedia.org/wiki/N-gram#Character_n-grams
Alanguage modelis amodelof natural language.[1]Language models are useful for a variety of tasks, includingspeech recognition,[2]machine translation,[3]natural language generation(generating more human-like text),optical character recognition,route optimization,[4]handwriting recognition,[5]grammar induction,[6]andinformation retrieval.[7][8] Large language models(LLMs), currently their most advanced form, are predominantly based ontransformerstrained on larger datasets (frequently using wordsscrapedfrom the publicinternet). They have supersededrecurrent neural network-based models, which had previously superseded the purely statistical models, such aswordn-gram language model. Noam Chomskydid pioneering work on language models in the 1950s by developing a theory offormal grammars.[9] In 1980, statistical approaches were explored and found to be more useful for many purposes than rule-based formal grammars. Discrete representations likewordn-gram language models, with probabilities for discrete combinations of words, made significant advances. In the 2000s, continuous representations for words, such asword embeddings, began to replace discrete representations.[10]Typically, the representation is areal-valuedvector that encodes the meaning of the word in such a way that the words that are closer in the vector space are expected to be similar in meaning, and common relationships between pairs of words like plurality or gender. In 1980, the first significant statistical language model was proposed, and during the decade IBM performed ‘Shannon-style’ experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text.[11] Awordn-gram language modelis a purely statistical model of language. It has been superseded byrecurrent neural network–based models, which have been superseded bylarge language models.[12]It is based on an assumption that the probability of the next word in a sequence depends only on a fixed size window of previous words. If only one previous word is considered, it is called a bigram model; if two words, a trigram model; ifn− 1 words, ann-gram model.[13]Special tokens are introduced to denote the start and end of a sentence⟨s⟩{\displaystyle \langle s\rangle }and⟨/s⟩{\displaystyle \langle /s\rangle }. Maximum entropylanguage models encode the relationship between a word and then-gram history using feature functions. The equation is P(wm∣w1,…,wm−1)=1Z(w1,…,wm−1)exp⁡(aTf(w1,…,wm)){\displaystyle P(w_{m}\mid w_{1},\ldots ,w_{m-1})={\frac {1}{Z(w_{1},\ldots ,w_{m-1})}}\exp(a^{T}f(w_{1},\ldots ,w_{m}))} whereZ(w1,…,wm−1){\displaystyle Z(w_{1},\ldots ,w_{m-1})}is thepartition function,a{\displaystyle a}is the parameter vector, andf(w1,…,wm){\displaystyle f(w_{1},\ldots ,w_{m})}is the feature function. In the simplest case, the feature function is just an indicator of the presence of a certainn-gram. It is helpful to use a prior ona{\displaystyle a}or some form ofregularization. The log-bilinear model is another example of an exponential language model. Skip-gram language model is an attempt at overcoming the data sparsity problem that the preceding model (i.e. wordn-gram language model) faced. Words represented in an embedding vector were not necessarily consecutive anymore, but could leave gaps that areskippedover (thus the name "skip-gram").[14] Formally, ak-skip-n-gram is a length-nsubsequence where the components occur at distance at mostkfrom each other. 
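A minimal Python sketch of a word bigram language model estimated by counting, with <s> and </s> boundary tokens as described above; the two-sentence toy corpus and the unsmoothed maximum-likelihood estimate are simplifying assumptions.

from collections import Counter, defaultdict

# Minimal sketch of a word bigram language model estimated by counting,
# with <s> and </s> marking sentence boundaries. The toy corpus is invented.

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]

bigram_counts = defaultdict(Counter)
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    for prev, nxt in zip(tokens, tokens[1:]):
        bigram_counts[prev][nxt] += 1

def p_next(prev, nxt):
    """Maximum-likelihood estimate P(next | prev) = count(prev, next) / count(prev)."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][nxt] / total if total else 0.0

print(p_next("the", "cat"))   # 0.25: "the" is followed once each by cat, dog, mat, rug
print(p_next("sat", "on"))    # 1.0

In practice some form of smoothing would be added so that unseen bigrams do not receive zero probability; that refinement is omitted here for brevity.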
For example, in the input text: the set of 1-skip-2-grams includes all the bigrams (2-grams), and in addition the subsequences In the skip-gram model, semantic relations between words are represented by linear combinations, capturing a form of compositionality. For example, in some such models, if v is the function that maps a word w to its n-d vector representation, then v(king)−v(male)+v(female)≈v(queen){\displaystyle v(\mathrm {king} )-v(\mathrm {male} )+v(\mathrm {female} )\approx v(\mathrm {queen} )} Continuous representations or embeddings of words are produced in recurrent neural network-based language models (known also as continuous space language models).[17] Such continuous space embeddings help to alleviate the curse of dimensionality, which is the consequence of the number of possible sequences of words increasing exponentially with the size of the vocabulary, further causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net.[18] A large language model (LLM) is a type of machine learning model designed for natural language processing tasks such as language generation. LLMs are language models with many parameters, and are trained with self-supervised learning on a vast amount of text. Although they sometimes match human performance, it is not clear whether they are plausible cognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, but fail to learn patterns that humans typically do.[22] Evaluation of the quality of language models is mostly done by comparison to human-created benchmarks built from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves.[23] Various data sets have been developed for use in evaluating language processing systems.[24] These include:
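A small Python sketch of k-skip-n-gram extraction, under the common reading that adjacent components of the subsequence may skip at most k intervening words; the sample phrase is arbitrary.

from itertools import combinations

# Minimal sketch of k-skip-n-gram extraction: length-n subsequences whose
# adjacent components skip at most k intervening words (an interpretation
# of "distance at most k"; other readings bound the total number of skips).

def skipgrams(words, n, k):
    grams = set()
    for idxs in combinations(range(len(words)), n):
        # every gap between chosen positions may span at most k skipped words
        if all(b - a <= k + 1 for a, b in zip(idxs, idxs[1:])):
            grams.add(tuple(words[i] for i in idxs))
    return grams

words = "the rain in Spain falls mainly".split()
print(skipgrams(words, n=2, k=0))   # ordinary bigrams
print(skipgrams(words, n=2, k=1))   # bigrams plus 1-skip-2-grams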
https://en.wikipedia.org/wiki/Language_model
Ambiguityis the type ofmeaningin which aphrase, statement, or resolution is not explicitly defined, making for several interpretations; others describe it as a concept or statement that has no real reference. A common aspect of ambiguity isuncertainty. It is thus anattributeof any idea or statement whoseintendedmeaning cannot be definitively resolved, according to a rule or process with a finite number of steps. (Theprefixambi-reflects the idea of "two", as in "two meanings"). The concept of ambiguity is generally contrasted withvagueness. In ambiguity, specific and distinct interpretations are permitted (although some may not be immediately obvious), whereas with vague information it is difficult to form any interpretation at the desired level of specificity. Lexical ambiguity is contrasted withsemantic ambiguity.[citation needed]The former represents a choice between a finite number of known and meaningfulcontext-dependent interpretations. The latter represents a choice between any number of possible interpretations, none of which may have a standard agreed-upon meaning. This form of ambiguity is closely related tovagueness. Ambiguity in human language is argued to reflect principles of efficient communication.[2][3]Languages that communicate efficiently will avoid sending information that is redundant with information provided in the context. This can be shown mathematically to result in a system that is ambiguous when context is neglected. In this way, ambiguity is viewed as a generally useful feature of a linguistic system. Linguistic ambiguitycan be a problem in law, because the interpretation of written documents and oral agreements is often of paramount importance. Thelexical ambiguityof a word or phrase applies to it having more than one meaning in the language to which the word belongs.[4]"Meaning" here refers to whatever should be represented by a good dictionary. For instance, the word "bank" has several distinct lexical definitions, including "financial institution" and "edge of a river". Or consider "apothecary". One could say "I bought herbs from the apothecary". This could mean one actually spoke to the apothecary (pharmacist) or went to the apothecary (pharmacy). The context in which an ambiguous word is used often makes it clearer which of the meanings is intended. If, for instance, someone says "I put $100 in the bank", most people would not think someone used a shovel to dig in the mud. However, some linguistic contexts do not provide sufficient information to make a used word clearer. Lexical ambiguity can be addressed byalgorithmicmethods that automatically associate the appropriate meaning with a word in context, a task referred to asword-sense disambiguation. The use of multi-defined words requires the author or speaker to clarify their context, and sometimes elaborate on their specific intended meaning (in which case, a less ambiguous term should have been used). The goal of clear concise communication is that the receiver(s) have no misunderstanding about what was meant to be conveyed. An exception to this could include a politician whose "weasel words" andobfuscationare necessary to gain support from multipleconstituentswithmutually exclusiveconflicting desires from his or her candidate of choice. Ambiguity is a powerful tool ofpolitical science. More problematic are words whose multiple meanings express closely related concepts. 
"Good", for example, can mean "useful" or "functional" (That's a good hammer), "exemplary" (She's a good student), "pleasing" (This is good soup), "moral" (a good personversusthe lesson to be learned from a story), "righteous", etc. "I have a good daughter" is not clear about which sense is intended. The various ways to applyprefixesandsuffixescan also create ambiguity ("unlockable" can mean "capable of being opened" or "impossible to lock"). Semantic ambiguityoccurs when a word, phrase or sentence, taken out of context, has more than one interpretation. In "We saw her duck" (example due to Richard Nordquist), the words "her duck" can refer either Syntactic ambiguityarises when a sentence can have two (or more) different meanings because of the structure of the sentence—its syntax. This is often due to a modifying expression, such as a prepositional phrase, the application of which is unclear. "He ate the cookies on the couch", for example, could mean that he ate those cookies that were on the couch (as opposed to those that were on the table), or it could mean that he was sitting on the couch when he ate the cookies. "To get in, you will need an entrance fee of $10 or your voucher and your drivers' license." This could mean that you need EITHER ten dollars OR BOTH your voucher and your license. Or it could mean that you need your license AND you need EITHER ten dollars OR a voucher. Only rewriting the sentence, or placing appropriate punctuation can resolve a syntactic ambiguity.[5]For the notion of, and theoretic results about, syntactic ambiguity in artificial,formal languages(such as computerprogramming languages), seeAmbiguous grammar. Usually, semantic and syntactic ambiguity go hand in hand. The sentence "We saw her duck" is also syntactically ambiguous. Conversely, a sentence like "He ate the cookies on the couch" is also semantically ambiguous. Rarely, but occasionally, the different parsings of a syntactically ambiguous phrase result in the same meaning. For example, the command "Cook, cook!" can be parsed as "Cook (noun used asvocative), cook (imperative verb form)!", but also as "Cook (imperative verb form), cook (noun used as vocative)!". It is more common that a syntactically unambiguous phrase has a semantic ambiguity; for example, the lexical ambiguity in "Your boss is a funny man" is purely semantic, leading to the response "Funny ha-ha or funny peculiar?" Spoken languagecan contain many more types of ambiguities that are called phonological ambiguities, where there is more than one way to compose a set of sounds into words. For example, "ice cream" and "I scream". Such ambiguity is generally resolved according to the context. A mishearing of such, based on incorrectly resolved ambiguity, is called amondegreen. Philosophers (and other users of logic) spend a lot of time and effort searching for and removing (or intentionally adding) ambiguity in arguments because it can lead to incorrect conclusions and can be used to deliberately conceal bad arguments. For example, a politician might say, "I oppose taxes which hinder economic growth", an example of aglittering generality. Some will think they oppose taxes in general because they hinder economic growth. Others may think they oppose only those taxes that they believe will hinder economic growth. 
In writing, the sentence can be rewritten to reduce possible misinterpretation, either by adding a comma after "taxes" (to convey the first sense) or by changing "which" to "that" (to convey the second sense) or by rewriting it in other ways. The devious politician hopes that each constituent will interpret the statement in the most desirable way, and think the politician supports everyone's opinion. However, the opposite can also be true—an opponent can turn a positive statement into a bad one if the speaker uses ambiguity (intentionally or not). The logical fallacies of amphiboly and equivocation rely heavily on the use of ambiguous words and phrases. Incontinental philosophy(particularly phenomenology and existentialism), there is much greater tolerance of ambiguity, as it is generally seen as an integral part of the human condition.Martin Heideggerargued that the relation between the subject and object is ambiguous, as is the relation of mind and body, and part and whole. In Heidegger's phenomenology,Daseinis always in a meaningful world, but there is always an underlying background for every instance of signification. Thus, although some things may be certain, they have little to do with Dasein's sense of care and existential anxiety, e.g., in the face of death. In calling his work Being and Nothingness an "essay in phenomenological ontology"Jean-Paul Sartrefollows Heidegger in defining the human essence as ambiguous, or relating fundamentally to such ambiguity.Simone de Beauvoirtries to base an ethics on Heidegger's and Sartre's writings (The Ethics of Ambiguity), where she highlights the need to grapple with ambiguity: "as long as there have been philosophers and they have thought, most of them have tried to mask it ... And the ethics which they have proposed to their disciples has always pursued the same goal. It has been a matter of eliminating the ambiguity by making oneself pure inwardness or pure externality, by escaping from the sensible world or being engulfed by it, by yielding to eternity or enclosing oneself in the pure moment." Ethics cannot be based on the authoritative certainty given by mathematics and logic, or prescribed directly from the empirical findings of science. She states: "Since we do not succeed in fleeing it, let us, therefore, try to look the truth in the face. Let us try to assume our fundamental ambiguity. It is in the knowledge of the genuine conditions of our life that we must draw our strength to live and our reason for acting". Other continental philosophers suggest that concepts such as life, nature, and sex are ambiguous. Corey Anton has argued that we cannot be certain what is separate from or unified with something else: language, he asserts, divides what is not, in fact, separate. FollowingErnest Becker, he argues that the desire to 'authoritatively disambiguate' the world and existence has led to numerous ideologies and historical events such as genocide. On this basis, he argues that ethics must focus on 'dialectically integrating opposites' and balancing tension, rather than seeking a priori validation or certainty. Like the existentialists and phenomenologists, he sees the ambiguity of life as the basis of creativity. In literature and rhetoric, ambiguity can be a useful tool. Groucho Marx's classic joke depends on a grammatical ambiguity for its humor, for example: "Last night I shot an elephant in my pajamas. How he got in my pajamas, I'll never know". 
Songs and poetry often rely on ambiguous words for artistic effect, as in the song title "Don't It Make My Brown Eyes Blue" (where "blue" can refer to the color, or to sadness). In the narrative, ambiguity can be introduced in several ways: motive, plot, character.F. Scott Fitzgeralduses the latter type of ambiguity with notable effect in his novelThe Great Gatsby. Mathematical notationis a helpful tool that eliminates a lot of misunderstandings associated with natural language inphysicsand othersciences. Nonetheless, there are still some inherent ambiguities due tolexical,syntactic, andsemanticreasons that persist in mathematical notation. Theambiguityin the style of writing afunctionshould not be confused with amultivalued function, which can (and should) be defined in a deterministic and unambiguous way. Severalspecial functionsstill do not have established notations. Usually, the conversion to another notation requires to scale the argument or the resulting value; sometimes, the same name of the function is used, causing confusions. Examples of such underestablished functions: Ambiguous expressions often appear in physical and mathematical texts. It is common practice to omit multiplication signs in mathematical expressions. Also, it is common to give the same name to a variable and a function, for example,f=f(x){\displaystyle f=f(x)}.Then, if one seesf=f(y+1){\displaystyle f=f(y+1)},there is no way to distinguish whether it meansf=f(x){\displaystyle f=f(x)}multipliedby(y+1){\displaystyle (y+1)},or functionf{\displaystyle f}evaluatedat argument equal to(y+1){\displaystyle (y+1)}.In each case of use of such notations, the reader is supposed to be able to perform the deduction and reveal the true meaning. Creators of algorithmic languages try to avoid ambiguities. Many algorithmic languages (C++andFortran) require the character * as a symbol of multiplication. TheWolfram Languageused inMathematicaallows the user to omit the multiplication symbol, but requires square brackets to indicate the argument of a function; square brackets are not allowed for grouping of expressions. Fortran, in addition, does not allow use of the same name (identifier) for different objects, for example, function and variable; in particular, the expressionf=f(x){\displaystyle f=f(x)}is qualified as an error. The order of operations may depend on the context. In mostprogramming languages, the operations of division and multiplication have equal priority and are executed from left to right. Until the last century, many editorials assumed that multiplication is performed first, for example,a/bc{\displaystyle a/bc}is interpreted asa/(bc){\displaystyle a/(bc)};in this case, the insertion of parentheses is required when translating the formulas to an algorithmic language. In addition, it is common to write an argument of a function without parenthesis, which also may lead to ambiguity. In thescientific journalstyle, one uses roman letters to denote elementary functions, whereas variables are written using italics. For example, in mathematical journals the expressionsin{\displaystyle sin}does not denote thesine function, but the product of the three variabless{\displaystyle s},i{\displaystyle i},n{\displaystyle n},although in the informal notation of a slide presentation it may stand forsin{\displaystyle \sin }. Commas in multi-component subscripts and superscripts are sometimes omitted; this is also potentially ambiguous notation. 
For example, in the notationTmnk{\displaystyle T_{mnk}},the reader can only infer from the context whether it means a single-index object, taken with the subscript equal to product of variablesm{\displaystyle m},n{\displaystyle n}andk{\displaystyle k},or it is an indication to a trivalenttensor. An expression such assin2⁡α/2{\displaystyle \sin ^{2}\alpha /2}can be understood to mean either(sin⁡(α/2))2{\displaystyle (\sin(\alpha /2))^{2}}or(sin⁡α)2/2{\displaystyle (\sin \alpha )^{2}/2}.Often the author's intention can be understood from the context, in cases where only one of the two makes sense, but an ambiguity like this should be avoided, for example by writingsin2⁡(α/2){\displaystyle \sin ^{2}(\alpha /2)}or12sin2⁡α{\textstyle {\frac {1}{2}}\sin ^{2}\alpha }. The expressionsin−1⁡α{\displaystyle \sin ^{-1}\alpha }meansarcsin⁡(α){\displaystyle \arcsin(\alpha )}in several texts, though it might be thought to mean(sin⁡α)−1{\displaystyle (\sin \alpha )^{-1}},sincesinn⁡α{\displaystyle \sin ^{n}\alpha }commonly means(sin⁡α)n{\displaystyle (\sin \alpha )^{n}}.Conversely,sin2⁡α{\displaystyle \sin ^{2}\alpha }might seem to meansin⁡(sin⁡α){\displaystyle \sin(\sin \alpha )},as thisexponentiationnotation usually denotesfunction iteration: in general,f2(x){\displaystyle f^{2}(x)}meansf(f(x)){\displaystyle f(f(x))}.However, fortrigonometricandhyperbolic functions, this notation conventionally means exponentiation of the result of function application. The expressiona/2b{\displaystyle a/2b}can be interpreted as meaning(a/2)b{\displaystyle (a/2)b};however, it is more commonly understood to meana/(2b){\displaystyle a/(2b)}. It is common to define thecoherent statesinquantum opticswith|α⟩{\displaystyle ~|\alpha \rangle ~}and states with fixed number of photons with|n⟩{\displaystyle ~|n\rangle ~}.Then, there is an "unwritten rule": the state is coherent if there are more Greek characters than Latin characters in the argument, andn{\displaystyle n}-photon state if the Latin characters dominate. The ambiguity becomes even worse, if|x⟩{\displaystyle ~|x\rangle ~}is used for the states with certain value of the coordinate, and|p⟩{\displaystyle ~|p\rangle ~}means the state with certain value of the momentum, which may be used in books onquantum mechanics. Such ambiguities easily lead to confusions, especially if some normalized adimensional,dimensionlessvariables are used. Expression|1⟩{\displaystyle |1\rangle }may mean a state with single photon, or the coherent state with mean amplitude equal to 1, or state with momentum equal to unity, and so on. The reader is supposed to guess from the context. Some physical quantities do not yet have established notations; their value (and sometimes evendimension, as in the case of theEinstein coefficients), depends on the system of notations. Many terms are ambiguous. Each use of an ambiguous term should be preceded by the definition, suitable for a specific case. Just likeLudwig Wittgensteinstates inTractatus Logico-Philosophicus: "... Only in the context of a proposition has a name meaning."[7] A highly confusing term isgain. For example, the sentence "the gain of a system should be doubled", without context, means close to nothing. The termintensityis ambiguous when applied to light. The term can refer to any ofirradiance,luminous intensity,radiant intensity, orradiance, depending on the background of the person using the term. 
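A brief Python illustration (the language choice is an assumption) of the precedence point above: division and multiplication share one precedence level and associate left to right, so the intended grouping must be written explicitly.

# In most programming languages, division and multiplication share the same
# precedence and associate left to right, so "a / 2 * b" is (a/2)*b,
# not a/(2*b). The values are arbitrary.

a, b = 12.0, 3.0

print(a / 2 * b)      # 18.0  -> parsed as (a / 2) * b
print(a / (2 * b))    # 2.0   -> explicit parentheses needed for a/(2b)

import math
# Similarly, sin(x)**2 / 2 differs from sin(x/2)**2; write the intended one.
x = math.pi / 3
print(math.sin(x) ** 2 / 2)     # (sin x)^2 / 2 = 0.375
print(math.sin(x / 2) ** 2)     # sin^2(x/2) = 0.25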
Also, confusions may be related with the use ofatomic percentas measure of concentration of adopant, orresolutionof an imaging system, as measure of the size of the smallest detail that still can be resolved at the background of statistical noise. See alsoAccuracy and precision. TheBerry paradoxarises as a result of systematic ambiguity in the meaning of terms such as "definable" or "nameable". Terms of this kind give rise tovicious circlefallacies. Other terms with this type of ambiguity are: satisfiable, true, false, function, property, class, relation, cardinal, and ordinal.[8] In mathematics and logic, ambiguity can be considered to be an instance of the logical concept ofunderdetermination—for example,X=Y{\displaystyle X=Y}leaves open what the value ofX{\displaystyle X}is—while overdetermination, except when likeX=1,X=1,X=1{\displaystyle X=1,X=1,X=1}, is aself-contradiction, also calledinconsistency,paradoxicalness, oroxymoron, or in mathematics aninconsistent system—such asX=2,X=3{\displaystyle X=2,X=3},which has no solution. Logical ambiguity and self-contradiction is analogous to visual ambiguity andimpossible objects, such as the Necker cube and impossible cube, or many of the drawings ofM. C. Escher.[9] Somelanguages have been createdwith the intention of avoiding ambiguity, especiallylexical ambiguity.LojbanandLoglanare two related languages that have been created for this, focusing chiefly on syntactic ambiguity as well. The languages can be both spoken and written. These languages are intended to provide a greater technical precision over big natural languages, although historically, such attempts at language improvement have been criticized. Languages composed from many diverse sources contain much ambiguity and inconsistency. The many exceptions tosyntaxandsemanticrules are time-consuming and difficult to learn. Instructural biology, ambiguity has been recognized as a problem for studyingprotein conformations.[10]The analysis of a protein three-dimensional structure consists in dividing the macromolecule into subunits calleddomains. The difficulty of this task arises from the fact that different definitions of what a domain is can be used (e.g. folding autonomy, function, thermodynamic stability, or domain motions), which sometimes results in a single protein having different—yet equally valid—domain assignments. ChristianityandJudaismemploy the concept of paradox synonymously with "ambiguity". Many Christians and Jews endorseRudolf Otto's description of the sacred as 'mysterium tremendum et fascinans', the awe-inspiring mystery that fascinates humans.[dubious–discuss]TheapocryphalBook of Judithis noted for the "ingenious ambiguity"[11]expressed by its heroine; for example, she says to the villain of the story,Holofernes, "my lord will not fail to achieve his purposes", without specifying whethermy lordrefers to the villain or to God.[12][13] The orthodox Catholic writerG. K. Chestertonregularly employed paradox to tease out the meanings in common concepts that he found ambiguous or to reveal meaning often overlooked or forgotten in common phrases: the title of one of his most famous books,Orthodoxy(1908), itself employed such a paradox.[14] Inmusic, pieces or sections that confound expectations and may be or are interpreted simultaneously in different ways are ambiguous, such as somepolytonality,polymeter, other ambiguousmetersorrhythms, and ambiguousphrasing, or (Stein 2005, p. 79) anyaspect of music. Themusic of Africais often purposely ambiguous. 
To quote Sir Donald Francis Tovey (1935, p. 195), "Theorists are apt to vex themselves with vain efforts to remove uncertainty just where it has a high aesthetic value." In visual art, certain images are visually ambiguous, such as the Necker cube, which can be interpreted in two ways. Perceptions of such objects remain stable for a time, then may flip, a phenomenon called multistable perception. The opposite of such ambiguous images are impossible objects.[15] Pictures or photographs may also be ambiguous at the semantic level: the visual image is unambiguous, but the meaning and narrative may be ambiguous: is a certain facial expression one of excitement or fear, for instance? In social psychology, ambiguity is a factor used in determining people's responses to various situations. High levels of ambiguity in an emergency (e.g. an unconscious man lying on a park bench) make witnesses less likely to offer any sort of assistance, due to the fear that they may have misinterpreted the situation and acted unnecessarily. Alternately, non-ambiguous emergencies (e.g. an injured person verbally asking for help) elicit more consistent intervention and assistance. With regard to the bystander effect, studies have shown that emergencies deemed ambiguous trigger the appearance of the classic bystander effect (wherein more witnesses decrease the likelihood of any of them helping) far more than non-ambiguous emergencies.[16] In computer science, the SI prefixes kilo-, mega- and giga- were historically used in certain contexts to mean the first three powers of 1024 (1024, 1024² and 1024³), contrary to the metric system in which these units unambiguously mean one thousand, one million, and one billion. This usage is particularly prevalent with electronic memory devices (e.g. DRAM) addressed directly by a binary machine register, where a decimal interpretation makes no practical sense. Subsequently, the Ki, Mi, and Gi prefixes were introduced so that binary prefixes could be written explicitly, also rendering k, M, and G unambiguous in texts conforming to the new standard. This, however, led to a new ambiguity in engineering documents lacking any outward trace of the binary prefixes (which would necessarily indicate the new style): it is unclear whether the usage of k, M, and G remains ambiguous (old style) or not (new style). 1 M (where M is ambiguously 1,000,000 or 1,048,576) is less uncertain than the engineering value 1.0 × 10⁶ (defined to designate the interval 950,000 to 1,050,000). As non-volatile storage devices begin to exceed 1 GB in capacity (where the ambiguity begins to routinely impact the second significant digit), GB and TB almost always mean 10⁹ and 10¹² bytes.
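The k/M/G ambiguity can be illustrated with a short Python sketch contrasting decimal (SI) and binary (IEC) prefixes; the drive size used is arbitrary.

# Sketch of the k/M/G ambiguity discussed above: decimal (SI) prefixes versus
# binary (IEC) prefixes. The drive size is arbitrary.

SI = {"kB": 10**3, "MB": 10**6, "GB": 10**9}
IEC = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30}

size_bytes = 500_000_000_000          # a "500 GB" drive as marketed (decimal)

print(size_bytes / SI["GB"], "GB")    # 500.0 GB   (decimal interpretation)
print(size_bytes / IEC["GiB"], "GiB") # ~465.66 GiB (binary interpretation)

# The two readings of "1 MB" differ by about 4.9%:
print(IEC["MiB"] / SI["MB"])          # 1.048576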
https://en.wikipedia.org/wiki/Ambiguity
Aregular expression(shortened asregexorregexp),[1]sometimes referred to asrational expression,[2][3]is a sequence ofcharactersthat specifies amatch patternintext. Usually such patterns are used bystring-searching algorithmsfor "find" or "find and replace" operations onstrings, or forinput validation. Regular expression techniques are developed intheoretical computer scienceandformal languagetheory. The concept of regular expressions began in the 1950s, when the American mathematicianStephen Cole Kleeneformalized the concept of aregular language. They came into common use withUnixtext-processing utilities. Differentsyntaxesfor writing regular expressions have existed since the 1980s, one being thePOSIXstandard and another, widely used, being thePerlsyntax. Regular expressions are used insearch engines, in search and replace dialogs ofword processorsandtext editors, intext processingutilities such assedandAWK, and inlexical analysis. Regular expressions are supported in many programming languages. Library implementations are often called an "engine",[4][5]andmany of theseare available for reuse. Regular expressions originated in 1951, when mathematicianStephen Cole Kleenedescribedregular languagesusing his mathematical notation calledregular events.[6][7]These arose intheoretical computer science, in the subfields ofautomata theory(models of computation) and the description and classification offormal languages, motivated by Kleene's attempt to describe earlyartificial neural networks. (Kleene introduced it as an alternative toMcCulloch & Pitts's"prehensible", but admitted "We would welcome any suggestions as to a more descriptive term."[8]) Other early implementations ofpattern matchinginclude theSNOBOLlanguage, which did not use regular expressions, but instead its own pattern matching constructs. Regular expressions entered popular use from 1968 in two uses: pattern matching in a text editor[9]and lexical analysis in a compiler.[10]Among the first appearances of regular expressions in program form was whenKen Thompsonbuilt Kleene's notation into the editorQEDas a means to match patterns intext files.[9][11][12][13]For speed, Thompson implemented regular expression matching byjust-in-time compilation(JIT) toIBM 7094code on theCompatible Time-Sharing System, an important early example of JIT compilation.[14]He later added this capability to the Unix editored, which eventually led to the popular search toolgrep's use of regular expressions ("grep" is a word derived from the command for regular expression searching in the ed editor:g/re/pmeaning "Global search for Regular Expression and Print matching lines").[15]Around the same time when Thompson developed QED, a group of researchers includingDouglas T. Rossimplemented a tool based on regular expressions that is used for lexical analysis incompilerdesign.[10] Many variations of these original forms of regular expressions were used inUnix[13]programs atBell Labsin the 1970s, includinglex,sed,AWK, andexpr, and in other programs such asvi, andEmacs(which has its own, incompatible syntax and behavior). Regexes were subsequently adopted by a wide range of programs, with these early forms standardized in thePOSIX.2standard in 1992. In the 1980s, the more complicated regexes arose inPerl, which originally derived from a regex library written byHenry Spencer(1986), who later wrote an implementation forTclcalledAdvanced Regular Expressions.[16]The Tcl library is a hybridNFA/DFAimplementation with improved performance characteristics. 
Software projects that have adopted Spencer's Tcl regular expression implementation includePostgreSQL.[17]Perl later expanded on Spencer's original library to add many new features.[18]Part of the effort in the design ofRaku(formerly named Perl 6) is to improve Perl's regex integration, and to increase their scope and capabilities to allow the definition ofparsing expression grammars.[19]The result is amini-languagecalledRaku rules, which are used to define Raku grammar as well as provide a tool to programmers in the language. These rules maintain existing features of Perl 5.x regexes, but also allowBNF-style definition of arecursive descent parservia sub-rules. The use of regexes in structured information standards for document and database modeling started in the 1960s and expanded in the 1980s when industry standards likeISO SGML(precursored by ANSI "GCA 101-1983") consolidated. The kernel of thestructure specification languagestandards consists of regexes. Its use is evident in theDTDelement group syntax. Prior to the use of regular expressions, many search languages allowed simple wildcards, for example "*" to match any sequence of characters, and "?" to match a single character. Relics of this can be found today in theglobsyntax for filenames, and in theSQLLIKEoperator. Starting in 1997,Philip HazeldevelopedPCRE(Perl Compatible Regular Expressions), which attempts to closely mimic Perl's regex functionality and is used by many modern tools includingPHPandApache HTTP Server.[20] Today, regexes are widely supported in programming languages, text processing programs (particularlylexers), advanced text editors, and some other programs. Regex support is part of thestandard libraryof many programming languages, includingJavaandPython, and is built into the syntax of others, including Perl andECMAScript. In the late 2010s, several companies started to offer hardware,FPGA,[21]GPU[22]implementations ofPCREcompatible regex engines that are faster compared toCPUimplementations. The phraseregular expressions, orregexes, is often used to mean the specific, standard textual syntax for representing patterns for matching text, as distinct from the mathematical notation described below. Each character in a regular expression (that is, each character in the string describing its pattern) is either ametacharacter, having a special meaning, or a regular character that has a literal meaning. For example, in the regexb., 'b' is a literal character that matches just 'b', while '.' is a metacharacter that matches every character except a newline. Therefore, this regex matches, for example, 'b%', or 'bx', or 'b5'. Together, metacharacters and literal characters can be used to identify text of a given pattern or process a number of instances of it. Pattern matches may vary from a precise equality to a very general similarity, as controlled by the metacharacters. For example,.is a very general pattern,[a-z](match all lower case letters from 'a' to 'z') is less general andbis a precise pattern (matches just 'b'). The metacharacter syntax is designed specifically to represent prescribed targets in a concise and flexible way to direct the automation of text processing of a variety of input data, in a form easy to type using a standardASCIIkeyboard. 
A very simple case of a regular expression in this syntax is to locate a word spelled two different ways in atext editor, the regular expressionseriali[sz]ematches both "serialise" and "serialize".Wildcard charactersalso achieve this, but are more limited in what they can pattern, as they have fewer metacharacters and a simple language-base. The usual context of wildcard characters is inglobbingsimilar names in a list of files, whereas regexes are usually employed in applications that pattern-match text strings in general. For example, the regex^[ \t]+|[ \t]+$matches excess whitespace at the beginning or end of a line. An advanced regular expression that matches any numeral is[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?. Aregex processortranslates a regular expression in the above syntax into an internal representation that can be executed and matched against astringrepresenting the text being searched in. One possible approach is theThompson's construction algorithmto construct anondeterministic finite automaton(NFA), which is thenmade deterministicand the resultingdeterministic finite automaton(DFA) is run on the target text string to recognize substrings that match the regular expression. The picture shows the NFA schemeN(s*)obtained from the regular expressions*, wheresdenotes a simpler regular expression in turn, which has already beenrecursivelytranslated to the NFAN(s). A regular expression, often called apattern, specifies asetof strings required for a particular purpose. A simple way to specify a finite set of strings is to list itselementsor members. However, there are often more concise ways: for example, the set containing the three strings "Handel", "Händel", and "Haendel" can be specified by the patternH(ä|ae?)ndel; we say that this patternmatcheseach of the three strings. However, there can be many ways to write a regular expression for the same set of strings: for example,(Hän|Han|Haen)delalso specifies the same set of three strings in this example. Most formalisms provide the following operations to construct regular expressions. These constructions can be combined to form arbitrarily complex expressions, much like one can construct arithmetical expressions from numbers and the operations +, −, ×, and ÷. The precisesyntaxfor regular expressions varies among tools and with context; more detail is given in§ Syntax. Regular expressions describeregular languagesinformal language theory. They have the same expressive power asregular grammars. Regular expressions consist of constants, which denote sets of strings, and operator symbols, which denote operations over these sets. The following definition is standard, and found as such in most textbooks on formal language theory.[24][25]Given a finitealphabetΣ, the following constants are defined as regular expressions: Given regular expressions R and S, the following operations over them are defined to produce regular expressions: To avoid parentheses, it is assumed that the Kleene star has the highest priority followed by concatenation, then alternation. If there is no ambiguity, then parentheses may be omitted. For example,(ab)ccan be written asabc, anda|(b(c*))can be written asa|bc*. Many textbooks use the symbols ∪, +, or ∨ for alternation instead of the vertical bar. Examples: The formal definition of regular expressions is minimal on purpose, and avoids defining?and+—these can be expressed as follows:a+=aa*, anda?=(a|ε). 
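The sample patterns quoted above can be exercised with Python's re module (one engine among many; the choice of engine is an assumption, not something the text prescribes):

import re

# Sketch using Python's re module to exercise the sample patterns given above.

# Match both spellings of "serialise"/"serialize"
print(re.findall(r"seriali[sz]e", "serialise or serialize"))   # both matches

# Strip leading or trailing spaces/tabs on a line
print(re.sub(r"^[ \t]+|[ \t]+$", "", "   padded line\t "))      # "padded line"

# The "any numeral" pattern from the text
numeral = re.compile(r"[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?")
for s in ["42", "-3.14", ".5", "6.02e23", "abc"]:
    print(s, bool(numeral.fullmatch(s)))

# Alternation and optional characters: H(ä|ae?)ndel
print([s for s in ["Handel", "Händel", "Haendel", "Hndel"]
       if re.fullmatch(r"H(ä|ae?)ndel", s)])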
Sometimes thecomplementoperator is added, to give ageneralized regular expression; hereRcmatches all strings over Σ* that do not matchR. In principle, the complement operator is redundant, because it does not grant any more expressive power. However, it can make a regular expression much more concise—eliminating a single complement operator can cause adouble exponentialblow-up of its length.[26][27][28] Regular expressions in this sense can express the regular languages, exactly the class of languages accepted bydeterministic finite automata. There is, however, a significant difference in compactness. Some classes of regular languages can only be described by deterministic finite automata whose size growsexponentiallyin the size of the shortest equivalent regular expressions. The standard example here is the languagesLkconsisting of all strings over the alphabet {a,b} whosekth-from-last letter equalsa. On the one hand, a regular expression describingL4is given by(a∣b)∗a(a∣b)(a∣b)(a∣b){\displaystyle (a\mid b)^{*}a(a\mid b)(a\mid b)(a\mid b)}. Generalizing this pattern toLkgives the expression: On the other hand, it is known that every deterministic finite automaton accepting the languageLkmust have at least 2kstates. Luckily, there is a simple mapping from regular expressions to the more generalnondeterministic finite automata(NFAs) that does not lead to such a blowup in size; for this reason NFAs are often used as alternative representations of regular languages. NFAs are a simple variation of the type-3grammarsof theChomsky hierarchy.[24] In the opposite direction, there are many languages easily described by a DFA that are not easily described by a regular expression. For instance, determining the validity of a givenISBNrequires computing the modulus of the integer base 11, and can be easily implemented with an 11-state DFA. However, converting it to a regular expression results in a 2,14 megabytes file .[29] Given a regular expression,Thompson's construction algorithmcomputes an equivalent nondeterministic finite automaton. A conversion in the opposite direction is achieved byKleene's algorithm. Finally, many real-world "regular expression" engines implement features that cannot be described by the regular expressions in the sense of formal language theory; rather, they implementregexes. Seebelowfor more on this. As seen in many of the examples above, there is more than one way to construct a regular expression to achieve the same results. It is possible to write analgorithmthat, for two given regular expressions, decides whether the described languages are equal; the algorithm reduces each expression to aminimal deterministic finite state machine, and determines whether they areisomorphic(equivalent). Algebraic laws for regular expressions can be obtained using a method by Gischer which is best explained along an example: In order to check whether (X+Y)∗and (X∗Y∗)∗denote the same regular language, for all regular expressionsX,Y, it is necessary and sufficient to check whether the particular regular expressions (a+b)∗and (a∗b∗)∗denote the same language over the alphabet Σ={a,b}. More generally, an equationE=Fbetween regular-expression terms with variables holds if, and only if, its instantiation with different variables replaced by different symbol constants holds.[30][31] Every regular expression can be written solely in terms of theKleene starandset unionsover finite words. This is a surprisingly difficult problem. 
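To make the Lk example concrete, the following Python sketch builds the regular expression (a|b)*a(a|b){k-1} for a given k and checks it against the direct definition ("the kth-from-last letter is a"):

import re

# Sketch: the language L_k of strings over {a, b} whose k-th symbol from the
# end is 'a', written as the regular expression (a|b)* a (a|b){k-1}.

def Lk_pattern(k):
    return re.compile(rf"(a|b)*a(a|b){{{k - 1}}}")

k = 4
pat = Lk_pattern(k)
for s in ["abbb", "babab", "aaaa", "bbbb", "ab"]:
    expected = len(s) >= k and s[-k] == "a"
    print(s, bool(pat.fullmatch(s)), expected)

The regular expression stays short as k grows, whereas, as noted above, any deterministic finite automaton for the same language needs at least 2^k states.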
As simple as the regular expressions are, there is no method to systematically rewrite them to some normal form. The lack of axiom in the past led to thestar height problem. In 1991,Dexter Kozenaxiomatized regular expressions as aKleene algebra, using equational andHorn clauseaxioms.[32]Already in 1964, Redko had proved that no finite set of purely equational axioms can characterize the algebra of regular languages.[33] A regexpatternmatches a targetstring. The pattern is composed of a sequence ofatoms. An atom is a single point within the regex pattern which it tries to match to the target string. The simplest atom is a literal, but grouping parts of the pattern to match an atom will require using( )as metacharacters. Metacharacters help form:atoms;quantifierstelling how many atoms (and whether it is agreedyquantifieror not); a logical OR character, which offers a set of alternatives, and a logical NOT character, which negates an atom's existence; and backreferences to refer to previous atoms of a completing pattern of atoms. A match is made, not when all the atoms of the string are matched, but rather when all the pattern atoms in the regex have matched. The idea is to make a small pattern of characters stand for a large number of possible strings, rather than compiling a large list of all the literal possibilities. Depending on the regex processor there are about fourteen metacharacters, characters that may or may not have theirliteralcharacter meaning, depending on context, or whether they are "escaped", i.e. preceded by anescape sequence, in this case, the backslash\. Modern and POSIX extended regexes use metacharacters more often than their literal meaning, so to avoid "backslash-osis" orleaning toothpick syndrome, they have a metacharacter escape to a literal mode; starting out, however, they instead have the four bracketing metacharacters( )and{ }be primarily literal, and "escape" this usual meaning to become metacharacters. Common standards implement both. The usual metacharacters are{}[]()^$.|*+?and\. The usual characters that become metacharacters when escaped aredswDSWandN. When entering a regex in a programming language, they may be represented as a usual string literal, hence usually quoted; this is common in C, Java, and Python for instance, where the regexreis entered as"re". However, they are often written with slashes asdelimiters, as in/re/for the regexre. This originates ined, where/is the editor command for searching, and an expression/re/can be used to specify a range of lines (matching the pattern), which can be combined with other commands on either side, most famouslyg/re/pas ingrep("global regex print"), which is included in mostUnix-based operating systems, such asLinuxdistributions. A similar convention is used insed, where search and replace is given bys/re/replacement/and patterns can be joined with a comma to specify a range of lines as in/re1/,/re2/. This notation is particularly well known due to its use inPerl, where it forms part of the syntax distinct from normal string literals. In some cases, such as sed and Perl, alternative delimiters can be used to avoid collision with contents, and to avoid having to escape occurrences of the delimiter character in the contents. For example, in sed the commands,/,X,will replace a/with anX, using commas as delimiters. TheIEEEPOSIXstandard has three sets of compliance:BRE(Basic Regular Expressions),[34]ERE(Extended Regular Expressions), andSRE(Simple Regular Expressions). 
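As a small illustration of delimiters and escaping, the sed command s,/,X, and a literal-text match can be mirrored with Python's re module; the strings are invented examples:

import re

# Counterpart of the sed command  s,/,X,  : replace the first "/" with "X"
# (sed replaces one occurrence per line without the g flag).
assert re.sub(r"/", "X", "a/b/c", count=1) == "aXb/c"

# "." is a metacharacter; escape it (by hand or with re.escape) to match a literal dot.
assert re.search(r"\.", "file.txt")
literal = re.escape("1+1=2")          # escapes the metacharacters in the string
assert re.fullmatch(literal, "1+1=2")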
SRE isdeprecated,[35]in favor of BRE, as both providebackward compatibility. The subsection below covering thecharacter classesapplies to both BRE and ERE. BRE and ERE work together. ERE adds?,+, and|, and it removes the need to escape the metacharacters( )and{ }, which arerequiredin BRE. Furthermore, as long as the POSIX standard syntax for regexes is adhered to, there can be, and often is, additional syntax to serve specific (yet POSIX compliant) applications. Although POSIX.2 leaves some implementation specifics undefined, BRE and ERE provide a "standard" which has since been adopted as the default syntax of many tools, where the choice of BRE or ERE modes is usually a supported option. For example,GNUgrephas the following options: "grep -E" for ERE, and "grep -G" for BRE (the default), and "grep -P" forPerlregexes. Perl regexes have become a de facto standard, having a rich and powerful set of atomic expressions. Perl has no "basic" or "extended" levels. As in POSIX EREs,( )and{ }are treated as metacharacters unless escaped; other metacharacters are known to be literal or symbolic based on context alone. Additional functionality includeslazy matching,backreferences, named capture groups, andrecursivepatterns. In thePOSIXstandard, Basic Regular Syntax (BRE) requires that themetacharacters( )and{ }be designated\(\)and\{\}, whereas Extended Regular Syntax (ERE) does not. The-character is treated as a literal character if it is the last or the first (after the^, if present) character within the brackets:[abc-],[-abc],[^-abc]. Backslash escapes are not allowed. The]character can be included in a bracket expression if it is the first (after the^, if present) character:[]abc],[^]abc]. Examples: According to Russ Cox, the POSIX specification requires ambiguous subexpressions to be handled in a way different from Perl's. The committee replaced Perl's rules with one that is simple to explain, but the new "simple" rules are actually more complex to implement: they were incompatible with pre-existing tooling and made it essentially impossible to define a "lazy match" (see below) extension. As a result, very few programs actually implement the POSIX subexpression rules (even when they implement other parts of the POSIX syntax).[37] The meaning of metacharactersescapedwith a backslash is reversed for some characters in the POSIX Extended Regular Expression (ERE) syntax. With this syntax, a backslash causes the metacharacter to be treated as a literal character. So, for example,\( \)is now( )and\{ \}is now{ }. Additionally, support is removed for\nbackreferences and the following metacharacters are added: Examples: POSIX Extended Regular Expressions can often be used with modern Unix utilities by including thecommand lineflag-E. The character class is the most basic regex concept after a literal match. It makes one small sequence of characters match a larger set of characters. For example,[A-Z]could stand for any uppercase letter in the English alphabet, and\dcould mean any digit. Character classes apply to both POSIX levels. When specifying a range of characters, such as[a-Z](i.e. lowercaseato uppercaseZ), the computer's locale settings determine the contents by the numeric ordering of the character encoding. They could store digits in that sequence, or the ordering could beabc...zABC...Z, oraAbBcC...zZ. So the POSIX standard defines a character class, which will be known by the regex processor installed. 
Those definitions are in the following table: POSIX character classes can only be used within bracket expressions. For example,[[:upper:]ab]matches the uppercase letters and lowercase "a" and "b". An additional non-POSIX class understood by some tools is[:word:], which is usually defined as[:alnum:]plus underscore. This reflects the fact that in many programming languages these are the characters that may be used in identifiers. The editorVimfurther distinguisheswordandword-headclasses (using the notation\wand\h) since in many programming languages the characters that can begin an identifier are not the same as those that can occur in other positions: numbers are generally excluded, so an identifier would look like\h\w*or[[:alpha:]_][[:alnum:]_]*in POSIX notation. Note that what the POSIX regex standards callcharacter classesare commonly referred to asPOSIX character classesin other regex flavors which support them. With most other regex flavors, the termcharacter classis used to describe what POSIX callsbracket expressions. Because of its expressive power and (relative) ease of reading, many other utilities and programming languages have adopted syntax similar toPerl's—for example,Java,JavaScript,Julia,Python,Ruby,Qt, Microsoft's.NET Framework, andXML Schema. Some languages and tools such asBoostandPHPsupport multiple regex flavors. Perl-derivative regex implementations are not identical and usually implement a subset of features found in Perl 5.0, released in 1994. Perl sometimes does incorporate features initially found in other languages. For example, Perl 5.10 implements syntactic extensions originally developed in PCRE and Python.[38] In Python and some other implementations (e.g. Java), the three common quantifiers (*,+and?) aregreedyby default because they match as many characters as possible.[39]The regex".+"(including the double-quotes) applied to the string "Ganymede," he continued, "is the largest moon in the Solar System." matches the entire line (because the entire line begins and ends with a double-quote) instead of matching only the first part,"Ganymede,". The aforementioned quantifiers may, however, be madelazyorminimalorreluctant, matching as few characters as possible, by appending a question mark:".+?"matches only"Ganymede,".[39] In Java and Python 3.11+,[40]quantifiers may be madepossessiveby appending a plus sign, which disables backing off (in a backtracking engine), even if doing so would allow the overall match to succeed:[41]While the regex".*"applied to the string "Ganymede," he continued, "is the largest moon in the Solar System." matches the entire line, the regex".*+"doesnot match at all, because.*+consumes the entire input, including the final". Thus, possessive quantifiers are most useful with negated character classes, e.g."[^"]*+", which matches"Ganymede,"when applied to the same string. Another common extension serving the same function is atomic grouping, which disables backtracking for a parenthesized group. The typical syntax is(?>group). For example, while^(wi|w)i$matches bothwiandwii,^(?>wi|w)i$only matcheswiibecause the engine is forbidden from backtracking and so cannot try setting the group to "w" after matching "wi".[42] Possessive quantifiers are easier to implement than greedy and lazy quantifiers, and are typically more efficient at runtime.[41] IETF RFC 9485 describes "I-Regexp: An Interoperable Regular Expression Format". It specifies a limited subset of regular-expression idioms designed to be interoperable, i.e. 
produce the same effect, in a large number of regular-expression libraries. I-Regexp is also limited to matching, i.e. providing a true or false match between a regular expression and a given piece of text. Thus, it lacks advanced features such as capture groups, lookahead, and backreferences.[43] Many features found in virtually all modern regular expression libraries provide an expressive power that exceeds theregular languages. For example, many implementations allow grouping subexpressions with parentheses and recalling the value they match in the same expression (backreferences). This means that, among other things, a pattern can match strings of repeated words like "papa" or "WikiWiki", calledsquaresin formal language theory. The pattern for these strings is(.+)\1. The language of squares is not regular, nor is itcontext-free, due to thepumping lemma. However,pattern matchingwith an unbounded number of backreferences, as supported by numerous modern tools, is stillcontext sensitive.[44]The general problem of matching any number of backreferences isNP-complete, and the execution time for known algorithms grows exponentially by the number of backreference groups used.[45] However, many tools, libraries, and engines that provide such constructions still use the termregular expressionfor their patterns. This has led to a nomenclature where the term regular expression has different meanings informal language theoryand pattern matching. For this reason, some people have taken to using the termregex,regexp, or simplypatternto describe the latter.Larry Wall, author of the Perl programming language, writes in an essay about the design of Raku: "Regular expressions" […] are only marginally related to real regular expressions. Nevertheless, the term has grown with the capabilities of our pattern matching engines, so I'm not going to try to fight linguistic necessity here. I will, however, generally call them "regexes" (or "regexen", when I'm in an Anglo-Saxon mood).[19] Other features not found in describing regular languages include assertions. These include the ubiquitous^and$, used since at least 1970,[46]as well as some more sophisticated extensions like lookaround that appeared in 1994.[47]Lookarounds define the surrounding of a match and do not spill into the match itself, a feature only relevant for the use case of string searching.[citation needed]Some of them can be simulated in a regular language by treating the surroundings as a part of the language as well.[48] Thelook-ahead assertions(?=...)and(?!...)have been attested since at least 1994, starting with Perl 5.[47]The lookbehind assertions(?<=...)and(?<!...)are attested since 1997 in a commit by Ilya Zakharevich to Perl 5.005.[49] There are at least three differentalgorithmsthat decide whether and how a given regex matches a string. The oldest and fastest relies on a result in formal language theory that allows everynondeterministic finite automaton(NFA) to be transformed into adeterministic finite automaton(DFA). The DFA can be constructed explicitly and then run on the resulting input string one symbol at a time. Constructing the DFA for a regular expression of sizemhas the time and memory cost ofO(2m), but it can be run on a string of sizenin timeO(n). Note that the size of the expression is the size after abbreviations, such as numeric quantifiers, have been expanded. An alternative approach is to simulate the NFA directly, essentially building each DFA state on demand and then discarding it at the next step. 
This keeps the DFA implicit and avoids the exponential construction cost, but running cost rises toO(mn). The explicit approach is called the DFA algorithm and the implicit approach the NFA algorithm. Adding caching to the NFA algorithm is often called the "lazy DFA" algorithm, or just the DFA algorithm without making a distinction. These algorithms are fast, but using them for recalling grouped subexpressions, lazy quantification, and similar features is tricky.[50][51]Modern implementations include the re1-re2-sregex family based on Cox's code. The third algorithm is to match the pattern against the input string bybacktracking. This algorithm is commonly called NFA, but this terminology can be confusing. Its running time can be exponential, which simple implementations exhibit when matching against expressions like(a|aa)*bthat contain both alternation and unbounded quantification and force the algorithm to consider an exponentially increasing number of sub-cases. This behavior can cause a security problem calledRegular expression Denial of Service(ReDoS). Although backtracking implementations only give an exponential guarantee in the worst case, they provide much greater flexibility and expressive power. For example, any implementation which allows the use of backreferences, or implements the various extensions introduced by Perl, must include some kind of backtracking. Some implementations try to provide the best of both algorithms by first running a fast DFA algorithm, and revert to a potentially slower backtracking algorithm only when a backreference is encountered during the match. GNU grep (and the underlying gnulib DFA) uses such a strategy.[52] Sublinear runtime algorithms have been achieved usingBoyer-Moore (BM) based algorithmsand related DFA optimization techniques such as the reverse scan.[53]GNU grep, which supports a wide variety of POSIX syntaxes and extensions, uses BM for a first-pass prefiltering, and then uses an implicit DFA. Wuagrep, which implements approximate matching, combines the prefiltering into the DFA in BDM (backward DAWG matching). NR-grep's BNDM extends the BDM technique with Shift-Or bit-level parallelism.[54] A few theoretical alternatives to backtracking for backreferences exist, and their "exponents" are tamer in that they are only related to the number of backreferences, a fixed property of some regexp languages such as POSIX. One naive method that duplicates a non-backtracking NFA for each backreference note has a complexity of⁠O(n2k+2){\displaystyle {\mathrm {O} }(n^{2k+2})}⁠time and⁠O(n2k+1){\displaystyle {\mathrm {O} }(n^{2k+1})}⁠space for a haystack of length n and k backreferences in the RegExp.[55]A very recent theoretical work based on memory automata gives a tighter bound based on "active" variable nodes used, and a polynomial possibility for some backreferenced regexps.[56] In theoretical terms, any token set can be matched by regular expressions as long as it is pre-defined. In terms of historical implementations, regexes were originally written to useASCIIcharacters as their token set though regex libraries have supported numerous othercharacter sets. Many modern regex engines offer at least some support forUnicode. In most respects it makes no difference what the character set is, but some issues do arise when extending regexes to support Unicode. Mostgeneral-purpose programming languagessupport regex capabilities, either natively or vialibraries. 
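Several of the behaviours discussed above (greedy versus lazy versus possessive quantifiers, backreferences, and the exponential backtracking they can provoke) can be observed directly. The sketch below uses Python's re module; the possessive quantifier and atomic group need Python 3.11 or later, and the timing loop is kept small because its cost grows exponentially:

import re
import time

s = '"Ganymede," he continued, "is the largest moon in the Solar System."'
assert re.search(r'".+"', s).group() == s                 # greedy: first quote to last quote
assert re.search(r'".+?"', s).group() == '"Ganymede,"'    # lazy: as little as possible
assert re.search(r'".*+"', s) is None                     # possessive: cannot back off the final quote
assert re.search(r'"[^"]*+"', s).group() == '"Ganymede,"'
assert re.search(r'^(?>wi|w)i$', "wii")                   # atomic group, as in the example above
assert re.search(r'^(?>wi|w)i$', "wi") is None

# Backreference: (.+)\1 matches "squares" such as "WikiWiki".
assert re.fullmatch(r"(.+)\1", "WikiWiki")
assert re.fullmatch(r"(.+)\1", "papa")
assert not re.fullmatch(r"(.+)\1", "paper")

# Catastrophic backtracking: (a|aa)*b against strings of a's with no final b.
for n in (18, 22, 26):
    start = time.perf_counter()
    re.match(r"(a|aa)*b", "a" * n)                        # fails only after exponentially many attempts
    print(n, round(time.perf_counter() - start, 4))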
Regexes are useful in a wide variety of text processing tasks, and more generally string processing, where the data need not be textual. Common applications include data validation, data scraping (especially web scraping), data wrangling, simple parsing, the production of syntax highlighting systems, and many other tasks. Some high-end desktop publishing software has the ability to use regexes to automatically apply text styling, saving the person doing the layout from laboriously doing this by hand for anything that can be matched by a regex. For example, by defining a character style that makes text into small caps and then using the regex [A-Z]{4,} to apply that style, any word of four or more consecutive capital letters will be automatically rendered as small caps instead. While regexes would be useful on Internet search engines, processing them across the entire database could consume excessive computer resources depending on the complexity and design of the regex. Although in many cases system administrators can run regex-based queries internally, most search engines do not offer regex support to the public. Notable exceptions include Google Code Search and Exalead. However, Google Code Search was shut down in January 2012.[59] The specific syntax rules vary depending on the specific implementation, programming language, or library in use. Additionally, the functionality of regex implementations can vary between versions. Because regexes can be difficult to both explain and understand without examples, interactive websites for testing regexes are a useful resource for learning regexes by experimentation. This section provides a basic description of some of the properties of regexes by way of illustration. The following conventions are used in the examples.[60] These regexes are all Perl-like syntax. Standard POSIX regular expressions are different. Unless otherwise indicated, the following examples conform to the Perl programming language, release 5.8.8, January 31, 2006. This means that other implementations may lack support for some parts of the syntax shown here (e.g. basic vs. extended regex, \( \) vs. (), or lack of \d instead of POSIX [:digit:]). The syntax and conventions used in these examples coincide with that of other programming environments as well.[61] The original worked example patterns and their outputs are not reproduced here; among them were a word-boundary pattern (^\w|\w$|\W\w|\w\W) and Unicode-aware patterns,[58] in which the Alphabetic property contains more than Latin letters and the Decimal_Number property contains more than Arabic digits. Regular expressions can often be created ("induced" or "learned") based on a set of example strings. This is known as the induction of regular languages and is part of the general problem of grammar induction in computational learning theory. Formally, given examples of strings in a regular language, and perhaps also given examples of strings not in that regular language, it is possible to induce a grammar for the language, i.e., a regular expression that generates that language. Not all regular languages can be induced in this way (see language identification in the limit), but many can. For example, the set of examples {1, 10, 100}, and negative set (of counterexamples) {11, 1001, 101, 0} can be used to induce the regular expression 1⋅0* (1 followed by zero or more 0s).
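Two of the uses just described are easy to reproduce: the small-caps styling rule (here a pair of hypothetical <sc> tags stands in for the character style) and a check that the induced expression 1⋅0* separates the positive from the negative examples. A sketch in Python:

import re

# Stand-in for applying a "small caps" character style to runs of 4+ capitals.
text = "The NASA and ESA probes reached orbit."
assert re.sub(r"[A-Z]{4,}", lambda m: "<sc>" + m.group(0) + "</sc>", text) == \
    "The <sc>NASA</sc> and ESA probes reached orbit."

# The induced expression 1.0* accepts the examples and rejects the counterexamples.
induced = re.compile(r"10*")
assert all(induced.fullmatch(s) for s in ["1", "10", "100"])
assert not any(induced.fullmatch(s) for s in ["11", "1001", "101", "0"])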
https://en.wikipedia.org/wiki/Regular_expression
A transducer is a device that converts energy from one form to another. Usually a transducer converts a signal in one form of energy to a signal in another.[1] Transducers are often employed at the boundaries of automation, measurement, and control systems, where electrical signals are converted to and from other physical quantities (energy, force, torque, light, motion, position, etc.). The process of converting one form of energy to another is known as transduction.[2] Transducers can be categorized by the direction in which information passes through them: Passive transducers require an external power source to operate, which is called an excitation signal. The signal is modulated by the sensor to produce an output signal. For example, a thermistor does not generate any electrical signal, but by passing an electric current through it, its resistance can be measured by detecting variations in the current or voltage across the thermistor.[5][2] Active transducers, in contrast, generate an electric current in response to an external stimulus, which serves as the output signal without the need for an additional energy source. Examples include the photodiode, the piezoelectric sensor, the photovoltaic cell, and the thermocouple.[5] A number of specifications are used to rate transducers. Electromechanical input devices feed meters and sensors, while electromechanical output devices are generically called actuators.
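The thermistor example above (a passive transducer that needs an excitation signal) can be made concrete with a voltage divider; the supply voltage and the fixed resistor value below are assumed purely for illustration:

V_EXCITATION = 5.0   # assumed excitation signal, volts
R_FIXED = 10_000.0   # assumed fixed divider resistor, ohms

def thermistor_resistance(v_measured):
    # Divider with the thermistor on the low side:
    # v_measured = V_EXCITATION * R_therm / (R_FIXED + R_therm)
    return R_FIXED * v_measured / (V_EXCITATION - v_measured)

print(thermistor_resistance(2.5))   # 10000.0 ohms: divider balanced
print(thermistor_resistance(1.0))   # 2500.0 ohms: lower resistance, e.g. a hotter NTC thermistor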
https://en.wikipedia.org/wiki/Transducer
Intheoretical linguisticsandcomputational linguistics,probabilistic context free grammars(PCFGs) extendcontext-free grammars, similar to howhidden Markov modelsextendregular grammars. Eachproductionis assigned a probability. The probability of a derivation (parse) is the product of the probabilities of the productions used in that derivation. These probabilities can be viewed as parameters of the model, and for large problems it is convenient to learn these parameters viamachine learning. A probabilistic grammar's validity is constrained by context of its training dataset. PCFGs originated fromgrammar theory, and have application in areas as diverse asnatural language processingto the study the structure ofRNAmolecules and design ofprogramming languages. Designing efficient PCFGs has to weigh factors of scalability and generality. Issues such as grammar ambiguity must be resolved. The grammar design affects results accuracy. Grammar parsing algorithms have various time and memory requirements. Derivation:The process of recursive generation of strings from a grammar. Parsing:Finding a valid derivation using an automaton. Parse Tree:The alignment of the grammar to a sequence. An example of a parser for PCFG grammars is thepushdown automaton. The algorithm parses grammar nonterminals from left to right in astack-likemanner. Thisbrute-forceapproach is not very efficient. In RNA secondary structure prediction variants of theCocke–Younger–Kasami (CYK) algorithmprovide more efficient alternatives to grammar parsing than pushdown automata.[1]Another example of a PCFG parser is the Stanford Statistical Parser which has been trained usingTreebank.[2] Similar to aCFG, a probabilistic context-free grammarGcan be defined by a quintuple: where PCFGs models extendcontext-free grammarsthe same way ashidden Markov modelsextendregular grammars. TheInside-Outside algorithmis an analogue of theForward-Backward algorithm. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. This is equivalent to the probability of the PCFG generating the sequence, and is intuitively a measure of how consistent the sequence is with the given grammar. The Inside-Outside algorithm is used in modelparametrizationto estimate prior frequencies observed from training sequences in the case of RNAs. Dynamic programmingvariants of theCYK algorithmfind theViterbi parseof a RNA sequence for a PCFG model. This parse is the most likely derivation of the sequence by the given PCFG. Context-free grammars are represented as a set of rules inspired from attempts to model natural languages.[3][4][5]The rules are absolute and have a typical syntax representation known asBackus–Naur form. The production rules consist of terminal{a,b}{\displaystyle \left\{a,b\right\}}and non-terminalSsymbols and a blankϵ{\displaystyle \epsilon }may also be used as an end point. In the production rules of CFG and PCFG the left side has only one nonterminal whereas the right side can be any string of terminal or nonterminals. In PCFG nulls are excluded.[1]An example of a grammar: This grammar can be shortened using the '|' ('or') character into: Terminals in a grammar are words and through the grammar rules a non-terminal symbol is transformed into a string of either terminals and/or non-terminals. The above grammar is read as "beginning from a non-terminalSthe emission can generate eitheraorborϵ{\displaystyle \epsilon }". 
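Using the S → aS | bS | ε grammar above, the probability of a parse is simply the product of the probabilities of the rules used in its derivation; the rule probabilities below are invented for illustration:

# Toy PCFG for  S -> a S | b S | epsilon , with assumed rule probabilities summing to 1.
RULES = {
    ("S", ("a", "S")): 0.3,
    ("S", ("b", "S")): 0.5,
    ("S", ()):         0.2,   # S -> epsilon
}

def derivation_probability(string):
    # Every string over {a, b} has exactly one derivation in this grammar:
    # e.g. "ab" has  S => aS => abS => ab , using three rules.
    p = 1.0
    for ch in string:
        p *= RULES[("S", (ch, "S"))]
    return p * RULES[("S", ())]

print(round(derivation_probability("ab"), 6))   # 0.3 * 0.5 * 0.2 = 0.03
print(derivation_probability(""))               # 0.2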
Its derivation is: Ambiguous grammarmay result in ambiguous parsing if applied onhomographssince the same word sequence can have more than one interpretation.Pun sentencessuch as the newspaper headline "Iraqi Head Seeks Arms" are an example of ambiguous parses. One strategy of dealing with ambiguous parses (originating with grammarians as early asPāṇini) is to add yet more rules, or prioritize them so that one rule takes precedence over others. This, however, has the drawback of proliferating the rules, often to the point where they become difficult to manage. Another difficulty is overgeneration, where unlicensed structures are also generated. Probabilistic grammars circumvent these problems by ranking various productions on frequency weights, resulting in a "most likely" (winner-take-all) interpretation. As usage patterns are altered indiachronicshifts, these probabilistic rules can be re-learned, thus updating the grammar. Assigning probability to production rules makes a PCFG. These probabilities are informed by observing distributions on a training set of similar composition to the language to be modeled. On most samples of broad language, probabilistic grammars where probabilities are estimated from data typically outperform hand-crafted grammars. CFGs when contrasted with PCFGs are not applicable to RNA structure prediction because while they incorporate sequence-structure relationship they lack the scoring metrics that reveal a sequence structural potential[6] Aweighted context-free grammar(WCFG) is a more general category ofcontext-free grammar, where each production has a numeric weight associated with it. The weight of a specificparse treein a WCFG is the product[7](or sum[8]) of all rule weights in the tree. Each rule weight is included as often as the rule is used in the tree. A special case of WCFGs are PCFGs, where the weights are (logarithmsof[9][10])probabilities. An extended version of theCYK algorithmcan be used to find the "lightest" (least-weight) derivation of a string given some WCFG. When the tree weight is the product of the rule weights, WCFGs and PCFGs can express the same set ofprobability distributions.[7] Since the 1990s, PCFG has been applied to modelRNA structures.[11][12][13][14][15] Energy minimization[16][17]and PCFG provide ways of predicting RNA secondary structure with comparable performance.[11][12][1]However structure prediction by PCFGs is scored probabilistically rather than by minimum free energy calculation. PCFG model parameters are directly derived from frequencies of different features observed in databases of RNA structures[6]rather than by experimental determination as is the case with energy minimization methods.[18][19] The types of various structure that can be modeled by a PCFG include long range interactions, pairwise structure and other nested structures. However, pseudoknots can not be modeled.[11][12][1]PCFGs extend CFG by assigning probabilities to each production rule. A maximum probability parse tree from the grammar implies a maximum probability structure. Since RNAs preserve their structures over their primary sequence, RNA structure prediction can be guided by combining evolutionary information from comparative sequence analysis with biophysical knowledge about a structure plausibility based on such probabilities. Also search results for structural homologs using PCFG rules are scored according to PCFG derivations probabilities. 
Therefore, building grammar to model the behavior of base-pairs and single-stranded regions starts with exploring features of structuralmultiple sequence alignmentof related RNAs.[1] The above grammar generates a string in an outside-in fashion, that is the basepair on the furthest extremes of the terminal is derived first. So a string such asaabaabaa{\displaystyle aabaabaa}is derived by first generating the distala's on both sides before moving inwards: A PCFG model extendibility allows constraining structure prediction by incorporating expectations about different features of an RNA . Such expectation may reflect for example the propensity for assuming a certain structure by an RNA.[6]However incorporation of too much information may increase PCFG space and memory complexity and it is desirable that a PCFG-based model be as simple as possible.[6][20] Every possible stringxa grammar generates is assigned a probability weightP(x|θ){\displaystyle P(x|\theta )}given the PCFG modelθ{\displaystyle \theta }. It follows that the sum of all probabilities to all possible grammar productions is∑xP(x|θ)=1{\displaystyle \sum _{\text{x}}P(x|\theta )=1}. The scores for each paired and unpaired residue explain likelihood for secondary structure formations. Production rules also allow scoring loop lengths as well as the order of base pair stacking hence it is possible to explore the range of all possible generations including suboptimal structures from the grammar and accept or reject structures based on score thresholds.[1][6] RNA secondary structure implementations based on PCFG approaches can be utilized in : Different implementation of these approaches exist. For example, Pfold is used in secondary structure prediction from a group of related RNA sequences,[20]covariance models are used in searching databases for homologous sequences and RNA annotation and classification,[11][24]RNApromo, CMFinder and TEISER are used in finding stable structural motifs in RNAs.[25][26][27] PCFG design impacts the secondary structure prediction accuracy. Any useful structure prediction probabilistic model based on PCFG has to maintain simplicity without much compromise to prediction accuracy. Too complex a model of excellent performance on a single sequence may not scale.[1]A grammar based model should be able to: The resulting of multipleparse treesper grammar denotes grammar ambiguity. This may be useful in revealing all possible base-pair structures for a grammar. However an optimal structure is the one where there is one and only one correspondence between the parse tree and the secondary structure. Two types of ambiguities can be distinguished. Parse tree ambiguity and structural ambiguity. Structural ambiguity does not affect thermodynamic approaches as the optimal structure selection is always on the basis of lowest free energy scores.[6]Parse tree ambiguity concerns the existence of multiple parse trees per sequence. Such an ambiguity can reveal all possible base-paired structures for the sequence by generating all possible parse trees then finding the optimal one.[28][29][30]In the case of structural ambiguity multiple parse trees describe the same secondary structure. This obscures the CYK algorithm decision on finding an optimal structure as the correspondence between the parse tree and the structure is not unique.[31]Grammar ambiguity can be checked for by the conditional-inside algorithm.[1][6] A probabilistic context free grammar consists of terminal and nonterminal variables. 
Each feature to be modeled has a production rule that is assigned a probability estimated from a training set of RNA structures. Production rules are recursively applied until only terminal residues are left. A starting non-terminalS{\displaystyle \mathbf {\mathit {S}} }produces loops. The rest of the grammar proceeds with parameterL{\displaystyle \mathbf {\mathit {L}} }that decide whether a loop is a start of a stem or a single stranded regionsand parameterF{\displaystyle \mathbf {\mathit {F}} }that produces paired bases. The formalism of this simple PCFG looks like: The application of PCFGs in predicting structures is a multi-step process. In addition, the PCFG itself can be incorporated into probabilistic models that consider RNA evolutionary history or search homologous sequences in databases. In an evolutionary history context inclusion of prior distributions of RNA structures of astructural alignmentin the production rules of the PCFG facilitates good prediction accuracy.[21] A summary of general steps for utilizing PCFGs in various scenarios: Several algorithms dealing with aspects of PCFG based probabilistic models in RNA structure prediction exist. For instance the inside-outside algorithm and the CYK algorithm. The inside-outside algorithm is a recursive dynamic programming scoring algorithm that can followexpectation-maximizationparadigms. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. The inside part scores the subtrees from a parse tree and therefore subsequences probabilities given an PCFG. The outside part scores the probability of the complete parse tree for a full sequence.[32][33]CYK modifies the inside-outside scoring. Note that the term 'CYK algorithm' describes the CYK variant of the inside algorithm that finds an optimal parse tree for a sequence using a PCFG. It extends the actualCYK algorithmused in non-probabilistic CFGs.[1] The inside algorithm calculatesα(i,j,v){\displaystyle \alpha (i,j,v)}probabilities for alli,j,v{\displaystyle i,j,v}of a parse subtree rooted atWv{\displaystyle W_{v}}for subsequencexi,...,xj{\displaystyle x_{i},...,x_{j}}. Outside algorithm calculatesβ(i,j,v){\displaystyle \beta (i,j,v)}probabilities of a complete parse tree for sequencexfrom root excluding the calculation ofxi,...,xj{\displaystyle x_{i},...,x_{j}}. The variablesαandβrefine the estimation of probability parameters of an PCFG. It is possible to reestimate the PCFG algorithm by finding the expected number of times a state is used in a derivation through summing all the products ofαandβdivided by the probability for a sequencexgiven the modelP(x|θ){\displaystyle P(x|\theta )}. It is also possible to find the expected number of times a production rule is used by an expectation-maximization that utilizes the values ofαandβ.[32][33]The CYK algorithm calculatesγ(i,j,v){\displaystyle \gamma (i,j,v)}to find the most probable parse treeπ^{\displaystyle {\hat {\pi }}}and yieldslog⁡P(x,π^|θ){\displaystyle \log P(x,{\hat {\pi }}|\theta )}.[1] Memory and time complexity for general PCFG algorithms in RNA structure predictions areO(L2M){\displaystyle O(L^{2}M)}andO(L3M3){\displaystyle O(L^{3}M^{3})}respectively. Restricting a PCFG may alter this requirement as is the case with database searches methods. Covariance models (CMs) are a special type of PCFGs with applications in database searches for homologs, annotation and RNA classification. 
Through CMs it is possible to build PCFG-based RNA profiles where related RNAs can be represented by a consensus secondary structure.[11][12]The RNA analysis package Infernal uses such profiles in inference of RNA alignments.[34]The Rfam database also uses CMs in classifying RNAs into families based on their structure and sequence information.[24] CMs are designed from a consensus RNA structure. A CM allowsindelsof unlimited length in the alignment. Terminals constitute states in the CM and the transition probabilities between the states is 1 if no indels are considered.[1]Grammars in a CM are as follows: The model has 6 possible states and each state grammar includes different types of secondary structure probabilities of the non-terminals. The states are connected by transitions. Ideally current node states connect to all insert states and subsequent node states connect to non-insert states. In order to allow insertion of more than one base insert states connect to themselves.[1] In order to score a CM model the inside-outside algorithms are used. CMs use a slightly different implementation of CYK. Log-odds emission scores for the optimum parse tree -log⁡e^{\displaystyle \log {\hat {e}}}- are calculated out of the emitting statesP,L,R{\displaystyle P,~L,~R}. Since these scores are a function of sequence length a more discriminative measure to recover an optimum parse tree probability score-log⁡P(x,π^|θ){\displaystyle \log {\text{P}}(x,{\hat {\pi }}|\theta )}- is reached by limiting the maximum length of the sequence to be aligned and calculating the log-odds relative to a null. The computation time of this step is linear to the database size and the algorithm has a memory complexity ofO(MaD+MbD2){\displaystyle O(M_{a}D+M_{b}D^{2})}.[1] The KH-99 algorithm by Knudsen and Hein lays the basis of the Pfold approach to predicting RNA secondary structure.[20]In this approach the parameterization requires evolutionary history information derived from an alignment tree in addition to probabilities of columns and mutations. The grammar probabilities are observed from a training dataset. In a structural alignment the probabilities of the unpaired bases columns and the paired bases columns are independent of other columns. By counting bases in single base positions and paired positions one obtains the frequencies of bases in loops and stems. For basepairXandYan occurrence ofXY{\displaystyle XY}is also counted as an occurrence ofYX{\displaystyle YX}. Identical basepairs such asXX{\displaystyle XX}are counted twice. By pairing sequences in all possible ways overall mutation rates are estimated. In order to recover plausible mutations a sequence identity threshold should be used so that the comparison is between similar sequences. This approach uses 85% identity threshold between pairing sequences. First single base positions differences -except for gapped columns- between sequence pairs are counted such that if the same position in two sequences had different basesX, Ythe count of the difference is incremented for each sequence. 
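The counting step just described can be sketched on a toy structural alignment; the sequences and the column annotation are invented, and the 85% identity filtering is omitted for brevity:

from collections import Counter

alignment = ["GAAC", "GACC", "AAAU"]   # assumed toy alignment
paired_cols = [(0, 3)]                 # columns annotated as a base pair
single_cols = [1, 2]                   # single-stranded columns

pair_counts = Counter()
for seq in alignment:
    for i, j in paired_cols:
        x, y = seq[i], seq[j]
        pair_counts[x + y] += 1        # an occurrence of XY ...
        pair_counts[y + x] += 1        # ... is also counted as YX (so XX is counted twice)

single_counts = Counter(seq[i] for seq in alignment for i in single_cols)

total = sum(pair_counts.values())
print({bp: round(c / total, 3) for bp, c in pair_counts.items()})  # GC, CG: 1/3 each; AU, UA: 1/6 each
print(single_counts)                                               # Counter({'A': 5, 'C': 1})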
For unpaired bases a 4 X 4 mutation rate matrix is used that satisfies that the mutation flow from X to Y is reversible:[35] For basepairs a 16 X 16 rate distribution matrix is similarly generated.[36][37]The PCFG is used to predict the prior probability distribution of the structure whereas posterior probabilities are estimated by the inside-outside algorithm and the most likely structure is found by the CYK algorithm.[20] After calculating the column prior probabilities the alignment probability is estimated by summing over all possible secondary structures. Any columnCin a secondary structureσ{\displaystyle \sigma }for a sequenceDof lengthlsuch thatD=(C1,C2,...Cl){\displaystyle D=(C_{1},~C_{2},...C_{l})}can be scored with respect to the alignment treeTand the mutational modelM. The prior distribution given by the PCFG isP(σ|M){\displaystyle P(\sigma |M)}. The phylogenetic tree,Tcan be calculated from the model by maximum likelihood estimation. Note that gaps are treated as unknown bases and the summation can be done throughdynamic programming.[38] Each structure in the grammar is assigned production probabilities devised from the structures of the training dataset. These prior probabilities give weight to predictions accuracy.[21][32][33]The number of times each rule is used depends on the observations from the training dataset for that particular grammar feature. These probabilities are written in parentheses in the grammar formalism and each rule will have a total of 100%.[20]For instance: Given the prior alignment frequencies of the data the most likely structure from the ensemble predicted by the grammar can then be computed by maximizingP(σ|D,T,M){\displaystyle P(\sigma |D,T,M)}through the CYK algorithm. The structure with the highest predicted number of correct predictions is reported as the consensus structure.[20] PCFG based approaches are desired to be scalable and general enough. Compromising speed for accuracy needs to as minimal as possible. Pfold addresses the limitations of the KH-99 algorithm with respect to scalability, gaps, speed and accuracy.[20] Whereas PCFGs have proved powerful tools for predicting RNA secondary structure, usage in the field of protein sequence analysis has been limited. Indeed, the size of theamino acidalphabet and the variety of interactions seen in proteins make grammar inference much more challenging.[39]As a consequence, most applications offormal language theoryto protein analysis have been mainly restricted to the production of grammars of lower expressive power to model simple functional patterns based on local interactions.[40][41]Since protein structures commonly display higher-order dependencies including nested and crossing relationships, they clearly exceed the capabilities of any CFG.[39]Still, development of PCFGs allows expressing some of those dependencies and providing the ability to model a wider range of protein patterns.
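A minimal sketch of the CYK (Viterbi) step discussed in the preceding paragraphs, for a PCFG in Chomsky normal form over words rather than RNA bases; the grammar, probabilities, and sentence are invented for illustration:

# best[i][j][A] = highest probability of deriving words i..j (inclusive) from nonterminal A.
LEXICAL = {("N", "fish"): 0.5, ("N", "people"): 0.5, ("V", "fish"): 1.0}   # A -> terminal
BINARY = {("S", "N", "VP"): 1.0, ("VP", "V", "N"): 1.0}                    # A -> B C

def viterbi_cyk(words, start="S"):
    n = len(words)
    best = [[{} for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        for (A, term), p in LEXICAL.items():
            if term == w:
                best[i][i][A] = max(best[i][i].get(A, 0.0), p)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                     # split point between B and C
                for (A, B, C), p in BINARY.items():
                    pb, pc = best[i][k].get(B, 0.0), best[k + 1][j].get(C, 0.0)
                    if pb and pc:
                        best[i][j][A] = max(best[i][j].get(A, 0.0), p * pb * pc)
    return best[0][n - 1].get(start, 0.0)

print(viterbi_cyk("people fish fish".split()))   # 0.25: rule and lexical probabilities multiplied along the best parse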
https://en.wikipedia.org/wiki/Probabilistic_parsing
Parsing,syntax analysis, orsyntactic analysisis a process of analyzing astringofsymbols, either innatural language,computer languagesordata structures, conforming to the rules of aformal grammarby breaking it into parts. The termparsingcomes from Latinpars(orationis), meaningpart (of speech).[1] The term has slightly different meanings in different branches oflinguisticsandcomputer science. Traditionalsentenceparsing is often performed as a method of understanding the exact meaning of a sentence or word, sometimes with the aid of devices such assentence diagrams. It usually emphasizes the importance of grammatical divisions such assubjectandpredicate. Withincomputational linguisticsthe term is used to refer to the formal analysis by a computer of a sentence or other string of words into its constituents, resulting in aparse treeshowing their syntactic relation to each other, which may also containsemanticinformation.[citation needed]Some parsing algorithms generate aparse forestor list of parse trees from a string that issyntactically ambiguous.[2] The term is also used inpsycholinguisticswhen describing language comprehension. In this context, parsing refers to the way that human beings analyze a sentence or phrase (in spoken language or text) "in terms of grammatical constituents, identifying the parts of speech, syntactic relations, etc."[1]This term is especially common when discussing which linguistic cues help speakers interpretgarden-path sentences. Within computer science, the term is used in the analysis ofcomputer languages, referring to the syntactic analysis of the input code into its component parts in order to facilitate the writing ofcompilersandinterpreters. The term may also be used to describe a split or separation. In data analysis, the term is often used to refer to a process extracting desired information from data, e.g., creating atime seriessignal from aXMLdocument. The traditional grammatical exercise of parsing, sometimes known asclause analysis, involves breaking down a text into its componentparts of speechwith an explanation of the form, function, and syntactic relationship of each part.[3]This is determined in large part from study of the language'sconjugationsanddeclensions, which can be quite intricate for heavilyinflectedlanguages. To parse a phrase such as "man bites dog" involves noting that thesingularnoun "man" is thesubjectof the sentence, the verb "bites" is thethird person singularof thepresent tenseof the verb "to bite", and the singular noun "dog" is theobjectof the sentence. Techniques such assentence diagramsare sometimes used to indicate relation between elements in the sentence. Parsing was formerly central to the teaching of grammar throughout the English-speaking world, and widely regarded as basic to the use and understanding of written language.[citation needed] In somemachine translationandnatural language processingsystems, written texts in human languages are parsed by computer programs.[4]Human sentences are not easily parsed by programs, as there is substantialambiguityin the structure of human language, whose usage is to convey meaning (orsemantics) amongst a potentially unlimited range of possibilities, but only some of which are germane to the particular case.[5]So an utterance "Man bites dog" versus "Dog bites man" is definite on one detail but in another language might appear as "Man dog bites" with a reliance on the larger context to distinguish between those two possibilities, if indeed that difference was of concern. 
It is difficult to prepare formal rules to describe informal behaviour even though it is clear that some rules are being followed.[citation needed] In order to parse natural language data, researchers must first agree on thegrammarto be used. The choice of syntax is affected by bothlinguisticand computational concerns; for instance some parsing systems uselexical functional grammar, but in general, parsing for grammars of this type is known to beNP-complete.Head-driven phrase structure grammaris another linguistic formalism which has been popular in the parsing community, but other research efforts have focused on less complex formalisms such as the one used in the PennTreebank.Shallow parsingaims to find only the boundaries of major constituents such as noun phrases. Another popular strategy for avoiding linguistic controversy isdependency grammarparsing. Most modern parsers are at least partly statistical; that is, they rely on acorpusof training data which has already been annotated (parsed by hand). This approach allows the system to gather information about the frequency with which various constructions occur in specific contexts.(Seemachine learning.)Approaches which have been used include straightforwardPCFGs(probabilistic context-free grammars),[6]maximum entropy,[7]andneural nets.[8]Most of the more successful systems uselexicalstatistics (that is, they consider the identities of the words involved, as well as theirpart of speech). However such systems are vulnerable tooverfittingand require some kind ofsmoothingto be effective.[citation needed] Parsing algorithms for natural language cannot rely on the grammar having 'nice' properties as with manually designed grammars for programming languages. As mentioned earlier some grammar formalisms are very difficult to parse computationally; in general, even if the desired structure is notcontext-free, some kind of context-free approximation to the grammar is used to perform a first pass. Algorithms which use context-free grammars often rely on some variant of theCYK algorithm, usually with someheuristicto prune away unlikely analyses to save time.(Seechart parsing.)However some systems trade speed for accuracy using, e.g., linear-time versions of theshift-reducealgorithm. A somewhat recent development has beenparse rerankingin which the parser proposes some large number of analyses, and a more complex system selects the best option.[citation needed]Innatural language understandingapplications,semantic parsersconvert the text into a representation of its meaning.[9] Inpsycholinguistics, parsing involves not just the assignment of words to categories (formation of ontological insights), but the evaluation of the meaning of a sentence according to the rules of syntax drawn by inferences made from each word in the sentence (known asconnotation). This normally occurs as words are being heard or read. Neurolinguistics generally understands parsing to be a function of working memory, meaning that parsing is used to keep several parts of one sentence at play in the mind at one time, all readily accessible to be analyzed as needed. Because the human working memory has limitations, so does the function of sentence parsing.[10]This is evidenced by several different types of syntactically complex sentences that propose potentially issues for mental parsing of sentences. The first, and perhaps most well-known, type of sentence that challenges parsing ability is the garden-path sentence. 
These sentences are designed so that the most common interpretation of the sentence appears grammatically faulty, but upon further inspection, these sentences are grammatically sound. Garden-path sentences are difficult to parse because they contain a phrase or a word with more than one meaning, often their most typical meaning being a different part of speech.[11]For example, in the sentence, "the horse raced past the barn fell", raced is initially interpreted as a past tense verb, but in this sentence, it functions as part of an adjective phrase.[12]Since parsing is used to identify parts of speech, these sentences challenge the parsing ability of the reader. Another type of sentence that is difficult to parse is an attachment ambiguity, which includes a phrase that could potentially modify different parts of a sentence, and therefore presents a challenge in identifying syntactic relationship (i.e. "The boy saw the lady with the telescope", in which the ambiguous phrase with the telescope could modify the boy saw or the lady.)[11] A third type of sentence that challenges parsing ability is center embedding, in which phrases are placed in the center of other similarly formed phrases (i.e. "The rat the cat the man hit chased ran into the trap".) Sentences with 2 or in the most extreme cases 3 center embeddings are challenging for mental parsing, again because of ambiguity of syntactic relationship.[13] Within neurolinguistics there are multiple theories that aim to describe how parsing takes place in the brain. One such model is a more traditional generative model of sentence processing, which theorizes that within the brain there is a distinct module designed for sentence parsing, which is preceded by access to lexical recognition and retrieval, and then followed by syntactic processing that considers a single syntactic result of the parsing, only returning to revise that syntactic interpretation if a potential problem is detected.[14]The opposing, more contemporary model theorizes that within the mind, the processing of a sentence is not modular, or happening in strict sequence. Rather, it poses that several different syntactic possibilities can be considered at the same time, because lexical access, syntactic processing, and determination of meaning occur in parallel in the brain. In this way these processes are integrated.[15] Although there is still much to learn about the neurology of parsing, studies have shown evidence that several areas of the brain might play a role in parsing. These include the left anterior temporal pole, the left inferior frontal gyrus, the left superior temporal gyrus, the left superior frontal gyrus, the right posterior cingulate cortex, and the left angular gyrus. Although it has not been absolutely proven, it has been suggested that these different structures might favor either phrase-structure parsing or dependency-structure parsing, meaning different types of parsing could be processed in different ways which have yet to be understood.[16] Discourse analysisexamines ways to analyze language use and semiotic events. Persuasive language may be calledrhetoric. Aparseris a software component that takes input data (typically text) and builds adata structure– often some kind ofparse tree,abstract syntax treeor other hierarchical structure, giving a structural representation of the input while checking for correct syntax. The parsing may be preceded or followed by other steps, or these may be combined into a single step. 
The parser is often preceded by a separatelexical analyser, which creates tokens from the sequence of input characters; alternatively, these can be combined inscannerless parsing. Parsers may be programmed by hand or may be automatically or semi-automatically generated by aparser generator. Parsing is complementary totemplating, which produces formattedoutput.These may be applied to different domains, but often appear together, such as thescanf/printfpair, or the input (front end parsing) and output (back end code generation) stages of acompiler. The input to a parser is typically text in somecomputer language, but may also be text in a natural language or less structured textual data, in which case generally only certain parts of the text are extracted, rather than a parse tree being constructed. Parsers range from very simple functions such asscanf, to complex programs such as the frontend of aC++ compileror theHTMLparser of aweb browser. An important class of simple parsing is done usingregular expressions, in which a group of regular expressions defines aregular languageand a regular expression engine automatically generating a parser for that language, allowingpattern matchingand extraction of text. In other contexts regular expressions are instead used prior to parsing, as the lexing step whose output is then used by the parser. The use of parsers varies by input. In the case of data languages, a parser is often found as the file reading facility of a program, such as reading in HTML orXMLtext; these examples aremarkup languages. In the case ofprogramming languages, a parser is a component of acompilerorinterpreter, which parses thesource codeof acomputer programming languageto create some form of internal representation; the parser is a key step in thecompiler frontend. Programming languages tend to be specified in terms of adeterministic context-free grammarbecause fast and efficient parsers can be written for them. For compilers, the parsing itself can be done in one pass or multiple passes – seeone-pass compilerandmulti-pass compiler. The implied disadvantages of a one-pass compiler can largely be overcome by addingfix-ups, where provision is made for code relocation during the forward pass, and the fix-ups are applied backwards when the current program segment has been recognized as having been completed. An example where such a fix-up mechanism would be useful would be a forward GOTO statement, where the target of the GOTO is unknown until the program segment is completed. In this case, the application of the fix-up would be delayed until the target of the GOTO was recognized. Conversely, a backward GOTO does not require a fix-up, as the location will already be known. Context-free grammars are limited in the extent to which they can express all of the requirements of a language. Informally, the reason is that the memory of such a language is limited. The grammar cannot remember the presence of a construct over an arbitrarily long input; this is necessary for a language in which, for example, a name must be declared before it may be referenced. More powerful grammars that can express this constraint, however, cannot be parsed efficiently. Thus, it is a common strategy to create a relaxed parser for a context-free grammar which accepts a superset of the desired language constructs (that is, it accepts some invalid constructs); later, the unwanted constructs can be filtered out at thesemantic analysis(contextual analysis) step. 
For example, inPythonthe following is syntactically valid code: The following code, however, is syntactically valid in terms of the context-free grammar, yielding a syntax tree with the same structure as the previous, but violates the semantic rule requiring variables to be initialized before use: The following example demonstrates the common case of parsing a computer language with two levels of grammar: lexical and syntactic. The first stage is the token generation, orlexical analysis, by which the input character stream is split into meaningful symbols defined by a grammar ofregular expressions. For example, a calculator program would look at an input such as "12 * (3 + 4)^2" and split it into the tokens12,*,(,3,+,4,),^,2, each of which is a meaningful symbol in the context of an arithmetic expression. The lexer would contain rules to tell it that the characters*,+,^,(and)mark the start of a new token, so meaningless tokens like "12*" or "(3" will not be generated. The next stage is parsing or syntactic analysis, which is checking that the tokens form an allowable expression. This is usually done with reference to acontext-free grammarwhich recursively defines components that can make up an expression and the order in which they must appear. However, not all rules defining programming languages can be expressed by context-free grammars alone, for example type validity and proper declaration of identifiers. These rules can be formally expressed withattribute grammars. The final phase issemantic parsingor analysis, which is working out the implications of the expression just validated and taking the appropriate action.[17]In the case of a calculator or interpreter, the action is to evaluate the expression or program; a compiler, on the other hand, would generate some kind of code. Attribute grammars can also be used to define these actions. Thetaskof the parser is essentially to determine if and how the input can be derived from the start symbol of the grammar. This can be done in essentially two ways: LL parsersandrecursive-descent parserare examples of top-down parsers that cannot accommodateleft recursiveproduction rules. Although it has been believed that simple implementations of top-down parsing cannot accommodate direct and indirect left-recursion and may require exponential time and space complexity while parsingambiguous context-free grammars, more sophisticated algorithms for top-down parsing have been created by Frost, Hafiz, and Callaghan[20][21]which accommodateambiguityandleft recursionin polynomial time and which generate polynomial-size representations of the potentially exponential number of parse trees. Their algorithm is able to produce both left-most and right-most derivations of an input with regard to a givencontext-free grammar. An important distinction with regard to parsers is whether a parser generates aleftmost derivationor arightmost derivation(seecontext-free grammar). LL parsers will generate a leftmostderivationand LR parsers will generate a rightmost derivation (although usually in reverse).[18] Somegraphical parsingalgorithms have been designed forvisual programming languages.[22][23]Parsers for visual languages are sometimes based ongraph grammars.[24] Adaptive parsingalgorithms have been used to construct "self-extending"natural language user interfaces.[25] A simple parser implementation reads the entire input file, performs an intermediate computation or translation, and then writes the entire output file, such as in-memorymulti-pass compilers. 
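The point made in the paragraph above, that the context-free grammar accepts both programs while the error in the second is only caught at a later stage, can be illustrated with a pair of snippets assumed here to match the description rather than reproduce any particular listing:

import ast

valid = "x = 1\nprint(x)\n"
invalid = "x = 1\nprint(y)\n"    # same syntax-tree shape, but y is never initialized

ast.parse(valid)                 # both parse: the context-free grammar accepts them
ast.parse(invalid)

exec(valid)                      # runs and prints 1
try:
    exec(invalid)                # the semantic error only surfaces when y is used
except NameError as e:
    print("semantic error:", e)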
Alternative parser implementation approaches: Some of the well-known parser development tools include the following: Lookahead establishes the maximum number of incoming tokens that a parser can use to decide which rule it should apply. Lookahead is especially relevant to LL, LR, and LALR parsers, where it is often explicitly indicated by affixing the lookahead to the algorithm name in parentheses, such as LALR(1). Most programming languages, the primary target of parsers, are carefully defined in such a way that a parser with limited lookahead, typically one token, can parse them, because parsers with limited lookahead are often more efficient. One important change to this trend came in 1990 when Terence Parr created ANTLR for his Ph.D. thesis, a parser generator for efficient LL(k) parsers, where k is any fixed value. LR parsers typically have only a few actions to choose from after seeing each token: shift (push this token onto the stack for later reduction), reduce (pop tokens from the stack and form a syntactic construct), end, error (no known rule applies), or conflict (the parser does not know whether to shift or reduce). Lookahead has two advantages. Example: Parsing the expression 1 + 2 * 3. Most programming languages (except for a few such as APL and Smalltalk) and algebraic formulas give higher precedence to multiplication than addition, in which case the correct interpretation of the example above is 1 + (2 * 3). Note that Rule 4 above is a semantic rule. It is possible to rewrite the grammar to incorporate this into the syntax; however, not all such rules can be translated into syntax. Initially, Input = [1, +, 2, *, 3]. The parse tree and the code resulting from it are not correct according to the language semantics. To parse correctly without lookahead, there are three solutions: The parse tree generated is correct, and the parser is simply more efficient than non-lookahead parsers. This is the strategy followed in LALR parsers.
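As a rough illustration of how a single token of lookahead resolves the precedence question above, the following Python sketch uses precedence climbing (an invented, simplified stand-in for a table-driven LALR parser): before reducing 1 + 2, it peeks at the next operator and keeps parsing the right-hand side as long as that operator binds more tightly, so 1 + 2 * 3 comes out as 1 + (2 * 3).

```python
# Precedence climbing with one token of lookahead; tokens are numbers and
# single-character operators. Helper names are illustrative.
PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}

def parse_expression(token_list):
    tokens = list(token_list)

    def peek():
        return tokens[0] if tokens else None

    def climb(min_prec):
        left = tokens.pop(0)                    # an operand
        while peek() in PRECEDENCE and PRECEDENCE[peek()] >= min_prec:
            op = tokens.pop(0)                  # consume the operator we peeked at
            right = climb(PRECEDENCE[op] + 1)   # tighter-binding operators parse first
            left = (left, op, right)            # build a nested parse tree
        return left

    return climb(1)

print(parse_expression([1, "+", 2, "*", 3]))    # (1, '+', (2, '*', 3))
```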
https://en.wikipedia.org/wiki/Syntactic_analysis
Inlinguistics,morphologyis the study ofwords, including the principles by which they are formed, and how they relate to one another within alanguage.[1][2]Most approaches to morphology investigate the structure of words in terms ofmorphemes, which are the smallest units in a language with some independentmeaning. Morphemes includerootsthat can exist as words by themselves, but also categories such asaffixesthat can only appear as part of a larger word. For example, in English the rootcatchand the suffix-ingare both morphemes;catchmay appear as its own word, or it may be combined with-ingto form the new wordcatching. Morphology also analyzes how words behave asparts of speech, and how they may beinflectedto expressgrammatical categoriesincludingnumber,tense, andaspect. Concepts such asproductivityare concerned with how speakers create words in specific contexts, which evolves over the history of a language. The basic fields of linguistics broadly focus on language structure at different "scales". Morphology is considered to operate at a scale larger thanphonology, which investigates the categories of speech sounds that are distinguished within a spoken language, and thus may constitute the difference between a morpheme and another. Conversely,syntaxis concerned with the next-largest scale, and studies how words in turn form phrases and sentences.Morphological typologyis a distinct field that categorises languages based on the morphological features they exhibit. The history ofancient Indianmorphological analysis dates back to the linguistPāṇini, who formulated the 3,959 rules ofSanskritmorphology in the textAṣṭādhyāyīby using aconstituency grammar. The Greco-Roman grammatical tradition also engaged in morphological analysis.[3]Studies in Arabic morphology, including theMarāḥ Al-Arwāḥof Aḥmad b. 'Alī Mas'ūd, date back to at least 1200 CE.[4] The term "morphology" was introduced into linguistics byAugust Schleicherin 1859.[a][5] The term "word" has no well-defined meaning.[6]Instead, two related terms are used in morphology:lexemeand word-form[definition needed]. Generally, a lexeme is a set of inflected word-forms that is often represented with thecitation forminsmall capitals.[7]For instance, the lexemeeatcontains the word-formseat, eats, eaten,andate.Eatandeatsare thus considered different word-forms belonging to the same lexemeeat.EatandEater, on the other hand, are different lexemes, as they refer to two different concepts. Here are examples from other languages of the failure of a single phonological word to coincide with a single morphological word form. InLatin, one way to express the concept of 'NOUN-PHRASE1andNOUN-PHRASE2' (as in "apples and oranges") is to suffix '-que' to the second noun phrase: "apples oranges-and". An extreme level of the theoretical quandary posed by some phonological words is provided by theKwak'walalanguage.[b]In Kwak'wala, as in a great many other languages, meaning relations between nouns, including possession and "semantic case", are formulated byaffixes, instead of by independent "words". The three-word English phrase, "with his club", in which 'with' identifies its dependent noun phrase as an instrument and 'his' denotes a possession relation, would consist of two words or even one word in many languages. Unlike most other languages, Kwak'wala semantic affixes phonologically attach not to the lexeme they pertain to semantically but to the preceding lexeme. 
Consider the following example (in Kwak'wala, sentences begin with what corresponds to an English verb):[c]

kwixʔid-i-da bəgwanəma-χ-a q'asa-s-is t'alwagwayu
clubbed-PIVOT-DETERMINER man-ACCUSATIVE-DETERMINER otter-INSTRUMENTAL-3SG-POSSESSIVE club
"the man clubbed the otter with his club."

That is, to a speaker of Kwak'wala, the sentence does not contain the "words" 'him-the-otter' or 'with-his-club'. Instead, the markers -i-da (PIVOT-'the'), referring to "man", attach not to the noun bəgwanəma ("man") but to the verb; the markers -χ-a (ACCUSATIVE-'the'), referring to otter, attach to bəgwanəma instead of to q'asa ('otter'), etc. In other words, a speaker of Kwak'wala does not perceive the sentence to consist of these phonological words:

kwixʔid i-da-bəgwanəma χ-a-q'asa s-is-t'alwagwayu
clubbed PIVOT-the-man hit-the-otter with-his-club

A central publication on this topic is the volume edited by Dixon and Aikhenvald (2002), examining the mismatch between prosodic-phonological and grammatical definitions of "word" in various Amazonian, Australian Aboriginal, Caucasian, Eskimo, Indo-European, Native North American, West African, and sign languages. Apparently, a wide variety of languages make use of the hybrid linguistic unit clitic, possessing the grammatical features of independent words but the prosodic-phonological lack of freedom of bound morphemes. The intermediate status of clitics poses a considerable challenge to linguistic theory.[8] Given the notion of a lexeme, it is possible to distinguish two kinds of morphological rules. Some morphological rules relate to different forms of the same lexeme, but other rules relate to different lexemes. Rules of the first kind are inflectional rules, while those of the second kind are rules of word formation.[9] The generation of the English plural dogs from dog is an inflectional rule, and compound phrases and words like dog catcher or dishwasher are examples of word formation. Informally, word formation rules form "new" words (more accurately, new lexemes), and inflection rules yield variant forms of the "same" word (lexeme). The distinction between inflection and word formation is not at all clear-cut. There are many examples for which linguists fail to agree whether a given rule is inflection or word formation. The next section will attempt to clarify the distinction. Word formation includes a process in which one combines two complete words, whereas inflection allows the combination of a suffix with a verb to change the latter's form to agree with the subject of the sentence. For example: in the present indefinite, 'go' is used with the subjects I/we/you/they and plural nouns, whereas third-person singular pronouns (he/she/it) and singular nouns cause 'goes' to be used. The '-es' is therefore an inflectional marker that is used to match with its subject. A further difference is that in word formation, the resultant word may differ from its source word's grammatical category, but in the process of inflection, the word never changes its grammatical category. There is a further distinction between two primary kinds of morphological word formation: derivation and compounding.
The latter is a process of word formation that involves combining complete word forms into a single compound form.Dog catcher, therefore, is a compound, as bothdogandcatcherare complete word forms in their own right but are subsequently treated as parts of one form. Derivation involves affixing bound (non-independent) forms to existing lexemes, but the addition of the affix derives a new lexeme. The wordindependent, for example, is derived from the worddependentby using the prefixin-, anddependentitself is derived from the verbdepend. There is also word formation in the processes of clipping in which a portion of a word is removed to create a new one, blending in which two parts of different words are blended into one, acronyms in which each letter of the new word represents a specific word in the representation (NATO forNorth Atlantic Treaty Organization), borrowing in which words from one language are taken and used in another, and coinage in which a new word is created to represent a new object or concept.[10] A linguisticparadigmis the complete set of related word forms associated with a given lexeme. The familiar examples of paradigms are theconjugationsof verbs and thedeclensionsof nouns. Also, arranging the word forms of a lexeme into tables, by classifying them according to shared inflectional categories such astense,aspect,mood,number,genderorcase, organizes such. For example, thepersonal pronouns in Englishcan be organized into tables by using the categories ofperson(first, second, third); number (singular vs. plural); gender (masculine, feminine, neuter); and case (nominative, oblique, genitive). The inflectional categories used to group word forms into paradigms cannot be chosen arbitrarily but must be categories that are relevant to stating thesyntactic rulesof the language. Person and number are categories that can be used to define paradigms in English because the language hasgrammatical agreementrules, which require the verb in a sentence to appear in an inflectional form that matches the person and number of the subject. Therefore, the syntactic rules of English care about the difference betweendoganddogsbecause the choice between both forms determines the form of the verb that is used. However, no syntactic rule shows the difference betweendoganddog catcher, ordependentandindependent. The first two are nouns, and the other two are adjectives. An important difference between inflection and word formation is that inflected word forms of lexemes are organized into paradigms that are defined by the requirements of syntactic rules, and there are no corresponding syntactic rules for word formation. The relationship between syntax and morphology, as well as how they interact, is called "morphosyntax";[11][12]the term is also used to underline the fact that syntax and morphology are interrelated.[13]The study of morphosyntax concerns itself with inflection and paradigms, and some approaches to morphosyntax exclude from its domain the phenomena of word formation, compounding, and derivation.[11]Within morphosyntax fall the study ofagreementandgovernment.[11] Above, morphological rules are described asanalogiesbetween word forms:dogis todogsascatis tocatsanddishis todishes. In this case, the analogy applies both to the form of the words and to their meaning. 
In each pair, the first word means "one of X", and the second "two or more of X", and the difference is always the plural form-s(or-es) affixed to the second word, which signals the key distinction between singular and plural entities. One of the largest sources of complexity in morphology is that the one-to-one correspondence between meaning and form scarcely applies to every case in the language. In English, there are word form pairs likeox/oxen,goose/geese, andsheep/sheepwhose difference between the singular and the plural is signaled in a way that departs from the regular pattern or is not signaled at all. Even cases regarded as regular, such as-s, are not so simple; the-sindogsis not pronounced the same way as the-sincats, and in plurals such asdishes, a vowel is added before the-s. Those cases, in which the same distinction is effected by alternative forms of a "word", constituteallomorphy.[14] Phonological rules constrain the sounds that can appear next to each other in a language, and morphological rules, when applied blindly, would often violate phonological rules by resulting in sound sequences that are prohibited in the language in question. For example, to form the plural ofdishby simply appending an-sto the end of the word would result in the form*[dɪʃs], which is not permitted by thephonotacticsof English. To "rescue" the word, a vowel sound is inserted between the root and the plural marker, and[dɪʃɪz]results. Similar rules apply to the pronunciation of the-sindogsandcats: it depends on the quality (voiced vs. unvoiced) of the final precedingphoneme. Lexical morphology is the branch of morphology that deals with thelexiconthat, morphologically conceived, is the collection oflexemesin a language. As such, it concerns itself primarily with word formation: derivation and compounding. There are three principal approaches to morphology and each tries to capture the distinctions above in different ways: While the associations indicated between the concepts in each item in that list are very strong, they are not absolute. In morpheme-based morphology, word forms are analyzed as arrangements ofmorphemes. A morpheme is defined as the minimal meaningful unit of a language. In a word such asindependently, the morphemes are said to bein-,de-,pend,-ent, and-ly;pendis the (bound)rootand the other morphemes are, in this case, derivational affixes.[d]In words such asdogs,dogis the root and the-sis an inflectional morpheme. In its simplest and most naïve form, this way of analyzing word forms, called "item-and-arrangement", treats words as if they were made of morphemes put after each other ("concatenated") like beads on a string. More recent and sophisticated approaches, such asdistributed morphology, seek to maintain the idea of the morpheme while accommodating non-concatenated, analogical, and other processes that have proven problematic for item-and-arrangement theories and similar approaches. Morpheme-based morphology presumes three basic axioms:[15] Morpheme-based morphology comes in two flavours, one Bloomfieldian[16]and oneHockettian.[17]For Bloomfield, the morpheme was the minimal form with meaning, but did not have meaning itself.[clarification needed]For Hockett, morphemes are "meaning elements", not "form elements". For him, there is a morpheme plural using allomorphs such as-s,-enand-ren. Within much morpheme-based morphological theory, the two views are mixed in unsystematic ways so a writer may refer to "the morpheme plural" and "the morpheme-s" in the same sentence. 
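The allomorphy of the English plural described above (the [s] of cats, the [z] of dogs, the [ɪz] of dishes) is essentially a small conditional rule, which the following Python sketch illustrates. It is a deliberately crude approximation: it inspects spelling rather than phonemes, and the consonant classes are incomplete.

```python
# A simplified sketch of the plural allomorphy rule, using rough
# orthography-based approximations of the final phoneme. A real implementation
# would operate on phonemic transcriptions, not spelling.
SIBILANTS = ("s", "z", "sh", "ch", "x")   # trigger the [ɪz] allomorph
VOICELESS = ("p", "t", "k", "f")          # trigger the [s] allomorph

def plural_allomorph(word: str) -> str:
    """Return the regular plural allomorph: 'iz', 's', or 'z'."""
    if word.endswith(SIBILANTS):
        return "iz"   # dish -> dish[ɪz]
    if word.endswith(VOICELESS):
        return "s"    # cat  -> cat[s]
    return "z"        # dog  -> dog[z]

for w in ("dish", "cat", "dog"):
    print(w, "->", plural_allomorph(w))
```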
Lexeme-based morphology usually takes what is called an item-and-process approach. Instead of analyzing a word form as a set of morphemes arranged in sequence, a word form is said to be the result of applying rules that alter a word-form or stem in order to produce a new one. An inflectional rule takes a stem, changes it as is required by the rule, and outputs a word form;[18]a derivational rule takes a stem, changes it as per its own requirements, and outputs a derived stem; a compounding rule takes word forms, and similarly outputs a compound stem. Word-based morphology is (usually) a word-and-paradigm approach. The theory takes paradigms as a central notion. Instead of stating rules to combine morphemes into word forms or to generate word forms from stems, word-based morphology states generalizations that hold between the forms of inflectional paradigms. The major point behind this approach is that many such generalizations are hard to state with either of the other approaches. Word-and-paradigm approaches are also well-suited to capturing purely morphological phenomena, such asmorphomes. Examples to show the effectiveness of word-based approaches are usually drawn fromfusional languages, where a given "piece" of a word, which a morpheme-based theory would call an inflectional morpheme, corresponds to a combination of grammatical categories, for example, "third-person plural". Morpheme-based theories usually have no problems with this situation since one says that a given morpheme has two categories. Item-and-process theories, on the other hand, often break down in cases like these because they all too often assume that there will be two separate rules here, one for third person, and the other for plural, but the distinction between them turns out to be artificial. The approaches treat these as whole words that are related to each other by analogical rules. Words can be categorized based on the pattern they fit into. This applies both to existing words and to new ones. Application of a pattern different from the one that has been used historically can give rise to a new word, such asolderreplacingelder(whereolderfollows the normal pattern ofadjectivalcomparatives) andcowsreplacingkine(wherecowsfits the regular pattern of plural formation). In the 19th century, philologists devised a now classic classification of languages according to their morphology. Some languages areisolating, and have little to no morphology; others areagglutinativewhose words tend to have many easily separable morphemes (such asTurkic languages); others yet are inflectional orfusionalbecause their inflectional morphemes are "fused" together (like someIndo-European languagessuch asPashtoandRussian). That leads to one bound morpheme conveying multiple pieces of information. A standard example of an isolating language isChinese. An agglutinative language isTurkish(and practically all Turkic languages).LatinandGreekare prototypical inflectional or fusional languages. It is clear that this classification is not at all clearcut, and many languages (Latin and Greek among them) do not neatly fit any one of these types, and some fit in more than one way. A continuum of complex morphology of language may be adopted. The three models of morphology stem from attempts to analyze languages that more or less match different categories in this typology. The item-and-arrangement approach fits very naturally with agglutinative languages. The item-and-process and word-and-paradigm approaches usually address fusional languages. 
As there is very little fusion involved in word formation, classical typology mostly applies to inflectional morphology. Depending on the preferred way of expressing non-inflectional notions, languages may be classified as synthetic (using word formation) or analytic (using syntactic phrases). Pingelapese is a Micronesian language spoken on the Pingelap atoll and on two of the eastern Caroline Islands, called the high island of Pohnpei. As in other languages, words in Pingelapese can take different forms to add to or even change their meaning. Verbal suffixes are morphemes added at the end of a word to change its form; prefixes are those added at the front. For example, the Pingelapese suffix –kin means 'with' or 'at' and is added at the end of a verb, while sa- is an example of a verbal prefix, added to the beginning of a word, meaning 'not.' There are also directional suffixes that, when added to the root word, give the listener a better idea of where the subject is headed. The verb alu means 'to walk'; a directional suffix can be used to give more detail. Directional suffixes are not limited to motion verbs: when added to non-motion verbs, their meanings are figurative. The following table gives some examples of directional suffixes and their possible meanings.[19]
https://en.wikipedia.org/wiki/Morphology_(linguistics)
Linguistic typology(orlanguage typology) is a field oflinguisticsthat studies and classifies languages according to their structural features to allow their comparison. Its aim is to describe and explain the structural diversity and the common properties of the world's languages.[1]Its subdisciplines include, but are not limited to phonological typology, which deals with sound features; syntactic typology, which deals with word order and form; lexical typology, which deals with language vocabulary; and theoretical typology, which aims to explain the universal tendencies.[2] Linguistic typology is contrasted withgenealogical linguisticson the grounds that typology groups languages or their grammatical features based on formal similarities rather than historic descendence.[3]The issue of genealogical relation is however relevant to typology because moderndata setsaim to be representative and unbiased. Samples are collected evenly from differentlanguage families, emphasizing the importance of lesser-known languages in gaining insight into human language.[4] Speculations of the existence of a (logical) general oruniversal grammarunderlying all languages were published in the Middle Ages, especially by theModistaeschool. At the time,Latinwas the model language of linguistics, although transcribing Irish and Icelandic into theLatin alphabetwas found problematic. The cross-linguistic dimension of linguistics was established in theRenaissanceperiod. For example,Grammaticae quadrilinguis partitiones(1544) by Johannes Drosaeus compared French and the three 'holy languages', Hebrew, Greek, and Latin. The approach was expanded by thePort-Royal Grammar(1660) ofAntoine ArnauldandClaude Lancelot, who added Spanish, Italian, German and Arabic.Nicolas Beauzée's 1767 book includes examples of English, Swedish,Lappish, Irish,Welsh,Basque,Quechua, and Chinese.[5] The conquest and conversion of the world by Europeans gave rise to 'missionary linguistics' producing first-hand word lists and grammatical descriptions of exotic languages. Such work is accounted for in the 'Catalogue of the Languages of the Populations We Know', 1800, by the Spanish JesuitLorenzo Hervás.Johann Christoph Adelungcollected the first large language sample with theLord's prayerin almost five hundred languages (posthumous 1817).[5] More developed nineteenth-century comparative works includeFranz Bopp's 'Conjugation System' (1816) andWilhelm von Humboldt's 'On the Difference in Human Linguistic Structure and Its Influence on the Intellectual Development of Mankind' (posthumous 1836). In 1818,August Wilhelm Schlegelmade a classification of the world's languages into three types: (i) languages lacking grammatical structure, e.g. Chinese; (ii) agglutinative languages, e.g. Turkish; and (iii) inflectional languages, which can be synthetic like Latin and Ancient Greek, or analytic like French. This idea was later developed by others includingAugust Schleicher,Heymann Steinthal, Franz Misteli,Franz Nicolaus Finck, andMax Müller.[3] The word 'typology' was proposed byGeorg von der Gabelentzin hisSprachwissenschaft(1891).Louis Hjelmslevproposed typology as a large-scale empirical-analytical endeavour of comparing grammatical features to uncover the essence of language. Such a project began from the 1961 conference on language universals atDobbs Ferry. Speakers includedRoman Jakobson,Charles F. Hockett, andJoseph Greenbergwho proposed forty-five different types of linguistic universals based on his data sets from thirty languages. 
Greenberg's findings were mostly known from the nineteenth-century grammarians, but his systematic presentation of them would serve as a model for modern typology.[3]Winfred P. Lehmannintroduced Greenbergian typological theory toIndo-European studiesin the 1970s.[6] During the twentieth century, typology based on missionary linguistics became centered aroundSIL International, which today hosts its catalogue of living languages,Ethnologue, as an online database. The Greenbergian or universalist approach is accounted for by theWorld Atlas of Language Structures, among others. Typology is also done within the frameworks of functional grammar includingFunctional Discourse Grammar,Role and Reference Grammar, andSystemic Functional Linguistics. During the early years of the twenty-first century, however, the existence oflinguistic universalsbecame questioned by linguists proposingevolutionarytypology.[7] Quantitative typology deals with the distribution and co-occurrence of structural patterns in the languages of the world.[8]Major types of non-chance distribution include: Linguistic universals are patterns that can be seen cross-linguistically. Universals can either be absolute, meaning that every documented language exhibits this characteristic, or statistical, meaning that this characteristic is seen in most languages or is probable in most languages. Universals, both absolute and statistical can be unrestricted, meaning that they apply to most or all languages without any additional conditions. Conversely, both absolute and statistical universals can be restricted or implicational, meaning that a characteristic will be true on the condition of something else (if Y characteristic is true, then X characteristic is true).[9]An example of animplicational hierarchyis that dual pronouns are only found in languages withplural pronounswhilesingular pronouns(or unspecified in terms of number) are found in all languages. The implicational hierarchy is thussingular < plural < dual(etc.). Qualitative typology develops cross-linguistically viable notions or types that provide a framework for the description and comparison of languages. The main subfields of linguistic typology include the empirical fields of syntactic, phonological and lexical typology. Additionally, theoretical typology aims to explain the empirical findings, especially statistical tendencies or implicational hierarchies. Syntactic typology studies a vast array of grammatical phenomena from the languages of the world. Two well-known issues include dominant order and left-right symmetry. One set of types reflects the basic order ofsubject,verb, anddirect objectin sentences: These labels usually appear abbreviated as "SVO" and so forth, and may be called "typologies" of the languages to which they apply. The most commonly attested word orders are SOV and SVO while the least common orders are those that are object initial with OVS being the least common with only four attested instances.[10] In the 1980s, linguists began to question the relevance of geographical distribution of different values for various features of linguistic structure. They may have wanted to discover whether a particular grammatical structure found in one language is likewise found in another language in the same geographic location.[11]Some languages split verbs into an auxiliary and an infinitive or participle and put the subject and/or object between them. 
For instance, German (Ichhabeeinen Fuchs im Waldgesehen- *"I have a fox in-the woods seen"), Dutch (Hansvermoeddedat Jan Mariezag leren zwemmen- *"Hans suspected that Jan Marie saw to learn to swim") and Welsh (Mae'r gwirio sillafu wedi'igwblhau- *"Is the checking spelling after its to complete"). In this case, linguists base the typology on the non-analytic tenses (i.e. those sentences in which the verb is not split) or on the position of the auxiliary. German is thus SVO in main clauses and Welsh is VSO (and preposition phrases would go after the infinitive). Many typologists[who?]classify both German and Dutch asV2languages, as the verb invariantly occurs as the second element of a full clause. Some languages allow varying degrees of freedom in their constituent order, posing a problem for their classification within the subject–verb–object schema. Languages with bound case markings for nouns, for example, tend to have more flexible word orders than languages where case is defined by position within a sentence or presence of a preposition. For example, in some languages with bound case markings for nouns, such as Language X, varying degrees of freedom in constituent order are observed. These languages exhibit more flexible word orders, allowing for variations like Subject-Verb-Object (SVO) structure, as in 'The cat ate the mouse,' and Object-Subject-Verb (OSV) structure, as in 'The mouse the cat ate.' To define a basic constituent order type in this case, one generally looks at frequency of different types in declarative affirmative main clauses in pragmatically neutral contexts, preferably with only old referents. Thus, for instance, Russian is widely considered an SVO language, as this is the most frequent constituent order under such conditions—all sorts of variations are possible, though, and occur in texts. In many inflected languages, such as Russian, Latin, and Greek, departures from the default word-orders are permissible but usually imply a shift in focus, an emphasis on the final element, or some special context. In the poetry of these languages, the word order may also shift freely to meet metrical demands. Additionally, freedom of word order may vary within the same language—for example, formal, literary, or archaizing varieties may have different, stricter, or more lenient constituent-order structures than an informal spoken variety of the same language. On the other hand, when there is no clear preference under the described conditions, the language is considered to have "flexible constituent order" (a type unto itself). An additional problem is that in languages without living speech communities, such asLatin,Ancient Greek, andOld Church Slavonic, linguists have only written evidence, perhaps written in a poetic, formalizing, or archaic style that mischaracterizes the actual daily use of the language.[12]The daily spoken language ofSophoclesorCiceromight have exhibited a different or much more regular syntax than their written legacy indicates. The below table indicates the distribution of the dominant word order pattern of over 5,000 individual languages and 366 language families. SOV is the most common type in both although much more clearly in the data of language families includingisolates. 'NODOM' represents languages without a single dominant order.[13] Though the reason of dominance is sometimes considered an unsolved or unsolvable typological problem, several explanations for the distribution pattern have been proposed. 
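The frequency-based procedure described above for assigning a basic constituent order can be sketched as a toy computation; the counts, threshold, and labels below are invented for illustration and do not come from any real corpus.

```python
from collections import Counter

# Count the constituent orders attested in a sample of declarative affirmative
# main clauses and call the language "flexible" if no order clearly dominates.
def dominant_order(clause_orders, threshold=0.5):
    counts = Counter(clause_orders)
    order, freq = counts.most_common(1)[0]
    if freq / sum(counts.values()) >= threshold:
        return order
    return "flexible constituent order"

sample = ["SVO"] * 70 + ["OVS"] * 20 + ["SOV"] * 10   # invented Russian-like counts
print(dominant_order(sample))                          # SVO
```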
Evolutionary explanations include those byThomas Givon(1979), who suggests that all languages stem from an SOV language but are evolving into different kinds; and byDerek Bickerton(1981), who argues that the original language was SVO, which supports simpler grammar employing word order instead of case markers to differentiate between clausal roles.[14] Universalist explanations include a model by Russell Tomlin (1986) based on three functional principles: (i) animate before inanimate; (ii) theme before comment; and (iii) verb-object bonding. The three-way model roughly predicts the real hierarchy (see table above) assuming no statistical difference between SOV and SVO, and, also, no statistical difference between VOS and OVS. By contrast, the processing efficiency theory ofJohn A. Hawkins(1994) suggests that constituents are ordered from shortest to longest in VO languages, and from longest to shortest in OV languages, giving rise to the attested distribution. This approach relies on the notion that OV languages have heavy subjects, and VO languages have heavy objects, which is disputed.[14] A second major way of syntactic categorization is by excluding the subject from consideration. It is a well-documented typological feature that languages with a dominant OV order (object before verb), Japanese for example, tend to havepostpositions. In contrast, VO languages (verb before object) like English tend to haveprepositionsas their mainadpositionaltype. Several OV/VO correlations have been uncovered.[15] Severalprocessingexplanations were proposed in the 1980s and 1990s for the above correlations. They suggest that the brain finds it easier toparsesyntactic patternsthat are either right or leftbranching, but not mixed. The most widely held such explanation isJohn A. Hawkins' parsing efficiency theory, which argues that language is a non-innateadaptationto innatecognitivemechanisms. Typological tendencies are considered as being based on language users' preference for grammars that are organized efficiently, and on their avoidance of word orderings that cause processing difficulty. Hawkins's processing theory predicts the above table but also makes predictions for non-correlation pairs including the order of adjective,demonstrativeand numeral in respect with the noun. This theory was based oncorpusresearch and lacks support inpsycholinguisticstudies.[14] Some languages exhibit regular "inefficient" patterning. These include the VO languagesChinese, with theadpositional phrasebefore the verb, andFinnish, which has postpositions. But there are few other profoundly exceptional languages. It is suggested more recently that the left-right orientation is limited to role-marking connectives (adpositionsandsubordinators), stemming directly from the semantic mapping of the sentence. Since the true correlation pairs in the above table either involve such a connective or, arguably, follow from the canonical order, orientation predicts them without making problematic claims.[17] Another common classification distinguishesnominative–accusativealignment patterns andergative–absolutiveones. In a language withcases, the classification depends on whether the subject (S) of an intransitive verb has the same case as the agent (A) or the patient (P) of a transitive verb. If a language has no cases, but the word order is AVP or PVA, then a classification may reflect whether the subject of an intransitive verb appears on the same side as the agent or the patient of the transitive verb. 
Bickel (2011) has argued that alignment should be seen as a construction-specific property rather than a language-specific property.[18] Many languages show mixed accusative and ergative behaviour (for example: ergative morphology marking the verb arguments, on top of an accusative syntax). Other languages (called "active languages") have two types of intransitive verbs—some of them ("active verbs") join the subject in the same case as the agent of a transitive verb, and the rest ("stative verbs") join the subject in the same case as the patient[example needed]. Yet other languages behave ergatively only in some contexts (this "split ergativity" is often based on the grammatical person of the arguments or on the tense/aspect of the verb).[19]For example, only some verbs inGeorgianbehave this way, and, as a rule, only while using theperfective(aorist). Linguistic typology also seeks to identify patterns in the structure and distribution of sound systems among the world's languages. This is accomplished by surveying and analyzing the relative frequencies of different phonological properties. Exemplary relative frequencies are given below for certainspeech sounds formed by obstructing airflow (obstruents). These relative frequencies show that contrastive voicing commonly occurs withplosives, as in Englishneatandneed, but occurs much more rarely amongfricatives, such as the Englishnieceandknees. According to a worldwide sample of 637 languages,[20]62% have the voicing contrast in stops but only 35% have this in fricatives. In the vast majority of those cases, the absence of voicing contrast occurs because there is a lack of voiced fricatives and because all languages havesome form of plosive (occlusive),[21]but there are languages with no fricatives. Below is a chart showing the breakdown of voicing properties among languages in the aforementioned sample. [20] Languages worldwide also vary in the number of sounds they use. These languages can go from very small phonemic inventories (Rotokaswith six consonants and five vowels) to very large inventories (!Xóõwith 128 consonants and 28 vowels). An interesting phonological observation found with this data is that the larger a consonant inventory a language has, the more likely it is to contain a sound from a defined set of complex consonants (clicks, glottalized consonants, doubly articulated labial-velar stops, lateral fricatives and affricates, uvular and pharyngeal consonants, and dental or alveolar non-sibilant fricatives). Of this list, only about 26% of languages in a survey[20]of over 600 with small inventories (less than 19 consonants) contain a member of this set, while 51% of average languages (19-25) contain at least one member and 69% of large consonant inventories (greater than 25 consonants) contain a member of this set. It is then seen that complex consonants are in proportion to the size of the inventory. Vowels contain a more modest number of phonemes, with the average being 5–6, which 51% of the languages in the survey have. About a third of the languages have larger than average vowel inventories. Most interesting though is the lack of relationship between consonant inventory size and vowel inventory size. Below is a chart showing this lack of predictability between consonant and vowel inventory sizes in relation to each other. [20]
https://en.wikipedia.org/wiki/Linguistic_typology
Incomputing,preemptionis the act performed by an externalscheduler— without assistance or cooperation from the task — of temporarilyinterruptinganexecutingtask, with the intention of resuming it at a later time.[1]: 153This preemptive scheduler usually runs in the most privilegedprotection ring, meaning that interruption and then resumption are considered highly secure actions. Such changes to the currently executing task of aprocessorare known ascontext switching. In any given system design, some operations performed by the system may not be preemptable. This usually applies tokernelfunctions and serviceinterruptswhich, if not permitted torun to completion, would tend to producerace conditionsresulting indeadlock. Barring the scheduler from preempting tasks while they are processing kernel functions simplifies the kernel design at the expense ofsystem responsiveness. The distinction betweenuser modeandkernel mode, which determines privilege level within the system, may also be used to distinguish whether a task is currently preemptable. Most modern operating systems havepreemptive kernels, which are designed to permit tasks to be preempted even when in kernel mode. Examples of such operating systems areSolaris2.0/SunOS 5.0,[2]Windows NT,Linux kernel(2.5.4 and newer),[3]AIXand someBSDsystems (NetBSD, since version 5). The termpreemptive multitaskingis used to distinguish amultitasking operating system, which permits preemption of tasks, from acooperative multitaskingsystem wherein processes or tasks must be explicitly programmed toyieldwhen they do not need system resources. In simple terms: Preemptive multitasking involves the use of aninterrupt mechanismwhich suspends the currently executing process and invokes aschedulerto determine which process should execute next. Therefore, all processes will get some amount of CPU time at any given time. In preemptive multitasking, the operating systemkernelcan also initiate acontext switchto satisfy thescheduling policy's priority constraint, thus preempting the active task. In general, preemption means "prior seizure of". When the high-priority task at that instance seizes the currently running task, it is known as preemptive scheduling. The term "preemptive multitasking" is sometimes mistakenly used when the intended meaning is more specific, referring instead to the class of scheduling policies known astime-shared scheduling, ortime-sharing. Preemptive multitasking allows the computer system to more reliably guarantee each process a regular "slice" of operating time. It also allows the system to rapidly deal with important external events like incoming data, which might require the immediate attention of one or another process. At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In early systems, processes would often "poll" or "busy-wait" while waiting for requested input (such as disk, keyboard or network input). During this time, the process was not performing useful work, but still maintained complete control of the CPU. With the advent of interrupts and preemptive multitasking, these I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution. 
Although multitasking techniques were originally developed to allow multiple users to share a single machine, it became apparent that multitasking was useful regardless of the number of users. Many operating systems, from mainframes down to single-user personal computers and no-usercontrol systems(like those inrobotic spacecraft), have recognized the usefulness of multitasking support for a variety of reasons. Multitasking makes it possible for a single user to run multiple applications at the same time, or to run "background" processes while retaining control of the computer. The period of time for which a process is allowed to run in a preemptive multitasking system is generally called thetime sliceorquantum.[1]: 158The scheduler is run once every time slice to choose the next process to run. The length of each time slice can be critical to balancing system performance vs process responsiveness - if the time slice is too short then the scheduler itself will consume too much processing time, but if the time slice is too long, processes will take longer to respond to input. Aninterruptis scheduled to allow theoperating systemkernelto switch between processes when their time slices expire, effectively allowing the processor's time to be shared among a number of tasks, giving the illusion that it is dealing with these tasks in parallel (simultaneously). The operating system which controls such a design is called a multi-tasking system. Today, nearly all operating systems support preemptive multitasking, including the current versions ofWindows,macOS,Linux(includingAndroid),iOSandiPadOS. An early microcomputer operating system providing preemptive multitasking wasMicroware'sOS-9, available for computers based on theMotorola 6809, including home computers such as theTRS-80 Color Computer 2when configured with disk drives,[4]with the operating system supplied by Tandy as an upgrade.[5]Sinclair QDOS[6]:18andAmigaOSon theAmigawere also microcomputer operating systems offering preemptive multitasking as a core feature. These both ran onMotorola 68000-familymicroprocessorswithout memory management. Amiga OS useddynamic loadingof relocatable code blocks ("hunks" in Amiga jargon) to multitask preemptively all processes in the same flat address space. Early operating systems forIBM PC compatiblessuch asMS-DOSandPC DOS, did not support multitasking at all, however alternative operating systems such asMP/M-86(1981) andConcurrent CP/M-86did support preemptive multitasking. OtherUnix-likesystems includingMINIXandCoherentprovided preemptive multitasking on 1980s-era personal computers. LaterMS-DOScompatible systems natively supporting preemptive multitasking/multithreading includeConcurrent DOS,Multiuser DOS,Novell DOS(later calledCaldera OpenDOSandDR-DOS7.02 and higher). SinceConcurrent DOS 386, they could also run multiple DOS programs concurrently invirtual DOS machines. The earliest version of Windows to support a limited form of preemptive multitasking wasWindows/386 2.0, which used theIntel 80386'sVirtual 8086 modeto run DOS applications invirtual 8086 machines, commonly known as "DOS boxes", which could be preempted. InWindows 95, 98 and Me, 32-bit applications were made preemptive by running each one in a separate address space, but 16-bit applications remained cooperative for backward compatibility.[7]In Windows 3.1x (protected mode), the kernel and virtual device drivers ran preemptively, but all 16-bit applications were non-preemptive and shared the same address space. 
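The time-slice mechanism can be illustrated with a toy simulation in Python; this is a sketch of the general idea, not of any particular operating system, and the task names and quantum are invented. Each task runs for at most one quantum before the loop, standing in for the timer interrupt and scheduler, forcibly moves on to the next task, so no task needs to yield voluntarily.

```python
from collections import deque

def run_preemptive(tasks, quantum=2):
    """tasks: dict of name -> remaining work units. Prints each context switch."""
    queue = deque(tasks.items())
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)          # run for at most one quantum
        remaining -= ran
        print(f"ran {name} for {ran} unit(s), {remaining} left")
        if remaining > 0:
            queue.append((name, remaining))    # preempted: back of the queue

run_preemptive({"editor": 3, "compiler": 5, "player": 2})
```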
Preemptive multitasking has always been supported by Windows NT (all versions), OS/2 (native applications), Unix and Unix-like systems (such as Linux, BSD and macOS), VMS, OS/360, and many other operating systems designed for use in the academic and medium-to-large business markets. Early versions of the classic Mac OS did not support multitasking at all, with cooperative multitasking becoming available via MultiFinder in System Software 5 and then standard in System 7. Although there were plans to upgrade the cooperative multitasking found in the classic Mac OS to a preemptive model (and a preemptive API did exist in Mac OS 9, although in a limited sense[8]), these were abandoned in favor of Mac OS X (now called macOS), which, as a hybrid of the old Mac System style and NeXTSTEP, is an operating system based on the Mach kernel and derived in part from BSD, which had always provided Unix-like preemptive multitasking.
https://en.wikipedia.org/wiki/Preemptive_scheduling
Incomputing,schedulingis the action of assigningresourcesto performtasks. The resources may beprocessors,network linksorexpansion cards. The tasks may bethreads,processesor dataflows. The scheduling activity is carried out by a mechanism called ascheduler. Schedulers are often designed so as to keep all computer resources busy (as inload balancing), allow multiple users to share system resources effectively, or to achieve a targetquality-of-service. Scheduling is fundamental to computation itself, and an intrinsic part of theexecution modelof a computer system; the concept of scheduling makes it possible to havecomputer multitaskingwith a singlecentral processing unit(CPU). A scheduler may aim at one or more goals, for example: In practice, these goals often conflict (e.g. throughput versus latency), thus a scheduler will implement a suitable compromise. Preference is measured by any one of the concerns mentioned above, depending upon the user's needs and objectives. Inreal-timeenvironments, such asembedded systemsforautomatic controlin industry (for examplerobotics), the scheduler also must ensure that processes can meetdeadlines; this is crucial for keeping the system stable. Scheduled tasks can also be distributed to remote devices across a network andmanagedthrough an administrative back end. The scheduler is an operating system module that selects the next jobs to be admitted into the system and the next process to run. Operating systems may feature up to three distinct scheduler types: along-term scheduler(also known as an admission scheduler or high-level scheduler), amid-term or medium-term scheduler, and ashort-term scheduler. The names suggest the relative frequency with which their functions are performed. The process scheduler is a part of the operating system that decides which process runs at a certain point in time. It usually has the ability to pause a running process, move it to the back of the running queue and start a new process; such a scheduler is known as apreemptivescheduler, otherwise it is acooperativescheduler.[5] We distinguish betweenlong-term scheduling,medium-term scheduling, andshort-term schedulingbased on how often decisions must be made.[6] Thelong-term scheduler, oradmission scheduler, decides which jobs or processes are to be admitted to the ready queue (in main memory); that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus, this scheduler dictates what processes are to run on a system, the degree of concurrency to be supported at any one time – whether many or few processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled. The long-term scheduler is responsible for controlling the degree of multiprogramming. In general, most processes can be described as eitherI/O-boundorCPU-bound. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations. It is important that a long-term scheduler selects a good process mix of I/O-bound and CPU-bound processes. If all processes are I/O-bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do. 
On the other hand, if all processes are CPU-bound, the I/O waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced. The system with the best performance will thus have a combination of CPU-bound and I/O-bound processes. In modern operating systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks.[7] Long-term scheduling is also important in large-scale systems such asbatch processingsystems,computer clusters,supercomputers, andrender farms. For example, inconcurrent systems,coschedulingof interacting processes is often required to prevent them from blocking due to waiting on each other. In these cases, special-purposejob schedulersoftware is typically used to assist these functions, in addition to any underlying admission scheduling support in the operating system. Some operating systems only allow new tasks to be added if it is sure all real-time deadlines can still be met. The specific heuristic algorithm used by an operating system to accept or reject new tasks is theadmission control mechanism.[8] Themedium-term schedulertemporarily removes processes from main memory and places them in secondary memory (such as ahard disk drive) or vice versa, which is commonly referred to asswapping outorswapping in(also incorrectly aspagingoutorpaging in). The medium-term scheduler may decide to swap out a process that has not been active for some time, a process that has a low priority, a process that ispage faultingfrequently, or a process that is taking up a large amount of memory in order to free up main memory for other processes, swapping the process back in later when more memory is available, or when the process has been unblocked and is no longer waiting for a resource. In many systems today (those that support mapping virtual address space to secondary storage other than the swap file), the medium-term scheduler may actually perform the role of the long-term scheduler, by treating binaries asswapped-out processesupon their execution. In this way, when a segment of the binary is required it can be swapped in on demand, orlazy loaded, also calleddemand paging. Theshort-term scheduler(also known as theCPU scheduler) decides which of the ready, in-memory processes is to be executed (allocated a CPU) after a clockinterrupt, an I/O interrupt, an operatingsystem callor another form ofsignal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers – A scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can bepreemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known asvoluntaryorco-operative), in which case the scheduler is unable toforceprocesses off the CPU. A preemptive scheduler relies upon aprogrammable interval timerwhich invokes aninterrupt handlerthat runs inkernel modeand implements the scheduling function. Another component that is involved in the CPU-scheduling function is the dispatcher, which is the module that gives control of the CPU to the process selected by the short-term scheduler. It receives control in kernel mode as the result of an interrupt or system call. The functions of a dispatcher involve the following: The dispatcher should be as fast as possible since it is invoked during every process switch. 
During the context switches, the processor is virtually idle for a fraction of time, thus unnecessary context switches should be avoided. The time it takes for the dispatcher to stop one process and start another is known as thedispatch latency.[7]: 155 Ascheduling discipline(also calledscheduling policyorscheduling algorithm) is an algorithm used for distributing resources among parties which simultaneously and asynchronously request them. Scheduling disciplines are used inrouters(to handle packet traffic) as well as inoperating systems(to shareCPU timeamong boththreadsandprocesses), disk drives (I/O scheduling), printers (print spooler), most embedded systems, etc. The main purposes of scheduling algorithms are to minimizeresource starvationand to ensure fairness amongst the parties utilizing the resources. Scheduling deals with the problem of deciding which of the outstanding requests is to be allocated resources. There are many different scheduling algorithms. In this section, we introduce several of them. Inpacket-switchedcomputer networksand otherstatistical multiplexing, the notion of ascheduling algorithmis used as an alternative tofirst-come first-servedqueuing of data packets. The simplest best-effort scheduling algorithms areround-robin,fair queuing(amax-min fairscheduling algorithm),proportional-fair schedulingandmaximum throughput. If differentiated or guaranteedquality of serviceis offered, as opposed to best-effort communication,weighted fair queuingmay be utilized. In advanced packet radio wireless networks such asHSDPA(High-Speed Downlink Packet Access)3.5Gcellular system,channel-dependent schedulingmay be used to take advantage ofchannel state information. If the channel conditions are favourable, thethroughputandsystem spectral efficiencymay be increased. In even more advanced systems such asLTE, the scheduling is combined by channel-dependent packet-by-packetdynamic channel allocation, or by assigningOFDMAmulti-carriers or otherfrequency-domain equalizationcomponents to the users that best can utilize them.[9] First in, first out(FIFO), also known asfirst come, first served(FCFS), is the simplest scheduling algorithm. FIFO simply queues processes in the order that they arrive in the ready queue. This is commonly used for atask queue, for example as illustrated in this section. Earliest deadline first (EDF) orleast time to gois a dynamic scheduling algorithm used in real-time operating systems to place processes in a priority queue. Whenever a scheduling event occurs (a task finishes, new task is released, etc.), the queue will be searched for the process closest to its deadline, which will be the next to be scheduled for execution. Similar toshortest job first(SJF). With this strategy the scheduler arranges processes with the least estimated processing time remaining to be next in the queue. This requires advanced knowledge or estimations about the time required for a process to complete. The operating system assigns a fixed-priority rank to every process, and the scheduler arranges the processes in the ready queue in order of their priority. Lower-priority processes get interrupted by incoming higher-priority processes. The scheduler assigns a fixed time unit per process, and cycles through them. If process completes within that time-slice it gets terminated otherwise it is rescheduled after giving a chance to all other processes. This is used for situations in which processes are easily divided into different groups. 
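Several of the disciplines listed above reduce to "run the ready job with the smallest key". The following Python sketch, with invented job data, shows earliest deadline first and shortest job first as the same priority-queue selection with different keys; it ignores arrival times and preemption for brevity.

```python
import heapq

def schedule(jobs, key):
    """Return job names in the order a smallest-key-first scheduler would run them."""
    heap = [(key(job), i, job["name"]) for i, job in enumerate(jobs)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

jobs = [
    {"name": "A", "deadline": 50, "estimated_time": 2},
    {"name": "B", "deadline": 20, "estimated_time": 9},
    {"name": "C", "deadline": 35, "estimated_time": 7},
]

print("EDF order:", schedule(jobs, key=lambda j: j["deadline"]))        # ['B', 'C', 'A']
print("SJF order:", schedule(jobs, key=lambda j: j["estimated_time"]))  # ['A', 'C', 'B']
```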
For example, a common division is made between foreground (interactive) processes and background (batch) processes. These two types of processes have different response-time requirements and so may have different scheduling needs. It is very useful forshared memoryproblems. Awork-conserving scheduleris a scheduler that always tries to keep the scheduled resources busy, if there are submitted jobs ready to be scheduled. In contrast, a non-work conserving scheduler is a scheduler that, in some cases, may leave the scheduled resources idle despite the presence of jobs ready to be scheduled. There are several scheduling problems in which the goal is to decide which job goes to which station at what time, such that the totalmakespanis minimized: A very common method in embedded systems is to schedule jobs manually. This can for example be done in a time-multiplexed fashion. Sometimes the kernel is divided in three or more parts: Manual scheduling, preemptive and interrupt level. Exact methods for scheduling jobs are often proprietary. When designing an operating system, a programmer must consider which scheduling algorithm will perform best for the use the system is going to see. There is no universalbestscheduling algorithm, and many operating systems use extended or combinations of the scheduling algorithms above. For example,Windows NT/XP/Vista uses amultilevel feedback queue, a combination of fixed-priority preemptive scheduling, round-robin, and first in, first out algorithms. In this system, threads can dynamically increase or decrease in priority depending on if it has been serviced already, or if it has been waiting extensively. Every priority level is represented by its own queue, withround-robin schedulingamong the high-priority threads andFIFOamong the lower-priority ones. In this sense, response time is short for most threads, and short but critical system threads get completed very quickly. Since threads can only use one time unit of the round-robin in the highest-priority queue, starvation can be a problem for longer high-priority threads. The algorithm used may be as simple asround-robinin which each process is given equal time (for instance 1 ms, usually between 1 ms and 100 ms) in a cycling list. So, process A executes for 1 ms, then process B, then process C, then back to process A. More advanced algorithms take into account process priority, or the importance of the process. This allows some processes to use more time than other processes. The kernel always uses whatever resources it needs to ensure proper functioning of the system, and so can be said to have infinite priority. InSMPsystems,processor affinityis considered to increase overall system performance, even if it may cause a process itself to run more slowly. This generally improves performance by reducingcache thrashing. IBMOS/360was available with three different schedulers. The differences were such that the variants were often considered three different operating systems: Later virtual storage versions of MVS added aWorkload Managerfeature to the scheduler, which schedules processor resources according to an elaborate scheme defined by the installation. Very earlyMS-DOSand Microsoft Windows systems were non-multitasking, and as such did not feature a scheduler.Windows 3.1xused a non-preemptive scheduler, meaning that it did not interrupt programs. It relied on the program to end or tell the OS that it didn't need the processor so that it could move on to another process. 
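A heavily simplified sketch of the multilevel feedback queue idea (not the Windows NT implementation) is shown below: the highest non-empty queue is served first, and a task that consumes its entire quantum is demoted one level, so CPU-bound work drifts toward lower priorities while short interactive tasks stay near the top. The number of levels, quantum, and task names are arbitrary.

```python
from collections import deque

def mlfq(tasks, levels=3, quantum=2):
    queues = [deque() for _ in range(levels)]
    for name, work in tasks.items():
        queues[0].append((name, work))            # new tasks enter at top priority
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, work = queues[level].popleft()
        ran = min(quantum, work)
        work -= ran
        print(f"level {level}: ran {name} for {ran}, {work} left")
        if work > 0:
            demoted = min(level + 1, levels - 1)  # used its whole quantum -> demote
            queues[demoted].append((name, work))

mlfq({"interactive": 1, "batch": 6})
```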
This is usually called cooperative multitasking. Windows 95 introduced a rudimentary preemptive scheduler; however, for legacy support opted to let 16-bit applications run without preemption.[10] Windows NT-based operating systems use a multilevel feedback queue. 32 priority levels are defined, 0 through to 31, with priorities 0 through 15 beingnormalpriorities and priorities 16 through 31 being soft real-time priorities, requiring privileges to assign. 0 is reserved for the Operating System. User interfaces and APIs work with priority classes for the process and the threads in the process, which are then combined by the system into the absolute priority level. The kernel may change the priority level of a thread depending on its I/O and CPU usage and whether it is interactive (i.e. accepts and responds to input from humans), raising the priority of interactive and I/O bounded processes and lowering that of CPU bound processes, to increase the responsiveness of interactive applications.[11]The scheduler was modified inWindows Vistato use thecycle counter registerof modern processors to keep track of exactly how many CPU cycles a thread has executed, rather than just using an interval-timer interrupt routine.[12]Vista also uses a priority scheduler for the I/O queue so that disk defragmenters and other such programs do not interfere with foreground operations.[13] Mac OS 9 uses cooperative scheduling for threads, where one process controls multiple cooperative threads, and also provides preemptive scheduling for multiprocessing tasks. The kernel schedules multiprocessing tasks using a preemptive scheduling algorithm. All Process Manager processes run within a special multiprocessing task, called theblue task. Those processes are scheduled cooperatively, using around-robin schedulingalgorithm; a process yields control of the processor to another process by explicitly calling ablocking functionsuch asWaitNextEvent. Each process has its own copy of theThread Managerthat schedules that process's threads cooperatively; a thread yields control of the processor to another thread by callingYieldToAnyThreadorYieldToThread.[14] macOS uses a multilevel feedback queue, with four priority bands for threads – normal, system high priority, kernel mode only, and real-time.[15]Threads are scheduled preemptively; macOS also supports cooperatively scheduled threads in its implementation of the Thread Manager inCarbon.[14] In AIX Version 4 there are three possible values for thread scheduling policy: Threads are primarily of interest for applications that currently consist of several asynchronous processes. These applications might impose a lighter load on the system if converted to a multithreaded structure. AIX 5 implements the following scheduling policies: FIFO, round robin, and a fair round robin. The FIFO policy has three different implementations: FIFO, FIFO2, and FIFO3. The round robin policy is named SCHED_RR in AIX, and the fair round robin is called SCHED_OTHER.[16] Linux 1.2 used around-robin schedulingpolicy.[17] Linux 2.2 added scheduling classes and support forsymmetric multiprocessing(SMP).[17] InLinux2.4,[17]anO(n) schedulerwith amultilevel feedback queuewith priority levels ranging from 0 to 140 was used; 0–99 are reserved for real-time tasks and 100–140 are considerednicetask levels. 
For real-time tasks, the time quantum for switching processes was approximately 200 ms, and for nice tasks approximately 10 ms.[citation needed] The scheduler ran through the run queue of all ready processes, letting the highest-priority processes go first and run through their time slices, after which they were placed in an expired queue. When the active queue was empty, the expired queue became the active queue and vice versa. However, some enterprise Linux distributions such as SUSE Linux Enterprise Server replaced this scheduler with a backport of the O(1) scheduler (which was maintained by Alan Cox in his Linux 2.4-ac kernel series) to the Linux 2.4 kernel used by the distribution.
In versions 2.6.0 to 2.6.22, the kernel used an O(1) scheduler developed by Ingo Molnar and many other kernel developers during the Linux 2.5 development. For many kernels in this time frame, Con Kolivas developed patch sets which improved interactivity with this scheduler or even replaced it with his own schedulers. Con Kolivas's work, most significantly his implementation of fair scheduling named Rotating Staircase Deadline (RSDL), inspired Ingo Molnar to develop the Completely Fair Scheduler (CFS) as a replacement for the earlier O(1) scheduler, crediting Kolivas in his announcement.[18] CFS is the first implementation of a fair queuing process scheduler widely used in a general-purpose operating system.[19]
CFS uses a well-studied, classic scheduling algorithm called fair queuing, originally invented for packet networks. Fair queuing had previously been applied to CPU scheduling under the name stride scheduling. The fair queuing CFS scheduler has a scheduling complexity of O(log N), where N is the number of tasks in the runqueue. Choosing a task can be done in constant time, but reinserting a task after it has run requires O(log N) operations, because the run queue is implemented as a red–black tree. The Brain Fuck Scheduler, also created by Con Kolivas, is an alternative to CFS.
In 2023, Peter Zijlstra proposed replacing CFS with an earliest eligible virtual deadline first (EEVDF) process scheduler.[20][21] The aim was to remove the need for CFS latency nice patches.[22] Linux 6.12 added support for userspace scheduler extensions, also known as sched_ext.[23] These schedulers can be installed to replace the default scheduler.[24]
FreeBSD uses a multilevel feedback queue with priorities ranging from 0–255. 0–63 are reserved for interrupts, 64–127 for the top half of the kernel, 128–159 for real-time user threads, 160–223 for time-shared user threads, and 224–255 for idle user threads. Also, like Linux, it uses the active-queue setup, but it also has an idle queue.[25]
NetBSD uses a multilevel feedback queue with priorities ranging from 0–223. 0–63 are reserved for time-shared threads (default, SCHED_OTHER policy), 64–95 for user threads which entered kernel space, 96–128 for kernel threads, 128–191 for user real-time threads (SCHED_FIFO and SCHED_RR policies), and 192–223 for software interrupts.
Solaris uses a multilevel feedback queue with priorities ranging between 0 and 169. Priorities 0–59 are reserved for time-shared threads, 60–99 for system threads, 100–159 for real-time threads, and 160–169 for low-priority interrupts. Unlike Linux,[25] when a process is done using its time quantum, it is given a new priority and put back in the queue. Solaris 9 introduced two new scheduling classes, namely the fixed-priority class and the fair share class.
The threads with fixed priority have the same priority range as that of the time-sharing class, but their priorities are not dynamically adjusted. The fair share scheduling class uses CPU shares to prioritize threads for scheduling decisions. CPU shares indicate the entitlement to CPU resources. They are allocated to a set of processes, which are collectively known as a project.[7]
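As a concrete illustration of the simpler policies described above (first come first served, shortest job first, and round-robin), the following sketch simulates each one on the same set of CPU bursts and compares average waiting times. It is a minimal model only: the process names, burst lengths and the 3-unit quantum are invented for the example, and real schedulers must also handle arrival times, priorities and preemption, which are ignored here.

```python
# Minimal, illustrative simulation of three scheduling policies.
# All jobs are assumed to arrive at time 0; burst lengths are hypothetical.
from collections import deque

jobs = {"A": 6, "B": 3, "C": 8, "D": 2}   # process -> CPU burst (time units)

def fcfs(bursts):
    """First come, first served: run jobs in submission order."""
    waiting, clock = {}, 0
    for name, burst in bursts.items():
        waiting[name] = clock            # time spent waiting before first run
        clock += burst
    return waiting

def sjf(bursts):
    """Shortest job first: run jobs in order of increasing burst length."""
    ordered = dict(sorted(bursts.items(), key=lambda kv: kv[1]))
    return fcfs(ordered)

def round_robin(bursts, quantum=3):
    """Round-robin: give each job up to `quantum` units, then requeue it."""
    remaining = dict(bursts)
    finish, clock = {}, 0
    queue = deque(bursts)
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)           # rescheduled after the others get a turn
        else:
            finish[name] = clock
    # waiting time = completion time - burst length (all arrivals at t = 0)
    return {n: finish[n] - bursts[n] for n in bursts}

for policy in (fcfs, sjf, round_robin):
    waits = policy(jobs)
    avg = sum(waits.values()) / len(waits)
    print(f"{policy.__name__:12s} waits={waits} average={avg:.1f}")
```

Running the sketch shows the expected trade-off: shortest job first minimizes the average wait for this workload, while round-robin spreads the waiting more evenly across the processes.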
https://en.wikipedia.org/wiki/Scheduling_(computing)
Multiprocessing(MP) is the use of two or morecentral processing units(CPUs) within a singlecomputer system.[1][2]The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple coreson onedie, multiple dies in onepackage, multiple packages in onesystem unit, etc.). Amultiprocessoris a computer system having two or moreprocessing units(multiple processors) each sharingmain memoryand peripherals, in order to simultaneously process programs.[3][4]A 2009 textbook defined multiprocessor system similarly, but noted that the processors may share "some or all of the system’s memory and I/O facilities"; it also gavetightly coupled systemas a synonymous term.[5] At theoperating systemlevel,multiprocessingis sometimes used to refer to the execution of multiple concurrentprocessesin a system, with each process running on a separate CPU or core, as opposed to a single process at any one instant.[6][7]When used with this definition, multiprocessing is sometimes contrasted withmultitasking, which may use just a single processor but switch it in time slices between tasks (i.e. atime-sharing system). Multiprocessing however means true parallel execution of multiple processes using more than one processor.[7]Multiprocessing doesn't necessarily mean that a single process or task uses more than one processor simultaneously; the termparallel processingis generally used to denote that scenario.[6]Other authors prefer to refer to the operating system techniques asmultiprogrammingand reserve the termmultiprocessingfor the hardware aspect of having more than one processor.[2][8]The remainder of this article discusses multiprocessing only in this hardware sense. InFlynn's taxonomy, multiprocessors as defined above areMIMDmachines.[9][10]As the term "multiprocessor" normally refers to tightly coupled systems in which all processors share memory, multiprocessors are not the entire class of MIMD machines, which also containsmessage passingmulticomputer systems.[9] In amultiprocessingsystem, all CPUs may be equal, or some may be reserved for special purposes. A combination of hardware andoperating systemsoftware design considerations determine the symmetry (or lack thereof) in a given system. For example, hardware or software considerations may require that only one particular CPU respond to all hardware interrupts, whereas all other work in the system may be distributed equally among CPUs; or execution of kernel-mode code may be restricted to only one particular CPU, whereas user-mode code may be executed in any combination of processors. Multiprocessing systems are often easier to design if such restrictions are imposed, but they tend to be less efficient than systems in which all CPUs are utilized. Systems that treat all CPUs equally are calledsymmetric multiprocessing(SMP) systems. In systems where all CPUs are not equal, system resources may be divided in a number of ways, includingasymmetric multiprocessing(ASMP),non-uniform memory access(NUMA) multiprocessing, andclusteredmultiprocessing. In a master/slave multiprocessor system, the master CPU is in control of the computer and the slave CPU(s) performs assigned tasks. The CPUs can be completely different in terms of speed and architecture. 
Some (or all) of the CPUs can share a common bus, each can also have a private bus (for private resources), or they may be isolated except for a common communications pathway. Likewise, the CPUs can share common RAM and/or have private RAM that the other processor(s) cannot access. The roles of master and slave can change from one CPU to another. Two early examples of a mainframe master/slave multiprocessor are the Bull Gamma 60 and the Burroughs B5000.[11]
An early example of a master/slave multiprocessor system of microprocessors is the Tandy/Radio Shack TRS-80 Model 16 desktop computer, which came out in February 1982 and ran the multi-user/multi-tasking Xenix operating system, Microsoft's version of UNIX (called TRS-XENIX). The Model 16 has two microprocessors: an 8-bit Zilog Z80 CPU running at 4 MHz, and a 16-bit Motorola 68000 CPU running at 6 MHz. When the system is booted, the Z-80 is the master and the Xenix boot process initializes the slave 68000, and then transfers control to the 68000, whereupon the CPUs change roles and the Z-80 becomes a slave processor responsible for all I/O operations including disk, communications, printer and network, as well as the keyboard and integrated monitor, while the operating system and applications run on the 68000 CPU. The Z-80 can be used to do other tasks.
The earlier TRS-80 Model II, which was released in 1979, could also be considered a multiprocessor system as it had both a Z-80 CPU and an Intel 8021[12] microcontroller in the keyboard. The 8021 made the Model II the first desktop computer system with a separate detachable lightweight keyboard connected by a single thin flexible wire, and likely the first keyboard to use a dedicated microcontroller, attributes that would later be copied by Apple and IBM.
In multiprocessing, the processors can be used to execute a single sequence of instructions in multiple contexts (single instruction, multiple data or SIMD, often used in vector processing), multiple sequences of instructions in a single context (multiple instruction, single data or MISD, used for redundancy in fail-safe systems and sometimes applied to describe pipelined processors or hyper-threading), or multiple sequences of instructions in multiple contexts (multiple instruction, multiple data or MIMD).
Tightly coupled multiprocessor systems contain multiple CPUs that are connected at the bus level. These CPUs may have access to a central shared memory (SMP or UMA), or may participate in a memory hierarchy with both local and shared memory (NUMA). The IBM p690 Regatta is an example of a high-end SMP system. Intel Xeon processors dominated the multiprocessor market for business PCs and were the only major x86 option until the release of AMD's Opteron range of processors in 2004. Both ranges of processors had their own onboard cache but provided access to shared memory; the Xeon processors via a common pipe and the Opteron processors via independent pathways to the system RAM. Chip multiprocessors, also known as multi-core computing, involve more than one processor placed on a single chip and can be thought of as the most extreme form of tightly coupled multiprocessing. Mainframe systems with multiple processors are often tightly coupled.
Loosely coupled multiprocessor systems (often referred to as clusters) are based on multiple standalone, relatively low-processor-count commodity computers interconnected via a high-speed communication system (Gigabit Ethernet is common). A Linux Beowulf cluster is an example of a loosely coupled system.
Tightly coupled systems perform better and are physically smaller than loosely coupled systems, but have historically required greater initial investments and may depreciate rapidly; nodes in a loosely coupled system are usually inexpensive commodity computers and can be recycled as independent machines upon retirement from the cluster. Power consumption is also a consideration. Tightly coupled systems tend to be much more energy-efficient than clusters. This is because a considerable reduction in power consumption can be realized by designing components to work together from the beginning in tightly coupled systems, whereas loosely coupled systems use components that were not necessarily intended specifically for use in such systems. Loosely coupled systems can also run different operating systems or OS versions on different machines. Merging data from multiple threads or processes may incur significant overhead due to conflict resolution, data consistency, versioning, and synchronization.[13]
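The closing point about merging results from several processes can be seen even in a tiny example. The sketch below, with an invented workload, fans a computation out across the available CPUs using Python's multiprocessing.Pool and then combines the partial results in the parent process; on a multiprocessor the map phase runs in parallel, while the merge phase is serial and is where consistency and synchronization costs would appear if the workers shared mutable state.

```python
# Illustrative only: distribute independent work across CPUs, then merge.
from multiprocessing import Pool, cpu_count

def partial_histogram(chunk):
    """Worker: count word lengths in one chunk (no shared state needed)."""
    counts = {}
    for word in chunk:
        counts[len(word)] = counts.get(len(word), 0) + 1
    return counts

if __name__ == "__main__":
    words = ["alpha", "beta", "gamma", "delta", "epsilon"] * 10_000
    n = cpu_count()
    chunks = [words[i::n] for i in range(n)]          # one slice per processor

    with Pool(processes=n) as pool:
        partials = pool.map(partial_histogram, chunks)  # parallel phase

    # Merge phase: done serially in the parent. With truly shared data this
    # is where locking, versioning and conflict resolution would be needed.
    merged = {}
    for part in partials:
        for length, count in part.items():
            merged[length] = merged.get(length, 0) + count
    print(merged)
```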
https://en.wikipedia.org/wiki/Multiprocessing#Symmetric_multiprocessing
Incomputing,input/output(I/O,i/o, or informallyioorIO) is the communication between an information processing system, such as acomputer, and the outside world, such as another computer system, peripherals, or a human operator.Inputsare the signals or data received by the system and outputs are the signals ordatasent from it. The term can also be used as part of an action; to "perform I/O" is to perform aninput or output operation. I/O devicesare the pieces ofhardwareused by a human (or other system) to communicate with a computer. For instance, akeyboardorcomputer mouseis aninput devicefor a computer, whilemonitorsandprintersareoutput devices. Devices for communication between computers, such asmodemsandnetwork cards, typically perform both input and output operations. Any interaction with the system by an interactor is aninputand the reaction the system responds is called the output. The designation of a device as either input or output depends on perspective. Mice and keyboards take physical movements that the human user outputs and convert them into input signals that a computer can understand; the output from these devices is the computer's input. Similarly, printers and monitors take signals that computers output as input, and they convert these signals into a representation that human users can understand. From the humanuser's perspective, the process of reading or seeing these representations is receiving output; this type of interaction between computers and humans is studied in the field ofhuman–computer interaction. A further complication is that a device traditionally considered an input device, e.g., card reader, keyboard, may accept control commands to, e.g., select stacker, display keyboard lights, while a device traditionally considered as an output device may provide status data (e.g., low toner, out of paper, paper jam). In computer architecture, the combination of theCPUandmain memory, to which the CPU can read or write directly using individualinstructions, is considered the brain of a computer. Any transfer of information to or from the CPU/memory combo, for example by reading data from adisk drive, is considered I/O.[1]The CPU and its supporting circuitry may providememory-mapped I/Othat is used in low-levelcomputer programming, such as in the implementation ofdevice drivers, or may provide access toI/O channels. AnI/O algorithmis one designed to exploit locality and perform efficiently when exchanging data with a secondary storage device, such as a disk drive. An I/O interface is required whenever the I/O device is driven by a processor. Typically a CPU communicates with devices via abus. The interface must have the necessary logic to interpret the device address generated by the processor.Handshakingshould be implemented by the interface using appropriate commands (like BUSY, READY, and WAIT), and the processor can communicate with an I/O device through the interface. If different data formats are being exchanged, the interface must be able to convert serial data to parallel form and vice versa. Because it would be a waste for a processor to be idle while it waits for data from an input device there must be provision for generatinginterrupts[2]and the corresponding type numbers for further processing by the processor if required.[clarification needed] A computer that usesmemory-mapped I/Oaccesses hardware by reading and writing to specific memory locations, using the same assembly language instructions that computer would normally use to access memory. 
An alternative method is via instruction-based I/O which requires that a CPU have specialized instructions for I/O.[1]Both input and output devices have adata processingrate that can vary greatly.[2]With some devices able to exchange data at very high speedsdirect accessto memory (DMA) without the continuous aid of a CPU is required.[2] Higher-leveloperating systemand programming facilities employ separate, more abstract I/O concepts andprimitives. For example, most operating systems provide application programs with the concept offiles. Most programming languages provide I/O facilities either as statements in the language or asfunctionsin a standard library for the language. An alternative to special primitive functions is theI/O monad, which permits programs to just describe I/O, and the actions are carried out outside the program. This is notable because theI/Ofunctions would introduceside-effectsto any programming language, but this allowspurely functional programmingto be practical. The I/O facilities provided by operating systems may berecord-oriented, with files containingrecords, or stream-oriented, with the file containing a stream of bytes. Channel I/Orequires the use of instructions that are specifically designed to perform I/O operations. The I/O instructions address the channel or the channel and device; the channel asynchronously accesses all other required addressing and control information. This is similar to DMA, but more flexible. Port-mapped I/Oalso requires the use of special I/O instructions. Typically one or more ports are assigned to the device, each with a special purpose. The port numbers are in a separate address space from that used by normal instructions. Direct memory access(DMA) is a means for devices to transfer large chunks of data to and from memory independently of the CPU.
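Memory-mapped I/O proper involves hardware registers, but the general idea of addressing a resource through ordinary memory reads and writes can be sketched with Python's mmap module, which maps a file into the process's address space. This is only an analogy under stated assumptions: the file name and contents below are invented, and mapping a regular file is not the same as mapping device registers.

```python
# Sketch of memory-mapped access: the mapped file behaves like a byte array.
import mmap

PATH = "demo.bin"                          # hypothetical scratch file

# Create a small file to map (mmap cannot map an empty file).
with open(PATH, "wb") as f:
    f.write(bytes(64))                     # 64 zero bytes

with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as view:     # map the whole file
        view[0:4] = b"\xde\xad\xbe\xef"        # "store" via memory writes
        value = view[0:4]                      # "load" via memory reads
        print(value.hex())                     # -> deadbeef
        view.flush()                           # push changes back to the file
```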
https://en.wikipedia.org/wiki/Input/output
Incomputing, afile systemorfilesystem(often abbreviated toFSorfs) governsfileorganization and access. Alocalfile system is a capability of anoperating systemthat services the applications running on the samecomputer.[1][2]Adistributed file systemis aprotocolthat provides file access betweennetworkedcomputers. A file system provides adata storageservicethat allowsapplicationsto sharemass storage. Without a file system, applications could access the storage inincompatibleways that lead toresource contention,data corruptionanddata loss. There are many file systemdesignsandimplementations– with various structure and features and various resulting characteristics such as speed, flexibility, security, size and more. File systems have been developed for many types ofstorage devices, includinghard disk drives(HDDs),solid-state drives(SSDs),magnetic tapesandoptical discs.[3] A portion of the computermain memorycan be set up as aRAM diskthat serves as a storage device for a file system. File systems such astmpfscan store files invirtual memory. Avirtualfile system provides access to files that are either computed on request, calledvirtual files(seeprocfsandsysfs), or are mapping into another, backing storage. Fromc.1900and before the advent of computers the termsfile system,filing systemandsystem for filingwere used to describe methods of organizing, storing and retrieving paper documents.[4]By 1961, the termfile systemwas being applied to computerized filing alongside the original meaning.[5]By 1964, it was in general use.[6] A local file system'sarchitecturecan be described aslayers of abstractioneven though a particular file system design may not actually separate the concepts.[7] Thelogical file systemlayer provides relatively high-level access via anapplication programming interface(API) for file operations including open, close, read and write – delegating operations to lower layers. This layer manages open file table entries and per-process file descriptors.[8]It provides file access, directory operations, security and protection.[7] Thevirtual file system, an optional layer, supports multiple concurrent instances of physical file systems, each of which is called a file system implementation.[8] Thephysical file systemlayer provides relatively low-level access to a storage device (e.g. disk). It reads and writesdata blocks, providesbufferingand othermemory managementand controls placement of blocks in specific locations on the storage medium. This layer usesdevice driversorchannel I/Oto drive the storage device.[7] Afile name, orfilename, identifies a file to consuming applications and in some cases users. A file name is unique so that an application can refer to exactly one file for a particular name. If the file system supports directories, then generally file name uniqueness is enforced within the context of each directory. In other words, a storage can contain multiple files with the same name, but not in the same directory. Most file systems restrict the length of a file name. Some file systems match file names ascase sensitiveand others as case insensitive. For example, the namesMYFILEandmyfilematch the same file for case insensitive, but different files for case sensitive. Most modern file systems allow a file name to contain a wide range of characters from theUnicodecharacter set. Some restrict characters such as those used to indicate special attributes such as a device, device type, directory prefix, file path separator, or file type. 
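The rules above about per-directory name uniqueness and case sensitivity can be made concrete with a toy in-memory directory. This is only a sketch of the described behaviour, not any real file system's API; the class and method names are invented for the example.

```python
# Toy directory object illustrating per-directory name uniqueness and
# case-sensitive vs case-insensitive matching. Purely illustrative.

class ToyDirectory:
    def __init__(self, case_sensitive=True):
        self.case_sensitive = case_sensitive
        self._entries = {}                       # lookup key -> stored name

    def _key(self, name):
        return name if self.case_sensitive else name.lower()

    def create(self, name):
        if self._key(name) in self._entries:
            raise FileExistsError(f"{name!r} already exists in this directory")
        self._entries[self._key(name)] = name

    def lookup(self, name):
        return self._entries.get(self._key(name))

sensitive = ToyDirectory(case_sensitive=True)
sensitive.create("MYFILE")
sensitive.create("myfile")              # allowed: two distinct files

insensitive = ToyDirectory(case_sensitive=False)
insensitive.create("MYFILE")
print(insensitive.lookup("myfile"))     # -> MYFILE (same file)
# insensitive.create("myfile")          # would raise FileExistsError
```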
File systems typically support organizing files intodirectories, also calledfolders, which segregate files into groups. This may be implemented by associating the file name with an index in atable of contentsor aninodein aUnix-likefile system. Directory structures may be flat (i.e. linear), or allow hierarchies by allowing a directory to contain directories, called subdirectories. The first file system to support arbitrary hierarchies of directories was used in theMulticsoperating system.[9]The native file systems of Unix-like systems also support arbitrary directory hierarchies, as do,Apple'sHierarchical File Systemand its successorHFS+inclassic Mac OS, theFATfile system inMS-DOS2.0 and later versions of MS-DOS and inMicrosoft Windows, theNTFSfile system in theWindows NTfamily of operating systems, and the ODS-2 (On-Disk Structure-2) and higher levels of theFiles-11file system inOpenVMS. In addition to data, the file content, a file system also manages associatedmetadatawhich may include but is not limited to: A file system stores associated metadata separate from the content of the file. Most file systems store the names of all the files in one directory in one place—the directory table for that directory—which is often stored like any other file. Many file systems put only some of the metadata for a file in the directory table, and the rest of the metadata for that file in a completely separate structure, such as theinode. Most file systems also store metadata not associated with any one particular file. Such metadata includes information about unused regions—free space bitmap,block availability map—and information aboutbad sectors. Often such information about anallocation groupis stored inside the allocation group itself. Additional attributes can be associated on file systems, such asNTFS,XFS,ext2,ext3, some versions ofUFS, andHFS+, usingextended file attributes. Some file systems provide for user defined attributes such as the author of the document, the character encoding of a document or the size of an image. Some file systems allow for different data collections to be associated with one file name. These separate collections may be referred to asstreamsorforks. Apple has long used a forked file system on the Macintosh, and Microsoft supports streams in NTFS. Some file systems maintain multiple past revisions of a file under a single file name; the file name by itself retrieves the most recent version, while prior saved version can be accessed using a special naming convention such as "filename;4" or "filename(-4)" to access the version four saves ago. Seecomparison of file systems § Metadatafor details on which file systems support which kinds of metadata. A local file system tracks which areas of storage belong to which file and which are not being used. When a file system creates a file, it allocates space for data. Some file systems permit or require specifying an initial space allocation and subsequent incremental allocations as the file grows. To delete a file, the file system records that the file's space is free; available to use for another file. A local file system manages storage space to provide a level of reliability and efficiency. Generally, it allocates storage device space in a granular manner, usually multiple physical units (i.e.bytes). 
For example, inApple DOSof the early 1980s, 256-byte sectors on 140 kilobyte floppy disk used atrack/sector map.[citation needed] The granular nature results in unused space, sometimes calledslack space, for each file except for those that have the rare size that is a multiple of the granular allocation.[10]For a 512-byte allocation, the average unused space is 256 bytes. For 64 KB clusters, the average unused space is 32 KB. Generally, the allocation unit size is set when the storage is configured. Choosing a relatively small size compared to the files stored, results in excessive access overhead. Choosing a relatively large size results in excessive unused space. Choosing an allocation size based on the average size of files expected to be in the storage tends to minimize unusable space. As a file system creates, modifies and deletes files, the underlying storage representation may becomefragmented. Files and the unused space between files will occupy allocation blocks that are not contiguous. A file becomes fragmented if space needed to store its content cannot be allocated in contiguous blocks. Free space becomes fragmented when files are deleted.[11] This is invisible to the end user and the system still works correctly. However this can degrade performance on some storage hardware that work better with contiguous blocks such ashard disk drives. Other hardware such assolid-state drivesare not affected by fragmentation. A file system often supports access control of data that it manages. The intent of access control is often to prevent certain users from reading or modifying certain files. Access control can also restrict access by program in order to ensure that data is modified in a controlled way. Examples include passwords stored in the metadata of the file or elsewhere andfile permissionsin the form of permission bits,access control lists, orcapabilities. The need for file system utilities to be able to access the data at the media level to reorganize the structures and provide efficient backup usually means that these are only effective for polite users but are not effective against intruders. Methods for encrypting file data are sometimes included in the file system. This is very effective since there is no need for file system utilities to know the encryption seed to effectively manage the data. The risks of relying on encryption include the fact that an attacker can copy the data and use brute force to decrypt the data. Additionally, losing the seed means losing the data. Some operating systems allow a system administrator to enabledisk quotasto limit a user's use of storage space. A file system typically ensures that stored data remains consistent in both normal operations as well as exceptional situations like: Recovery from exceptional situations may include updating metadata, directory entries and handling data that was buffered but not written to storage media. A file system might record events to allow analysis of issues such as: Many file systems access data as a stream ofbytes. Typically, to read file data, a program provides amemory bufferand the file system retrieves data from the medium and then writes the data to the buffer. A write involves the program providing a buffer of bytes that the file system reads and then stores to the medium. Some file systems, or layers on top of a file system, allow a program to define arecordso that a program can read and write data as a structure; not an unorganized sequence of bytes. 
If afixed lengthrecord definition is used, then locating the nthrecord can be calculated mathematically, which is relatively fast compared to parsing the data for record separators. An identification for each record, also known as a key, allows a program to read, write and update records without regard to their location in storage. Such storage requires managing blocks of media, usually separating key blocks and data blocks. Efficient algorithms can be developed with pyramid structures for locating records.[12] Typically, a file system can be managed by the user via various utility programs. Some utilities allow the user to create, configure and remove an instance of a file system. It may allow extending or truncating the space allocated to the file system. Directory utilities may be used to create, rename and deletedirectory entries, which are also known asdentries(singular:dentry),[13]and to alter metadata associated with a directory. Directory utilities may also include capabilities to create additional links to a directory (hard linksinUnix), to rename parent links (".." inUnix-likeoperating systems),[clarification needed]and to create bidirectional links to files. File utilities create, list, copy, move and delete files, and alter metadata. They may be able to truncate data, truncate or extend space allocation, append to, move, and modify files in-place. Depending on the underlying structure of the file system, they may provide a mechanism to prepend to or truncate from the beginning of a file, insert entries into the middle of a file, or delete entries from a file. Utilities to free space for deleted files, if the file system provides an undelete function, also belong to this category. Some file systems defer operations such as reorganization of free space, secure erasing of free space, and rebuilding of hierarchical structures by providing utilities to perform these functions at times of minimal activity. An example is the file systemdefragmentationutilities. Some of the most important features of file system utilities are supervisory activities which may involve bypassing ownership or direct access to the underlying device. These include high-performance backup and recovery, data replication, and reorganization of various data structures and allocation tables within the file system. Utilities, libraries and programs usefile system APIsto make requests of the file system. These include data transfer, positioning, updating metadata, managing directories, managing access specifications, and removal. Frequently, retail systems are configured with a single file system occupying the entirestorage device. Another approach is topartitionthe disk so that several file systems with different attributes can be used. One file system, for use as browser cache or email storage, might be configured with a small allocation size. This keeps the activity of creating and deleting files typical of browser activity in a narrow area of the disk where it will not interfere with other file allocations. Another partition might be created for the storage of audio or video files with a relatively large block size. Yet another may normally be setread-onlyand only periodically be set writable. 
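The earlier point in this passage, that the n-th fixed-length record can be located by arithmetic rather than by scanning for separators, amounts to a single multiplication followed by a seek. The record size and file name below are hypothetical, chosen only to keep the sketch self-contained.

```python
# Locate and read the n-th fixed-length record by computing its byte offset.
import os

RECORD_SIZE = 32                  # hypothetical fixed record length in bytes
PATH = "records.dat"              # hypothetical data file

# Build a small demo file of 10 fixed-length records.
with open(PATH, "wb") as f:
    for i in range(10):
        f.write(f"record {i}".encode().ljust(RECORD_SIZE, b"\x00"))

def read_record(path, n, record_size=RECORD_SIZE):
    """Seek directly to record n; no parsing of record separators is needed."""
    with open(path, "rb") as f:
        f.seek(n * record_size, os.SEEK_SET)
        return f.read(record_size)

print(read_record(PATH, 7).rstrip(b"\x00"))   # -> b'record 7'
```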
Some file systems, such as ZFS and APFS, support multiple file systems sharing a common pool of free blocks, supporting several file systems with different attributes without having to reserve a fixed amount of space for each file system.[14][15]
A third approach, which is mostly used in cloud systems, is to use "disk images" to house additional file systems, with the same attributes or not, within another (host) file system as a file. A common example is virtualization: one user can run an experimental Linux distribution (using the ext4 file system) in a virtual machine under his/her production Windows environment (using NTFS). The ext4 file system resides in a disk image, which is treated as a file (or multiple files, depending on the hypervisor and settings) in the NTFS host file system.
Having multiple file systems on a single system has the additional benefit that, in the event of corruption of a single file system, the remaining file systems will frequently still be intact. This includes virus destruction of the system file system or even a system that will not boot. File system utilities which require dedicated access can be effectively completed piecemeal. In addition, defragmentation may be more effective. Several system maintenance utilities, such as virus scans and backups, can also be processed in segments. For example, it is not necessary to back up the file system containing videos along with all the other files if none have been added since the last backup. As for the image files, one can easily "spin off" differential images which contain only "new" data written to the master (original) image. Differential images can be used for both safety concerns (as a "disposable" system that can be quickly restored if destroyed or contaminated by a virus, since the old image can be removed and a new image created in a matter of seconds, even without automated procedures) and quick virtual machine deployment (since the differential images can be quickly spawned using a script in batches).
A disk file system takes advantage of the ability of disk storage media to randomly address data in a short amount of time. Additional considerations include the speed of accessing data following that initially requested and the anticipation that the following data may also be requested. This permits multiple users (or processes) access to various data on the disk without regard to the sequential location of the data. Examples include FAT (FAT12, FAT16, FAT32), exFAT, NTFS, ReFS, HFS and HFS+, HPFS, APFS, UFS, ext2, ext3, ext4, XFS, btrfs, Files-11, Veritas File System, VMFS, ZFS, ReiserFS, NSS and ScoutFS. Some disk file systems are journaling file systems or versioning file systems.
ISO 9660 and Universal Disk Format (UDF) are two common formats that target Compact Discs, DVDs and Blu-ray discs. Mount Rainier is an extension to UDF, supported since the 2.6 series of the Linux kernel and since Windows Vista, that facilitates rewriting to DVDs.
A flash file system considers the special abilities, performance and restrictions of flash memory devices. Frequently a disk file system can use a flash memory device as the underlying storage media, but it is much better to use a file system specifically designed for a flash device.[16]
A tape file system is a file system and tape format designed to store files on tape. Magnetic tapes are sequential storage media with significantly longer random data access times than disks, posing challenges to the creation and efficient management of a general-purpose file system.
In a disk file system there is typically a master file directory, and a map of used and free data regions. Any file additions, changes, or removals require updating the directory and the used/free maps. Random access to data regions is measured in milliseconds so this system works well for disks. Tape requires linear motion to wind and unwind potentially very long reels of media. This tape motion may take several seconds to several minutes to move the read/write head from one end of the tape to the other. Consequently, a master file directory and usage map can be extremely slow and inefficient with tape. Writing typically involves reading the block usage map to find free blocks for writing, updating the usage map and directory to add the data, and then advancing the tape to write the data in the correct spot. Each additional file write requires updating the map and directory and writing the data, which may take several seconds to occur for each file. Tape file systems instead typically allow for the file directory to be spread across the tape intermixed with the data, referred to asstreaming, so that time-consuming and repeated tape motions are not required to write new data. However, a side effect of this design is that reading the file directory of a tape usually requires scanning the entire tape to read all the scattered directory entries. Most data archiving software that works with tape storage will store a local copy of the tape catalog on a disk file system, so that adding files to a tape can be done quickly without having to rescan the tape media. The local tape catalog copy is usually discarded if not used for a specified period of time, at which point the tape must be re-scanned if it is to be used in the future. IBM has developed a file system for tape called theLinear Tape File System. The IBM implementation of this file system has been released as the open-sourceIBM Linear Tape File System — Single Drive Edition (LTFS-SDE)product. The Linear Tape File System uses a separate partition on the tape to record the index meta-data, thereby avoiding the problems associated with scattering directory entries across the entire tape. Writing data to a tape, erasing, or formatting a tape is often a significantly time-consuming process and can take several hours on large tapes.[a]With many data tape technologies it is not necessary to format the tape before over-writing new data to the tape. This is due to the inherently destructive nature of overwriting data on sequential media. Because of the time it can take to format a tape, typically tapes are pre-formatted so that the tape user does not need to spend time preparing each new tape for use. All that is usually necessary is to write an identifying media label to the tape before use, and even this can be automatically written by software when a new tape is used for the first time. Another concept for file management is the idea of a database-based file system. Instead of, or in addition to, hierarchical structured management, files are identified by their characteristics, like type of file, topic, author, or similarrich metadata.[17] IBM DB2 for i[18](formerly known as DB2/400 and DB2 for i5/OS) is a database file system as part of the object basedIBM i[19]operating system (formerly known as OS/400 and i5/OS), incorporating asingle level storeand running on IBM Power Systems (formerly known as AS/400 and iSeries), designed by Frank G. Soltis IBM's former chief scientist for IBM i. Around 1978 to 1988 Frank G. 
Soltis and his team at IBM Rochester had successfully designed and applied technologies like the database file system where others like Microsoft later failed to accomplish.[20]These technologies are informally known as 'Fortress Rochester'[citation needed]and were in few basic aspects extended from early Mainframe technologies but in many ways more advanced from a technological perspective[citation needed]. Some other projects that are not "pure" database file systems but that use some aspects of a database file system: Some programs need to either make multiple file system changes, or, if one or more of the changes fail for any reason, make none of the changes. For example, a program which is installing or updating software may write executables, libraries, and/or configuration files. If some of the writing fails and the software is left partially installed or updated, the software may be broken or unusable. An incomplete update of a key system utility, such as the commandshell, may leave the entire system in an unusable state. Transaction processingintroduces theatomicityguarantee, ensuring that operations inside of a transaction are either all committed or the transaction can be aborted and the system discards all of its partial results. This means that if there is a crash or power failure, after recovery, the stored state will be consistent. Either the software will be completely installed or the failed installation will be completely rolled back, but an unusable partial install will not be left on the system. Transactions also provide theisolationguarantee[clarification needed], meaning that operations within a transaction are hidden from other threads on the system until the transaction commits, and that interfering operations on the system will be properlyserializedwith the transaction. Windows, beginning with Vista, added transaction support toNTFS, in a feature calledTransactional NTFS, but its use is now discouraged.[21]There are a number of research prototypes of transactional file systems for UNIX systems, including the Valor file system,[22]Amino,[23]LFS,[24]and a transactionalext3file system on the TxOS kernel,[25]as well as transactional file systems targeting embedded systems, such as TFFS.[26] Ensuring consistency across multiple file system operations is difficult, if not impossible, without file system transactions.File lockingcan be used as aconcurrency controlmechanism for individual files, but it typically does not protect the directory structure or file metadata. For instance, file locking cannot preventTOCTTOUrace conditions on symbolic links. File locking also cannot automatically roll back a failed operation, such as a software upgrade; this requires atomicity. Journaling file systemsis one technique used to introduce transaction-level consistency to file system structures. Journal transactions are not exposed to programs as part of the OS API; they are only used internally to ensure consistency at the granularity of a single system call. Data backup systems typically do not provide support for direct backup of data stored in a transactional manner, which makes the recovery of reliable and consistent data sets difficult. Most backup software simply notes what files have changed since a certain time, regardless of the transactional state shared across multiple files in the overall dataset. 
As a workaround, some database systems simply produce an archived state file containing all data up to that point, and the backup software only backs that up and does not interact directly with the active transactional databases at all. Recovery requires separate recreation of the database from the state file after the file has been restored by the backup software. Anetwork file systemis a file system that acts as a client for a remote file access protocol, providing access to files on a server. Programs using local interfaces can transparently create, manage and access hierarchical directories and files in remote network-connected computers. Examples of network file systems include clients for theNFS,[27]AFS,SMBprotocols, and file-system-like clients forFTPandWebDAV. Ashared disk file systemis one in which a number of machines (usually servers) all have access to the same external disk subsystem (usually astorage area network). The file system arbitrates access to that subsystem, preventing write collisions.[28]Examples includeGFS2fromRed Hat,GPFS, now known as Spectrum Scale, from IBM,SFSfrom DataPlow,CXFSfromSGI,StorNextfromQuantum Corporationand ScoutFS from Versity. Some file systems expose elements of the operating system as files so they can be acted on via thefile system API. This is common inUnix-likeoperating systems, and to a lesser extent in other operating systems. Examples include: In the 1970s disk and digital tape devices were too expensive for some earlymicrocomputerusers. An inexpensive basic data storage system was devised that used commonaudio cassettetape. When the system needed to write data, the user was notified to press "RECORD" on the cassette recorder, then press "RETURN" on the keyboard to notify the system that the cassette recorder was recording. The system wrote a sound to provide time synchronization, thenmodulated soundsthat encoded a prefix, the data, achecksumand a suffix. When the system needed to read data, the user was instructed to press "PLAY" on the cassette recorder. The system wouldlistento the sounds on the tape waiting until a burst of sound could be recognized as the synchronization. The system would then interpret subsequent sounds as data. When the data read was complete, the system would notify the user to press "STOP" on the cassette recorder. It was primitive, but it (mostly) worked. Data was stored sequentially, usually in an unnamed format, although some systems (such as theCommodore PETseries of computers) did allow the files to be named. Multiple sets of data could be written and located by fast-forwarding the tape and observing at the tape counter to find the approximate start of the next data region on the tape. The user might have to listen to the sounds to find the right spot to begin playing the next data region. Some implementations even included audible sounds interspersed with the data. In a flat file system, there are nosubdirectories; directory entries for all files are stored in a single directory. Whenfloppy diskmedia was first available this type of file system was adequate due to the relatively small amount of data space available.CP/Mmachines featured a flat file system, where files could be assigned to one of 16user areasand generic file operations narrowed to work on one instead of defaulting to work on all of them. 
These user areas were no more than special attributes associated with the files; that is, it was not necessary to define specificquotafor each of these areas and files could be added to groups for as long as there was still free storage space on the disk. The earlyApple Macintoshalso featured a flat file system, theMacintosh File System. It was unusual in that the file management program (Macintosh Finder) created the illusion of a partially hierarchical filing system on top of EMFS. This structure required every file to have a unique name, even if it appeared to be in a separate folder.IBMDOS/360andOS/360store entries for all files on a disk pack (volume) in a directory on the pack called aVolume Table of Contents(VTOC). While simple, flat file systems become awkward as the number of files grows and makes it difficult to organize data into related groups of files. A recent addition to the flat file system family isAmazon'sS3, a remote storage service, which is intentionally simplistic to allow users the ability to customize how their data is stored. The only constructs are buckets (imagine a disk drive of unlimited size) and objects (similar, but not identical to the standard concept of a file). Advanced file management is allowed by being able to use nearly any character (including '/') in the object's name, and the ability to select subsets of the bucket's content based on identical prefixes. Anoperating system(OS) typically supports one or more file systems. Sometimes an OS and its file system are so tightly interwoven that it is difficult to describe them independently. An OS typically provides file system access to the user. Often an OS providescommand line interface, such asUnix shell, WindowsCommand PromptandPowerShell, andOpenVMS DCL. An OS often also providesgraphical user interfacefile browserssuch as MacOSFinderand WindowsFile Explorer. Unix-likeoperating systems create a virtual file system, which makes all the files on all the devices appear to exist in a single hierarchy. This means, in those systems, there is oneroot directory, and every file existing on the system is located under it somewhere. Unix-like systems can use aRAM diskor network shared resource as its root directory. Unix-like systems assign a device name to each device, but this is not how the files on that device are accessed. Instead, to gain access to files on another device, the operating system must first be informed where in the directory tree those files should appear. This process is calledmountinga file system. For example, to access the files on aCD-ROM, one must tell the operating system "Take the file system from this CD-ROM and make it appear under such-and-such directory." The directory given to the operating system is called themount point– it might, for example, be/media. The/mediadirectory exists on many Unix systems (as specified in theFilesystem Hierarchy Standard) and is intended specifically for use as a mount point for removable media such as CDs, DVDs, USB drives or floppy disks. It may be empty, or it may contain subdirectories for mounting individual devices. Generally, only theadministrator(i.e.root user) may authorize the mounting of file systems. Unix-likeoperating systems often include software and tools that assist in the mounting process and provide it new functionality. Some of these strategies have been coined "auto-mounting" as a reflection of their purpose. 
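On a Unix-like system the result of mounting can be observed from user space. The sketch below reads the kernel's mount table on Linux; the /proc/self/mounts path is Linux-specific, and the parsing is deliberately simplistic.

```python
# List mounted file systems on Linux by reading the kernel's mount table.
# Linux-specific: /proc/self/mounts does not exist on other systems.

def list_mounts(path="/proc/self/mounts"):
    mounts = []
    with open(path) as f:
        for line in f:
            device, mount_point, fs_type, options, *_ = line.split()
            mounts.append((device, mount_point, fs_type, options))
    return mounts

for device, mount_point, fs_type, _ in list_mounts():
    print(f"{device:25s} on {mount_point:25s} type {fs_type}")
```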
Linuxsupports numerous file systems, but common choices for the system disk on a block device include the ext* family (ext2,ext3andext4),XFS,JFS, andbtrfs. For raw flash without aflash translation layer(FTL) orMemory Technology Device(MTD), there areUBIFS,JFFS2andYAFFS, among others.SquashFSis a common compressed read-only file system. Solarisin earlier releases defaulted to (non-journaled or non-logging)UFSfor bootable and supplementary file systems. Solaris defaulted to, supported, and extended UFS. Support for other file systems and significant enhancements were added over time, includingVeritas SoftwareCorp. (journaling)VxFS, Sun Microsystems (clustering)QFS, Sun Microsystems (journaling) UFS, and Sun Microsystems (open source, poolable, 128 bit compressible, and error-correcting)ZFS. Kernel extensions were added to Solaris to allow for bootable VeritasVxFSoperation. Logging orjournalingwas added to UFS in Sun'sSolaris 7. Releases ofSolaris 10, Solaris Express,OpenSolaris, and other open source variants of the Solaris operating system later supported bootableZFS. Logical Volume Managementallows for spanning a file system across multiple devices for the purpose of adding redundancy, capacity, and/or throughput. Legacy environments in Solaris may useSolaris Volume Manager(formerly known asSolstice DiskSuite). Multiple operating systems (including Solaris) may useVeritas Volume Manager. Modern Solaris based operating systems eclipse the need for volume management through leveraging virtual storage pools inZFS. macOS (formerly Mac OS X)uses theApple File System(APFS), which in 2017 replaced a file system inherited fromclassic Mac OScalledHFS Plus(HFS+). Apple also uses the term "Mac OS Extended" for HFS+.[29]HFS Plus is ametadata-rich andcase-preservingbut (usually)case-insensitivefile system. Due to the Unix roots of macOS, Unix permissions were added to HFS Plus. Later versions of HFS Plus added journaling to prevent corruption of the file system structure and introduced a number of optimizations to the allocation algorithms in an attempt to defragment files automatically without requiring an external defragmenter. File names can be up to 255 characters. HFS Plus usesUnicodeto store file names. On macOS, thefiletypecan come from thetype code, stored in file's metadata, or thefilename extension. HFS Plus has three kinds of links: Unix-stylehard links, Unix-stylesymbolic links, andaliases. Aliases are designed to maintain a link to their original file even if they are moved or renamed; they are not interpreted by the file system itself, but by the File Manager code inuserland. macOS 10.13 High Sierra, which was announced on June 5, 2017, at Apple's WWDC event, uses theApple File Systemonsolid-state drives. macOS also supported theUFSfile system, derived from theBSDUnix Fast File System viaNeXTSTEP. However, as ofMac OS X Leopard, macOS could no longer be installed on a UFS volume, nor can a pre-Leopard system installed on a UFS volume be upgraded to Leopard.[30]As ofMac OS X LionUFS support was completely dropped. Newer versions of macOS are capable of reading and writing to the legacyFATfile systems (16 and 32) common on Windows. They are also capable ofreadingthe newerNTFSfile systems for Windows. In order towriteto NTFS file systems on macOS versions prior toMac OS X Snow Leopardthird-party software is necessary. 
Mac OS X 10.6 (Snow Leopard) and later allow writing to NTFS file systems, but only after a non-trivial system setting change (third-party software exists that automates this).[31] Finally, macOS supports reading and writing of theexFATfile system since Mac OS X Snow Leopard, starting from version 10.6.5.[32] OS/21.2 introduced theHigh Performance File System(HPFS). HPFS supports mixed case file names in differentcode pages, long file names (255 characters), more efficient use of disk space, an architecture that keeps related items close to each other on the disk volume, less fragmentation of data,extent-basedspace allocation, aB+ treestructure for directories, and the root directory located at the midpoint of the disk, for faster average access. Ajournaled filesystem(JFS) was shipped in 1999. PC-BSDis a desktop version of FreeBSD, which inheritsFreeBSD'sZFSsupport, similarly toFreeNAS. The new graphical installer ofPC-BSDcan handle/ (root) on ZFSandRAID-Zpool installs anddisk encryptionusingGeliright from the start in an easy convenient (GUI) way. The current PC-BSD 9.0+ 'Isotope Edition' has ZFS filesystem version 5 and ZFS storage pool version 28. Plan 9 from Bell Labstreats everything as a file and accesses all objects as a file would be accessed (i.e., there is noioctlormmap): networking, graphics, debugging, authentication, capabilities, encryption, and other services are accessed via I/O operations onfile descriptors. The9Pprotocol removes the difference between local and remote files. File systems in Plan 9 are organized with the help of private, per-process namespaces, allowing each process to have a different view of the many file systems that provide resources in a distributed system. TheInfernooperating system shares these concepts with Plan 9. Windows makes use of theFAT,NTFS,exFAT,Live File SystemandReFSfile systems (the last of these is only supported and usable inWindows Server 2012,Windows Server 2016,Windows 8,Windows 8.1, andWindows 10; Windows cannot boot from it). Windows uses adrive letterabstraction at the user level to distinguish one disk or partition from another. For example, thepathC:\WINDOWSrepresents a directoryWINDOWSon the partition represented by the letter C. Drive C: is most commonly used for the primaryhard disk drivepartition, on which Windows is usually installed and from which it boots. This "tradition" has become so firmly ingrained that bugs exist in many applications which make assumptions that the drive that the operating system is installed on is C. The use of drive letters, and the tradition of using "C" as the drive letter for the primary hard disk drive partition, can be traced toMS-DOS, where the letters A and B were reserved for up to two floppy disk drives. This in turn derived fromCP/Min the 1970s, and ultimately from IBM'sCP/CMSof 1967. The family ofFATfile systems is supported by almost all operating systems for personal computers, including all versions ofWindowsandMS-DOS/PC DOS,OS/2, andDR-DOS. (PC DOS is an OEM version of MS-DOS, MS-DOS was originally based onSCP's86-DOS. DR-DOS was based onDigital Research'sConcurrent DOS, a successor ofCP/M-86.) The FAT file systems are therefore well-suited as a universal exchange format between computers and devices of most any type and age. The FAT file system traces its roots back to an (incompatible) 8-bit FAT precursor inStandalone Disk BASICand the short-livedMDOS/MIDASproject.[citation needed] Over the years, the file system has been expanded fromFAT12toFAT16andFAT32. 
Various features have been added to the file system includingsubdirectories,codepagesupport,extended attributes, andlong filenames. Third parties such as Digital Research have incorporated optional support for deletion tracking, and volume/directory/file-based multi-user security schemes to support file and directory passwords and permissions such as read/write/execute/delete access rights. Most of these extensions are not supported by Windows. The FAT12 and FAT16 file systems had a limit on the number of entries in theroot directoryof the file system and had restrictions on the maximum size of FAT-formatted disks orpartitions. FAT32 addresses the limitations in FAT12 and FAT16, except for the file size limit of close to 4 GB, but it remains limited compared to NTFS. FAT12, FAT16 and FAT32 also have a limit of eight characters for the file name, and three characters for the extension (such as.exe). This is commonly referred to as the8.3 filenamelimit.VFAT, an optional extension to FAT12, FAT16 and FAT32, introduced inWindows 95andWindows NT 3.5, allowed long file names (LFN) to be stored in the FAT file system in a backwards compatible fashion. NTFS, introduced with theWindows NToperating system in 1993, allowedACL-based permission control. Other features also supported byNTFSinclude hard links, multiple file streams, attribute indexing, quota tracking, sparse files, encryption, compression, and reparse points (directories working as mount-points for other file systems, symlinks, junctions, remote storage links). exFAThas certain advantages over NTFS with regard tofile system overhead.[citation needed] exFAT is not backward compatible with FAT file systems such as FAT12, FAT16 or FAT32. The file system is supported with newer Windows systems, such as Windows XP, Windows Server 2003, Windows Vista, Windows 2008, Windows 7, Windows 8, Windows 8.1, Windows 10 and Windows 11. exFAT is supported in macOS starting with version 10.6.5 (Snow Leopard).[32]Support in other operating systems is sparse since implementing support for exFAT requires a license. exFAT is the only file system that is fully supported on both macOS and Windows that can hold files larger than 4 GB.[33][34] Prior to the introduction ofVSAM,OS/360systems implemented a hybrid file system. The system was designed to easily supportremovable disk packs, so the information relating to all files on one disk (volumein IBM terminology) is stored on that disk in aflat system filecalled theVolume Table of Contents(VTOC). The VTOC stores all metadata for the file. Later a hierarchical directory structure was imposed with the introduction of theSystem Catalog, which can optionally catalog files (datasets) on resident and removable volumes. The catalog only contains information to relate a dataset to a specific volume. If the user requests access to a dataset on an offline volume, and they have suitable privileges, the system will attempt to mount the required volume. Cataloged and non-cataloged datasets can still be accessed using information in the VTOC, bypassing the catalog, if the required volume id is provided to the OPEN request. Still later the VTOC was indexed to speed up access. The IBMConversational Monitor System(CMS) component ofVM/370uses a separate flat file system for eachvirtual disk(minidisk). File data and control information are scattered and intermixed. The anchor is a record called theMaster File Directory(MFD), always located in the fourth block on the disk. 
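As a rough illustration of the 8.3 file name limit discussed above, the following sketch checks whether a name fits the eight-character-name, three-character-extension structure. It deliberately ignores the full FAT character rules (forbidden characters, case handling), which vary by implementation.

```python
import re

# Simplified 8.3 check: up to 8 name characters, optional dot plus up to
# 3 extension characters. Real FAT rules also forbid characters such as
# '"*+,/:;<=>?\[]|' in short names; this sketch only enforces the lengths.
_EIGHT_DOT_THREE = re.compile(r"^[^.]{1,8}(\.[^.]{1,3})?$")

def is_8dot3(name: str) -> bool:
    return bool(_EIGHT_DOT_THREE.match(name))

print(is_8dot3("COMMAND.COM"))       # True  (7-character name + 3-character extension)
print(is_8dot3("AUTOEXEC.BAT"))      # True  (8 + 3)
print(is_8dot3("longfilename.txt"))  # False (name part longer than 8 characters)
```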
Originally CMS used fixed-length 800-byte blocks, but later versions used larger size blocks up to 4K. Access to a data record requires two levels ofindirection, where the file's directory entry (called aFile Status Table(FST) entry) points to blocks containing a list of addresses of the individual records. Data on the AS/400 and its successors consists of system objects mapped into the system virtual address space in asingle-level store. Many types ofobjectsare defined including the directories and files found in other file systems. File objects, along with other types of objects, form the basis of the AS/400's support for an integratedrelational database. File systems limitstorable data capacity– generally driven by the typical size of storage devices at the time the file system is designed and anticipated into the foreseeable future. Since storage sizes have increased at nearexponentialrate (seeMoore's law), newer storage devices often exceed existing file system limits within only a few years after introduction. This requires new file systems with ever increasing capacity. With higher capacity, the need for capabilities and therefore complexity increases as well. File system complexity typically varies proportionally with available storage capacity. Capacity issues aside, the file systems of early 1980shome computerswith 50 KB to 512 KB of storage would not be a reasonable choice for modern storage systems with hundreds of gigabytes of capacity. Likewise, modern file systems would not be a reasonable choice for these early systems, since the complexity of modern file system structures would quickly consume the limited capacity of early storage systems. It may be advantageous or necessary to have files in a different file system than they currently exist. Reasons include the need for an increase in the space requirements beyond the limits of the current file system. The depth of path may need to be increased beyond the restrictions of the file system. There may be performance or reliability considerations. Providing access to another operating system which does not support the existing file system is another reason. In some cases conversion can be done in-place, although migrating the file system is more conservative, as it involves a creating a copy of the data and is recommended.[39]On Windows, FAT and FAT32 file systems can be converted to NTFS via the convert.exe utility, but not the reverse.[39]On Linux, ext2 can be converted to ext3 (and converted back), and ext3 can be converted to ext4 (but not back),[40]and both ext3 and ext4 can be converted tobtrfs, and converted back until the undo information is deleted.[41]These conversions are possible due to using the same format for the file data itself, and relocating the metadata into empty space, in some cases usingsparse filesupport.[41] Migration has the disadvantage of requiring additional space although it may be faster. The best case is if there is unused space on media which will contain the final file system. For example, to migrate a FAT32 file system to an ext2 file system, a new ext2 file system is created. Then the data from the FAT32 file system is copied to the ext2 one, and the old file system is deleted. An alternative, when there is not sufficient space to retain the original file system until the new one is created, is to use a work area (such as a removable media). This takes longer but has the benefit of producing a backup. 
In hierarchical file systems, files are accessed by means of a path that is a branching list of directories containing the file. Different file systems have different limits on the depth of the path, and file systems also limit the length of an individual file name. Copying files with long names, or files located at significant path depth, from one file system to another may therefore cause undesirable results; the outcome depends on how the copying utility handles the discrepancy.
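A minimal sketch of the migration-by-copying approach described above, combined with a pre-flight scan for names or paths that would exceed the target file system's limits. The limit constants and the example mount points are assumptions to be adjusted for the real destination:

```python
import os
import shutil

# Assumed limits for the *target* file system; many file systems cap a single
# name at 255 bytes, but the right values depend on the destination.
MAX_NAME_LEN = 255
MAX_PATH_LEN = 4096

def check_limits(src_root: str) -> list:
    """Return paths that would violate the target file system's assumed limits."""
    problems = []
    for dirpath, dirnames, filenames in os.walk(src_root):
        for name in dirnames + filenames:
            full = os.path.join(dirpath, name)
            if len(name.encode()) > MAX_NAME_LEN or len(full.encode()) > MAX_PATH_LEN:
                problems.append(full)
    return problems

def migrate(src_root: str, dst_root: str) -> None:
    """Copy the tree onto a freshly created file system, preserving timestamps."""
    if check_limits(src_root):
        raise RuntimeError("some names exceed the target file system's limits")
    # copytree uses shutil.copy2 by default, which also copies timestamps.
    shutil.copytree(src_root, dst_root)

# Example invocation (mount points are placeholders):
# migrate("/mnt/old_fat32_volume", "/mnt/new_ext4_volume/data")
```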
https://en.wikipedia.org/wiki/File_system
Aninode(index node) is adata structurein aUnix-style file systemthat describes afile-systemobject such as afileor adirectory. Each inode stores the attributes and disk block locations of the object's data.[1]File-system object attributes may includemetadata(times of last change,[2]access, modification), as well as owner andpermissiondata.[3] A directory is a list of inodes with their assigned names. The list includes an entry for itself, its parent, and each of its children. There has been uncertainty on theLinux kernel mailing listabout the reason for the "i" in "inode". In 2002, the question was brought to Unix pioneerDennis Ritchie, who replied:[4] In truth, I don't know either. It was just a term that we started to use. "Index" is my best guess, because of the slightly unusual file system structure that stored the access information of files as a flat array on the disk, with all the hierarchical directory information living aside from this. Thus the i-number is an index in this array, the i-node is the selected element of the array. (The "i-" notation was used in the 1st edition manual; its hyphen was gradually dropped.) A 1978 paper by Ritchie andKen Thompsonbolsters the notion of "index" being the etymological origin of inodes. They wrote:[5] […] a directory entry contains only a name for the associated file and apointerto the file itself. This pointer is an integer called thei-number(for index number) of the file. When the file is accessed, its i-number is used as an index into a system table (thei-list) stored in a known part of the device on which the directory resides. The entry found thereby (the file'si-node) contains the description of the file. Additionally, Maurice J. Bach wrote that the wordinode"is a contraction of the term index node and is commonly used in literature on the UNIX system".[6] A file system relies on data structuresaboutthe files, as opposed to the contents of that file. The former are calledmetadata—data that describes data. Each file is associated with aninode, which is identified by an integer, often referred to as ani-numberorinode number. Inodes store information about files and directories (folders), such as file ownership, access mode (read, write, execute permissions), and file type. The data may be called stat data, in reference to thestatsystem callthat provides the data to programs. The inode number indexes a table of inodes on the file system. From the inode number, the kernel's file system driver can access the inode contents, including the location of the file, thereby allowing access to the file. A file's inode number can be found using thels -icommand. Thels -icommand prints the inode number in the first column of the report. On many older file systems, inodes are stored in one or more fixed-size areas that are set up at file system creation time, so the maximum number of inodes is fixed at file system creation, limiting the maximum number of files the file system can hold. A typical allocation heuristic for inodes in a file system is one inode for every 2K bytes contained in the filesystem.[8] Some Unix-style file systems such asJFS,XFS,ZFS,OpenZFS,ReiserFS,btrfs, andAPFSomit a fixed-size inode table, but must store equivalent data in order to provide equivalent capabilities. Common alternatives to the fixed-size table includeB-treesand the derivedB+ trees. File names and directory implications: The operating system kernel's in-memory representation of this data is calledstruct inodeinLinux. 
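On a Unix-like system, the inode number and related attributes can be read through the stat interface; Python's os.lstat exposes them as st_ino, st_dev, st_nlink and so on. A short sketch that mimics the first column of ls -i for the current directory:

```python
import os

def show_inode(path: str) -> None:
    st = os.lstat(path)
    # st_ino is the inode number ("file serial number"); st_dev identifies the
    # device/file system holding it. Together they identify the file uniquely.
    print(f"{st.st_ino:>12}  dev={st.st_dev}  links={st.st_nlink}  {path}")

# Roughly what `ls -i` prints in its first column for the current directory:
for name in sorted(os.listdir(".")):
    show_inode(name)
```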
Systems derived fromBSDuse the termvnode(the "v" refers to the kernel'svirtual file systemlayer). ThePOSIXstandard mandates file-system behavior that is strongly influenced by traditionalUNIXfile systems. An inode is denoted by the phrase "file serial number", defined as aper-file systemunique identifier for a file.[9]That file serial number, together with the device ID of the device containing the file, uniquely identify the file within the whole system.[10] Within a POSIX system, a file has the following attributes[10]which may be retrieved by thestatsystem call: Filesystems designed with inodes will have the following administrative characteristics: Files can have multiple names. If multiple names hard link to the same inode then the names are equivalent; i.e., the first to be created has no special status. This is unlikesymbolic links, which depend on the original name, not the inode (number). An inode may have no links. An inode without links represents a file with no remaining directory entries or paths leading to it in the filesystem. A file that has been deleted or lacks directory entries pointing to it is termed an 'unlinked' file. Such files are removed from the filesystem, freeing the occupied disk space for reuse. An inode without links remains in the filesystem until the resources (disk space and blocks) freed by the unlinked file are deallocated or the file system is modified. Although an unlinked file becomes invisible in the filesystem, its deletion is deferred until all processes with access to the file have finished using it, including executable files which are implicitly held open by the processes executing them. It is typically not possible to map from an open file to the filename that was used to open it. When a program opens a file, the operating system converts the filename to an inode number and then discards the filename. As a result, functions likegetcwd()andgetwd()which retrieve the currentworking directoryof the process, cannot directly access the filename. Beginning with the current directory, these functions search up to itsparent directory, then to the parent's parent, and so on, until reaching theroot directory. At each level, the function looks for a directory entry whose inode matches that of the directory it just moved up from. Because the child directory's inode still exists as an entry in itsparent directory, it allows the function to reconstruct theabsolute pathof the currentworking directory. Some operating systems maintain extra information to make this operation run faster. For example, in theLinuxVFS,[11]directory entry cache,[12]also known as dentry or dcache, are cache entries used by thekernelto speed up filesystem operations by storing information about directory links inRAM. Historically, it was possible tohard linkdirectories. This made the directory structure an arbitrarydirected graphcontrary to adirected acyclic graph. It was even possible for a directory to be its own parent. Modern systems generally prohibit this confusing state, except that the parent ofrootis still defined as root. The most notable exception to this prohibition is found inMac OS X(versions 10.5 and higher) which allows hard links of directories to be created onHFS+file systems by the superuser.[13] When a file is relocated to a different directory on the same file system, or when a diskdefragmentationalters its physical location, the file's inode number remains unchanged. 
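The parent-walking algorithm described above (compare device and inode numbers at each level until the root, which is its own parent) can be sketched as follows. This is purely illustrative, since os.getcwd() already performs the job, and it glosses over permissions and other corner cases:

```python
import os

def slow_getcwd() -> str:
    """Rebuild the current working directory the way the text describes:
    walk up via '..' and, at each level, find the entry in the parent whose
    (st_dev, st_ino) pair matches the directory we just came from."""
    names = []
    here = "."
    while True:
        cur = os.stat(here)
        parent = os.path.join(here, "..")
        par = os.stat(parent)
        if (cur.st_dev, cur.st_ino) == (par.st_dev, par.st_ino):
            break  # reached the root: the root directory is its own parent
        for name in os.listdir(parent):
            try:
                st = os.lstat(os.path.join(parent, name))
            except OSError:
                continue
            if (st.st_dev, st.st_ino) == (cur.st_dev, cur.st_ino):
                names.append(name)
                break
        here = parent
    return "/" + "/".join(reversed(names))

print(slow_getcwd())  # should match os.getcwd() on a typical Unix system
```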
This unique characteristic permits the file to be moved or renamed even during read or write operations, thereby ensuring continuous access without disruptions. This feature—having a file's metadata anddata blocklocations persist in a centraldata structure, irrespective of file renaming or moving—cannot be fully replicated in manynon-Unix file systemslikeFATand its derivatives, as they lack a mechanism to maintain this invariant property when both the file's directory entry and its data are simultaneously relocated. In these file systems, moving or renaming a file might lead to more significant changes in the data structure representing the file, and the system does not keep a separate, central record of the file'sdata blocklocations andmetadataas inodes do inUnix-likesystems. inode file systems allow a running process to continue accessing a library file even as another process is replacing that same file. This operation should be performedatomically, meaning it should appear as a single operation that is either entirely completed or not done at all, with no intermediate state visible to other processes. During thereplacement, a new inode is created for the newlibrary file, establishing an entirely new mapping. Subsequently, future access requests for that library will retrieve the newly installed version. When the operating system is replacing the file (and creating a new inode), it places a lock[14]on the inode[15]and possibly the containing directory.[16]This prevents other processes fromreading or writingto the file (inode)[17]during the update operation, thereby avoiding data inconsistency or corruption.[18] Once the update operation is complete, the lock is released. Any subsequent access to the file (via the inode) by any processes will now point to the new version of the library. Thus, making it possible to perform updates even when the library is in use by another process. One significant advantage of this mechanism is that it eliminates the need for asystem rebootto replace libraries currently in use. Consequently, systems can update or upgradesoftware librariesseamlessly without interrupting running processes or operations. When a file system is created, some file systems allocate a fixed number of inodes.[19]This means that it is possible to run out of inodes on a file system, even if there is free space remaining in the file system. This situation often arises in use cases where there are many small files, such as on a server storing email messages, because each file, no matter how small, requires its own inode. Other file systems avoid this limitation by using dynamic inode allocation.[20]Dynamic inode allocation allows a file system to create more inodes as needed instead of relying on a fixed number created at the time of file system creation.[21]This can "grow" the file system by increasing the number of inodes available for new files and directories, thus avoiding the problem of running out of inodes.[22] It can make sense to store very small files in the inode itself to save both space (no data block needed) and lookup time (no further disk access needed). This file system feature is called inlining. The strict separation of inode and file data thus can no longer be assumed when using modern file systems. If the data of a file fits in the space allocated for pointers to the data, this space can conveniently be used. 
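A common way to get the replace-while-in-use behaviour described above is to write the new content to a temporary file and then rename it over the old name; the rename installs a new inode atomically, while processes still holding the old file keep reading the old inode until they close it. A sketch, assuming a POSIX-style file system (the file name is just an example):

```python
import os
import tempfile

def atomic_replace(path: str, data: bytes) -> None:
    """Write data to a temporary file in the same directory, then rename it
    over `path`. rename()/os.replace() is atomic on POSIX file systems, so
    readers see either the old inode or the new one, never a mixture."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make sure the new data is on stable storage
        os.replace(tmp, path)       # the directory entry now points to a new inode
    except BaseException:
        os.unlink(tmp)
        raise

atomic_replace("demo.txt", b"version 1\n")
before = os.stat("demo.txt").st_ino
atomic_replace("demo.txt", b"version 2\n")
after = os.stat("demo.txt").st_ino
print(before, after)   # two different inode numbers: the old file was never modified in place
```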
For example, ext2 and its successors store the data of symlinks (typically file names) in this way if the data is no more than 60 bytes ("fast symbolic links").[23] Ext4 has a file system option called inline_data that allows ext4 to perform inlining if enabled during file system creation. Because an inode's size is limited, this only works for very small files.[24]
https://en.wikipedia.org/wiki/Inode
Thex86instruction setrefers to the set of instructions thatx86-compatiblemicroprocessorssupport. The instructions are usually part of anexecutableprogram, often stored as acomputer fileand executed on the processor. The x86 instruction set has been extended several times, introducing widerregistersand datatypes as well as new functionality.[1] Below is the full8086/8088instruction set of Intel (81 instructions total).[2]These instructions are also available in 32-bit mode, in which they operate on 32-bit registers (eax,ebx, etc.) and values instead of their 16-bit (ax,bx, etc.) counterparts. The updated instruction set is grouped according to architecture (i186,i286,i386,i486,i586/i686) and is referred to as (32-bit)x86and (64-bit)x86-64(also known asAMD64). This is the original instruction set. In the 'Notes' column,rmeansregister,mmeansmemory addressandimmmeansimmediate(i.e. a value). Note that since the lower half is the same for unsigned and signed multiplication, this version of the instruction can be used for unsigned multiplication as well. The new instructions added in 80286 add support for x86protected mode. Some but not all of the instructions are available inreal modeas well. The TSS (Task State Segment) specified by the 16-bit argument is marked busy, but a task switch is not done. The 80386 added support for 32-bit operation to the x86 instruction set. This was done by widening the general-purpose registers to 32 bits and introducing the concepts ofOperandSizeandAddressSize– most instruction forms that would previously take 16-bit data arguments were given the ability to take 32-bit arguments by setting their OperandSize to 32 bits, and instructions that could take 16-bit address arguments were given the ability to take 32-bit address arguments by setting their AddressSize to 32 bits. (Instruction forms that work on 8-bit data continue to be 8-bit regardless of OperandSize. Using a data size of 16 bits will cause only the bottom 16 bits of the 32-bit general-purpose registers to be modified – the top 16 bits are left unchanged.) The default OperandSize and AddressSize to use for each instruction is given by the D bit of thesegment descriptorof the current code segment -D=0makes both 16-bit,D=1makes both 32-bit. Additionally, they can be overridden on a per-instruction basis with two new instruction prefixes that were introduced in the 80386: The 80386 also introduced the two new segment registersFSandGSas well as the x86control,debugandtest registers. The new instructions introduced in the 80386 can broadly be subdivided into two classes: For instruction forms where the operand size can be inferred from the instruction's arguments (e.g.ADD EAX,EBXcan be inferred to have a 32-bit OperandSize due to its use of EAX as an argument), new instruction mnemonics are not needed and not provided. Mainly used to prepare a dividend for the 32-bitIDIV(signed divide) instruction. Instruction is serializing. Second operand specifies which bit of the first operand to test. The bit to test is copied toEFLAGS.CF. Second operand specifies which bit of the first operand to test and set. Second operand specifies which bit of the first operand to test and clear. Second operand specifies which bit of the first operand to test and toggle. Differs from older variants of conditional jumps in that they accept a 16/32-bit offset rather than just an 8-bit offset. 
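The 80386 bit-test instructions mentioned above (BT, BTS, BTR, BTC) all copy the selected bit to EFLAGS.CF and differ only in how they modify the operand. A small Python model of their register-operand behaviour, ignoring the memory-operand addressing quirks:

```python
def bt_family(op: str, value: int, bit: int, width: int = 32):
    """Model BT/BTS/BTR/BTC on a register operand.
    Returns (new_value, CF). For register destinations the bit index is taken
    modulo the operand width, as on real hardware."""
    bit %= width
    mask = 1 << bit
    cf = (value >> bit) & 1          # the selected bit is copied to EFLAGS.CF
    if op == "BT":                   # test only
        new = value
    elif op == "BTS":                # test and set
        new = value | mask
    elif op == "BTR":                # test and reset (clear)
        new = value & ~mask
    elif op == "BTC":                # test and complement (toggle)
        new = value ^ mask
    else:
        raise ValueError(op)
    return new & ((1 << width) - 1), cf

print(bt_family("BT",  0b1010, 1))   # (10, 1)  bit 1 was set, value unchanged
print(bt_family("BTS", 0b1010, 2))   # (14, 0)  bit 2 was 0, now set
print(bt_family("BTR", 0b1010, 3))   # (2, 1)   bit 3 was 1, now cleared
print(bt_family("BTC", 0b1010, 0))   # (11, 0)  bit 0 toggled
```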
Offset part is stored in destination register argument, segment part in FS/GS/SS segment register as indicated by the instruction mnemonic.[i] Moves to theCR3control register are serializing and will flush theTLB.[l] On Pentium and later processors, moves to theCR0andCR4control registers are also serializing.[m] On Pentium and later processors, moves to the DR0-DR7 debug registers are serializing. Performs software interrupt #1 if executed when not using in-circuit emulation.[p] Performs same operation asMOVif executed when not doing in-circuit emulation.[q] UsingBSWAPwith a 16-bit register argument produces an undefined result.[a] Instruction atomic only if used withLOCKprefix. Instruction atomic only if used withLOCKprefix. Instruction is serializing. Integer/system instructions that were not present in the basic 80486 instruction set, but were added in various x86 processors prior to the introduction of SSE. (Discontinued instructionsare not included.) Instruction is, with some exceptions, serializing.[c] Instruction is serializing. Instruction is serializing, and causes a mandatory #VMEXIT under virtualization. Support forCPUIDcan be checked by toggling bit 21 ofEFLAGS(EFLAGS.ID) – if this bit can be toggled,CPUIDis present. Instruction atomic only if used withLOCKprefix.[k] In early processors, the TSC was a cycle counter, incrementing by 1 for each clock cycle (which could cause its rate to vary on processors that could change clock speed at runtime) – in later processors, it increments at a fixed rate that doesn't necessarily match the CPU clock speed.[n] Other than AMD K7/K8, broadly unsupported in non-Intel processors released before 2005.[v][60] These instructions are provided for software testing to explicitly generate invalid opcodes. The opcodes for these instructions are reserved for this purpose. WRMSRto the x2APIC ICR (Interrupt Command Register; MSR830h) is commonly used to produce an IPI (Inter-processor interrupt) - on Intel[40]but not AMD[41]CPUs, such an IPI can be reordered before an older memory store. For cases where there is a need to use more than 9 bytes of NOP padding, it is recommended to use multiple NOPs. These instructions can only be encoded in 64 bit mode. They fall in four groups: Most instructions with a 64 bit operand size encode this using aREX.Wprefix; in the absence of theREX.Wprefix, the corresponding instruction with 32 bit operand size is encoded. This mechanism also applies to most other instructions with 32 bit operand size. These are not listed here as they do not gain a new mnemonic in Intel syntax when used with a 64 bit operand size. Bit manipulation instructions. For all of theVEX-encodedinstructions defined by BMI1 and BMI2, the operand size may be 32 or 64 bits, controlled by the VEX.W bit – none of these instructions are available in 16-bit variants. The VEX-encoded instructions are not available in Real Mode and Virtual-8086 mode - other than that, the bit manipulation instructions are available in all operating modes on supported CPUs. Intel CET (Control-Flow Enforcement Technology) adds two distinct features to help protect against security exploits such asreturn-oriented programming: ashadow stack(CET_SS), andindirect branch tracking(CET_IBT). 
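The REX.W prefix mentioned above is one bit of the single-byte REX prefix, whose standard x86-64 layout is 0100WRXB; that layout is general architecture knowledge rather than something spelled out in this text. A small decoder sketch:

```python
def decode_rex(byte: int):
    """Decode an x86-64 REX prefix byte (bit layout 0100WRXB).
    Returns None if the byte is not a REX prefix."""
    if byte & 0xF0 != 0x40:
        return None
    return {
        "W": (byte >> 3) & 1,  # 1 -> 64-bit operand size
        "R": (byte >> 2) & 1,  # extends the ModRM.reg field
        "X": (byte >> 1) & 1,  # extends the SIB.index field
        "B": byte & 1,         # extends ModRM.rm / SIB.base / opcode register
    }

print(decode_rex(0x48))  # {'W': 1, 'R': 0, 'X': 0, 'B': 0} -- the bare REX.W prefix
print(decode_rex(0x44))  # {'W': 0, 'R': 1, 'X': 0, 'B': 0}
print(decode_rex(0x90))  # None (0x90 is NOP, not a REX prefix)
```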
The XSAVE instruction set extensions are designed to save/restore CPU extended state (typically for the purpose ofcontext switching) in a manner that can be extended to cover new instruction set extensions without the OS context-switching code needing to understand the specifics of the new extensions. This is done by defining a series ofstate-components, each with a size and offset within a given save area, and each corresponding to a subset of the state needed for one CPU extension or another. TheEAX=0DhCPUIDleaf is used to provide information about which state-components the CPU supports and what their sizes/offsets are, so that the OS can reserve the proper amount of space and set the associated enable-bits. Instruction is serializing on AMD but not Intel CPUs. The C-states are processor-specific power states, which do not necessarily correspond 1:1 toACPI C-states. Any unsupported value in EAX causes an #UD exception. Any unsupported value in the register argument causes a #GP exception. Depending on function, the instruction may return data in RBX and/or an error code in EAX. Depending on function, the instruction may return data/status information in EAX and/or RCX. Instruction returns status information in EAX. If the instruction fails, it will set EFLAGS.ZF=1 and return an error code in EAX. If it is successful, it sets EFLAGS.ZF=0 and EAX=0. The register argument to theUMWAITandTPAUSEinstructions specifies extra flags to control the operation of the instruction.[q] PopsRIP,RFLAGSandRSPoff the stack, in that order.[u] Part of Intel DSA (Data Streaming AcceleratorArchitecture).[126] The instruction differs from the olderWRMSRinstruction in that it is not serializing. The instruction is not serializing. Part of Intel TSE (Total Storage Encryption), and available in 64-bit mode only. If the instruction fails, it will set EFLAGS.ZF=1 and return an error code in EAX. If it is successful, it sets EFLAGS.ZF=0 and EAX=0. Intel XED uses the mnemonicshint-takenandhint-not-takenfor these branch hints.[115] Any unsupported value in EAX causes a #GP exception. Any unsupported value in EAX causes a #GP exception.TheEENTERandERESUMEfunctions cannot be executed inside an SGX enclave – the other functions can only be executed inside an enclave. Any unsupported value in EAX causes a #GP exception.TheENCLVinstruction is only present on systems that support the EPC Oversubscription Extensions to SGX ("OVERSUB"). Any unsupported value in EAX causes a #GP(0) exception. The value of the MSR is returned in EDX:EAX. Unsupported values in ECX return 0. Thex87coprocessor, if present, provides support for floating-point arithmetic. The coprocessor provides eight data registers, each holding one 80-bit floating-point value (1 sign bit, 15 exponent bits, 64 mantissa bits) – these registers are organized as a stack, with the top-of-stack register referred to as "st" or "st(0)", and the other registers referred to as st(1), st(2), ...st(7). It additionally provides a number of control and status registers, including "PC" (precision control, to control whether floating-point operations should be rounded to 24, 53 or 64 mantissa bits) and "RC" (rounding control, to pick rounding-mode: round-to-zero, round-to-positive-infinity, round-to-negative-infinity, round-to-nearest-even) and a 4-bit condition code register "CC", whose four bits are individually referred to as C0, C1, C2 and C3). Not all of the arithmetic instructions provided by x87 obey PC and RC. 
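The 80-bit x87 register format described above (1 sign bit, 15 exponent bits, 64 mantissa bits with an explicit integer bit) can be decoded with a few lines of arithmetic. The sketch below handles only normal and zero values; NaNs, infinities and denormals are omitted for brevity:

```python
def decode_x87_extended(raw: bytes) -> float:
    """Decode a little-endian 80-bit x87 extended-precision value
    (normal and zero encodings only)."""
    assert len(raw) == 10
    mantissa = int.from_bytes(raw[:8], "little")   # 64 bits, explicit integer bit
    sign_exp = int.from_bytes(raw[8:], "little")   # 1 sign bit + 15 exponent bits
    sign = -1.0 if (sign_exp >> 15) & 1 else 1.0
    exponent = sign_exp & 0x7FFF
    if exponent == 0 and mantissa == 0:
        return sign * 0.0
    # value = (-1)^s * mantissa * 2^(exponent - 16383 - 63)
    return sign * mantissa * 2.0 ** (exponent - 16383 - 63)

# 1.0: exponent = 16383 (0x3FFF), mantissa = 1 << 63 (just the integer bit)
one = (1 << 63).to_bytes(8, "little") + (0x3FFF).to_bytes(2, "little")
print(decode_x87_extended(one))   # 1.0

# -2.5: magnitude 1.25 * 2^1, so exponent = 16384 and mantissa = 0b101 << 61
neg = (0b101 << 61).to_bytes(8, "little") + (0x8000 | 0x4000).to_bytes(2, "little")
print(decode_x87_extended(neg))   # -2.5
```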
C1 is set to the sign-bit of st(0), regardless of whether st(0) is Empty or not. x86 also includes discontinued instruction sets which are no longer supported by Intel and AMD, and undocumented instructions which execute but are not officially documented. The x86 CPUs containundocumented instructionswhich are implemented on the chips but not listed in some official documents. They can be found in various sources across the Internet, such asRalf Brown's Interrupt Listand atsandpile.org Some of these instructions are widely available across many/most x86 CPUs, while others are specific to a narrow range of CPUs. The actual operation isAH ← AL/imm8; AL ← AL mod imm8for any imm8 value (except zero, which produces a divide-by-zero exception).[143] The actual operation isAL ← (AL+(AH*imm8)) & 0FFh; AH ← 0for any imm8 value. Unavailable on some 80486 steppings.[146][147] Introduced in the Pentium Pro in 1995, but remained undocumented until March 2006.[61][158][159] Unavailable on AMD K6, AMD Geode LX, VIA Nehemiah.[161] On AMD CPUs,0F 0D /rwith a memory argument is documented asPREFETCH/PREFETCHWsince K6-2 – originally as part of 3Dnow!, but has been kept in later AMD CPUs even after the rest of 3Dnow! was dropped. Available on Intel CPUs since65 nmPentium 4. Microsoft Windows 95 Setup is known to depend on0F FFbeing invalid[165][166]– it is used as a self check to test that its #UD exception handler is working properly. Other invalid opcodes that are being relied on by commercial software to produce #UD exceptions includeFF FF(DIF-2,[167]LaserLok[168]) andC4 C4("BOP"[169][170]), however as of January 2022 they are not published as intentionally invalid opcodes. STOREALL In some implementations, emulated throughBIOSas ahaltingsequence.[173] Ina forum post at the Vintage Computing Federation, this instruction (withF1prefix) is explained asSAVEALL. It interacts with ICE mode. Opcode reused forSYSCALLin AMD K6 and later CPUs. Opcode reused forSYSRETin AMD K6 and later CPUs. Opcodes reused for SSE instructions in later CPUs. The NexGen Nx586 CPU uses "hyper code"[180](x86 code sequences unpacked at boot time and only accessible in a special "hyper mode" operation mode, similar to DEC Alpha'sPALcodeand Intel's XuCode[181]) for many complicated operations that are implemented with microcode in most other x86 CPUs. The Nx586 provides a large number of undocumented instructions to assist hyper mode operation. Instruction known to be recognized byMASM6.13 and 6.14. Opcode reused for documentedPSWAPDinstruction from AMD K7 onwards. 64 0F (80..8F) rel16/32 Segment prefixes on conditional branches are accepted but ignored by non-NetBurst CPUs. On at least AMD K6-2, all of the unassigned 3DNow! opcodes (other than the undocumentedPF2IW,PI2FWandPSWAPWinstructions) are reported to execute as equivalents ofPOR(MMX bitwise-OR instruction).[183] GP2MEM Supported by OpenSSL[191]as part of itsVIA PadLocksupport, and listed in a Zhaoxin-supplied Linux kernel patch,[192]but not documented by the VIA PadLock Programming Guide. Listed in a VIA-supplied patch to add support for VIA Nano-specific PadLock instructions to OpenSSL,[193]but not documented by the VIA PadLock Programming Guide. FENI8087_NOP Present on all Intel x87 FPUs from 80287 onwards. For FPUs other than the ones where they were introduced on (8087 forFENI/FDISIand 80287 forFSETPM), they act asNOPs. 
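The generalized AAM and AAD behaviour quoted above is a direct formula, so it is easy to model. The functions below follow the AH ← AL/imm8, AL ← AL mod imm8 and AL ← (AL + AH·imm8) & 0FFh, AH ← 0 rules, including the divide-by-zero case:

```python
def aam(al: int, imm8: int):
    """Undocumented AAM imm8: AH <- AL / imm8, AL <- AL mod imm8.
    imm8 = 10 gives the documented BCD-adjust behaviour; imm8 = 0 raises,
    mirroring the divide-by-zero exception mentioned above."""
    if imm8 == 0:
        raise ZeroDivisionError("AAM with imm8 = 0 raises #DE")
    return al // imm8, al % imm8          # (AH, AL)

def aad(al: int, ah: int, imm8: int):
    """Undocumented AAD imm8: AL <- (AL + AH*imm8) & 0xFF, AH <- 0."""
    return 0, (al + ah * imm8) & 0xFF     # (AH, AL)

print(aam(0x3F, 10))   # (6, 3): 63 = 6*10 + 3, the documented base-10 case
print(aam(0x3F, 16))   # (3, 15): the same value split in base 16
print(aad(3, 6, 10))   # (0, 63): AAD undoes AAM for the same base
```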
These instructions and their operation on modern CPUs are commonly mentioned in later Intel documentation, but with opcodes omitted and opcode table entries left blank (e.g. Intel SDM 325462-077, April 2022 mentions them twice without opcodes). The opcodes are, however, recognized by Intel XED, under the mnemonics FENI8087_NOP, FDISI8087_NOP and FSETPM287_NOP.[199] For some of the other undocumented instructions above, the actual operation is not known, nor is it known whether it is the same on all of the CPUs that accept them.
https://en.wikipedia.org/wiki/X86_instruction_set
Incomputer science, ahash tableis adata structurethat implements anassociative array, also called adictionaryor simplymap; an associative array is anabstract data typethat mapskeystovalues.[3]A hash table uses ahash functionto compute anindex, also called ahash code, into an array ofbucketsorslots, from which the desired value can be found. During lookup, the key is hashed and the resulting hash indicates where the corresponding value is stored. A map implemented by a hash table is called ahash map. Most hash table designs employ animperfect hash function.Hash collisions, where the hash function generates the same index for more than one key, therefore typically must be accommodated in some way. In a well-dimensioned hash table, the average time complexity for each lookup is independent of the number of elements stored in the table. Many hash table designs also allow arbitrary insertions and deletions ofkey–value pairs, atamortizedconstant average cost per operation.[4][5][6] Hashing is an example of aspace-time tradeoff. Ifmemoryis infinite, the entire key can be used directly as an index to locate its value with a single memory access. On the other hand, if infinite time is available, values can be stored without regard for their keys, and abinary searchorlinear searchcan be used to retrieve the element.[7]: 458 In many situations, hash tables turn out to be on average more efficient thansearch treesor any othertablelookup structure. For this reason, they are widely used in many kinds of computersoftware, particularly forassociative arrays,database indexing,caches, andsets. The idea of hashing arose independently in different places. In January 1953,Hans Peter Luhnwrote an internalIBMmemorandum that used hashing with chaining. The first example ofopen addressingwas proposed by A. D. Linh, building on Luhn's memorandum.[5]: 547Around the same time,Gene Amdahl,Elaine M. McGraw,Nathaniel Rochester, andArthur SamuelofIBM Researchimplemented hashing for theIBM 701assembler.[8]: 124Open addressing with linear probing is credited to Amdahl, althoughAndrey Ershovindependently had the same idea.[8]: 124–125The term "open addressing" was coined byW. Wesley Petersonin his article which discusses the problem of search in large files.[9]: 15 The firstpublishedwork on hashing with chaining is credited toArnold Dumey, who discussed the idea of using remainder modulo a prime as a hash function.[9]: 15The word "hashing" was first published in an article by Robert Morris.[8]: 126Atheoretical analysisof linear probing was submitted originally by Konheim and Weiss.[9]: 15 Anassociative arraystores asetof (key, value) pairs and allows insertion, deletion, and lookup (search), with the constraint ofunique keys. In the hash table implementation of associative arrays, an arrayA{\displaystyle A}of lengthm{\displaystyle m}is partially filled withn{\displaystyle n}elements, wherem≥n{\displaystyle m\geq n}. A keyx{\displaystyle x}is hashed using a hash functionh{\displaystyle h}to compute an index locationA[h(x)]{\displaystyle A[h(x)]}in the hash table, whereh(x)<m{\displaystyle h(x)<m}. At this index, both the key and its associated value are stored. Storing the key alongside the value ensures that lookups can verify the key at the index to retrieve the correct value, even in the presence of collisions. 
Under reasonable assumptions, hash tables have bettertime complexitybounds on search, delete, and insert operations in comparison toself-balancing binary search trees.[9]: 1 Hash tables are also commonly used to implement sets, by omitting the stored value for each key and merely tracking whether the key is present.[9]: 1 Aload factorα{\displaystyle \alpha }is a critical statistic of a hash table, and is defined as follows:[2]load factor(α)=nm,{\displaystyle {\text{load factor}}\ (\alpha )={\frac {n}{m}},}where The performance of the hash table deteriorates in relation to the load factorα{\displaystyle \alpha }.[9]: 2 The software typically ensures that the load factorα{\displaystyle \alpha }remains below a certain constant,αmax{\displaystyle \alpha _{\max }}. This helps maintain good performance. Therefore, a common approach is to resize or "rehash" the hash table whenever the load factorα{\displaystyle \alpha }reachesαmax{\displaystyle \alpha _{\max }}. Similarly the table may also be resized if the load factor drops belowαmax/4{\displaystyle \alpha _{\max }/4}.[10] With separate chaining hash tables, each slot of the bucket array stores a pointer to a list or array of data.[11] Separate chaining hash tables suffer gradually declining performance as the load factor grows, and no fixed point beyond which resizing is absolutely needed.[10] With separate chaining, the value ofαmax{\displaystyle \alpha _{\max }}that gives best performance is typically between 1 and 3.[10] With open addressing, each slot of the bucket array holds exactly one item. Therefore an open-addressed hash table cannot have a load factor greater than 1.[11] The performance of open addressing becomes very bad when the load factor approaches 1.[10] Therefore a hash table that uses open addressingmustbe resized orrehashedif the load factorα{\displaystyle \alpha }approaches 1.[10] With open addressing, acceptable figures of max load factorαmax{\displaystyle \alpha _{\max }}should range around 0.6 to 0.75.[12][13]: 110 Ahash functionh:U→{0,...,m−1}{\displaystyle h:U\rightarrow \{0,...,m-1\}}maps the universeU{\displaystyle U}of keys to indices or slots within the table, that is,h(x)∈{0,...,m−1}{\displaystyle h(x)\in \{0,...,m-1\}}forx∈U{\displaystyle x\in U}. The conventional implementations of hash functions are based on theinteger universe assumptionthat all elements of the table stem from the universeU={0,...,u−1}{\displaystyle U=\{0,...,u-1\}}, where thebit lengthofu{\displaystyle u}is confined within theword sizeof acomputer architecture.[9]: 2 A hash functionh{\displaystyle h}is said to beperfectfor a given setS{\displaystyle S}if it isinjectiveonS{\displaystyle S}, that is, if each elementx∈S{\displaystyle x\in S}maps to a different value in0,...,m−1{\displaystyle {0,...,m-1}}.[14][15]A perfect hash function can be created if all the keys are known ahead of time.[14] The schemes of hashing used ininteger universe assumptioninclude hashing by division, hashing by multiplication,universal hashing,dynamic perfect hashing, andstatic perfect hashing.[9]: 2However, hashing by division is the commonly used scheme.[16]: 264[13]: 110 The scheme in hashing by division is as follows:[9]: 2h(x)=xmodm,{\displaystyle h(x)\ =\ x\,{\bmod {\,}}m,}whereh(x){\displaystyle h(x)}is the hash value ofx∈S{\displaystyle x\in S}andm{\displaystyle m}is the size of the table. 
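The load factor α = n/m (with n the number of stored entries and m the number of buckets) and the grow/shrink thresholds mentioned above fit in a few lines. The ceiling of 0.75 is an assumed, typical value rather than one taken from the text:

```python
def load_factor(n_entries: int, n_buckets: int) -> float:
    # alpha = n / m: stored entries divided by bucket count
    return n_entries / n_buckets

ALPHA_MAX = 0.75   # assumed ceiling, a typical choice for open addressing

def resize_decision(n_entries: int, n_buckets: int) -> str:
    alpha = load_factor(n_entries, n_buckets)
    if alpha >= ALPHA_MAX:
        return "grow (e.g. double the bucket array and rehash)"
    if alpha < ALPHA_MAX / 4:
        return "shrink (reclaim memory)"
    return "leave as is"

print(resize_decision(90, 100))   # grow
print(resize_decision(10, 100))   # shrink
print(resize_decision(50, 100))   # leave as is
```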
The scheme in hashing by multiplication is as follows:[9]: 2–3h(x)=⌊m((xA)mod1)⌋{\displaystyle h(x)=\lfloor m{\bigl (}(xA){\bmod {1}}{\bigr )}\rfloor }WhereA{\displaystyle A}is a non-integerreal-valued constantandm{\displaystyle m}is the size of the table. An advantage of the hashing by multiplication is that them{\displaystyle m}is not critical.[9]: 2–3Although any valueA{\displaystyle A}produces a hash function,Donald Knuthsuggests using thegolden ratio.[9]: 3 Uniform distributionof the hash values is a fundamental requirement of a hash function. A non-uniform distribution increases the number of collisions and the cost of resolving them. Uniformity is sometimes difficult to ensure by design, but may be evaluated empirically using statistical tests, e.g., aPearson's chi-squared testfor discrete uniform distributions.[17][18] The distribution needs to be uniform only for table sizes that occur in the application. In particular, if one uses dynamic resizing with exact doubling and halving of the table size, then the hash function needs to be uniform only when the size is apower of two. Here the index can be computed as some range of bits of the hash function. On the other hand, some hashing algorithms prefer to have the size be aprime number.[19] Foropen addressingschemes, the hash function should also avoidclustering, the mapping of two or more keys to consecutive slots. Such clustering may cause the lookup cost to skyrocket, even if the load factor is low and collisions are infrequent. The popular multiplicative hash is claimed to have particularly poor clustering behavior.[19][5] K-independent hashingoffers a way to prove a certain hash function does not have bad keysets for a given type of hashtable. A number of K-independence results are known for collision resolution schemes such as linear probing and cuckoo hashing. Since K-independence can prove a hash function works, one can then focus on finding the fastest possible such hash function.[20] A search algorithm that uses hashing consists of two parts. The first part is computing ahash functionwhich transforms the search key into anarray index. The ideal case is such that no two search keys hash to the same array index. However, this is not always the case and impossible to guarantee for unseen given data.[21]: 515Hence the second part of the algorithm is collision resolution. The two common methods for collision resolution are separate chaining and open addressing.[7]: 458 In separate chaining, the process involves building alinked listwithkey–value pairfor each search array index. The collided items are chained together through a single linked list, which can be traversed to access the item with a unique search key.[7]: 464Collision resolution through chaining with linked list is a common method of implementation of hash tables. 
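Both hashing schemes above translate directly into code; the golden-ratio constant follows Knuth's suggestion. A sketch:

```python
import math

def hash_division(x: int, m: int) -> int:
    # h(x) = x mod m
    return x % m

GOLDEN = (math.sqrt(5) - 1) / 2   # Knuth's suggested A, approximately 0.6180339887

def hash_multiplication(x: int, m: int, A: float = GOLDEN) -> int:
    # h(x) = floor(m * ((x * A) mod 1))
    return int(m * ((x * A) % 1.0))

m = 13
for key in (123456, 123457, 999983):
    print(key, hash_division(key, m), hash_multiplication(key, m))
```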
LetT{\displaystyle T}andx{\displaystyle x}be the hash table and the node respectively, the operation involves as follows:[16]: 258 If the element is comparable eithernumericallyorlexically, and inserted into the list by maintaining thetotal order, it results in faster termination of the unsuccessful searches.[21]: 520–521 If the keys areordered, it could be efficient to use "self-organizing" concepts such as using aself-balancing binary search tree, through which thetheoretical worst casecould be brought down toO(log⁡n){\displaystyle O(\log {n})}, although it introduces additional complexities.[21]: 521 Indynamic perfect hashing, two-level hash tables are used to reduce the look-up complexity to be a guaranteedO(1){\displaystyle O(1)}in the worst case. In this technique, the buckets ofk{\displaystyle k}entries are organized asperfect hash tableswithk2{\displaystyle k^{2}}slots providing constant worst-case lookup time, and low amortized time for insertion.[22]A study shows array-based separate chaining to be 97% more performant when compared to the standard linked list method under heavy load.[23]: 99 Techniques such as usingfusion treefor each buckets also result in constant time for all operations with high probability.[24] The linked list of separate chaining implementation may not becache-consciousdue tospatial locality—locality of reference—when the nodes of the linked list are scattered across memory, thus the list traversal during insert and search may entailCPU cacheinefficiencies.[23]: 91 Incache-conscious variantsof collision resolution through separate chaining, adynamic arrayfound to be morecache-friendlyis used in the place where a linked list or self-balancing binary search trees is usually deployed, since thecontiguous allocationpattern of the array could be exploited byhardware-cache prefetchers—such astranslation lookaside buffer—resulting in reduced access time and memory consumption.[25][26][27] Open addressingis another collision resolution technique in which every entry record is stored in the bucket array itself, and the hash resolution is performed throughprobing. When a new entry has to be inserted, the buckets are examined, starting with the hashed-to slot and proceeding in someprobe sequence, until an unoccupied slot is found. 
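A compact separate-chaining table along the lines described above, with a Python list standing in for each bucket's chain. The class name and bucket count are illustrative, and a real implementation would also resize as the load factor grows:

```python
class ChainedHashMap:
    """Separate chaining: each bucket holds a small chain of (key, value) pairs."""

    def __init__(self, n_buckets: int = 8):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite the value
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key (possibly a collision): extend the chain

    def get(self, key, default=None):
        for k, v in self._bucket(key):   # walk the chain for this bucket
            if k == key:
                return v
        return default

    def delete(self, key):
        index = hash(key) % len(self.buckets)
        self.buckets[index] = [(k, v) for (k, v) in self.buckets[index] if k != key]

m = ChainedHashMap(4)
for word in ("this", "is", "a", "sample", "chain"):
    m.put(word, len(word))
print(m.get("sample"), m.get("missing", "not found"))
m.delete("is")
print(m.get("is", "deleted"))
```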
When searching for an entry, the buckets are scanned in the same sequence, until either the target record is found, or an unused array slot is found, which indicates an unsuccessful search.[28] Well-known probe sequences include: The performance of open addressing may be slower compared to separate chaining since the probe sequence increases when the load factorα{\displaystyle \alpha }approaches 1.[10][23]: 93The probing results in aninfinite loopif the load factor reaches 1, in the case of a completely filled table.[7]: 471Theaverage costof linear probing depends on the hash function's ability todistributethe elementsuniformlythroughout the table to avoidclustering, since formation of clusters would result in increased search time.[7]: 472 Since the slots are located in successive locations, linear probing could lead to better utilization ofCPU cachedue tolocality of referencesresulting in reducedmemory latency.[29] Coalesced hashingis a hybrid of both separate chaining and open addressing in which the buckets or nodes link within the table.[31]: 6–8The algorithm is ideally suited forfixed memory allocation.[31]: 4The collision in coalesced hashing is resolved by identifying the largest-indexed empty slot on the hash table, then the colliding value is inserted into that slot. The bucket is also linked to the inserted node's slot which contains its colliding hash address.[31]: 8 Cuckoo hashingis a form of open addressing collision resolution technique which guaranteesO(1){\displaystyle O(1)}worst-case lookup complexity and constant amortized time for insertions. The collision is resolved through maintaining two hash tables, each having its own hashing function, and collided slot gets replaced with the given item, and the preoccupied element of the slot gets displaced into the other hash table. 
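An open-addressing sketch using linear probing as described at the start of this passage: insertion probes forward from the home slot until a free slot (or the same key) is found, and an unsuccessful search stops at the first empty slot. Deletion is omitted because it needs extra care (tombstones or re-insertion of the following cluster), which the text does not cover:

```python
class LinearProbingMap:
    """Open addressing: one (key, value) pair per slot, linear probing on collision."""

    _EMPTY = object()

    def __init__(self, n_slots: int = 16):
        self.slots = [self._EMPTY] * n_slots

    def put(self, key, value):
        n = len(self.slots)
        i = hash(key) % n
        for _ in range(n):
            slot = self.slots[i]
            if slot is self._EMPTY or slot[0] == key:
                self.slots[i] = (key, value)
                return
            i = (i + 1) % n              # probe the next slot
        raise RuntimeError("table full: a real table would resize before this point")

    def get(self, key, default=None):
        n = len(self.slots)
        i = hash(key) % n
        for _ in range(n):
            slot = self.slots[i]
            if slot is self._EMPTY:      # an unused slot ends an unsuccessful search
                return default
            if slot[0] == key:
                return slot[1]
            i = (i + 1) % n
        return default

t = LinearProbingMap(8)
for word in ("open", "addressing", "probe"):
    t.put(word, len(word))
print(t.get("probe"), t.get("missing", "not found"))
```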
The process continues until every key has its own spot in the empty buckets of the tables; if the procedure enters intoinfinite loop—which is identified through maintaining a threshold loop counter—both hash tables get rehashed with newer hash functions and the procedure continues.[32]: 124–125 Hopscotch hashingis an open addressing based algorithm which combines the elements ofcuckoo hashing,linear probingand chaining through the notion of aneighbourhoodof buckets—the subsequent buckets around any given occupied bucket, also called a "virtual" bucket.[33]: 351–352The algorithm is designed to deliver better performance when the load factor of the hash table grows beyond 90%; it also provides high throughput inconcurrent settings, thus well suited for implementing resizableconcurrent hash table.[33]: 350The neighbourhood characteristic of hopscotch hashing guarantees a property that, the cost of finding the desired item from any given buckets within the neighbourhood is very close to the cost of finding it in the bucket itself; the algorithm attempts to be an item into its neighbourhood—with a possible cost involved in displacing other items.[33]: 352 Each bucket within the hash table includes an additional "hop-information"—anH-bitbit arrayfor indicating therelative distanceof the item which was originally hashed into the current virtual bucket withinH− 1 entries.[33]: 352Letk{\displaystyle k}andBk{\displaystyle Bk}be the key to be inserted and bucket to which the key is hashed into respectively; several cases are involved in the insertion procedure such that the neighbourhood property of the algorithm is vowed:[33]: 352–353ifBk{\displaystyle Bk}is empty, the element is inserted, and the leftmost bit of bitmap issetto 1; if not empty, linear probing is used for finding an empty slot in the table, the bitmap of the bucket gets updated followed by the insertion; if the empty slot is not within the range of theneighbourhood,i.e.H− 1, subsequent swap and hop-info bit array manipulation of each bucket is performed in accordance with its neighbourhoodinvariant properties.[33]: 353 Robin Hood hashing is an open addressing based collision resolution algorithm; the collisions are resolved through favouring the displacement of the element that is farthest—or longestprobe sequence length(PSL)—from its "home location" i.e. the bucket to which the item was hashed into.[34]: 12Although Robin Hood hashing does not change thetheoretical search cost, it significantly affects thevarianceof thedistributionof the items on the buckets,[35]: 2i.e. 
dealing withclusterformation in the hash table.[36]Each node within the hash table that uses Robin Hood hashing should be augmented to store an extra PSL value.[37]Letx{\displaystyle x}be the key to be inserted,x.psl{\displaystyle x{.}{\text{psl}}}be the (incremental) PSL length ofx{\displaystyle x},T{\displaystyle T}be the hash table andj{\displaystyle j}be the index, the insertion procedure is as follows:[34]: 12–13[38]: 5 Repeated insertions cause the number of entries in a hash table to grow, which consequently increases the load factor; to maintain the amortizedO(1){\displaystyle O(1)}performance of the lookup and insertion operations, a hash table is dynamically resized and the items of the tables arerehashedinto the buckets of the new hash table,[10]since the items cannot be copied over as varying table sizes results in different hash value due tomodulo operation.[39]If a hash table becomes "too empty" after deleting some elements, resizing may be performed to avoid excessivememory usage.[40] Generally, a new hash table with a size double that of the original hash table getsallocatedprivately and every item in the original hash table gets moved to the newly allocated one by computing the hash values of the items followed by the insertion operation. Rehashing is simple, but computationally expensive.[41]: 478–479 Some hash table implementations, notably inreal-time systems, cannot pay the price of enlarging the hash table all at once, because it may interrupt time-critical operations. If one cannot avoid dynamic resizing, a solution is to perform the resizing gradually to avoid storage blip—typically at 50% of new table's size—during rehashing and to avoidmemory fragmentationthat triggersheap compactiondue to deallocation of largememory blockscaused by the old hash table.[42]: 2–3In such case, the rehashing operation is done incrementally through extending prior memory block allocated for the old hash table such that the buckets of the hash table remain unaltered. A common approach for amortized rehashing involves maintaining two hash functionshold{\displaystyle h_{\text{old}}}andhnew{\displaystyle h_{\text{new}}}. The process of rehashing a bucket's items in accordance with the new hash function is termed ascleaning, which is implemented throughcommand patternby encapsulating the operations such asAdd(key){\displaystyle \mathrm {Add} (\mathrm {key} )},Get(key){\displaystyle \mathrm {Get} (\mathrm {key} )}andDelete(key){\displaystyle \mathrm {Delete} (\mathrm {key} )}through aLookup(key,command){\displaystyle \mathrm {Lookup} (\mathrm {key} ,{\text{command}})}wrappersuch that each element in the bucket gets rehashed and its procedure involve as follows:[42]: 3 Linear hashingis an implementation of the hash table which enables dynamic growths or shrinks of the table one bucket at a time.[43] The performance of a hash table is dependent on the hash function's ability in generatingquasi-random numbers(σ{\displaystyle \sigma }) for entries in the hash table whereK{\displaystyle K},n{\displaystyle n}andh(x){\displaystyle h(x)}denotes the key, number of buckets and the hash function such thatσ=h(K)%n{\displaystyle \sigma \ =\ h(K)\ \%\ n}. If the hash function generates the sameσ{\displaystyle \sigma }for distinct keys (K1≠K2,h(K1)=h(K2){\displaystyle K_{1}\neq K_{2},\ h(K_{1})\ =\ h(K_{2})}), this results incollision, which is dealt with in a variety of ways. 
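The Robin Hood insertion procedure referenced above (whose step-by-step list does not survive in this extract) can be sketched as follows: the incoming entry displaces any resident entry that sits closer to its home slot, and the displaced entry then continues probing. Names and the fixed table size are illustrative:

```python
def robin_hood_insert(table, key, value):
    """Insert into an open-addressed table using Robin Hood hashing.
    `table` is a list of slots, each either None or (key, value, psl), where
    psl is the probe-sequence length from the entry's home bucket."""
    n = len(table)
    i = hash(key) % n
    psl = 0
    entry = (key, value, psl)
    for _ in range(n):
        slot = table[i]
        if slot is None:
            table[i] = (entry[0], entry[1], psl)
            return
        if slot[0] == entry[0]:
            table[i] = (entry[0], entry[1], slot[2])   # same key: overwrite the value
            return
        if slot[2] < psl:
            # The resident entry is "richer" (closer to home): take its slot and
            # keep probing with the displaced entry instead.
            table[i] = (entry[0], entry[1], psl)
            entry, psl = slot, slot[2]
        i = (i + 1) % n
        psl += 1
    raise RuntimeError("table full: resize before the load factor gets this high")

table = [None] * 8
for word in ("robin", "hood", "hashing", "psl"):
    robin_hood_insert(table, word, len(word))
print(table)
```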
The constant time complexity (O(1){\displaystyle O(1)}) of operations on a hash table is presupposed on the condition that the hash function doesn't generate colliding indices; thus, the performance of the hash table is directly proportional to the chosen hash function's ability to disperse the indices.[44]: 1 However, constructing such a hash function is practically infeasible, so implementations depend on case-specific collision resolution techniques to achieve higher performance.[44]: 2 Hash tables are commonly used to implement many types of in-memory tables. They are used to implement associative arrays.[30] Hash tables may also be used as disk-based data structures and database indices (such as in dbm), although B-trees are more popular in these applications.[45] Hash tables can be used to implement caches: auxiliary data tables that are used to speed up access to data that is primarily stored in slower media. In this application, hash collisions can be handled by discarding one of the two colliding entries—usually erasing the old item that is currently stored in the table and overwriting it with the new item, so that every item in the table has a unique hash value.[46][47] Hash tables can be used in the implementation of the set data structure, which can store unique values without any particular order; a set is typically used for testing the membership of a value in the collection, rather than for element retrieval.[48] A transposition table is a complex hash table which stores information about each section that has been searched.[49] Many programming languages provide hash table functionality, either as built-in associative arrays or as standard library modules.
https://en.wikipedia.org/wiki/Hash_table
Incomputing, afile systemorfilesystem(often abbreviated toFSorfs) governsfileorganization and access. Alocalfile system is a capability of anoperating systemthat services the applications running on the samecomputer.[1][2]Adistributed file systemis aprotocolthat provides file access betweennetworkedcomputers. A file system provides adata storageservicethat allowsapplicationsto sharemass storage. Without a file system, applications could access the storage inincompatibleways that lead toresource contention,data corruptionanddata loss. There are many file systemdesignsandimplementations– with various structure and features and various resulting characteristics such as speed, flexibility, security, size and more. File systems have been developed for many types ofstorage devices, includinghard disk drives(HDDs),solid-state drives(SSDs),magnetic tapesandoptical discs.[3] A portion of the computermain memorycan be set up as aRAM diskthat serves as a storage device for a file system. File systems such astmpfscan store files invirtual memory. Avirtualfile system provides access to files that are either computed on request, calledvirtual files(seeprocfsandsysfs), or are mapping into another, backing storage. Fromc.1900and before the advent of computers the termsfile system,filing systemandsystem for filingwere used to describe methods of organizing, storing and retrieving paper documents.[4]By 1961, the termfile systemwas being applied to computerized filing alongside the original meaning.[5]By 1964, it was in general use.[6] A local file system'sarchitecturecan be described aslayers of abstractioneven though a particular file system design may not actually separate the concepts.[7] Thelogical file systemlayer provides relatively high-level access via anapplication programming interface(API) for file operations including open, close, read and write – delegating operations to lower layers. This layer manages open file table entries and per-process file descriptors.[8]It provides file access, directory operations, security and protection.[7] Thevirtual file system, an optional layer, supports multiple concurrent instances of physical file systems, each of which is called a file system implementation.[8] Thephysical file systemlayer provides relatively low-level access to a storage device (e.g. disk). It reads and writesdata blocks, providesbufferingand othermemory managementand controls placement of blocks in specific locations on the storage medium. This layer usesdevice driversorchannel I/Oto drive the storage device.[7] Afile name, orfilename, identifies a file to consuming applications and in some cases users. A file name is unique so that an application can refer to exactly one file for a particular name. If the file system supports directories, then generally file name uniqueness is enforced within the context of each directory. In other words, a storage can contain multiple files with the same name, but not in the same directory. Most file systems restrict the length of a file name. Some file systems match file names ascase sensitiveand others as case insensitive. For example, the namesMYFILEandmyfilematch the same file for case insensitive, but different files for case sensitive. Most modern file systems allow a file name to contain a wide range of characters from theUnicodecharacter set. Some restrict characters such as those used to indicate special attributes such as a device, device type, directory prefix, file path separator, or file type. 
File systems typically support organizing files intodirectories, also calledfolders, which segregate files into groups. This may be implemented by associating the file name with an index in atable of contentsor aninodein aUnix-likefile system. Directory structures may be flat (i.e. linear), or allow hierarchies by allowing a directory to contain directories, called subdirectories. The first file system to support arbitrary hierarchies of directories was used in theMulticsoperating system.[9]The native file systems of Unix-like systems also support arbitrary directory hierarchies, as do,Apple'sHierarchical File Systemand its successorHFS+inclassic Mac OS, theFATfile system inMS-DOS2.0 and later versions of MS-DOS and inMicrosoft Windows, theNTFSfile system in theWindows NTfamily of operating systems, and the ODS-2 (On-Disk Structure-2) and higher levels of theFiles-11file system inOpenVMS. In addition to data, the file content, a file system also manages associatedmetadatawhich may include but is not limited to: A file system stores associated metadata separate from the content of the file. Most file systems store the names of all the files in one directory in one place—the directory table for that directory—which is often stored like any other file. Many file systems put only some of the metadata for a file in the directory table, and the rest of the metadata for that file in a completely separate structure, such as theinode. Most file systems also store metadata not associated with any one particular file. Such metadata includes information about unused regions—free space bitmap,block availability map—and information aboutbad sectors. Often such information about anallocation groupis stored inside the allocation group itself. Additional attributes can be associated on file systems, such asNTFS,XFS,ext2,ext3, some versions ofUFS, andHFS+, usingextended file attributes. Some file systems provide for user defined attributes such as the author of the document, the character encoding of a document or the size of an image. Some file systems allow for different data collections to be associated with one file name. These separate collections may be referred to asstreamsorforks. Apple has long used a forked file system on the Macintosh, and Microsoft supports streams in NTFS. Some file systems maintain multiple past revisions of a file under a single file name; the file name by itself retrieves the most recent version, while prior saved version can be accessed using a special naming convention such as "filename;4" or "filename(-4)" to access the version four saves ago. Seecomparison of file systems § Metadatafor details on which file systems support which kinds of metadata. A local file system tracks which areas of storage belong to which file and which are not being used. When a file system creates a file, it allocates space for data. Some file systems permit or require specifying an initial space allocation and subsequent incremental allocations as the file grows. To delete a file, the file system records that the file's space is free; available to use for another file. A local file system manages storage space to provide a level of reliability and efficiency. Generally, it allocates storage device space in a granular manner, usually multiple physical units (i.e.bytes). 
For example, inApple DOSof the early 1980s, 256-byte sectors on 140 kilobyte floppy disk used atrack/sector map.[citation needed] The granular nature results in unused space, sometimes calledslack space, for each file except for those that have the rare size that is a multiple of the granular allocation.[10]For a 512-byte allocation, the average unused space is 256 bytes. For 64 KB clusters, the average unused space is 32 KB. Generally, the allocation unit size is set when the storage is configured. Choosing a relatively small size compared to the files stored, results in excessive access overhead. Choosing a relatively large size results in excessive unused space. Choosing an allocation size based on the average size of files expected to be in the storage tends to minimize unusable space. As a file system creates, modifies and deletes files, the underlying storage representation may becomefragmented. Files and the unused space between files will occupy allocation blocks that are not contiguous. A file becomes fragmented if space needed to store its content cannot be allocated in contiguous blocks. Free space becomes fragmented when files are deleted.[11] This is invisible to the end user and the system still works correctly. However this can degrade performance on some storage hardware that work better with contiguous blocks such ashard disk drives. Other hardware such assolid-state drivesare not affected by fragmentation. A file system often supports access control of data that it manages. The intent of access control is often to prevent certain users from reading or modifying certain files. Access control can also restrict access by program in order to ensure that data is modified in a controlled way. Examples include passwords stored in the metadata of the file or elsewhere andfile permissionsin the form of permission bits,access control lists, orcapabilities. The need for file system utilities to be able to access the data at the media level to reorganize the structures and provide efficient backup usually means that these are only effective for polite users but are not effective against intruders. Methods for encrypting file data are sometimes included in the file system. This is very effective since there is no need for file system utilities to know the encryption seed to effectively manage the data. The risks of relying on encryption include the fact that an attacker can copy the data and use brute force to decrypt the data. Additionally, losing the seed means losing the data. Some operating systems allow a system administrator to enabledisk quotasto limit a user's use of storage space. A file system typically ensures that stored data remains consistent in both normal operations as well as exceptional situations like: Recovery from exceptional situations may include updating metadata, directory entries and handling data that was buffered but not written to storage media. A file system might record events to allow analysis of issues such as: Many file systems access data as a stream ofbytes. Typically, to read file data, a program provides amemory bufferand the file system retrieves data from the medium and then writes the data to the buffer. A write involves the program providing a buffer of bytes that the file system reads and then stores to the medium. Some file systems, or layers on top of a file system, allow a program to define arecordso that a program can read and write data as a structure; not an unorganized sequence of bytes. 
If afixed lengthrecord definition is used, then locating the nthrecord can be calculated mathematically, which is relatively fast compared to parsing the data for record separators. An identification for each record, also known as a key, allows a program to read, write and update records without regard to their location in storage. Such storage requires managing blocks of media, usually separating key blocks and data blocks. Efficient algorithms can be developed with pyramid structures for locating records.[12] Typically, a file system can be managed by the user via various utility programs. Some utilities allow the user to create, configure and remove an instance of a file system. It may allow extending or truncating the space allocated to the file system. Directory utilities may be used to create, rename and deletedirectory entries, which are also known asdentries(singular:dentry),[13]and to alter metadata associated with a directory. Directory utilities may also include capabilities to create additional links to a directory (hard linksinUnix), to rename parent links (".." inUnix-likeoperating systems),[clarification needed]and to create bidirectional links to files. File utilities create, list, copy, move and delete files, and alter metadata. They may be able to truncate data, truncate or extend space allocation, append to, move, and modify files in-place. Depending on the underlying structure of the file system, they may provide a mechanism to prepend to or truncate from the beginning of a file, insert entries into the middle of a file, or delete entries from a file. Utilities to free space for deleted files, if the file system provides an undelete function, also belong to this category. Some file systems defer operations such as reorganization of free space, secure erasing of free space, and rebuilding of hierarchical structures by providing utilities to perform these functions at times of minimal activity. An example is the file systemdefragmentationutilities. Some of the most important features of file system utilities are supervisory activities which may involve bypassing ownership or direct access to the underlying device. These include high-performance backup and recovery, data replication, and reorganization of various data structures and allocation tables within the file system. Utilities, libraries and programs usefile system APIsto make requests of the file system. These include data transfer, positioning, updating metadata, managing directories, managing access specifications, and removal. Frequently, retail systems are configured with a single file system occupying the entirestorage device. Another approach is topartitionthe disk so that several file systems with different attributes can be used. One file system, for use as browser cache or email storage, might be configured with a small allocation size. This keeps the activity of creating and deleting files typical of browser activity in a narrow area of the disk where it will not interfere with other file allocations. Another partition might be created for the storage of audio or video files with a relatively large block size. Yet another may normally be setread-onlyand only periodically be set writable. 
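Locating the n-th fixed-length record by direct arithmetic, as mentioned above, might look like the following Python sketch (the record length and the data file name are assumptions made only for the example):

    RECORD_SIZE = 64                             # assumed fixed record length in bytes

    def read_record(path, n):
        """Read record number n (0-based) by seeking straight to its computed offset."""
        with open(path, "rb") as f:
            f.seek(n * RECORD_SIZE)              # offset is calculated, not searched for
            return f.read(RECORD_SIZE)

    # read_record("accounts.dat", 10)            # hypothetical fixed-length record file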
Some file systems, such as ZFS and APFS, support multiple file systems sharing a common pool of free blocks, supporting several file systems with different attributes without having to reserve a fixed amount of space for each file system.[14][15] A third approach, which is mostly used in cloud systems, is to use "disk images" to house additional file systems, with the same attributes or not, within another (host) file system as a file. A common example is virtualization: one user can run an experimental Linux distribution (using the ext4 file system) in a virtual machine under their production Windows environment (using NTFS). The ext4 file system resides in a disk image, which is treated as a file (or multiple files, depending on the hypervisor and settings) in the NTFS host file system. Having multiple file systems on a single system has the additional benefit that, in the event of a corruption of a single file system, the remaining file systems will frequently still be intact. This includes virus destruction of the system file system or even a system that will not boot. File system utilities which require dedicated access can be effectively completed piecemeal. In addition, defragmentation may be more effective. Several system maintenance utilities, such as virus scans and backups, can also be processed in segments. For example, it is not necessary to back up the file system containing videos along with all the other files if none have been added since the last backup. As for the image files, one can easily "spin off" differential images which contain only "new" data written to the master (original) image. Differential images can be used for both safety concerns (as a "disposable" system that can be quickly restored if destroyed or contaminated by a virus, since the old image can be removed and a new image created in a matter of seconds, even without automated procedures) and quick virtual machine deployment (since the differential images can be quickly spawned using a script in batches). A disk file system takes advantage of the ability of disk storage media to randomly address data in a short amount of time. Additional considerations include the speed of accessing data following that initially requested and the anticipation that the following data may also be requested. This permits multiple users (or processes) access to various data on the disk without regard to the sequential location of the data. Examples include FAT (FAT12, FAT16, FAT32), exFAT, NTFS, ReFS, HFS and HFS+, HPFS, APFS, UFS, ext2, ext3, ext4, XFS, btrfs, Files-11, Veritas File System, VMFS, ZFS, ReiserFS, NSS and ScoutFS. Some disk file systems are journaling file systems or versioning file systems. ISO 9660 and Universal Disk Format (UDF) are two common formats that target Compact Discs, DVDs and Blu-ray discs. Mount Rainier is an extension to UDF, supported since the 2.6 series of the Linux kernel and since Windows Vista, that facilitates rewriting to DVDs. A flash file system considers the special abilities, performance and restrictions of flash memory devices. Frequently a disk file system can use a flash memory device as the underlying storage media, but it is much better to use a file system specifically designed for a flash device.[16] A tape file system is a file system and tape format designed to store files on tape. Magnetic tapes are sequential storage media with significantly longer random data access times than disks, posing challenges to the creation and efficient management of a general-purpose file system.
In a disk file system there is typically a master file directory, and a map of used and free data regions. Any file additions, changes, or removals require updating the directory and the used/free maps. Random access to data regions is measured in milliseconds so this system works well for disks. Tape requires linear motion to wind and unwind potentially very long reels of media. This tape motion may take several seconds to several minutes to move the read/write head from one end of the tape to the other. Consequently, a master file directory and usage map can be extremely slow and inefficient with tape. Writing typically involves reading the block usage map to find free blocks for writing, updating the usage map and directory to add the data, and then advancing the tape to write the data in the correct spot. Each additional file write requires updating the map and directory and writing the data, which may take several seconds to occur for each file. Tape file systems instead typically allow for the file directory to be spread across the tape intermixed with the data, referred to asstreaming, so that time-consuming and repeated tape motions are not required to write new data. However, a side effect of this design is that reading the file directory of a tape usually requires scanning the entire tape to read all the scattered directory entries. Most data archiving software that works with tape storage will store a local copy of the tape catalog on a disk file system, so that adding files to a tape can be done quickly without having to rescan the tape media. The local tape catalog copy is usually discarded if not used for a specified period of time, at which point the tape must be re-scanned if it is to be used in the future. IBM has developed a file system for tape called theLinear Tape File System. The IBM implementation of this file system has been released as the open-sourceIBM Linear Tape File System — Single Drive Edition (LTFS-SDE)product. The Linear Tape File System uses a separate partition on the tape to record the index meta-data, thereby avoiding the problems associated with scattering directory entries across the entire tape. Writing data to a tape, erasing, or formatting a tape is often a significantly time-consuming process and can take several hours on large tapes.[a]With many data tape technologies it is not necessary to format the tape before over-writing new data to the tape. This is due to the inherently destructive nature of overwriting data on sequential media. Because of the time it can take to format a tape, typically tapes are pre-formatted so that the tape user does not need to spend time preparing each new tape for use. All that is usually necessary is to write an identifying media label to the tape before use, and even this can be automatically written by software when a new tape is used for the first time. Another concept for file management is the idea of a database-based file system. Instead of, or in addition to, hierarchical structured management, files are identified by their characteristics, like type of file, topic, author, or similarrich metadata.[17] IBM DB2 for i[18](formerly known as DB2/400 and DB2 for i5/OS) is a database file system as part of the object basedIBM i[19]operating system (formerly known as OS/400 and i5/OS), incorporating asingle level storeand running on IBM Power Systems (formerly known as AS/400 and iSeries), designed by Frank G. Soltis IBM's former chief scientist for IBM i. Around 1978 to 1988 Frank G. 
Soltis and his team at IBM Rochester had successfully designed and applied technologies like the database file system, something that others such as Microsoft later failed to accomplish.[20] These technologies are informally known as 'Fortress Rochester'[citation needed] and were in a few basic aspects extended from early mainframe technologies, but in many ways more advanced from a technological perspective[citation needed]. Some other projects that are not "pure" database file systems but that use some aspects of a database file system: Some programs need to either make multiple file system changes, or, if one or more of the changes fail for any reason, make none of the changes. For example, a program which is installing or updating software may write executables, libraries, and/or configuration files. If some of the writing fails and the software is left partially installed or updated, the software may be broken or unusable. An incomplete update of a key system utility, such as the command shell, may leave the entire system in an unusable state. Transaction processing introduces the atomicity guarantee, ensuring that operations inside a transaction are either all committed or the transaction can be aborted and the system discards all of its partial results. This means that if there is a crash or power failure, after recovery, the stored state will be consistent. Either the software will be completely installed or the failed installation will be completely rolled back, but an unusable partial install will not be left on the system. Transactions also provide the isolation guarantee[clarification needed], meaning that operations within a transaction are hidden from other threads on the system until the transaction commits, and that interfering operations on the system will be properly serialized with the transaction. Windows, beginning with Vista, added transaction support to NTFS, in a feature called Transactional NTFS, but its use is now discouraged.[21] There are a number of research prototypes of transactional file systems for UNIX systems, including the Valor file system,[22] Amino,[23] LFS,[24] and a transactional ext3 file system on the TxOS kernel,[25] as well as transactional file systems targeting embedded systems, such as TFFS.[26] Ensuring consistency across multiple file system operations is difficult, if not impossible, without file system transactions. File locking can be used as a concurrency control mechanism for individual files, but it typically does not protect the directory structure or file metadata. For instance, file locking cannot prevent TOCTTOU race conditions on symbolic links. File locking also cannot automatically roll back a failed operation, such as a software upgrade; this requires atomicity. Journaling file systems are one technique used to introduce transaction-level consistency into file system structures. Journal transactions are not exposed to programs as part of the OS API; they are only used internally to ensure consistency at the granularity of a single system call. Data backup systems typically do not provide support for direct backup of data stored in a transactional manner, which makes the recovery of reliable and consistent data sets difficult. Most backup software simply notes what files have changed since a certain time, regardless of the transactional state shared across multiple files in the overall dataset.
As a workaround, some database systems simply produce an archived state file containing all data up to that point, and the backup software only backs that up and does not interact directly with the active transactional databases at all. Recovery requires separate recreation of the database from the state file after the file has been restored by the backup software. Anetwork file systemis a file system that acts as a client for a remote file access protocol, providing access to files on a server. Programs using local interfaces can transparently create, manage and access hierarchical directories and files in remote network-connected computers. Examples of network file systems include clients for theNFS,[27]AFS,SMBprotocols, and file-system-like clients forFTPandWebDAV. Ashared disk file systemis one in which a number of machines (usually servers) all have access to the same external disk subsystem (usually astorage area network). The file system arbitrates access to that subsystem, preventing write collisions.[28]Examples includeGFS2fromRed Hat,GPFS, now known as Spectrum Scale, from IBM,SFSfrom DataPlow,CXFSfromSGI,StorNextfromQuantum Corporationand ScoutFS from Versity. Some file systems expose elements of the operating system as files so they can be acted on via thefile system API. This is common inUnix-likeoperating systems, and to a lesser extent in other operating systems. Examples include: In the 1970s disk and digital tape devices were too expensive for some earlymicrocomputerusers. An inexpensive basic data storage system was devised that used commonaudio cassettetape. When the system needed to write data, the user was notified to press "RECORD" on the cassette recorder, then press "RETURN" on the keyboard to notify the system that the cassette recorder was recording. The system wrote a sound to provide time synchronization, thenmodulated soundsthat encoded a prefix, the data, achecksumand a suffix. When the system needed to read data, the user was instructed to press "PLAY" on the cassette recorder. The system wouldlistento the sounds on the tape waiting until a burst of sound could be recognized as the synchronization. The system would then interpret subsequent sounds as data. When the data read was complete, the system would notify the user to press "STOP" on the cassette recorder. It was primitive, but it (mostly) worked. Data was stored sequentially, usually in an unnamed format, although some systems (such as theCommodore PETseries of computers) did allow the files to be named. Multiple sets of data could be written and located by fast-forwarding the tape and observing at the tape counter to find the approximate start of the next data region on the tape. The user might have to listen to the sounds to find the right spot to begin playing the next data region. Some implementations even included audible sounds interspersed with the data. In a flat file system, there are nosubdirectories; directory entries for all files are stored in a single directory. Whenfloppy diskmedia was first available this type of file system was adequate due to the relatively small amount of data space available.CP/Mmachines featured a flat file system, where files could be assigned to one of 16user areasand generic file operations narrowed to work on one instead of defaulting to work on all of them. 
These user areas were no more than special attributes associated with the files; that is, it was not necessary to define specificquotafor each of these areas and files could be added to groups for as long as there was still free storage space on the disk. The earlyApple Macintoshalso featured a flat file system, theMacintosh File System. It was unusual in that the file management program (Macintosh Finder) created the illusion of a partially hierarchical filing system on top of EMFS. This structure required every file to have a unique name, even if it appeared to be in a separate folder.IBMDOS/360andOS/360store entries for all files on a disk pack (volume) in a directory on the pack called aVolume Table of Contents(VTOC). While simple, flat file systems become awkward as the number of files grows and makes it difficult to organize data into related groups of files. A recent addition to the flat file system family isAmazon'sS3, a remote storage service, which is intentionally simplistic to allow users the ability to customize how their data is stored. The only constructs are buckets (imagine a disk drive of unlimited size) and objects (similar, but not identical to the standard concept of a file). Advanced file management is allowed by being able to use nearly any character (including '/') in the object's name, and the ability to select subsets of the bucket's content based on identical prefixes. Anoperating system(OS) typically supports one or more file systems. Sometimes an OS and its file system are so tightly interwoven that it is difficult to describe them independently. An OS typically provides file system access to the user. Often an OS providescommand line interface, such asUnix shell, WindowsCommand PromptandPowerShell, andOpenVMS DCL. An OS often also providesgraphical user interfacefile browserssuch as MacOSFinderand WindowsFile Explorer. Unix-likeoperating systems create a virtual file system, which makes all the files on all the devices appear to exist in a single hierarchy. This means, in those systems, there is oneroot directory, and every file existing on the system is located under it somewhere. Unix-like systems can use aRAM diskor network shared resource as its root directory. Unix-like systems assign a device name to each device, but this is not how the files on that device are accessed. Instead, to gain access to files on another device, the operating system must first be informed where in the directory tree those files should appear. This process is calledmountinga file system. For example, to access the files on aCD-ROM, one must tell the operating system "Take the file system from this CD-ROM and make it appear under such-and-such directory." The directory given to the operating system is called themount point– it might, for example, be/media. The/mediadirectory exists on many Unix systems (as specified in theFilesystem Hierarchy Standard) and is intended specifically for use as a mount point for removable media such as CDs, DVDs, USB drives or floppy disks. It may be empty, or it may contain subdirectories for mounting individual devices. Generally, only theadministrator(i.e.root user) may authorize the mounting of file systems. Unix-likeoperating systems often include software and tools that assist in the mounting process and provide it new functionality. Some of these strategies have been coined "auto-mounting" as a reflection of their purpose. 
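On Linux, the kernel itself exposes the table of currently mounted file systems as the file /proc/mounts, so an ordinary program can list mount points without special privileges. A minimal, Linux-specific Python sketch:

    def list_mounts(path="/proc/mounts"):
        """Yield (device, mount point, file system type) for each mounted file system."""
        with open(path) as f:
            for line in f:
                device, mount_point, fs_type = line.split()[:3]
                yield device, mount_point, fs_type

    for device, mount_point, fs_type in list_mounts():
        print(f"{device} on {mount_point} type {fs_type}")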
Linuxsupports numerous file systems, but common choices for the system disk on a block device include the ext* family (ext2,ext3andext4),XFS,JFS, andbtrfs. For raw flash without aflash translation layer(FTL) orMemory Technology Device(MTD), there areUBIFS,JFFS2andYAFFS, among others.SquashFSis a common compressed read-only file system. Solarisin earlier releases defaulted to (non-journaled or non-logging)UFSfor bootable and supplementary file systems. Solaris defaulted to, supported, and extended UFS. Support for other file systems and significant enhancements were added over time, includingVeritas SoftwareCorp. (journaling)VxFS, Sun Microsystems (clustering)QFS, Sun Microsystems (journaling) UFS, and Sun Microsystems (open source, poolable, 128 bit compressible, and error-correcting)ZFS. Kernel extensions were added to Solaris to allow for bootable VeritasVxFSoperation. Logging orjournalingwas added to UFS in Sun'sSolaris 7. Releases ofSolaris 10, Solaris Express,OpenSolaris, and other open source variants of the Solaris operating system later supported bootableZFS. Logical Volume Managementallows for spanning a file system across multiple devices for the purpose of adding redundancy, capacity, and/or throughput. Legacy environments in Solaris may useSolaris Volume Manager(formerly known asSolstice DiskSuite). Multiple operating systems (including Solaris) may useVeritas Volume Manager. Modern Solaris based operating systems eclipse the need for volume management through leveraging virtual storage pools inZFS. macOS (formerly Mac OS X)uses theApple File System(APFS), which in 2017 replaced a file system inherited fromclassic Mac OScalledHFS Plus(HFS+). Apple also uses the term "Mac OS Extended" for HFS+.[29]HFS Plus is ametadata-rich andcase-preservingbut (usually)case-insensitivefile system. Due to the Unix roots of macOS, Unix permissions were added to HFS Plus. Later versions of HFS Plus added journaling to prevent corruption of the file system structure and introduced a number of optimizations to the allocation algorithms in an attempt to defragment files automatically without requiring an external defragmenter. File names can be up to 255 characters. HFS Plus usesUnicodeto store file names. On macOS, thefiletypecan come from thetype code, stored in file's metadata, or thefilename extension. HFS Plus has three kinds of links: Unix-stylehard links, Unix-stylesymbolic links, andaliases. Aliases are designed to maintain a link to their original file even if they are moved or renamed; they are not interpreted by the file system itself, but by the File Manager code inuserland. macOS 10.13 High Sierra, which was announced on June 5, 2017, at Apple's WWDC event, uses theApple File Systemonsolid-state drives. macOS also supported theUFSfile system, derived from theBSDUnix Fast File System viaNeXTSTEP. However, as ofMac OS X Leopard, macOS could no longer be installed on a UFS volume, nor can a pre-Leopard system installed on a UFS volume be upgraded to Leopard.[30]As ofMac OS X LionUFS support was completely dropped. Newer versions of macOS are capable of reading and writing to the legacyFATfile systems (16 and 32) common on Windows. They are also capable ofreadingthe newerNTFSfile systems for Windows. In order towriteto NTFS file systems on macOS versions prior toMac OS X Snow Leopardthird-party software is necessary. 
Mac OS X 10.6 (Snow Leopard) and later allow writing to NTFS file systems, but only after a non-trivial system setting change (third-party software exists that automates this).[31] Finally, macOS supports reading and writing of theexFATfile system since Mac OS X Snow Leopard, starting from version 10.6.5.[32] OS/21.2 introduced theHigh Performance File System(HPFS). HPFS supports mixed case file names in differentcode pages, long file names (255 characters), more efficient use of disk space, an architecture that keeps related items close to each other on the disk volume, less fragmentation of data,extent-basedspace allocation, aB+ treestructure for directories, and the root directory located at the midpoint of the disk, for faster average access. Ajournaled filesystem(JFS) was shipped in 1999. PC-BSDis a desktop version of FreeBSD, which inheritsFreeBSD'sZFSsupport, similarly toFreeNAS. The new graphical installer ofPC-BSDcan handle/ (root) on ZFSandRAID-Zpool installs anddisk encryptionusingGeliright from the start in an easy convenient (GUI) way. The current PC-BSD 9.0+ 'Isotope Edition' has ZFS filesystem version 5 and ZFS storage pool version 28. Plan 9 from Bell Labstreats everything as a file and accesses all objects as a file would be accessed (i.e., there is noioctlormmap): networking, graphics, debugging, authentication, capabilities, encryption, and other services are accessed via I/O operations onfile descriptors. The9Pprotocol removes the difference between local and remote files. File systems in Plan 9 are organized with the help of private, per-process namespaces, allowing each process to have a different view of the many file systems that provide resources in a distributed system. TheInfernooperating system shares these concepts with Plan 9. Windows makes use of theFAT,NTFS,exFAT,Live File SystemandReFSfile systems (the last of these is only supported and usable inWindows Server 2012,Windows Server 2016,Windows 8,Windows 8.1, andWindows 10; Windows cannot boot from it). Windows uses adrive letterabstraction at the user level to distinguish one disk or partition from another. For example, thepathC:\WINDOWSrepresents a directoryWINDOWSon the partition represented by the letter C. Drive C: is most commonly used for the primaryhard disk drivepartition, on which Windows is usually installed and from which it boots. This "tradition" has become so firmly ingrained that bugs exist in many applications which make assumptions that the drive that the operating system is installed on is C. The use of drive letters, and the tradition of using "C" as the drive letter for the primary hard disk drive partition, can be traced toMS-DOS, where the letters A and B were reserved for up to two floppy disk drives. This in turn derived fromCP/Min the 1970s, and ultimately from IBM'sCP/CMSof 1967. The family ofFATfile systems is supported by almost all operating systems for personal computers, including all versions ofWindowsandMS-DOS/PC DOS,OS/2, andDR-DOS. (PC DOS is an OEM version of MS-DOS, MS-DOS was originally based onSCP's86-DOS. DR-DOS was based onDigital Research'sConcurrent DOS, a successor ofCP/M-86.) The FAT file systems are therefore well-suited as a universal exchange format between computers and devices of most any type and age. The FAT file system traces its roots back to an (incompatible) 8-bit FAT precursor inStandalone Disk BASICand the short-livedMDOS/MIDASproject.[citation needed] Over the years, the file system has been expanded fromFAT12toFAT16andFAT32. 
Various features have been added to the file system includingsubdirectories,codepagesupport,extended attributes, andlong filenames. Third parties such as Digital Research have incorporated optional support for deletion tracking, and volume/directory/file-based multi-user security schemes to support file and directory passwords and permissions such as read/write/execute/delete access rights. Most of these extensions are not supported by Windows. The FAT12 and FAT16 file systems had a limit on the number of entries in theroot directoryof the file system and had restrictions on the maximum size of FAT-formatted disks orpartitions. FAT32 addresses the limitations in FAT12 and FAT16, except for the file size limit of close to 4 GB, but it remains limited compared to NTFS. FAT12, FAT16 and FAT32 also have a limit of eight characters for the file name, and three characters for the extension (such as.exe). This is commonly referred to as the8.3 filenamelimit.VFAT, an optional extension to FAT12, FAT16 and FAT32, introduced inWindows 95andWindows NT 3.5, allowed long file names (LFN) to be stored in the FAT file system in a backwards compatible fashion. NTFS, introduced with theWindows NToperating system in 1993, allowedACL-based permission control. Other features also supported byNTFSinclude hard links, multiple file streams, attribute indexing, quota tracking, sparse files, encryption, compression, and reparse points (directories working as mount-points for other file systems, symlinks, junctions, remote storage links). exFAThas certain advantages over NTFS with regard tofile system overhead.[citation needed] exFAT is not backward compatible with FAT file systems such as FAT12, FAT16 or FAT32. The file system is supported with newer Windows systems, such as Windows XP, Windows Server 2003, Windows Vista, Windows 2008, Windows 7, Windows 8, Windows 8.1, Windows 10 and Windows 11. exFAT is supported in macOS starting with version 10.6.5 (Snow Leopard).[32]Support in other operating systems is sparse since implementing support for exFAT requires a license. exFAT is the only file system that is fully supported on both macOS and Windows that can hold files larger than 4 GB.[33][34] Prior to the introduction ofVSAM,OS/360systems implemented a hybrid file system. The system was designed to easily supportremovable disk packs, so the information relating to all files on one disk (volumein IBM terminology) is stored on that disk in aflat system filecalled theVolume Table of Contents(VTOC). The VTOC stores all metadata for the file. Later a hierarchical directory structure was imposed with the introduction of theSystem Catalog, which can optionally catalog files (datasets) on resident and removable volumes. The catalog only contains information to relate a dataset to a specific volume. If the user requests access to a dataset on an offline volume, and they have suitable privileges, the system will attempt to mount the required volume. Cataloged and non-cataloged datasets can still be accessed using information in the VTOC, bypassing the catalog, if the required volume id is provided to the OPEN request. Still later the VTOC was indexed to speed up access. The IBMConversational Monitor System(CMS) component ofVM/370uses a separate flat file system for eachvirtual disk(minidisk). File data and control information are scattered and intermixed. The anchor is a record called theMaster File Directory(MFD), always located in the fourth block on the disk. 
Originally CMS used fixed-length 800-byte blocks, but later versions used larger size blocks up to 4K. Access to a data record requires two levels ofindirection, where the file's directory entry (called aFile Status Table(FST) entry) points to blocks containing a list of addresses of the individual records. Data on the AS/400 and its successors consists of system objects mapped into the system virtual address space in asingle-level store. Many types ofobjectsare defined including the directories and files found in other file systems. File objects, along with other types of objects, form the basis of the AS/400's support for an integratedrelational database. File systems limitstorable data capacity– generally driven by the typical size of storage devices at the time the file system is designed and anticipated into the foreseeable future. Since storage sizes have increased at nearexponentialrate (seeMoore's law), newer storage devices often exceed existing file system limits within only a few years after introduction. This requires new file systems with ever increasing capacity. With higher capacity, the need for capabilities and therefore complexity increases as well. File system complexity typically varies proportionally with available storage capacity. Capacity issues aside, the file systems of early 1980shome computerswith 50 KB to 512 KB of storage would not be a reasonable choice for modern storage systems with hundreds of gigabytes of capacity. Likewise, modern file systems would not be a reasonable choice for these early systems, since the complexity of modern file system structures would quickly consume the limited capacity of early storage systems. It may be advantageous or necessary to have files in a different file system than they currently exist. Reasons include the need for an increase in the space requirements beyond the limits of the current file system. The depth of path may need to be increased beyond the restrictions of the file system. There may be performance or reliability considerations. Providing access to another operating system which does not support the existing file system is another reason. In some cases conversion can be done in-place, although migrating the file system is more conservative, as it involves a creating a copy of the data and is recommended.[39]On Windows, FAT and FAT32 file systems can be converted to NTFS via the convert.exe utility, but not the reverse.[39]On Linux, ext2 can be converted to ext3 (and converted back), and ext3 can be converted to ext4 (but not back),[40]and both ext3 and ext4 can be converted tobtrfs, and converted back until the undo information is deleted.[41]These conversions are possible due to using the same format for the file data itself, and relocating the metadata into empty space, in some cases usingsparse filesupport.[41] Migration has the disadvantage of requiring additional space although it may be faster. The best case is if there is unused space on media which will contain the final file system. For example, to migrate a FAT32 file system to an ext2 file system, a new ext2 file system is created. Then the data from the FAT32 file system is copied to the ext2 one, and the old file system is deleted. An alternative, when there is not sufficient space to retain the original file system until the new one is created, is to use a work area (such as a removable media). This takes longer but has the benefit of producing a backup. 
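At the application level, the migration described above amounts to copying the whole directory tree from the old file system to the new one; a minimal Python sketch follows (both mount points are assumptions made for the example, and metadata the target file system cannot represent is silently dropped):

    import shutil

    SOURCE = "/mnt/old_fat32"                    # assumed mount point of the old volume
    TARGET = "/mnt/new_ext2"                     # assumed mount point of the new volume

    # Copies file content and, where possible, timestamps and permission bits.
    shutil.copytree(SOURCE, TARGET, dirs_exist_ok=True)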
Inhierarchical file systems, files are accessed by means of apaththat is a branching list of directories containing the file. Different file systems have different limits on the depth of the path. File systems also have a limit on the length of an individual file name. Copying files with long names or located in paths of significant depth from one file system to another may cause undesirable results. This depends on how the utility doing the copying handles the discrepancy.
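A copying utility might therefore screen a source tree for names or depths the target file system cannot accommodate before it starts. A Python sketch (the 255-character and 32-level limits are assumed example values, not those of any particular file system):

    import os

    MAX_NAME = 255                               # assumed per-name limit of the target file system
    MAX_DEPTH = 32                               # assumed maximum directory depth of the target

    def problematic_paths(root):
        """Yield paths whose name length or nesting depth exceeds the assumed limits."""
        for dirpath, dirnames, filenames in os.walk(root):
            depth = dirpath[len(root):].count(os.sep)
            for name in dirnames + filenames:
                if len(name) > MAX_NAME or depth + 1 > MAX_DEPTH:
                    yield os.path.join(dirpath, name)

    # for p in problematic_paths("/mnt/source_volume"):   # hypothetical source tree
    #     print("cannot copy:", p)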
https://en.wikipedia.org/wiki/File_system#Directory_structure
x86(also known as80x86[3]or the8086 family)[4]is a family ofcomplex instruction set computer(CISC)instruction set architectures[a]initially developed byIntel, based on the8086microprocessor and its 8-bit-external-bus variant, the8088. The 8086 was introduced in 1978 as a fully16-bitextension of8-bitIntel's8080microprocessor, withmemory segmentationas a solution for addressing more memory than can be covered by a plain 16-bit address. The term "x86" came into being because the names of several successors to Intel's 8086 processor end in "86", including the80186,80286,80386and80486. Colloquially, their names were "186", "286", "386" and "486". The term is not synonymous withIBM PC compatibility, as this implies a multitude of othercomputer hardware.Embedded systemsand general-purpose computers used x86 chipsbefore the PC-compatible market started,[b]some of them before theIBM PC(1981) debut. As of June 2022[update], mostdesktopandlaptopcomputers sold are based on the x86 architecture family,[5]while mobile categories such assmartphonesortabletsare dominated byARM. At the high end, x86 continues to dominate computation-intensiveworkstationandcloud computingsegments.[6] In the 1980s and early 1990s, when the8088and80286were still in common use, the term x86 usually represented any 8086-compatible CPU. Today, however, x86 usually implies binary compatibility with the32-bitinstruction setof the80386. This is due to the fact that this instruction set has become something of a lowest common denominator for many modern operating systems and also probably because the term became common after the introduction of the 80386 in 1985. A few years after the introduction of the 8086 and 8088, Intel added some complexity to its naming scheme and terminology as the "iAPX" of the ambitious but ill-fatedIntel iAPX 432processor was tried on the more successful 8086 family of chips,[c]applied as a kind of system-level prefix. An 8086 system, includingcoprocessorssuch as8087and8089, and simpler Intel-specific system chips,[d]was thereby described as an iAPX 86 system.[7][e]There were also termsiRMX(for operating systems),iSBC(for single-board computers), andiSBX(for multimodule boards based on the 8086 architecture), all together under the headingMicrosystem 80.[8][9]However, this naming scheme was quite temporary, lasting for a few years during the early 1980s.[f] Although the 8086 was primarily developed forembedded systemsand small multi-user or single-user computers, largely as a response to the successful 8080-compatibleZilog Z80,[10]the x86 line soon grew in features and processing power. Today, x86 is ubiquitous in both stationary and portable personal computers, and is also used inmidrange computers,workstations, servers, and most newsupercomputerclustersof theTOP500list. A large amount ofsoftware, including a large list ofx86 operating systemsare using x86-based hardware. Modern x86 is relatively uncommon inembedded systems, however; smalllow powerapplications (using tiny batteries), and low-cost microprocessor markets, such ashome appliancesand toys, lack significant x86 presence.[g]Simple 8- and 16-bit based architectures are common here, as well as simpler RISC architectures likeRISC-V, although the x86-compatibleVIA C7,VIA Nano,AMD'sGeode,Athlon NeoandIntel Atomare examples of 32- and64-bitdesigns used in some relatively low-power and low-cost segments. 
There have been several attempts, including by Intel, to end the market dominance of the "inelegant" x86 architecture designed directly from the first simple 8-bit microprocessors. Examples of this are theiAPX 432(a project originally named theIntel 8800[11]), theIntel 960,Intel 860and the Intel/Hewlett-PackardItaniumarchitecture. However, the continuous refinement of x86microarchitectures,circuitryandsemiconductor manufacturingwould make it hard to replace x86 in many segments. AMD's 64-bit extension of x86 (which Intel eventually responded to with a compatible design)[12]and the scalability of x86 chips in the form of modern multi-core CPUs, is underlining x86 as an example of how continuous refinement of established industry standards can resist the competition from completely new architectures.[13] The table below lists processor models and model series implementing various architectures in the x86 family, in chronological order. Each line item is characterized by significantly improved or commercially successful processor microarchitecture designs. At various times, companies such asIBM,VIA,NEC,[h]AMD,TI,STM,Fujitsu,OKI,Siemens,Cyrix,Intersil,C&T,NexGen,UMC, andDM&Pstarted to design or manufacture[i]x86processors(CPUs) intended for personal computers and embedded systems. Other companies that designed or manufactured x86 orx87processors includeITT Corporation,National Semiconductor, ULSI System Technology, andWeitek. Such x86 implementations were seldom simple copies but often employed different internalmicroarchitecturesand different solutions at the electronic and physical levels. Quite naturally, early compatible microprocessors were 16-bit, while 32-bit designs were developed much later. For thepersonal computermarket, real quantities started to appear around 1990 withi386andi486compatible processors, often named similarly to Intel's original chips. After the fullypipelinedi486, in 1993Intelintroduced thePentiumbrand name (which, unlike numbers, could betrademarked) for their new set ofsuperscalarx86 designs. With the x86 naming scheme now legally cleared, other x86 vendors had to choose different names for their x86-compatible products, and initially some chose to continue with variations of the numbering scheme:IBMpartnered withCyrixto produce the5x86and then the very efficient6x86(M1) and6x86MX (MII) lines of Cyrix designs, which were the first x86 microprocessors implementingregister renamingto enablespeculative execution. AMD meanwhile designed and manufactured the advanced but delayed5k86(K5), which, internally, was closely based on AMD's earlier29KRISCdesign; similar toNexGen'sNx586, it used a strategy such that dedicated pipeline stages decode x86 instructions into uniform and easily handledmicro-operations, a method that has remained the basis for most x86 designs to this day. Some early versions of these microprocessors had heat dissipation problems. The 6x86 was also affected by a few minor compatibility problems, theNx586lacked afloating-point unit(FPU) and (the then crucial) pin-compatibility, while theK5had somewhat disappointing performance when it was (eventually) introduced. Customer ignorance of alternatives to the Pentium series further contributed to these designs being comparatively unsuccessful, despite the fact that theK5had very good Pentium compatibility and the6x86was significantly faster than the Pentium on integer code.[j]AMDlater managed to grow into a serious contender with theK6set of processors, which gave way to the very successfulAthlonandOpteron. 
There were also other contenders, such asCentaur Technology(formerlyIDT),Rise Technology, andTransmeta.VIA Technologies' energy efficientC3andC7processors, which were designed by theCentaurcompany, were sold for many years following their release in 2005. Centaur's 2008 design, theVIA Nano, was their first processor withsuperscalarandspeculative execution. It was introduced at about the same time (in 2008) as Intel introduced theIntel Atom, its first "in-order" processor after theP5Pentium. Many additions and extensions have been added to the original x86 instruction set over the years, almost consistently with fullbackward compatibility.[k]The architecture family has been implemented in processors from Intel,Cyrix,AMD,VIA Technologiesand many other companies; there are also open implementations, such as the Zet SoC platform (currently inactive).[16]Nevertheless, of those, only Intel, AMD, VIA Technologies, andDM&P Electronicshold x86 architectural licenses, and from these, only the first two actively produce modern 64-bit designs, leading to what has been called a "duopoly" of Intel and AMD in x86 processors. However, in 2014 the Shanghai-based Chinese companyZhaoxin, a joint venture between a Chinese company and VIA Technologies, began designing VIA based x86 processors for desktops and laptops. The release of its newest "7" family[17]of x86 processors (e.g. KX-7000), which are not quite as fast as AMD or Intel chips but are still state of the art,[18]had been planned for 2021; as of March 2022 the release had not taken place, however.[19] Theinstruction set architecturehas twice been extended to a largerwordsize. In 1985, Intel released the 32-bit 80386 (later known as i386) which gradually replaced the earlier 16-bit chips in computers (although typically not inembedded systems) during the following years; this extended programming model was originally referred to asthe i386 architecture(like its first implementation) but Intel later dubbed itIA-32when introducing its (unrelated)IA-64architecture. In 1999–2003,AMDextended this 32-bit architecture to 64 bits and referred to it asx86-64in early documents and later asAMD64. Intel soon adopted AMD's architectural extensions under the name IA-32e, later using the name EM64T and finally using Intel 64.MicrosoftandSun Microsystems/Oraclealso use term "x64", while manyLinux distributions, and theBSDsalso use the "amd64" term. Microsoft Windows, for example, designates its 32-bit versions as "x86" and 64-bit versions as "x64", while installation files of 64-bit Windows versions are required to be placed into a directory called "AMD64".[20] In 2023, Intel proposed a major change to the architecture referred to asX86S(formerly known as X86-S). The S in X86S stood for "simplification", which aimed to remove support for legacy execution modes and instructions. A processor implementing this proposal would start execution directly inlong modeand would only support 64-bit operating systems. 32-bit code would only be supported for user applications running in ring 3, and would use the same simplified segmentation as long mode.[21][22]In December 2024 Intel cancelled this project.[23] The x86 architecture is a variable instruction length, primarily "CISC" design with emphasis onbackward compatibility. The instruction set is not typical CISC, however, but basically an extended version of the simple eight-bit8008and8080architectures. Byte-addressing is enabled and words are stored in memory withlittle-endianbyte order. 
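Little-endian order means the least significant byte of a word is stored at the lowest address. This is easy to observe with Python's struct module, used here purely to illustrate the byte order (the "<" and ">" format prefixes select little- and big-endian packing regardless of the host CPU):

    import struct

    # The 32-bit value 0x12345678 laid out as an x86 would store it (low byte first):
    print(struct.pack("<I", 0x12345678).hex())   # 78563412
    # The same value in big-endian order, for contrast:
    print(struct.pack(">I", 0x12345678).hex())   # 12345678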
Memory access to unaligned addresses is allowed for almost all instructions. The largest native size forintegerarithmetic and memory addresses (oroffsets) is 16, 32 or 64 bits depending on architecture generation (newer processors include direct support for smaller integers as well). Multiple scalar values can be handled simultaneously via the SIMD unit present in later generations, as described below.[l]Immediate addressing offsets and immediate data may be expressed as 8-bit quantities for the frequently occurring cases or contexts where a −128..127 range is enough. Typical instructions are therefore 2 or 3 bytes in length (although some are much longer, and some are single-byte). To further conserve encoding space, most registers are expressed inopcodesusing three or four bits, the latter via an opcode prefix in 64-bit mode, while at most one operand to an instruction can be a memory location.[m]However, this memory operand may also be the destination (or a combined source and destination), while the other operand, the source, can be either register or immediate. Among other factors, this contributes to a code size that rivals eight-bit machines and enables efficient use of instruction cache memory. The relatively small number of general registers (also inherited from its 8-bit ancestors) has made register-relative addressing (using small immediate offsets) an important method of accessing operands, especially on the stack. Much work has therefore been invested in making such accesses as fast as register accesses—i.e., a one cycle instruction throughput, in most circumstances where the accessed data is available in the top-level cache. A dedicatedfloating-point processorwith 80-bit internal registers, the8087, was developed for the original8086. This microprocessor subsequently developed into the extended80387, and later processors incorporated abackward compatibleversion of this functionality on the same microprocessor as the main processor. In addition to this, modern x86 designs also contain aSIMD-unit (seeSSEbelow) where instructions can work in parallel on (one or two) 128-bit words, each containing two or fourfloating-point numbers(each 64 or 32 bits wide respectively), or alternatively, 2, 4, 8 or 16 integers (each 64, 32, 16 or 8 bits wide respectively). The presence of wide SIMD registers means that existing x86 processors can load or store up to 128 bits of memory data in a single instruction and also perform bitwise operations (although not integer arithmetic[n]) on full 128-bits quantities in parallel. Intel'sSandy Bridgeprocessors added theAdvanced Vector Extensions(AVX) instructions, widening the SIMD registers to 256 bits. The Intel Initial Many Core Instructions implemented by the Knights CornerXeon Phiprocessors, and theAVX-512instructions implemented by the Knights Landing Xeon Phi processors and bySkylake-Xprocessors, use 512-bit wide SIMD registers. Duringexecution, current x86 processors employ a few extra decoding steps to split most instructions into smaller pieces called micro-operations. These are then handed to acontrol unitthat buffers and schedules them in compliance with x86-semantics so that they can be executed, partly in parallel, by one of several (more or less specialized)execution units. 
These modern x86 designs are thus pipelined, superscalar, and also capable of out-of-order and speculative execution (via branch prediction, register renaming, and memory dependence prediction), which means they may execute multiple (partial or complete) x86 instructions simultaneously, and not necessarily in the same order as given in the instruction stream.[24] Some Intel CPUs (Xeon Foster MP, some Pentium 4, and some Nehalem and later Intel Core processors) and AMD CPUs (starting from Zen) are also capable of simultaneous multithreading with two threads per core (Xeon Phi has four threads per core). Some Intel CPUs support transactional memory (TSX). When introduced, in the mid-1990s, this method was sometimes referred to as a "RISC core" or as "RISC translation", partly for marketing reasons, but also because these micro-operations share some properties with certain types of RISC instructions. However, traditional microcode (used since the 1950s) also inherently shares many of the same properties; the new method differs mainly in that the translation to micro-operations now occurs asynchronously. Not having to synchronize the execution units with the decode steps opens up possibilities for more analysis of the (buffered) code stream, and therefore permits detection of operations that can be performed in parallel, simultaneously feeding more than one execution unit. The latest processors also do the opposite when appropriate; they combine certain x86 sequences (such as a compare followed by a conditional jump) into a more complex micro-op which fits the execution model better and thus can be executed faster or with fewer machine resources involved. Another way to try to improve performance is to cache the decoded micro-operations, so the processor can directly access the decoded micro-operations from a special cache, instead of decoding them again. Intel followed this approach with the Execution Trace Cache feature in their NetBurst microarchitecture (for Pentium 4 processors) and later in the Decoded Stream Buffer (for Core-branded processors since Sandy Bridge).[25] Transmeta used a completely different method in their Crusoe x86-compatible CPUs. They used just-in-time translation to convert x86 instructions to the CPU's native VLIW instruction set. Transmeta argued that their approach allows for more power-efficient designs since the CPU can forgo the complicated decode step of more traditional x86 implementations. Addressing modes for 16-bit processor modes can be summarized by the formula [BX or BP] + [SI or DI] + displacement.[26][27] Addressing modes for 32-bit x86 processor modes[28] can be summarized by the formula [base register] + [index register] * [1, 2, 4 or 8] + displacement.[29] Addressing modes for the 64-bit processor mode can be summarized by the same formula with 64-bit registers: [base register] + [index register] * [1, 2, 4 or 8] + displacement.[29] Instruction-relative addressing in 64-bit code (RIP + displacement, where RIP is the instruction pointer register) simplifies the implementation of position-independent code (as used in shared libraries in some operating systems).[30] The 8086 had 64 KB of eight-bit (or alternatively 32 K-word of 16-bit) I/O space, and a 64 KB (one segment) stack in memory supported by computer hardware. Only words (two bytes) can be pushed to the stack. The stack grows toward numerically lower addresses, with SS:SP pointing to the most recently pushed item. There are 256 interrupts, which can be invoked by both hardware and software. The interrupts can cascade, using the stack to store the return address. The original Intel 8086 and 8088 have fourteen 16-bit registers.
Four of them (AX, BX, CX, DX) are general-purpose registers (GPRs), although each may have an additional purpose; for example, only CX can be used as a counter with the loop instruction. Each can be accessed as two separate bytes (thus BX's high byte can be accessed as BH and low byte as BL). Two pointer registers have special roles: SP (stack pointer) points to the "top" of thestack, and BP (base pointer) is often used to point at some other place in the stack, typically above the local variables (seeframe pointer). The registers SI, DI, BX and BP areaddress registers, and may also be used for array indexing. One of four possible 'segment registers' (CS, DS, SS and ES) is used to form a memory address. In the original 8086 / 8088 / 80186 / 80188 every address was built from a segment register and one of the general purpose registers. For example ds:si is the notation for an address formed as [16 * ds + si] to allow 20-bit addressing rather than 16 bits, although this changed in later processors. At that time only certain combinations were supported. TheFLAGS registercontainsflagssuch ascarry flag,overflow flagandzero flag. Finally, theinstruction pointer(IP) points to the next instruction that will be fetched from memory and then executed; this register cannot be directly accessed (read or written) by a program.[31] TheIntel 80186and80188are essentially an upgraded 8086 or 8088 CPU, respectively, with on-chip peripherals added, and they have the same CPU registers as the 8086 and 8088 (in addition to interface registers for the peripherals). The 8086, 8088, 80186, and 80188 can use an optional floating-point coprocessor, the8087. The 8087 appears to the programmer as part of the CPU and adds eight 80-bit wide registers, st(0) to st(7), each of which can hold numeric data in one of seven formats: 32-, 64-, or 80-bit floating point, 16-, 32-, or 64-bit (binary) integer, and 80-bit packed decimal integer.[9]: S-6, S-13..S-15It also has its own 16-bit status register accessible through thefstswinstruction, and it is common to simply use some of its bits for branching by copying it into the normal FLAGS.[32] In theIntel 80286, to supportprotected mode, three special registers hold descriptor table addresses (GDTR, LDTR,IDTR), and a fourth task register (TR) is used for task switching. The80287is the floating-point coprocessor for the 80286 and has the same registers as the 8087 with the same data formats. With the advent of the 32-bit80386processor, the 16-bit general-purpose registers, base registers, index registers, instruction pointer, andFLAGS register, but not the segment registers, were expanded to 32 bits. The nomenclature represented this by prefixing an "E" (for "extended") to the register names inx86 assembly language. Thus, the AX register corresponds to the lower 16 bits of the new 32-bit EAX register, SI corresponds to the lower 16 bits of ESI, and so on. The general-purpose registers, base registers, and index registers can all be used as the base in addressing modes, and all of those registers except for the stack pointer can be used as the index in addressing modes. Two new segment registers (FS and GS) were added. With a greater number of registers, instructions and operands, themachine codeformat was expanded. To provide backward compatibility, segments with executable code can be marked as containing either 16-bit or 32-bit instructions. Special prefixes allow inclusion of 32-bit instructions in a 16-bit segment or vice versa. 
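As a worked example of the [16 * ds + si] address formation described above, the following sketch computes the 20-bit physical address produced by a real-mode segment:offset pair and shows that different pairs can name the same byte. The helper name real_mode_addr is illustrative, not part of any API:

/* 8086 real-mode address formation: physical = 16 * segment + offset.
 * The real 8086 wraps results above 0xFFFFF back into the first
 * megabyte; this sketch leaves the raw sum visible. */
#include <stdint.h>
#include <stdio.h>

static uint32_t real_mode_addr(uint16_t seg, uint16_t off) {
    return ((uint32_t)seg << 4) + off;   /* shift by 4 = multiply by 16 */
}

int main(void) {
    /* 0xFFFF0 is where the 8086 starts executing after reset. */
    printf("F000:FFF0 -> %05X\n", (unsigned)real_mode_addr(0xF000, 0xFFF0));
    printf("1234:0005 -> %05X\n", (unsigned)real_mode_addr(0x1234, 0x0005)); /* 0x12345 */
    printf("1000:2345 -> %05X\n", (unsigned)real_mode_addr(0x1000, 0x2345)); /* same byte */
    return 0;
}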
The 80386 had an optional floating-point coprocessor, the80387; it had eight 80-bit wide registers: st(0) to st(7),[33]like the 8087 and 80287. The 80386 could also use an 80287 coprocessor.[34]With the80486and all subsequent x86 models, the floating-point processing unit (FPU) is integrated on-chip. ThePentium MMXadded eight 64-bitMMXinteger vector registers (MM0 to MM7, which share lower bits with the 80-bit-wide FPU stack).[35]With thePentium III, Intel added a 32-bitStreaming SIMD Extensions(SSE) control/status register (MXCSR) and eight 128-bit SSE floating-point registers (XMM0 to XMM7).[36] Starting with theAMD Opteronprocessor, the x86 architecture extended the 32-bit registers into 64-bit registers in a way similar to how the 16 to 32-bit extension took place. AnR-prefix (for "register") identifies the 64-bit registers (RAX, RBX, RCX, RDX, RSI, RDI, RBP, RSP, RFLAGS, RIP), and eight additional 64-bit general registers (R8–R15) were also introduced in the creation ofx86-64. Also, eight more SSE vector registers (XMM8–XMM15) were added. However, these extensions are only usable in 64-bit mode, which is one of the two modes only available inlong mode. The addressing modes were not dramatically changed from 32-bit mode, except that addressing was extended to 64 bits, virtual addresses are now sign extended to 64 bits (in order to disallow mode bits in virtual addresses), and other selector details were dramatically reduced. In addition, an addressing mode was added to allow memory references relative to RIP (theinstruction pointer), to ease the implementation ofposition-independent code, used in shared libraries in some operating systems. SIMD registers XMM0–XMM15 (XMM0–XMM31 whenAVX-512is supported). SIMD registers YMM0–YMM15 (YMM0–YMM31 whenAVX-512is supported). Lower half of each of the YMM registers maps onto the corresponding XMM register. SIMD registers ZMM0–ZMM31. Lower half of each of the ZMM registers maps onto the corresponding YMM register. x86 processors that have aprotected mode, i.e. the 80286 and later processors, also have three descriptor registers (GDTR, LDTR,IDTR) and a task register (TR). 32-bit x86 processors (starting with the 80386) also include various special/miscellaneous registers such ascontrol registers(CR0 through 4, CR8 for 64-bit only),debug registers(DR0 through 3, plus 6 and 7),test registers(TR3 through 7; 80486 only), andmodel-specific registers(MSRs, appearing with the Pentium[o]). AVX-512has eight extra 64-bit mask registers K0–K7 for selecting elements in a vector register. Depending on the vector register and element widths, only a subset of bits of the mask register may be used by a given instruction. Although the main registers (with the exception of the instruction pointer) are "general-purpose" in the 32-bit and 64-bit versions of the instruction set and can be used for anything, it was originally envisioned that they be used for the following purposes: Segment registers: No particular purposes were envisioned for the other 8 registers available only in 64-bit mode. Some instructions compile and execute more efficiently when using these registers for their designed purpose. For example, using AL as anaccumulatorand adding an immediate byte value to it produces the efficientadd to ALopcodeof 04h, whilst using the BL register produces the generic and longeradd to registeropcode of 80C3h. Another example is double precision division and multiplication that works specifically with the AX and DX registers. 
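The sub-register aliasing described above (AL and AH inside AX, AX inside EAX, EAX inside RAX) can be pictured with a C union. This is only an illustrative model of the register layout, assuming the little-endian byte order that x86 uses; it is not how compilers actually allocate registers:

/* Model of x86 sub-register overlap: AL is the low byte of AX, AX the
 * low 16 bits of EAX, and EAX the low 32 bits of RAX. */
#include <stdint.h>
#include <stdio.h>

union gpr {
    uint64_t rax;
    uint32_t eax;                   /* low 32 bits of RAX */
    uint16_t ax;                    /* low 16 bits of EAX */
    struct { uint8_t al, ah; } b;   /* AL = bits 0-7, AH = bits 8-15 */
};

int main(void) {
    union gpr r = { .rax = 0x1122334455667788ULL };
    printf("RAX=%016llx EAX=%08x AX=%04x AH=%02x AL=%02x\n",
           (unsigned long long)r.rax, (unsigned)r.eax, r.ax, r.b.ah, r.b.al);
    /* Expected on x86: EAX=55667788 AX=7788 AH=77 AL=88 */
    return 0;
}

One detail the union cannot capture: in 64-bit mode, writing a 32-bit register such as EAX zero-extends the result into the full 64-bit RAX, whereas 8-bit and 16-bit writes leave the upper bits unchanged.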
Modern compilers benefited from the introduction of thesibbyte (scale-index-base byte) that allows registers to be treated uniformly (minicomputer-like). However, using the sib byte universally is non-optimal, as it produces longer encodings than only using it selectively when necessary. (The main benefit of the sib byte is the orthogonality and more powerful addressing modes it provides, which make it possible to save instructions and the use of registers for address calculations such as scaling an index.) Some special instructions lost priority in the hardware design and became slower than equivalent small code sequences. A notable example is the LODSW instruction. Note: The ?PL registers are only available in 64-bit mode. Note: The ?IL registers are only available in 64-bit mode. Real Address mode,[37]commonly called Real mode, is an operating mode of8086and later x86-compatibleCPUs. Real mode is characterized by a 20-bit segmented memory address space (meaning that only slightly more than 1MiBof memory can be addressed[p]), direct software access to peripheral hardware, and no concept ofmemory protectionormultitaskingat the hardware level. All x86 CPUs in the80286series and later start up in real mode at power-on;80186CPUs and earlier had only one operational mode, which is equivalent to real mode in later chips. (On the IBM PC platform, direct software access to the IBMBIOSroutines is available only in real mode, since BIOS is written for real mode. However, this is not a property of the x86 CPU but of the IBM BIOS design.) In order to use more than 64 KB of memory, the segment registers must be used. This created great complications for compiler implementors who introduced odd pointer modes such as "near", "far" and "huge" to leverage the implicit nature of segmented architecture to different degrees, with some pointers containing 16-bit offsets within implied segments and other pointers containing segment addresses and offsets within segments. It is technically possible to use up to 256 KB of memory for code and data, with up to 64 KB for code, by setting all four segment registers once and then only using 16-bit offsets (optionally with default-segment override prefixes) to address memory, but this puts substantial restrictions on the way data can be addressed and memory operands can be combined, and it violates the architectural intent of the Intel designers, which is for separate data items (e.g. arrays, structures, code units) to be contained in separate segments and addressed by their own segment addresses, in new programs that are not ported from earlier 8-bit processors with 16-bit address spaces. Unreal mode is used by some 16-bitoperating systemsand some 32-bitboot loaders. The System Management Mode (SMM) is only used by the system firmware (BIOS/UEFI), not byoperating systemsand applications software. The SMM code is running in SMRAM. In addition to real mode, the Intel 80286 supports protected mode, expanding addressablephysical memoryto 16MBand addressablevirtual memoryto 1GB, and providingprotected memory, which prevents programs from corrupting one another. This is done by using the segment registers only for storing an index into a descriptor table that is stored in memory. There are two such tables, theGlobal Descriptor Table(GDT) and theLocal Descriptor Table(LDT), each holding up to 8192 segment descriptors, each segment giving access to 64 KB of memory. 
In the 80286, a segment descriptor provides a 24-bitbase address, and this base address is added to a 16-bit offset to create an absolute address. The base address from the table fulfills the same role that the literal value of the segment register fulfills in real mode; the segment registers have been converted from direct registers to indirect registers. Each segment can be assigned one of fourringlevels used for hardware-basedcomputer security. Each segment descriptor also contains a segment limit field which specifies the maximum offset that may be used with the segment. Because offsets are 16 bits, segments are still limited to 64 KB each in 80286 protected mode.[38] Each time a segment register is loaded in protected mode, the 80286 must read a 6-byte segment descriptor from memory into a set of hidden internal registers. Thus, loading segment registers is much slower in protected mode than in real mode, and changing segments very frequently is to be avoided. Actual memory operations using protected mode segments are not slowed much because the 80286 and later have hardware to check the offset against the segment limit in parallel with instruction execution. TheIntel 80386extended offsets and also the segment limit field in each segment descriptor to 32 bits, enabling a segment to span the entire memory space. It also introduced support in protected mode forpaging, a mechanism making it possible to use pagedvirtual memory(with 4 KB page size). Paging allows the CPU to map any page of the virtual memory space to any page of the physical memory space. To do this, it uses additional mapping tables in memory called page tables. Protected mode on the 80386 can operate with paging either enabled or disabled; the segmentation mechanism is always active and generates virtual addresses that are then mapped by the paging mechanism if it is enabled. The segmentation mechanism can also be effectively disabled by setting all segments to have a base address of 0 and size limit equal to the whole address space; this also requires a minimally-sized segment descriptor table of only four descriptors (since the FS and GS segments need not be used).[q] Paging is used extensively by modern multitasking operating systems.Linux,386BSDandWindows NTwere developed for the 386 because it was the first Intel architecture CPU to support paging and 32-bit segment offsets. The 386 architecture became the basis of all further development in the x86 series. x86 processors that support protected mode boot intoreal modefor backward compatibility with the older 8086 class of processors. Upon power-on (a.k.a.booting), the processor initializes in real mode, and then begins executing instructions. Operating system boot code, which might be stored inread-only memory, may place the processor into theprotected modeto enable paging and other features. Conversely, segment arithmetic, a common practice in real mode code, is not allowed in protected mode. There is also a sub-mode of operation in 32-bit protected mode (a.k.a. 80386 protected mode) calledvirtual 8086 mode, also known asV86 mode. This is basically a special hybrid operating mode that allows real mode programs and operating systems to run while under the control of a protected mode supervisor operating system. This allows for a great deal of flexibility in running both protected mode programs and real mode programs simultaneously. 
This mode is exclusively available for the 32-bit version of protected mode; it does not exist in the 16-bit version of protected mode, or in long mode. In the mid 1990s, it was obvious that the 32-bit address space of the x86 architecture was limiting its performance in applications requiring large data sets. A 32-bit address space would allow the processor to directly address only 4 GB of data, a size surpassed by applications such asvideo processinganddatabase engines. Using 64-bit addresses, it is possible to directly address 16EiBof data, although most 64-bit architectures do not support access to the full 64-bit address space; for example, AMD64 supports only 48 bits from a 64-bit address, split into four paging levels. In 1999,AMDpublished a (nearly) complete specification for a64-bitextension of the x86 architecture which they calledx86-64with claimed intentions to produce. That design is currently used in almost all x86 processors, with some exceptions intended forembedded systems. Mass-producedx86-64chips for the general market were available four years later, in 2003, after the time was spent for working prototypes to be tested and refined; about the same time, the initial namex86-64was changed toAMD64. The success of the AMD64 line of processors coupled with lukewarm reception of the IA-64 architecture forced Intel to release its own implementation of the AMD64 instruction set. Intel had previously implemented support for AMD64[39]but opted not to enable it in hopes that AMD would not bring AMD64 to market before Itanium's new IA-64 instruction set was widely adopted. It branded its implementation of AMD64 asEM64T, and later rebranded itIntel 64. In its literature and product version names, Microsoft and Sun refer to AMD64/Intel 64 collectively asx64in the Windows andSolarisoperating systems.Linux distributionsrefer to it either as "x86-64", its variant "x86_64", or "amd64".BSDsystems use "amd64" whilemacOSuses "x86_64". Long mode is mostly an extension of the 32-bit instruction set, but unlike the 16–to–32-bit transition, many instructions were dropped in the 64-bit mode. This does not affect actual binary backward compatibility (which would execute legacy code in other modes that retain support for those instructions), but it changes the way assembler and compilers for new code have to work. This was the first time that a major extension of the x86 architecture was initiated and originated by a manufacturer other than Intel. It was also the first time that Intel accepted technology of this nature from an outside source. Early x86 processors could be extended withfloating-pointhardware in the form of a series of floating-pointnumericalco-processorswith names like8087, 80287 and 80387, abbreviated x87. This was also known as the NPX (Numeric Processor eXtension), an apt name since the coprocessors, while used mainly for floating-point calculations, also performed integer operations on both binary and decimal formats. With very few exceptions, the 80486 and subsequent x86 processors then integrated this x87 functionality on chip which made the x87 instructions ade factointegral part of the x86 instruction set. Each x87 register, known as ST(0) through ST(7), is 80 bits wide and stores numbers in theIEEE floating-point standarddouble extended precision format. These registers are organized as a stack with ST(0) as the top. 
This was done in order to conserve opcode space, and the registers are therefore randomly accessible only for one of the two operands in a register-to-register instruction; ST(0) must always be one of the two operands, either the source or the destination, regardless of whether the other operand is ST(x) or a memory operand. However, random access to the stack registers can be obtained through an instruction which exchanges any specified ST(x) with ST(0). The operations include arithmetic and transcendental functions, including trigonometric and exponential functions, and instructions that load common constants (such as 0; 1; e, the base of the natural logarithm; log2(10); and log10(2)) into one of the stack registers. While the integer ability is often overlooked, the x87 can operate on larger integers with a single instruction than the 8086, 80286, 80386, or any x86 CPU without 64-bit extensions can, and repeated integer calculations even on small values (e.g., 16-bit) can be accelerated by executing integer instructions on the x86 CPU and the x87 in parallel. (The x86 CPU keeps running while the x87 coprocessor calculates, and the x87 sets a signal to the x86 when it is finished or interrupts the x86 if it needs attention because of an error.) MMX is a SIMD instruction set designed by Intel and introduced in 1997 for the Pentium MMX microprocessor.[40] The MMX instruction set was developed from a similar concept first used on the Intel i860. It is supported on most subsequent IA-32 processors by Intel and other vendors. MMX is typically used for video processing (in multimedia applications, for instance).[41] MMX added 8 new registers to the architecture, known as MM0 through MM7 (henceforth referred to as MMn). In reality, these new registers were just aliases for the existing x87 FPU stack registers. Hence, anything that was done to the floating-point stack would also affect the MMX registers. Unlike the FP stack, these MMn registers were fixed, not relative, and therefore they were randomly accessible. The instruction set did not adopt the stack-like semantics so that existing operating systems could still correctly save and restore the register state when multitasking without modifications.[40] Each of the MMn registers is a 64-bit integer. However, one of the main concepts of the MMX instruction set is the concept of packed data types, which means that instead of using the whole register for a single 64-bit integer (quadword), one may use it to contain two 32-bit integers (doubleword), four 16-bit integers (word) or eight 8-bit integers (byte). Given that MMX's 64-bit MMn registers are aliased to the FPU stack and each of the floating-point registers is 80 bits wide, the upper 16 bits of the floating-point registers are unused in MMX. These bits are set to all ones by any MMX instruction, which corresponds to the floating-point representation of NaNs or infinities.[40] In 1997, AMD introduced 3DNow!.[42] The introduction of this technology coincided with the rise of 3D entertainment applications and was designed to improve the CPU's vector processing performance of graphics-intensive applications. 3D video game developers and 3D graphics hardware vendors used 3DNow! to enhance their performance on AMD's K6 and Athlon series of processors.[43] 3DNow! was designed to be the natural evolution of MMX from integers to floating point.
As such, it uses exactly the same register naming convention as MMX, that is, MM0 through MM7.[44] The only difference is that instead of packing integers into these registers, two single-precision floating-point numbers are packed into each register. The advantage of aliasing the FPU registers is that the same instructions and data structures used to save the state of the FPU registers can also be used to save 3DNow! register states. Thus no special modifications are required to be made to operating systems which would otherwise not know about them.[45] In 1999, Intel introduced the Streaming SIMD Extensions (SSE) instruction set, following in 2000 with SSE2. The first addition allowed offloading of basic floating-point operations from the x87 stack, and the second made MMX almost obsolete and allowed the instructions to be realistically targeted by conventional compilers. Introduced in 2004 along with the Prescott revision of the Pentium 4 processor, SSE3 added specific memory and thread-handling instructions to boost the performance of Intel's HyperThreading technology. AMD licensed the SSE3 instruction set and implemented most of the SSE3 instructions for its revision E and later Athlon 64 processors. The Athlon 64 does not support HyperThreading and lacks those SSE3 instructions used only for HyperThreading.[46] SSE discarded all legacy connections to the FPU stack. This also meant that this instruction set discarded all legacy connections to previous generations of SIMD instruction sets like MMX, but it freed the designers up, allowing them to use larger registers, not limited by the size of the FPU registers. The designers created eight 128-bit registers, named XMM0 through XMM7. (In AMD64, the number of SSE XMM registers has been increased from 8 to 16.) However, the downside was that operating systems had to have an awareness of this new set of instructions in order to be able to save their register states. So Intel created a slightly modified version of protected mode, called Enhanced mode, which enables the use of SSE instructions, whereas they stay disabled in regular protected mode. An OS that is aware of SSE will activate Enhanced mode, whereas an unaware OS will only enter into traditional protected mode. SSE is a SIMD instruction set that works only on floating-point values, like 3DNow!. However, unlike 3DNow!, it severs all legacy connections to the FPU stack. Because it has larger registers than 3DNow!, SSE can pack twice the number of single-precision floats into its registers. The original SSE was limited to only single-precision numbers, like 3DNow!. SSE2 introduced the capability to pack double-precision numbers too, which 3DNow! had no possibility of doing, since a double-precision number is 64 bits in size, which would be the full size of a single 3DNow! MMn register. At 128 bits, the SSE XMMn registers could pack two double-precision floats into one register. Thus SSE2 is much more suitable for scientific calculations than either SSE1 or 3DNow!, which were limited to only single precision. SSE3 does not introduce any additional registers.[46] The Advanced Vector Extensions (AVX) doubled the size of the SSE registers to 256-bit YMM registers. It also introduced the VEX coding scheme to accommodate the larger registers, plus a few instructions to permute elements. AVX2 did not introduce extra registers, but was notable for the addition of masking, gather, and shuffle instructions. AVX-512 features yet another expansion to thirty-two 512-bit ZMM registers and a new EVEX coding scheme.
Unlike its predecessors, which were monolithic extensions, it is divided into many subsets that specific models of CPUs can choose to implement. Physical Address Extension (PAE) was first added in the Intel Pentium Pro, and later by AMD in the Athlon processors,[47] to allow up to 64 GB of RAM to be addressed. Without PAE, physical RAM in 32-bit protected mode is usually limited to 4 GB. PAE defines a different page table structure with wider page table entries and a third level of page table, allowing additional bits of physical address. Although the initial implementations on 32-bit processors theoretically supported up to 64 GB of RAM, chipset and other platform limitations often restricted what could actually be used. x86-64 processors define page table structures that theoretically allow up to 52 bits of physical address, although again, chipset and other platform concerns (like the number of DIMM slots available, and the maximum RAM possible per DIMM) prevent such a large physical address space from being realized. On x86-64 processors, PAE mode must be active before the switch to long mode, and must remain active while long mode is active, so while in long mode there is no "non-PAE" mode. PAE mode does not affect the width of linear or virtual addresses. By the 2000s, 32-bit x86 processors' limits in memory addressing were an obstacle to their use in high-performance computing clusters and powerful desktop workstations. The aged 32-bit x86 was competing with much more advanced 64-bit RISC architectures which could address much more memory. Intel and the whole x86 ecosystem needed 64-bit memory addressing if x86 was to survive the 64-bit computing era, as workstation and desktop software applications were soon to start hitting the limits of 32-bit memory addressing. However, Intel felt that it was the right time to make a bold step and use the transition to 64-bit desktop computers for a transition away from the x86 architecture in general, an experiment which ultimately failed. In 2001, Intel attempted to introduce a non-x86 64-bit architecture named IA-64 in its Itanium processor, initially aiming for the high-performance computing market, hoping that it would eventually replace the 32-bit x86.[48] While IA-64 was incompatible with x86, the Itanium processor did provide emulation abilities for translating x86 instructions into IA-64, but this affected the performance of x86 programs so badly that it was rarely, if ever, actually useful to the users: programs had to be rewritten for the IA-64 architecture, or their performance on Itanium would be orders of magnitude worse than on a true x86 processor. The market rejected the Itanium processor since it broke backward compatibility and preferred to continue using x86 chips, and very few programs were rewritten for IA-64. AMD decided to take another path toward 64-bit memory addressing, making sure backward compatibility would not suffer. In April 2003, AMD released the first x86 processor with 64-bit general-purpose registers, the Opteron, capable of addressing much more than 4 GB of virtual memory using the new x86-64 extension (also known as AMD64 or x64).
The 64-bit extensions to the x86 architecture were enabled only in the newly introducedlong mode, therefore 32-bit and 16-bit applications and operating systems could simply continue using an AMD64 processor in protected or other modes, without even the slightest sacrifice of performance[49]and with full compatibility back to the original instructions of the 16-bit Intel 8086.[50]: 13–14The market responded positively, adopting the 64-bit AMD processors for both high-performance applications and business or home computers. Seeing the market rejecting the incompatible Itanium processor and Microsoft supporting AMD64, Intel had to respond and introduced its own x86-64 processor, thePrescottPentium 4, in July 2004.[51]As a result, the Itanium processor with its IA-64 instruction set is rarely used and x86, through its x86-64 incarnation, is still the dominant CPU architecture in non-embedded computers. x86-64 also introduced theNX bit, which offers some protection against security bugs caused bybuffer overruns. As a result of AMD's 64-bit contribution to the x86 lineage and its subsequent acceptance by Intel, the 64-bit RISC architectures ceased to be a threat to the x86 ecosystem and almost disappeared from the workstation market. x86-64 began to be utilized in powerfulsupercomputers(in itsAMD OpteronandIntel Xeonincarnations), a market which was previously the natural habitat for 64-bit RISC designs (such as theIBM Power microprocessorsorSPARCprocessors). The great leap toward 64-bit computing and the maintenance of backward compatibility with 32-bit and 16-bit software enabled the x86 architecture to become an extremely flexible platform today, with x86 chips being utilized from small low-power systems (for example,Intel QuarkandIntel Atom) to fast gaming desktop computers (for example,Intel Core i7andAMD FX/Ryzen), and even dominate large supercomputingclusters, effectively leaving only theARM32-bit and 64-bit RISC architecture as a competitor in thesmartphoneandtabletmarket. Prior to 2005, x86 architecture processors were unable to meet thePopek and Goldberg requirements– a specification for virtualization created in 1974 byGerald J. PopekandRobert P. Goldberg. However, both proprietary and open-sourcex86 virtualizationhypervisor products were developed usingsoftware-based virtualization. Proprietary systems includeHyper-V,Parallels Workstation,VMware ESX,VMware Workstation,VMware Workstation PlayerandWindows Virtual PC, whilefree and open-sourcesystems includeQEMU,Kernel-based Virtual Machine,VirtualBox, andXen. The introduction of the AMD-V and Intel VT-x instruction sets in 2005 allowed x86 processors to meet the Popek and Goldberg virtualization requirements.[52] APX (Advanced Performance Extensions) are extensions to double the number of general-purpose registers from 16 to 32 and add new features to improve general-purpose performance.[53][54][55][56]These extensions have been called "generational"[57]and "the biggest x86 addition since 64 bits".[58]Intel contributed APX support toGNU Compiler Collection(GCC) 14.[59] According to the architecture specification,[60]the main features of APX are: Extended GPRs for general purpose instructions are encoded using 2-byteREX2prefix, while new instructions and extended operands for existingAVX/AVX2/AVX-512instructions are encoded withextended EVEXprefix which has four variants used for different groups of instructions.
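Because so many of the extensions described in this article (MMX, SSE, AVX, AVX-512, APX) are optional, software normally probes the processor at run time before using them. The sketch below assumes GCC or Clang, which expose the CPUID-based helpers __builtin_cpu_init() and __builtin_cpu_supports(); detection of very new extensions such as APX may require newer compiler and CPUID-leaf support than is shown here:

/* Run-time detection of a few optional x86 instruction set extensions
 * using the GCC/Clang builtins, which query CPUID internally. */
#include <stdio.h>

int main(void) {
    __builtin_cpu_init();   /* initialize the builtin's cached CPUID data */
    printf("SSE2:    %s\n", __builtin_cpu_supports("sse2")    ? "yes" : "no");
    printf("AVX2:    %s\n", __builtin_cpu_supports("avx2")    ? "yes" : "no");
    printf("AVX512F: %s\n", __builtin_cpu_supports("avx512f") ? "yes" : "no");
    return 0;
}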
https://en.wikipedia.org/wiki/X86
Jos is a city in Nigeria's middle belt. Jos or JOS may also refer to:
https://en.wikipedia.org/wiki/JOS
Real-time computing(RTC) is thecomputer scienceterm forhardwareandsoftwaresystems subject to a "real-time constraint", for example fromeventtosystem response.[1]Real-time programs must guarantee response within specified time constraints, often referred to as "deadlines".[2] The term "real-time" is also used insimulationto mean that the simulation's clock runs at the same speed as a real clock. Real-time responses are often understood to be in the order of milliseconds, and sometimes microseconds. A system not specified as operating in real time cannot usuallyguaranteea response within any timeframe, althoughtypicalorexpectedresponse times may be given. Real-time processingfailsif not completed within a specified deadline relative to an event; deadlines must always be met, regardless ofsystem load. A real-time system has been described as one which "controls an environment by receiving data, processing them, and returning the results sufficiently quickly to affect the environment at that time".[3]The term "real-time" is used inprocess controlandenterprise systemsto mean "without significant delay". Real-time software may use one or more of the following:synchronous programming languages,real-time operating systems(RTOSes), and real-time networks. Each of these provide essential frameworks on which to build a real-time software application. Systems used for manysafety-criticalapplications must be real-time, such as for control offly-by-wireaircraft, oranti-lock brakes, both of which demand immediate and accurate mechanical response.[4] The termreal-timederives from its use in earlysimulation, where a real-world process is simulated at a rate which matched that of the real process (now calledreal-time simulationto avoid ambiguity).Analog computers, most often, were capable of simulating at a much faster pace than real-time, a situation that could be just as dangerous as a slow simulation if it were not also recognized and accounted for. Minicomputers, particularly in the 1970s onwards, when built into dedicatedembedded systemssuch as DOG (Digital on-screen graphic) scanners, increased the need for low-latency priority-driven responses to important interactions with incoming data. Operating systems such asData General'sRDOS (Real-Time Disk Operating System)and RTOS withbackground and foreground schedulingas well asDigital Equipment Corporation'sRT-11date from this era. Background-foreground scheduling allowed low priority tasks CPU time when no foreground task needed to execute, and gave absolute priority within the foreground to threads/tasks with the highest priority. Real-time operating systems would also be used fortime-sharingmultiuser duties. For example,Data General Business Basiccould run in the foreground or background of RDOS and would introduce additional elements to the scheduling algorithm to make it more appropriate for people interacting viadumb terminals. Early personal computers were sometimes used for real-time computing. The possibility of deactivating other interrupts allowed for hard-coded loops with defined timing, and the lowinterrupt latencyallowed the implementation of a real-time operating system, giving the user interface and the disk drives lower priority than the real-time thread. 
Compared to these theprogrammable interrupt controllerof the Intel CPUs (8086..80586) generates a very large latency and the Windows operating system is neither a real-time operating system nor does it allow a program to take over the CPU completely and use its ownscheduler, without using native machine language and thus bypassing all interrupting Windows code. However, several coding libraries exist which offer real time capabilities in a high level language on a variety of operating systems, for exampleJava Real Time. Later microprocessors such as theMotorola 68000and subsequent family members (68010, 68020,ColdFireetc.) also became popular with manufacturers of industrial control systems. This application area is one where real-time control offers genuine advantages in terms of process performance and safety.[citation needed] A system is said to bereal-timeif the total correctness of an operation depends not only upon its logical correctness, but also upon the time in which it is performed.[5]Real-time systems, as well as their deadlines, are classified by the consequence of missing a deadline:[6] Thus, the goal of ahard real-time systemis to ensure that all deadlines are met, but forsoft real-time systemsthe goal becomes meeting a certain subset of deadlines in order to optimize some application-specific criteria. The particular criteria optimized depend on the application, but some typical examples include maximizing the number of deadlines met, minimizing the lateness of tasks and maximizing the number of high priority tasks meeting their deadlines. Hard real-time systemsare used when it is imperative that an event be reacted to within a strict deadline. Such strong guarantees are required of systems for which not reacting in a certain interval of time would cause great loss in some manner, especially damaging the surroundings physically or threatening human lives (although the strict definition is simply that missing the deadline constitutes failure of the system). Some examples of hard real-time systems: In the context ofmultitaskingsystems thescheduling policyis normally priority driven (pre-emptiveschedulers). In some situations, these can guarantee hard real-time performance (for instance if the set of tasks and their priorities is known in advance). There are other hard real-time schedulers such asrate-monotonicwhich is not common in general-purpose systems, as it requires additional information in order to schedule a task: namely a bound or worst-case estimate for how long the task must execute. Specific algorithms for scheduling such hard real-time tasks exist, likeearliest deadline first, which, ignoring the overhead ofcontext switching, is sufficient for system loads of less than 100%.[7]New overlay scheduling systems, such as anadaptive partition schedulerassist in managing large systems with a mixture of hard real-time and non real-time applications. Firm real-time systemsare more nebulously defined, and some classifications do not include them, distinguishing only hard and soft real-time systems. Some examples of firm real-time systems: Soft real-time systemsare typically used to solve issues of concurrent access and the need to keep a number of connected systems up-to-date through changing situations. 
Some examples of soft real-time systems: In a real-timedigital signal processing(DSP) process, the analyzed (input) and generated (output) samples can be processed (or generated) continuously in the time it takes to input and output the same set of samplesindependentof the processing delay.[9]It means that the processing delay must be bounded even if the processing continues for an unlimited time. Themeanprocessing time per sample, includingoverhead, is no greater than the sampling period, which is the reciprocal of thesampling rate. This is the criterion whether the samples are grouped together in large segments and processed as blocks or are processed individually and whether there are long, short, or non-existentinput and output buffers. Consider anaudio DSPexample; if a process requires 2.01 seconds toanalyze,synthesize, or process 2.00 seconds of sound, it is not real-time. However, if it takes 1.99 seconds, it is or can be made into a real-time DSP process. A common life analogy is standing in a line orqueuewaiting for the checkout in a grocery store. If the line asymptotically grows longer and longer without bound, the checkout process is not real-time. If the length of the line is bounded, customers are being "processed" and output as rapidly, on average, as they are being inputted then that processisreal-time. The grocer might go out of business or must at least lose business if they cannot make their checkout process real-time; thus, it is fundamentally important that this process is real-time. A signal processing algorithm that cannot keep up with the flow of input data with output falling further and further behind the input, is not real-time. If the delay of the output (relative to the input) is bounded regarding a process which operates over an unlimited time, then that signal processing algorithm is real-time, even if the throughput delay may be very long. Real-time signal processing is necessary, but not sufficient in and of itself, for live signal processing such as what is required inlive event support. Live audio digital signal processing requires both real-time operation and a sufficient limit to throughput delay so as to be tolerable to performers usingstage monitorsorin-ear monitorsand not noticeable aslip sync errorby the audience also directly watching the performers. Tolerable limits to latency for live, real-time processing is a subject of investigation and debate, but is estimated to be between 6 and 20 milliseconds.[10] Real-time bidirectionaltelecommunications delaysof less than 300 ms ("round trip" or twice the unidirectional delay) are considered "acceptable" to avoid undesired "talk-over" in conversation. Real-time computing is sometimes misunderstood to behigh-performance computing, but this is not an accurate classification.[11]For example, a massivesupercomputerexecuting a scientific simulation may offer impressive performance, yet it is not executing a real-time computation. Conversely, once the hardware and software for an anti-lock braking system have been designed to meet its required deadlines, no further performance gains are obligatory or even useful. Furthermore, if a network server is highly loaded with network traffic, its response time may be slower, but will (in most cases) still succeed before it times out (hits its deadline). Hence, such a network server would not be considered a real-time system: temporal failures (delays, time-outs, etc.) are typically small and compartmentalized (limited in effect), but are notcatastrophic failures. 
In a real-time system, such as theFTSE 100 Index, a slow-down beyond limits would often be considered catastrophic in its application context. The most important requirement of a real-time system is consistent output, not high throughput. Some kinds of software, such as manychess-playing programs, can fall into either category. For instance, a chess program designed to play in a tournament with a clock will need to decide on a move before a certain deadline or lose the game, and is therefore a real-time computation, but a chess program that is allowed to run indefinitely before moving is not. In both of these cases, however, high performance is desirable: the more work a tournament chess program can do in the allotted time, the better its moves will be, and the faster an unconstrained chess program runs, the sooner it will be able to move. This example also illustrates the essential difference between real-time computations and other computations: if the tournament chess program does not make a decision about its next move in its allotted time it loses the game—i.e., it fails as a real-time computation—while in the other scenario, meeting the deadline is assumed not to be necessary. High-performance is indicative of the amount of processing that is performed in a given amount of time, whereas real-time is the ability to get done with the processing to yield a useful output in the available time. The term "near real-time" or "nearly real-time" (NRT), intelecommunicationsandcomputing, refers to the timedelayintroduced, by automateddata processingornetworktransmission, between the occurrence of an event and the use of the processed data, such as for display orfeedbackand control purposes. For example, a near-real-time display depicts an event or situation as it existed at the current time minus the processing time, as nearly the time of the live event.[12] The distinction between the terms "near real time" and "real time" is somewhat nebulous and must be defined for the situation at hand. The term implies that there are no significant delays.[12]In many cases, processing described as "real-time" would be more accurately described as "near real-time". Near real-time also refers to delayed real-time transmission of voice and video. It allows playing video images, in approximately real-time, without having to wait for an entire large video file to download. Incompatible databases can export/import to common flat files that the other database can import/export on a scheduled basis so they can sync/share common data in "near real-time" with each other. Several methods exist to aid the design of real-time systems, an example of which isMASCOT, an old but very successful method that represents theconcurrentstructure of the system. Other examples areHOOD, Real-Time UML,AADL, theRavenscar profile, andReal-Time Java.
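As a concrete reading of the deadline criterion discussed in this section, the sketch below measures whether a single piece of work finished within its time budget, using the POSIX monotonic clock. The do_work() body and the 2 ms budget are illustrative placeholders; a real hard real-time system would rely on its scheduler and worst-case execution-time analysis rather than after-the-fact measurement:

/* Check whether one unit of work met an illustrative 2 ms deadline,
 * timed with CLOCK_MONOTONIC. */
#include <stdio.h>
#include <time.h>

static void do_work(void) { /* placeholder for the real-time task body */ }

int main(void) {
    const long deadline_ns = 2L * 1000 * 1000;   /* 2 ms budget (illustrative) */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    do_work();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    long elapsed_ns = (t1.tv_sec - t0.tv_sec) * 1000000000L + (t1.tv_nsec - t0.tv_nsec);
    if (elapsed_ns > deadline_ns)
        printf("deadline missed: %ld ns > %ld ns\n", elapsed_ns, deadline_ns);
    else
        printf("deadline met: %ld ns <= %ld ns\n", elapsed_ns, deadline_ns);
    return 0;
}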
https://en.wikipedia.org/wiki/Real-time_computing
In computer science, resource starvation is a problem encountered in concurrent computing where a process is perpetually denied necessary resources to process its work.[1] Starvation may be caused by errors in a scheduling or mutual exclusion algorithm, but can also be caused by resource leaks, and can be intentionally caused via a denial-of-service attack such as a fork bomb. When starvation is impossible in a concurrent algorithm, the algorithm is called starvation-free or lockout-free,[2] or said to have finite bypass.[3] This property is an instance of liveness, and is one of the two requirements for any mutual exclusion algorithm, the other being correctness. The name "finite bypass" means that any process (concurrent part) of the algorithm is bypassed at most a finite number of times before being allowed access to the shared resource.[3] Starvation is usually caused by an overly simplistic scheduling algorithm. For example, if a (poorly designed) multi-tasking system always switches between the first two tasks while a third never gets to run, then the third task is being starved of CPU time. The scheduling algorithm, which is part of the kernel, is supposed to allocate resources equitably; that is, the algorithm should allocate resources so that no process perpetually lacks necessary resources. Many operating system schedulers employ the concept of process priority. A high-priority process A will run before a low-priority process B. If the high-priority process (process A) blocks and never yields, the low-priority process (B) will (in some systems) never be scheduled; it will experience starvation. If there is an even higher-priority process X, which is dependent on a result from process B, then process X might never finish, even though it is the most important process in the system. This condition is called a priority inversion. Modern scheduling algorithms normally contain code to guarantee that all processes will receive a minimum amount of each important resource (most often CPU time) in order to prevent any process from being subjected to starvation. In computer networks, especially wireless networks, scheduling algorithms may suffer from scheduling starvation. An example is maximum throughput scheduling. Starvation is similar to deadlock in that both cause a process to freeze. Two or more processes become deadlocked when each of them is doing nothing while waiting for a resource occupied by another program in the same set. On the other hand, a process is in starvation when it is waiting for a resource that is continuously given to other processes. Starvation-freedom is a stronger guarantee than the absence of deadlock: a mutual exclusion algorithm that must choose to allow one of two processes into a critical section and picks one arbitrarily is deadlock-free, but not starvation-free.[3] A possible solution to starvation is to use a scheduling algorithm with a priority queue that also uses the aging technique. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.[4]
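A minimal sketch of the aging technique mentioned above: on each scheduling round every waiting task gains effective priority, so even the lowest-priority task is eventually selected and cannot starve. The task structure and the one-point-per-round aging rate are illustrative choices, not taken from any particular kernel:

/* Toy priority scheduler with aging: effective priority grows with the
 * number of rounds a task has been waiting, so no task starves. */
#include <stdio.h>

#define NTASKS 3

struct task {
    const char *name;
    int base_priority;   /* higher number = higher priority */
    int wait_rounds;     /* rounds spent waiting since last run */
};

static int effective_priority(const struct task *t) {
    return t->base_priority + t->wait_rounds;   /* aging bonus */
}

int main(void) {
    struct task tasks[NTASKS] = { {"A", 10, 0}, {"B", 5, 0}, {"C", 1, 0} };

    for (int round = 0; round < 15; round++) {
        int best = 0;
        for (int i = 1; i < NTASKS; i++)
            if (effective_priority(&tasks[i]) > effective_priority(&tasks[best]))
                best = i;

        printf("round %2d: run %s\n", round, tasks[best].name);
        tasks[best].wait_rounds = 0;          /* chosen task's age resets */
        for (int i = 0; i < NTASKS; i++)
            if (i != best)
                tasks[i].wait_rounds++;       /* everyone else ages */
    }
    return 0;
}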
https://en.wikipedia.org/wiki/Starvation_(computer_science)
InUnixandUnix-likecomputer operating systems, afile descriptor(FD, less frequentlyfildes) is a process-unique identifier (handle) for afileor otherinput/outputresource, such as apipeornetwork socket. File descriptors typically have non-negativeintegervalues, with negative values being reserved to indicate "no value" or error conditions. File descriptors are a part of thePOSIXAPI. Each Unixprocess(except perhapsdaemons) should have three standard POSIX file descriptors, corresponding to the threestandard streams: In the traditional implementation of Unix, file descriptors index into a per-processfile descriptor tablemaintained by the kernel, that in turn indexes into a system-wide table of files opened by all processes, called thefile table. This table records themodewith which the file (or other resource) has been opened: for reading, writing, appending, and possibly other modes. It also indexes into a third table called theinode tablethat describes the actual underlying files.[3]To perform input or output, the process passes the file descriptor to the kernel through asystem call, and the kernel will access the file on behalf of the process. The process does not have direct access to the file or inode tables. OnLinux, the set of file descriptors open in a process can be accessed under the path/proc/PID/fd/, where PID is theprocess identifier. File descriptor/proc/PID/fd/0isstdin,/proc/PID/fd/1isstdout, and/proc/PID/fd/2isstderr. As a shortcut to these, any running process can also accessits ownfile descriptors through the folders/proc/self/fdand/dev/fd.[4] InUnix-likesystems, file descriptors can refer to anyUnix file typenamed in a file system. As well as regular files, this includesdirectories,blockandcharacter devices(also called "special files"),Unix domain sockets, andnamed pipes. File descriptors can also refer to other objects that do not normally exist in the file system, such asanonymous pipesandnetwork sockets. The FILE data structure in theC standard I/O libraryusually includes a low level file descriptor for the object in question on Unix-like systems. The overall data structure provides additional abstraction and is instead known as afilehandle. The following lists typical operations on file descriptors on modernUnix-likesystems. Most of these functions are declared in the<unistd.h>header, but some are in the<fcntl.h>header instead. Thefcntl()function is used to perform various operations on a file descriptor, depending on the command argument passed to it. There are commands to get and set attributes associated with a file descriptor, includingF_GETFD, F_SETFD, F_GETFLandF_SETFL. A series of new operations has been added to many modern Unix-like systems, as well as numerous C libraries, to be standardized in a future version ofPOSIX.[7]Theatsuffix signifies that the function takes an additional first argument supplying a file descriptor from whichrelative pathsare resolved, the forms lacking theatsuffix thus becoming equivalent to passing a file descriptor corresponding to the currentworking directory. The purpose of these new operations is to defend against a certain class ofTOCTOUattacks. Unix file descriptors behave in many ways ascapabilities. They can be passed between processes acrossUnix domain socketsusing thesendmsg()system call. Note, however, that what is actually passed is a reference to an "open file description" that has mutable state (the file offset, and the file status and access flags). 
This complicates the secure use of file descriptors as capabilities, since when programs share access to the same open file description, they can interfere with each other's use of it by changing its offset or whether it is blocking or non-blocking, for example.[8][9]In operating systems that are specifically designed as capability systems, there is very rarely any mutable state associated with a capability itself. A Unix process' file descriptor table is an example of aC-list.
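A hedged sketch of the sendmsg()-based descriptor passing mentioned above, sending a single file descriptor over an already-connected AF_UNIX socket (for example, one end of a socketpair()). The function name send_fd is illustrative, and error handling is kept minimal:

/* Pass one file descriptor to the peer of a connected Unix domain
 * socket using an SCM_RIGHTS ancillary message. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int sock, int fd_to_send) {
    char dummy = '*';                       /* at least one byte of real data must travel */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    union {                                 /* properly aligned ancillary buffer */
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;
    memset(&u, 0, sizeof(u));

    struct msghdr msg = {0};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof(u.buf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;           /* "the payload is file descriptors" */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

The receiving side performs the mirror-image recvmsg() call with an ancillary buffer of the same size and reads the new descriptor out of CMSG_DATA(); the kernel installs it at an unused slot in the receiver's file descriptor table.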
https://en.wikipedia.org/wiki/File_descriptor
ext3, orthird extended filesystem, is ajournaled file systemthat is commonly used with theLinux kernel. It used to be the defaultfile systemfor many popularLinux distributionsbut generally has been supplanted by its successor versionext4.[3]The main advantage of ext3 over its predecessor,ext2, isjournaling, which improves reliability and eliminates the need to check the file system after an unclean or impropershutdown. Stephen Tweediefirst revealed that he was working on extending ext2 inJournaling the Linux ext2fs Filesystemin a 1998 paper, and later in a February 1999 kernel mailing list posting. The filesystem was merged with the mainline Linux kernel in November 2001 from 2.4.15 onward.[4][5][6] The speed performance of ext3 is less attractive than competing Linux filesystems, such as ext4,JFS,ReiserFS, andXFS, but ext3 has a significant advantage in that it allows in-place upgrades from ext2 without having toback upand restore data. Benchmarks suggest that ext3 also uses less CPU power than ReiserFS and XFS.[7][8]It is also considered safer than the other Linux file systems, due to its relative simplicity and wider testing base.[9][10] ext3 adds the following features to ext2: Without these features, any ext3 file system is also a valid ext2 file system. This situation has allowed well-tested and mature file system maintenance utilities for maintaining and repairing ext2 file systems to also be used with ext3 without major changes. The ext2 and ext3 file systems share the same standard set of utilities,e2fsprogs, which includes anfscktool. The close relationship also makes conversion between the two file systems (both forward to ext3 and backward to ext2) straightforward. ext3 lacks "modern" filesystem features, such as dynamicinodeallocation andextents. This situation might sometimes be a disadvantage, but for recoverability, it is a significant advantage. The file system metadata is all in fixed, well-known locations, and data structures have some redundancy. In significant data corruption, ext2 or ext3 may be recoverable, while a tree-based file system may not. The maximum number ofblocksfor ext3 is 232. The size of a block can vary, affecting the maximum number of files and the maximum size of the file system:[12] There are three levels ofjournalingavailable in the Linux implementation of ext3: In all three modes, the internal structure of file system is assured to be consistent even after a crash. In any case, only the data content of files or directories which were being modified when the system crashed will be affected; the rest will be intact after recovery. Because ext3 aims to bebackward-compatiblewith the earlier ext2, many of the on-disk structures are similar to those of ext2. Consequently, ext3 lacks recent features, such asextents, dynamic allocation ofinodes, andblock sub-allocation.[15]A directory can have at most 31998subdirectories, because an inode can have at most 32,000 links (each direct subdirectory increases their parent folder inode link counter in the ".." reference).[16] On ext3, like for most current Linux filesystems, the system tool "fsck" should not be used while the filesystem is mounted for writing.[3]Attempting to check a filesystem that is already mounted in read/write mode will (very likely) detect inconsistencies in the filesystem metadata. 
Where filesystem metadata is changing, and fsck applies changes in an attempt to bring the "inconsistent" metadata into a "consistent" state, the attempt to "fix" the inconsistencies will corrupt the filesystem. There is no online ext3defragmentationtool that works on the filesystem level. There is an offline ext2 defragmenter,e2defrag. However,e2defragmay destroy data, depending on the feature bits turned on in the filesystem; it does not know how to handle many of the newer ext3 features.[17] There are userspace defragmentation tools, like Shake[18]and defrag.[19][20]Shake works by allocating space for the whole file as one operation, which will generally cause the allocator to find contiguous disk space. If there are files which are used at the same time, Shake will try to write them next to one another. Defrag works by copying each file over itself. However, this strategy works only if the file system has enough free space. A true defragmentation tool does not exist for ext3.[21] However, as the Linux System Administrator Guide states, "Modern Linux filesystem(s) keep fragmentation at a minimum by keeping all blocks in a file close together, even if they can't be stored in consecutive sectors. Some filesystems, like ext3, effectively allocate the free block that is nearest to other blocks in a file. Therefore it is not necessary to worry about fragmentation in a Linux system."[22] While ext3 is resistant to file fragmentation, ext3 can get fragmented over time or for specific usage patterns, like slowly writing large files.[23][24]Consequently, ext4 (the successor to ext3) has an online filesystem defragmentation utility e4defrag[25]and currently supportsextents(contiguous file regions). ext3 does not support the recovery of deleted files. The ext3 driver actively deletes files by wiping file inodes[26]for crash safety reasons. There are still several techniques[27]and some free[28]and proprietary[29]software for recovery of deleted or lost files using file system journal analysis; however, they do not guarantee any specific file recovery. e3compr[30]is anunofficial patchfor ext3 that does transparentcompression. It is a direct port of e2compr and still needs further development. It compiles and boots well with upstream kernels[citation needed], but journaling is not implemented yet. Unlike a number of modern file systems, ext3 does not have native support forsnapshots, the ability to quickly capture the state of the filesystem at arbitrary times. Instead, it relies on less-space-efficient, volume-level snapshots provided by the LinuxLVM. TheNext3file system is a modified version of ext3 which offers snapshots support, yet retains compatibility with the ext3 on-disk format.[31] ext3 does not dochecksummingwhen writing to the journal. On a storage device with extra cache, ifbarrier=1is not enabled as a mount option (in/etc/fstab), and if the hardware is doing out-of-order write caching, one runs the risk of severe filesystem corruption during a crash.[32][33][34]This is because storage devices with write caches report to the system that the data has been completely written, even if it was written to the (volatile) cache. If hard disk writes are done out-of-order (due to modern hard disks caching writes in order toamortizewrite speeds), it is likely that one will write a commit block of a transaction before the other relevant blocks are written. If a power failure or unrecoverable crash should occur before the other blocks get written, the system will have to be rebooted. 
Upon reboot, the file system will replay the log as normal, and replay the "winners" (transactions with a commit block, including the invalid transaction above, which happened to be tagged with a valid commit block). The unfinished disk write above will thus proceed, but using corrupt journal data. The file system will thus mistakenly overwrite normal data with corrupt data while replaying the journal. If checksums had been used, where the blocks of the "fake winner" transaction were tagged with a mutual checksum, the file system could have known better and not replayed the corrupt data onto the disk. Journal checksumming has been added to ext4.[35] Filesystems going through the device mapper interface (including softwareRAIDand LVM implementations) may not support barriers, and will issue a warning if that mount option is used.[36][37]There are also some disks that do not properly implement the write cache flushing extension necessary for barriers to work, which causes a similar warning.[38]In these situations, where barriers are not supported or practical, reliable write ordering is possible by turning off the disk's write cache and using thedata=journalmount option.[32]Turning off the disk's write cache may be required even when barriers are available. Applications like databases expect a call tofsync()to flush pending writes to disk, and the barrier implementation doesn't always clear the drive's write cache in response to that call.[39]There is also a potential issue with the barrier implementation related to error handling during events, such as a drive failure.[40]It is also known that sometimes somevirtualizationtechnologies do not properly forward fsync or flush commands to the underlying devices (files, volumes, disk) from a guest operating system.[41]Similarly, some hard disks or controllers implement cache flushing incorrectly or not at all, but still advertise that it is supported, and do not return any error when it is used.[42]There are so many ways to handle fsync and write cache handling incorrectly, it is safer to assume that cache flushing does not work unless it is explicitly tested, regardless of how reliable individual components are believed to be. Ext3 stores dates asUnix timeusing four bytes in the file header. 32 bits does not give enough scope to continue processing files beyond January 18, 2038 - theYear 2038 problem.[43] On June 28, 2006,Theodore Ts'o, the principal developer of ext3,[44]announced an enhanced version, called ext4. On October 11, 2008, the patches that mark ext4 as stable code were merged in the Linux 2.6.28 source code repositories, marking the end of the development phase and recommending its adoption. In 2008, Ts'o stated that although ext4 has improved features such as being much faster than ext3, it is not a major advancement, it uses old technology, and is a stop-gap; Ts'o believes thatBtrfsis the better direction, because "it offers improvements in scalability, reliability, and ease of management".[45]Btrfs also has "a number of the same design ideas thatreiser3/4had".[46]
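To make the fsync() expectation above concrete, the following Python sketch (illustrative only; the helper name durable_write and the file name are hypothetical, and nothing here is ext3-specific) shows how an application can ask the kernel to push buffered data toward stable storage. Whether the bytes actually reach the platter still depends on barriers and on the drive honouring cache-flush commands, as discussed above.

# Minimal sketch of a "durable write" on a Linux system.
import os

def durable_write(path: str, data: bytes) -> None:
    # Write the data and flush Python's user-space buffers.
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        # Ask the kernel to write the file's dirty pages to the device.
        os.fsync(f.fileno())
    # Also fsync the containing directory so the new directory entry
    # itself is durable (relevant for newly created files on Linux).
    dir_fd = os.open(os.path.dirname(os.path.abspath(path)), os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)

if __name__ == "__main__":
    durable_write("example.dat", b"important bytes\n")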
https://en.wikipedia.org/wiki/Ext3
Partition type GUIDs: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7 (GPT Windows basic data partition),[1] 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (GPT Linux filesystem data),[1] 933AC7E1-2EB4-4F13-B844-0E14E2AEF915 (GPT /home partition).[2] ext4 (fourth extended filesystem) is a journaling file system for Linux, developed as the successor to ext3. ext4 was initially a series of backward-compatible extensions to ext3, many of them originally developed by Cluster File Systems for the Lustre file system between 2003 and 2006, meant to extend storage limits and add other performance improvements.[4] However, other Linux kernel developers opposed accepting extensions to ext3 for stability reasons,[5] and proposed to fork the source code of ext3, rename it as ext4, and perform all the development there, without affecting existing ext3 users. This proposal was accepted, and on 28 June 2006, Theodore Ts'o, the ext3 maintainer, announced the new plan of development for ext4.[6] A preliminary development version of ext4 was included in version 2.6.19[7] of the Linux kernel. On 11 October 2008, the patches that mark ext4 as stable code were merged in the Linux 2.6.28 source code repositories,[8] denoting the end of the development phase and recommending ext4 adoption. Kernel 2.6.28, containing the ext4 filesystem, was finally released on 25 December 2008.[9] On 15 January 2010, Google announced that it would upgrade its storage infrastructure from ext2 to ext4.[10] On 14 December 2010, Google also announced it would use ext4, instead of YAFFS, on Android 2.3.[11] ext4 is the default file system for many Linux distributions including Debian and Ubuntu.[12] In 2008, the principal developer of the ext3 and ext4 file systems, Theodore Ts'o, stated that although ext4 has improved features, it is not a major advancement, it uses old technology, and is a stop-gap. Ts'o believes that Btrfs is the better direction because "it offers improvements in scalability, reliability, and ease of management".[31] Btrfs also has "a number of the same design ideas that reiser3/4 had".[32] However, ext4 has continued to gain new features such as file encryption and metadata checksums. The ext4 file system does not honor the "secure deletion" file attribute, which is supposed to cause overwriting of files upon deletion. A patch to implement secure deletion was proposed in 2011, but did not solve the problem of sensitive data ending up in the file-system journal.[33] Because delayed allocation changes the behavior that programmers have been relying on with ext3, the feature poses some additional risk of data loss in cases where the system crashes or loses power before all of the data has been written to disk. Due to this, ext4 in kernel versions 2.6.30 and later automatically handles these cases as ext3 does. The typical scenario in which this might occur is a program replacing the contents of a file without forcing a write to the disk with fsync. There are two common ways of replacing the contents of a file on Unix systems: writing a new temporary file and renaming it over the original, or truncating the existing file and rewriting it in place.[34] Using fsync() more often to reduce the risk of data loss on ext4 could lead to performance penalties on ext3 filesystems mounted with the data=ordered flag (the default on most Linux distributions). Given that both file systems will be in use for some time, this complicates matters for end-user application developers. In response, ext4 in Linux kernels 2.6.30 and newer detects the occurrence of these common cases and forces the files to be allocated immediately.
For a small cost in performance, this provides semantics similar to ext3 ordered mode and increases the chance that either version of the file will survive the crash. This new behavior is enabled by default, but can be disabled with the "noauto_da_alloc" mount option.[34] The new patches became part of the mainline kernel 2.6.30, but various distributions chose to backport them to 2.6.28 or 2.6.29.[35] These patches do not completely prevent potential data loss, and do not help at all with new files. The only way to be safe is to write and use software that calls fsync() when it needs to. Performance problems can be minimized by limiting crucial disk writes that need fsync() to occur less frequently.[36] The Linux kernel Virtual File System is a subsystem or layer inside the Linux kernel. It is the result of an attempt to integrate multiple file systems into an orderly single structure. The key idea, which dates back to the pioneering work done by Sun Microsystems employees in 1986,[37] is to abstract out the part of the file system that is common to all file systems and put that code in a separate layer that calls the underlying concrete file systems to actually manage the data. All system calls related to files (or pseudo files) are directed to the Linux kernel Virtual File System for initial processing. These calls, coming from user processes, are the standard POSIX calls, such as open, read, write, lseek, etc. Although designed for and primarily used with Linux, an ext4 file system can be accessed from other operating systems via interoperability tools. Windows provides access via its Windows Subsystem for Linux (WSL) technology. Specifically, the second major version, WSL 2, is the first version with ext4 support. It was first released in Windows 10 Insider Preview Build 20211.[38][39][40][41] WSL 2 requires Windows 10 version 1903 or higher, with build 18362 or higher, for x64 systems, and version 2004 or higher, with build 19041 or higher, for ARM64 systems.[42] Paragon Software offers commercial products that provide full read/write access for ext2/3/4 – Linux File Systems for Windows[43] and extFS for Mac.[44] The free software ext4fuse provides limited (read-only) support. The ext4 filesystem divides the partition on which it resides into smaller chunks called blocks (a group of sectors, usually between 1 KiB and 64 KiB). By default, the block size is the same as the page size (4 KiB), but it can be configured with mkfs during filesystem creation. Blocks are grouped into larger chunks called block groups. The superblock is the heart of the filesystem; it resides in only one block of the disk.[45] It is usually the first item in a block group, except for group 0, where the first few bytes are reserved for the boot sector. The superblock is vital for the filesystem – as such, backup copies are written across the partition at filesystem creation time, so it can be recovered in case of corruption. The group descriptor table (GDT) comes second, after the superblock. The GDT stores the block group descriptor of each block group on the filesystem and may reside on more than one block on disk. Each group descriptor is 64 bytes in size. This structure is also vital for the filesystem; as such, redundant backups are stored across the filesystem. The block bitmap tracks the usage status of all blocks of a block group. Each bit in the bitmap represents a block: if a block is in use, its corresponding bit is set, otherwise it is unset. The location of the block bitmap is not fixed, so its position is stored in the respective block group descriptor.
Similar to the Block bitmap, the Inode bitmap's location is also not fixed, therefore the group descriptor points to the location of the Inode bitmap. The Inode bitmap tracks usage of inodes. Each bit in the bitmap represents an inode. If an inode is in use then its corresponding bit in Inode bitmap will be set, otherwise it will be unset. Each block group is represented by its block group descriptor. It has vital information for the block group like freeinodes, free blocks and the location of inode bitmap, block bitmap and the inode table of that particular block group. Ext4 introduced flexible block groups. Inflex_bg, several block groups are grouped into one logical block group. Block bitmap and inode bitmap of first block group are expanded to include the bitmap and the inode table of other block groups.
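As a rough illustration of the block-group arithmetic described above, the following Python sketch assumes the simplification that one block bitmap (one bit per block) limits a group to block_size × 8 blocks; it does no parsing of real ext4 metadata.

# Illustrative arithmetic only, not an ext4 parser.
BLOCK_SIZE = 4096                      # default block size in bytes
BLOCKS_PER_GROUP = BLOCK_SIZE * 8      # one bitmap block => 32768 blocks per group

def locate_block(block_no: int):
    """Return (block_group, index_within_group) for an absolute block number."""
    return divmod(block_no, BLOCKS_PER_GROUP)

def group_byte_range(group_no: int):
    """Byte range on the partition covered by a block group."""
    start = group_no * BLOCKS_PER_GROUP * BLOCK_SIZE
    return start, start + BLOCKS_PER_GROUP * BLOCK_SIZE

if __name__ == "__main__":
    print(locate_block(100_000))       # -> (3, 1696)
    print(group_byte_range(3))         # -> (402653184, 536870912)

With the default 4 KiB block size this works out to 32,768 blocks, i.e. 128 MiB, per block group.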
https://en.wikipedia.org/wiki/Ext4
Incomputing, afile systemorfilesystem(often abbreviated toFSorfs) governsfileorganization and access. Alocalfile system is a capability of anoperating systemthat services the applications running on the samecomputer.[1][2]Adistributed file systemis aprotocolthat provides file access betweennetworkedcomputers. A file system provides adata storageservicethat allowsapplicationsto sharemass storage. Without a file system, applications could access the storage inincompatibleways that lead toresource contention,data corruptionanddata loss. There are many file systemdesignsandimplementations– with various structure and features and various resulting characteristics such as speed, flexibility, security, size and more. File systems have been developed for many types ofstorage devices, includinghard disk drives(HDDs),solid-state drives(SSDs),magnetic tapesandoptical discs.[3] A portion of the computermain memorycan be set up as aRAM diskthat serves as a storage device for a file system. File systems such astmpfscan store files invirtual memory. Avirtualfile system provides access to files that are either computed on request, calledvirtual files(seeprocfsandsysfs), or are mapping into another, backing storage. Fromc.1900and before the advent of computers the termsfile system,filing systemandsystem for filingwere used to describe methods of organizing, storing and retrieving paper documents.[4]By 1961, the termfile systemwas being applied to computerized filing alongside the original meaning.[5]By 1964, it was in general use.[6] A local file system'sarchitecturecan be described aslayers of abstractioneven though a particular file system design may not actually separate the concepts.[7] Thelogical file systemlayer provides relatively high-level access via anapplication programming interface(API) for file operations including open, close, read and write – delegating operations to lower layers. This layer manages open file table entries and per-process file descriptors.[8]It provides file access, directory operations, security and protection.[7] Thevirtual file system, an optional layer, supports multiple concurrent instances of physical file systems, each of which is called a file system implementation.[8] Thephysical file systemlayer provides relatively low-level access to a storage device (e.g. disk). It reads and writesdata blocks, providesbufferingand othermemory managementand controls placement of blocks in specific locations on the storage medium. This layer usesdevice driversorchannel I/Oto drive the storage device.[7] Afile name, orfilename, identifies a file to consuming applications and in some cases users. A file name is unique so that an application can refer to exactly one file for a particular name. If the file system supports directories, then generally file name uniqueness is enforced within the context of each directory. In other words, a storage can contain multiple files with the same name, but not in the same directory. Most file systems restrict the length of a file name. Some file systems match file names ascase sensitiveand others as case insensitive. For example, the namesMYFILEandmyfilematch the same file for case insensitive, but different files for case sensitive. Most modern file systems allow a file name to contain a wide range of characters from theUnicodecharacter set. Some restrict characters such as those used to indicate special attributes such as a device, device type, directory prefix, file path separator, or file type. 
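A toy Python illustration of the case-sensitivity distinction above; real file systems implement this in their directory-lookup code, often with their own Unicode folding rules, so casefold() is only an approximation.

def names_match(a: str, b: str, case_sensitive: bool) -> bool:
    # Case-sensitive lookup compares names exactly.
    if case_sensitive:
        return a == b
    # Case-insensitive lookup folds case before comparing.
    return a.casefold() == b.casefold()

print(names_match("MYFILE", "myfile", case_sensitive=True))   # False
print(names_match("MYFILE", "myfile", case_sensitive=False))  # True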
File systems typically support organizing files intodirectories, also calledfolders, which segregate files into groups. This may be implemented by associating the file name with an index in atable of contentsor aninodein aUnix-likefile system. Directory structures may be flat (i.e. linear), or allow hierarchies by allowing a directory to contain directories, called subdirectories. The first file system to support arbitrary hierarchies of directories was used in theMulticsoperating system.[9]The native file systems of Unix-like systems also support arbitrary directory hierarchies, as do,Apple'sHierarchical File Systemand its successorHFS+inclassic Mac OS, theFATfile system inMS-DOS2.0 and later versions of MS-DOS and inMicrosoft Windows, theNTFSfile system in theWindows NTfamily of operating systems, and the ODS-2 (On-Disk Structure-2) and higher levels of theFiles-11file system inOpenVMS. In addition to data, the file content, a file system also manages associatedmetadatawhich may include but is not limited to: A file system stores associated metadata separate from the content of the file. Most file systems store the names of all the files in one directory in one place—the directory table for that directory—which is often stored like any other file. Many file systems put only some of the metadata for a file in the directory table, and the rest of the metadata for that file in a completely separate structure, such as theinode. Most file systems also store metadata not associated with any one particular file. Such metadata includes information about unused regions—free space bitmap,block availability map—and information aboutbad sectors. Often such information about anallocation groupis stored inside the allocation group itself. Additional attributes can be associated on file systems, such asNTFS,XFS,ext2,ext3, some versions ofUFS, andHFS+, usingextended file attributes. Some file systems provide for user defined attributes such as the author of the document, the character encoding of a document or the size of an image. Some file systems allow for different data collections to be associated with one file name. These separate collections may be referred to asstreamsorforks. Apple has long used a forked file system on the Macintosh, and Microsoft supports streams in NTFS. Some file systems maintain multiple past revisions of a file under a single file name; the file name by itself retrieves the most recent version, while prior saved version can be accessed using a special naming convention such as "filename;4" or "filename(-4)" to access the version four saves ago. Seecomparison of file systems § Metadatafor details on which file systems support which kinds of metadata. A local file system tracks which areas of storage belong to which file and which are not being used. When a file system creates a file, it allocates space for data. Some file systems permit or require specifying an initial space allocation and subsequent incremental allocations as the file grows. To delete a file, the file system records that the file's space is free; available to use for another file. A local file system manages storage space to provide a level of reliability and efficiency. Generally, it allocates storage device space in a granular manner, usually multiple physical units (i.e.bytes). 
For example, inApple DOSof the early 1980s, 256-byte sectors on 140 kilobyte floppy disk used atrack/sector map.[citation needed] The granular nature results in unused space, sometimes calledslack space, for each file except for those that have the rare size that is a multiple of the granular allocation.[10]For a 512-byte allocation, the average unused space is 256 bytes. For 64 KB clusters, the average unused space is 32 KB. Generally, the allocation unit size is set when the storage is configured. Choosing a relatively small size compared to the files stored, results in excessive access overhead. Choosing a relatively large size results in excessive unused space. Choosing an allocation size based on the average size of files expected to be in the storage tends to minimize unusable space. As a file system creates, modifies and deletes files, the underlying storage representation may becomefragmented. Files and the unused space between files will occupy allocation blocks that are not contiguous. A file becomes fragmented if space needed to store its content cannot be allocated in contiguous blocks. Free space becomes fragmented when files are deleted.[11] This is invisible to the end user and the system still works correctly. However this can degrade performance on some storage hardware that work better with contiguous blocks such ashard disk drives. Other hardware such assolid-state drivesare not affected by fragmentation. A file system often supports access control of data that it manages. The intent of access control is often to prevent certain users from reading or modifying certain files. Access control can also restrict access by program in order to ensure that data is modified in a controlled way. Examples include passwords stored in the metadata of the file or elsewhere andfile permissionsin the form of permission bits,access control lists, orcapabilities. The need for file system utilities to be able to access the data at the media level to reorganize the structures and provide efficient backup usually means that these are only effective for polite users but are not effective against intruders. Methods for encrypting file data are sometimes included in the file system. This is very effective since there is no need for file system utilities to know the encryption seed to effectively manage the data. The risks of relying on encryption include the fact that an attacker can copy the data and use brute force to decrypt the data. Additionally, losing the seed means losing the data. Some operating systems allow a system administrator to enabledisk quotasto limit a user's use of storage space. A file system typically ensures that stored data remains consistent in both normal operations as well as exceptional situations like: Recovery from exceptional situations may include updating metadata, directory entries and handling data that was buffered but not written to storage media. A file system might record events to allow analysis of issues such as: Many file systems access data as a stream ofbytes. Typically, to read file data, a program provides amemory bufferand the file system retrieves data from the medium and then writes the data to the buffer. A write involves the program providing a buffer of bytes that the file system reads and then stores to the medium. Some file systems, or layers on top of a file system, allow a program to define arecordso that a program can read and write data as a structure; not an unorganized sequence of bytes. 
If afixed lengthrecord definition is used, then locating the nthrecord can be calculated mathematically, which is relatively fast compared to parsing the data for record separators. An identification for each record, also known as a key, allows a program to read, write and update records without regard to their location in storage. Such storage requires managing blocks of media, usually separating key blocks and data blocks. Efficient algorithms can be developed with pyramid structures for locating records.[12] Typically, a file system can be managed by the user via various utility programs. Some utilities allow the user to create, configure and remove an instance of a file system. It may allow extending or truncating the space allocated to the file system. Directory utilities may be used to create, rename and deletedirectory entries, which are also known asdentries(singular:dentry),[13]and to alter metadata associated with a directory. Directory utilities may also include capabilities to create additional links to a directory (hard linksinUnix), to rename parent links (".." inUnix-likeoperating systems),[clarification needed]and to create bidirectional links to files. File utilities create, list, copy, move and delete files, and alter metadata. They may be able to truncate data, truncate or extend space allocation, append to, move, and modify files in-place. Depending on the underlying structure of the file system, they may provide a mechanism to prepend to or truncate from the beginning of a file, insert entries into the middle of a file, or delete entries from a file. Utilities to free space for deleted files, if the file system provides an undelete function, also belong to this category. Some file systems defer operations such as reorganization of free space, secure erasing of free space, and rebuilding of hierarchical structures by providing utilities to perform these functions at times of minimal activity. An example is the file systemdefragmentationutilities. Some of the most important features of file system utilities are supervisory activities which may involve bypassing ownership or direct access to the underlying device. These include high-performance backup and recovery, data replication, and reorganization of various data structures and allocation tables within the file system. Utilities, libraries and programs usefile system APIsto make requests of the file system. These include data transfer, positioning, updating metadata, managing directories, managing access specifications, and removal. Frequently, retail systems are configured with a single file system occupying the entirestorage device. Another approach is topartitionthe disk so that several file systems with different attributes can be used. One file system, for use as browser cache or email storage, might be configured with a small allocation size. This keeps the activity of creating and deleting files typical of browser activity in a narrow area of the disk where it will not interfere with other file allocations. Another partition might be created for the storage of audio or video files with a relatively large block size. Yet another may normally be setread-onlyand only periodically be set writable. 
Some file systems, such asZFSandAPFS, support multiple file systems sharing a common pool of free blocks, supporting several file systems with different attributes without having to reserved a fixed amount of space for each file system.[14][15] A third approach, which is mostly used in cloud systems, is to use "disk images" to house additional file systems, with the same attributes or not, within another (host) file system as a file. A common example is virtualization: one user can run an experimental Linux distribution (using theext4file system) in a virtual machine under his/her production Windows environment (usingNTFS). The ext4 file system resides in a disk image, which is treated as a file (or multiple files, depending on thehypervisorand settings) in the NTFS host file system. Having multiple file systems on a single system has the additional benefit that in the event of a corruption of a single file system, the remaining file systems will frequently still be intact. This includes virus destruction of thesystemfile system or even a system that will not boot. File system utilities which require dedicated access can be effectively completed piecemeal. In addition,defragmentationmay be more effective. Several system maintenance utilities, such as virus scans and backups, can also be processed in segments. For example, it is not necessary to backup the file system containing videos along with all the other files if none have been added since the last backup. As for the image files, one can easily "spin off" differential images which contain only "new" data written to the master (original) image. Differential images can be used for both safety concerns (as a "disposable" system - can be quickly restored if destroyed or contaminated by a virus, as the old image can be removed and a new image can be created in matter of seconds, even without automated procedures) and quick virtual machine deployment (since the differential images can be quickly spawned using a script in batches). Adisk file systemtakes advantages of the ability of disk storage media to randomly address data in a short amount of time. Additional considerations include the speed of accessing data following that initially requested and the anticipation that the following data may also be requested. This permits multiple users (or processes) access to various data on the disk without regard to the sequential location of the data. Examples includeFAT(FAT12,FAT16,FAT32),exFAT,NTFS,ReFS,HFSandHFS+,HPFS,APFS,UFS,ext2,ext3,ext4,XFS,btrfs,Files-11,Veritas File System,VMFS,ZFS,ReiserFS,NSSand ScoutFS. Some disk file systems arejournaling file systemsorversioning file systems. ISO 9660andUniversal Disk Format(UDF) are two common formats that targetCompact Discs,DVDsandBlu-raydiscs.Mount Rainieris an extension to UDF supported since 2.6 series of the Linux kernel and since Windows Vista that facilitates rewriting to DVDs. Aflash file systemconsiders the special abilities, performance and restrictions offlash memorydevices. Frequently a disk file system can use a flash memory device as the underlying storage media, but it is much better to use a file system specifically designed for a flash device.[16] Atape file systemis a file system and tape format designed to store files on tape.Magnetic tapesare sequential storage media with significantly longer random data access times than disks, posing challenges to the creation and efficient management of a general-purpose file system. 
In a disk file system there is typically a master file directory, and a map of used and free data regions. Any file additions, changes, or removals require updating the directory and the used/free maps. Random access to data regions is measured in milliseconds so this system works well for disks. Tape requires linear motion to wind and unwind potentially very long reels of media. This tape motion may take several seconds to several minutes to move the read/write head from one end of the tape to the other. Consequently, a master file directory and usage map can be extremely slow and inefficient with tape. Writing typically involves reading the block usage map to find free blocks for writing, updating the usage map and directory to add the data, and then advancing the tape to write the data in the correct spot. Each additional file write requires updating the map and directory and writing the data, which may take several seconds to occur for each file. Tape file systems instead typically allow for the file directory to be spread across the tape intermixed with the data, referred to asstreaming, so that time-consuming and repeated tape motions are not required to write new data. However, a side effect of this design is that reading the file directory of a tape usually requires scanning the entire tape to read all the scattered directory entries. Most data archiving software that works with tape storage will store a local copy of the tape catalog on a disk file system, so that adding files to a tape can be done quickly without having to rescan the tape media. The local tape catalog copy is usually discarded if not used for a specified period of time, at which point the tape must be re-scanned if it is to be used in the future. IBM has developed a file system for tape called theLinear Tape File System. The IBM implementation of this file system has been released as the open-sourceIBM Linear Tape File System — Single Drive Edition (LTFS-SDE)product. The Linear Tape File System uses a separate partition on the tape to record the index meta-data, thereby avoiding the problems associated with scattering directory entries across the entire tape. Writing data to a tape, erasing, or formatting a tape is often a significantly time-consuming process and can take several hours on large tapes.[a]With many data tape technologies it is not necessary to format the tape before over-writing new data to the tape. This is due to the inherently destructive nature of overwriting data on sequential media. Because of the time it can take to format a tape, typically tapes are pre-formatted so that the tape user does not need to spend time preparing each new tape for use. All that is usually necessary is to write an identifying media label to the tape before use, and even this can be automatically written by software when a new tape is used for the first time. Another concept for file management is the idea of a database-based file system. Instead of, or in addition to, hierarchical structured management, files are identified by their characteristics, like type of file, topic, author, or similarrich metadata.[17] IBM DB2 for i[18](formerly known as DB2/400 and DB2 for i5/OS) is a database file system as part of the object basedIBM i[19]operating system (formerly known as OS/400 and i5/OS), incorporating asingle level storeand running on IBM Power Systems (formerly known as AS/400 and iSeries), designed by Frank G. Soltis IBM's former chief scientist for IBM i. Around 1978 to 1988 Frank G. 
Soltis and his team at IBM Rochester had successfully designed and applied technologies like the database file system where others like Microsoft later failed to accomplish.[20]These technologies are informally known as 'Fortress Rochester'[citation needed]and were in few basic aspects extended from early Mainframe technologies but in many ways more advanced from a technological perspective[citation needed]. Some other projects that are not "pure" database file systems but that use some aspects of a database file system: Some programs need to either make multiple file system changes, or, if one or more of the changes fail for any reason, make none of the changes. For example, a program which is installing or updating software may write executables, libraries, and/or configuration files. If some of the writing fails and the software is left partially installed or updated, the software may be broken or unusable. An incomplete update of a key system utility, such as the commandshell, may leave the entire system in an unusable state. Transaction processingintroduces theatomicityguarantee, ensuring that operations inside of a transaction are either all committed or the transaction can be aborted and the system discards all of its partial results. This means that if there is a crash or power failure, after recovery, the stored state will be consistent. Either the software will be completely installed or the failed installation will be completely rolled back, but an unusable partial install will not be left on the system. Transactions also provide theisolationguarantee[clarification needed], meaning that operations within a transaction are hidden from other threads on the system until the transaction commits, and that interfering operations on the system will be properlyserializedwith the transaction. Windows, beginning with Vista, added transaction support toNTFS, in a feature calledTransactional NTFS, but its use is now discouraged.[21]There are a number of research prototypes of transactional file systems for UNIX systems, including the Valor file system,[22]Amino,[23]LFS,[24]and a transactionalext3file system on the TxOS kernel,[25]as well as transactional file systems targeting embedded systems, such as TFFS.[26] Ensuring consistency across multiple file system operations is difficult, if not impossible, without file system transactions.File lockingcan be used as aconcurrency controlmechanism for individual files, but it typically does not protect the directory structure or file metadata. For instance, file locking cannot preventTOCTTOUrace conditions on symbolic links. File locking also cannot automatically roll back a failed operation, such as a software upgrade; this requires atomicity. Journaling file systemsis one technique used to introduce transaction-level consistency to file system structures. Journal transactions are not exposed to programs as part of the OS API; they are only used internally to ensure consistency at the granularity of a single system call. Data backup systems typically do not provide support for direct backup of data stored in a transactional manner, which makes the recovery of reliable and consistent data sets difficult. Most backup software simply notes what files have changed since a certain time, regardless of the transactional state shared across multiple files in the overall dataset. 
As a workaround, some database systems simply produce an archived state file containing all data up to that point, and the backup software only backs that up and does not interact directly with the active transactional databases at all. Recovery requires separate recreation of the database from the state file after the file has been restored by the backup software. Anetwork file systemis a file system that acts as a client for a remote file access protocol, providing access to files on a server. Programs using local interfaces can transparently create, manage and access hierarchical directories and files in remote network-connected computers. Examples of network file systems include clients for theNFS,[27]AFS,SMBprotocols, and file-system-like clients forFTPandWebDAV. Ashared disk file systemis one in which a number of machines (usually servers) all have access to the same external disk subsystem (usually astorage area network). The file system arbitrates access to that subsystem, preventing write collisions.[28]Examples includeGFS2fromRed Hat,GPFS, now known as Spectrum Scale, from IBM,SFSfrom DataPlow,CXFSfromSGI,StorNextfromQuantum Corporationand ScoutFS from Versity. Some file systems expose elements of the operating system as files so they can be acted on via thefile system API. This is common inUnix-likeoperating systems, and to a lesser extent in other operating systems. Examples include: In the 1970s disk and digital tape devices were too expensive for some earlymicrocomputerusers. An inexpensive basic data storage system was devised that used commonaudio cassettetape. When the system needed to write data, the user was notified to press "RECORD" on the cassette recorder, then press "RETURN" on the keyboard to notify the system that the cassette recorder was recording. The system wrote a sound to provide time synchronization, thenmodulated soundsthat encoded a prefix, the data, achecksumand a suffix. When the system needed to read data, the user was instructed to press "PLAY" on the cassette recorder. The system wouldlistento the sounds on the tape waiting until a burst of sound could be recognized as the synchronization. The system would then interpret subsequent sounds as data. When the data read was complete, the system would notify the user to press "STOP" on the cassette recorder. It was primitive, but it (mostly) worked. Data was stored sequentially, usually in an unnamed format, although some systems (such as theCommodore PETseries of computers) did allow the files to be named. Multiple sets of data could be written and located by fast-forwarding the tape and observing at the tape counter to find the approximate start of the next data region on the tape. The user might have to listen to the sounds to find the right spot to begin playing the next data region. Some implementations even included audible sounds interspersed with the data. In a flat file system, there are nosubdirectories; directory entries for all files are stored in a single directory. Whenfloppy diskmedia was first available this type of file system was adequate due to the relatively small amount of data space available.CP/Mmachines featured a flat file system, where files could be assigned to one of 16user areasand generic file operations narrowed to work on one instead of defaulting to work on all of them. 
These user areas were no more than special attributes associated with the files; that is, it was not necessary to define specificquotafor each of these areas and files could be added to groups for as long as there was still free storage space on the disk. The earlyApple Macintoshalso featured a flat file system, theMacintosh File System. It was unusual in that the file management program (Macintosh Finder) created the illusion of a partially hierarchical filing system on top of EMFS. This structure required every file to have a unique name, even if it appeared to be in a separate folder.IBMDOS/360andOS/360store entries for all files on a disk pack (volume) in a directory on the pack called aVolume Table of Contents(VTOC). While simple, flat file systems become awkward as the number of files grows and makes it difficult to organize data into related groups of files. A recent addition to the flat file system family isAmazon'sS3, a remote storage service, which is intentionally simplistic to allow users the ability to customize how their data is stored. The only constructs are buckets (imagine a disk drive of unlimited size) and objects (similar, but not identical to the standard concept of a file). Advanced file management is allowed by being able to use nearly any character (including '/') in the object's name, and the ability to select subsets of the bucket's content based on identical prefixes. Anoperating system(OS) typically supports one or more file systems. Sometimes an OS and its file system are so tightly interwoven that it is difficult to describe them independently. An OS typically provides file system access to the user. Often an OS providescommand line interface, such asUnix shell, WindowsCommand PromptandPowerShell, andOpenVMS DCL. An OS often also providesgraphical user interfacefile browserssuch as MacOSFinderand WindowsFile Explorer. Unix-likeoperating systems create a virtual file system, which makes all the files on all the devices appear to exist in a single hierarchy. This means, in those systems, there is oneroot directory, and every file existing on the system is located under it somewhere. Unix-like systems can use aRAM diskor network shared resource as its root directory. Unix-like systems assign a device name to each device, but this is not how the files on that device are accessed. Instead, to gain access to files on another device, the operating system must first be informed where in the directory tree those files should appear. This process is calledmountinga file system. For example, to access the files on aCD-ROM, one must tell the operating system "Take the file system from this CD-ROM and make it appear under such-and-such directory." The directory given to the operating system is called themount point– it might, for example, be/media. The/mediadirectory exists on many Unix systems (as specified in theFilesystem Hierarchy Standard) and is intended specifically for use as a mount point for removable media such as CDs, DVDs, USB drives or floppy disks. It may be empty, or it may contain subdirectories for mounting individual devices. Generally, only theadministrator(i.e.root user) may authorize the mounting of file systems. Unix-likeoperating systems often include software and tools that assist in the mounting process and provide it new functionality. Some of these strategies have been coined "auto-mounting" as a reflection of their purpose. 
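The following Python sketch models mount-point resolution in a single rooted hierarchy as described above. The mount table is hypothetical, and real kernels walk parsed path components rather than doing string prefix matching, but the longest-prefix idea is the same.

import posixpath

MOUNTS = {
    "/": "ext4 on /dev/sda2",
    "/boot": "ext2 on /dev/sda1",
    "/media/cdrom": "iso9660 on /dev/sr0",
}

def filesystem_for(path: str) -> str:
    # Pick the longest mount point that is a prefix of the path.
    path = posixpath.normpath(path)
    best = "/"
    for mount_point in MOUNTS:
        if path == mount_point or path.startswith(mount_point.rstrip("/") + "/"):
            if len(mount_point) > len(best):
                best = mount_point
    return MOUNTS[best]

print(filesystem_for("/media/cdrom/docs/readme.txt"))  # iso9660 on /dev/sr0
print(filesystem_for("/boot/vmlinuz"))                 # ext2 on /dev/sda1
print(filesystem_for("/home/user"))                    # ext4 on /dev/sda2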
Linuxsupports numerous file systems, but common choices for the system disk on a block device include the ext* family (ext2,ext3andext4),XFS,JFS, andbtrfs. For raw flash without aflash translation layer(FTL) orMemory Technology Device(MTD), there areUBIFS,JFFS2andYAFFS, among others.SquashFSis a common compressed read-only file system. Solarisin earlier releases defaulted to (non-journaled or non-logging)UFSfor bootable and supplementary file systems. Solaris defaulted to, supported, and extended UFS. Support for other file systems and significant enhancements were added over time, includingVeritas SoftwareCorp. (journaling)VxFS, Sun Microsystems (clustering)QFS, Sun Microsystems (journaling) UFS, and Sun Microsystems (open source, poolable, 128 bit compressible, and error-correcting)ZFS. Kernel extensions were added to Solaris to allow for bootable VeritasVxFSoperation. Logging orjournalingwas added to UFS in Sun'sSolaris 7. Releases ofSolaris 10, Solaris Express,OpenSolaris, and other open source variants of the Solaris operating system later supported bootableZFS. Logical Volume Managementallows for spanning a file system across multiple devices for the purpose of adding redundancy, capacity, and/or throughput. Legacy environments in Solaris may useSolaris Volume Manager(formerly known asSolstice DiskSuite). Multiple operating systems (including Solaris) may useVeritas Volume Manager. Modern Solaris based operating systems eclipse the need for volume management through leveraging virtual storage pools inZFS. macOS (formerly Mac OS X)uses theApple File System(APFS), which in 2017 replaced a file system inherited fromclassic Mac OScalledHFS Plus(HFS+). Apple also uses the term "Mac OS Extended" for HFS+.[29]HFS Plus is ametadata-rich andcase-preservingbut (usually)case-insensitivefile system. Due to the Unix roots of macOS, Unix permissions were added to HFS Plus. Later versions of HFS Plus added journaling to prevent corruption of the file system structure and introduced a number of optimizations to the allocation algorithms in an attempt to defragment files automatically without requiring an external defragmenter. File names can be up to 255 characters. HFS Plus usesUnicodeto store file names. On macOS, thefiletypecan come from thetype code, stored in file's metadata, or thefilename extension. HFS Plus has three kinds of links: Unix-stylehard links, Unix-stylesymbolic links, andaliases. Aliases are designed to maintain a link to their original file even if they are moved or renamed; they are not interpreted by the file system itself, but by the File Manager code inuserland. macOS 10.13 High Sierra, which was announced on June 5, 2017, at Apple's WWDC event, uses theApple File Systemonsolid-state drives. macOS also supported theUFSfile system, derived from theBSDUnix Fast File System viaNeXTSTEP. However, as ofMac OS X Leopard, macOS could no longer be installed on a UFS volume, nor can a pre-Leopard system installed on a UFS volume be upgraded to Leopard.[30]As ofMac OS X LionUFS support was completely dropped. Newer versions of macOS are capable of reading and writing to the legacyFATfile systems (16 and 32) common on Windows. They are also capable ofreadingthe newerNTFSfile systems for Windows. In order towriteto NTFS file systems on macOS versions prior toMac OS X Snow Leopardthird-party software is necessary. 
Mac OS X 10.6 (Snow Leopard) and later allow writing to NTFS file systems, but only after a non-trivial system setting change (third-party software exists that automates this).[31] Finally, macOS supports reading and writing of theexFATfile system since Mac OS X Snow Leopard, starting from version 10.6.5.[32] OS/21.2 introduced theHigh Performance File System(HPFS). HPFS supports mixed case file names in differentcode pages, long file names (255 characters), more efficient use of disk space, an architecture that keeps related items close to each other on the disk volume, less fragmentation of data,extent-basedspace allocation, aB+ treestructure for directories, and the root directory located at the midpoint of the disk, for faster average access. Ajournaled filesystem(JFS) was shipped in 1999. PC-BSDis a desktop version of FreeBSD, which inheritsFreeBSD'sZFSsupport, similarly toFreeNAS. The new graphical installer ofPC-BSDcan handle/ (root) on ZFSandRAID-Zpool installs anddisk encryptionusingGeliright from the start in an easy convenient (GUI) way. The current PC-BSD 9.0+ 'Isotope Edition' has ZFS filesystem version 5 and ZFS storage pool version 28. Plan 9 from Bell Labstreats everything as a file and accesses all objects as a file would be accessed (i.e., there is noioctlormmap): networking, graphics, debugging, authentication, capabilities, encryption, and other services are accessed via I/O operations onfile descriptors. The9Pprotocol removes the difference between local and remote files. File systems in Plan 9 are organized with the help of private, per-process namespaces, allowing each process to have a different view of the many file systems that provide resources in a distributed system. TheInfernooperating system shares these concepts with Plan 9. Windows makes use of theFAT,NTFS,exFAT,Live File SystemandReFSfile systems (the last of these is only supported and usable inWindows Server 2012,Windows Server 2016,Windows 8,Windows 8.1, andWindows 10; Windows cannot boot from it). Windows uses adrive letterabstraction at the user level to distinguish one disk or partition from another. For example, thepathC:\WINDOWSrepresents a directoryWINDOWSon the partition represented by the letter C. Drive C: is most commonly used for the primaryhard disk drivepartition, on which Windows is usually installed and from which it boots. This "tradition" has become so firmly ingrained that bugs exist in many applications which make assumptions that the drive that the operating system is installed on is C. The use of drive letters, and the tradition of using "C" as the drive letter for the primary hard disk drive partition, can be traced toMS-DOS, where the letters A and B were reserved for up to two floppy disk drives. This in turn derived fromCP/Min the 1970s, and ultimately from IBM'sCP/CMSof 1967. The family ofFATfile systems is supported by almost all operating systems for personal computers, including all versions ofWindowsandMS-DOS/PC DOS,OS/2, andDR-DOS. (PC DOS is an OEM version of MS-DOS, MS-DOS was originally based onSCP's86-DOS. DR-DOS was based onDigital Research'sConcurrent DOS, a successor ofCP/M-86.) The FAT file systems are therefore well-suited as a universal exchange format between computers and devices of most any type and age. The FAT file system traces its roots back to an (incompatible) 8-bit FAT precursor inStandalone Disk BASICand the short-livedMDOS/MIDASproject.[citation needed] Over the years, the file system has been expanded fromFAT12toFAT16andFAT32. 
Various features have been added to the file system includingsubdirectories,codepagesupport,extended attributes, andlong filenames. Third parties such as Digital Research have incorporated optional support for deletion tracking, and volume/directory/file-based multi-user security schemes to support file and directory passwords and permissions such as read/write/execute/delete access rights. Most of these extensions are not supported by Windows. The FAT12 and FAT16 file systems had a limit on the number of entries in theroot directoryof the file system and had restrictions on the maximum size of FAT-formatted disks orpartitions. FAT32 addresses the limitations in FAT12 and FAT16, except for the file size limit of close to 4 GB, but it remains limited compared to NTFS. FAT12, FAT16 and FAT32 also have a limit of eight characters for the file name, and three characters for the extension (such as.exe). This is commonly referred to as the8.3 filenamelimit.VFAT, an optional extension to FAT12, FAT16 and FAT32, introduced inWindows 95andWindows NT 3.5, allowed long file names (LFN) to be stored in the FAT file system in a backwards compatible fashion. NTFS, introduced with theWindows NToperating system in 1993, allowedACL-based permission control. Other features also supported byNTFSinclude hard links, multiple file streams, attribute indexing, quota tracking, sparse files, encryption, compression, and reparse points (directories working as mount-points for other file systems, symlinks, junctions, remote storage links). exFAThas certain advantages over NTFS with regard tofile system overhead.[citation needed] exFAT is not backward compatible with FAT file systems such as FAT12, FAT16 or FAT32. The file system is supported with newer Windows systems, such as Windows XP, Windows Server 2003, Windows Vista, Windows 2008, Windows 7, Windows 8, Windows 8.1, Windows 10 and Windows 11. exFAT is supported in macOS starting with version 10.6.5 (Snow Leopard).[32]Support in other operating systems is sparse since implementing support for exFAT requires a license. exFAT is the only file system that is fully supported on both macOS and Windows that can hold files larger than 4 GB.[33][34] Prior to the introduction ofVSAM,OS/360systems implemented a hybrid file system. The system was designed to easily supportremovable disk packs, so the information relating to all files on one disk (volumein IBM terminology) is stored on that disk in aflat system filecalled theVolume Table of Contents(VTOC). The VTOC stores all metadata for the file. Later a hierarchical directory structure was imposed with the introduction of theSystem Catalog, which can optionally catalog files (datasets) on resident and removable volumes. The catalog only contains information to relate a dataset to a specific volume. If the user requests access to a dataset on an offline volume, and they have suitable privileges, the system will attempt to mount the required volume. Cataloged and non-cataloged datasets can still be accessed using information in the VTOC, bypassing the catalog, if the required volume id is provided to the OPEN request. Still later the VTOC was indexed to speed up access. The IBMConversational Monitor System(CMS) component ofVM/370uses a separate flat file system for eachvirtual disk(minidisk). File data and control information are scattered and intermixed. The anchor is a record called theMaster File Directory(MFD), always located in the fourth block on the disk. 
Originally CMS used fixed-length 800-byte blocks, but later versions used larger size blocks up to 4K. Access to a data record requires two levels ofindirection, where the file's directory entry (called aFile Status Table(FST) entry) points to blocks containing a list of addresses of the individual records. Data on the AS/400 and its successors consists of system objects mapped into the system virtual address space in asingle-level store. Many types ofobjectsare defined including the directories and files found in other file systems. File objects, along with other types of objects, form the basis of the AS/400's support for an integratedrelational database. File systems limitstorable data capacity– generally driven by the typical size of storage devices at the time the file system is designed and anticipated into the foreseeable future. Since storage sizes have increased at nearexponentialrate (seeMoore's law), newer storage devices often exceed existing file system limits within only a few years after introduction. This requires new file systems with ever increasing capacity. With higher capacity, the need for capabilities and therefore complexity increases as well. File system complexity typically varies proportionally with available storage capacity. Capacity issues aside, the file systems of early 1980shome computerswith 50 KB to 512 KB of storage would not be a reasonable choice for modern storage systems with hundreds of gigabytes of capacity. Likewise, modern file systems would not be a reasonable choice for these early systems, since the complexity of modern file system structures would quickly consume the limited capacity of early storage systems. It may be advantageous or necessary to have files in a different file system than they currently exist. Reasons include the need for an increase in the space requirements beyond the limits of the current file system. The depth of path may need to be increased beyond the restrictions of the file system. There may be performance or reliability considerations. Providing access to another operating system which does not support the existing file system is another reason. In some cases conversion can be done in-place, although migrating the file system is more conservative, as it involves a creating a copy of the data and is recommended.[39]On Windows, FAT and FAT32 file systems can be converted to NTFS via the convert.exe utility, but not the reverse.[39]On Linux, ext2 can be converted to ext3 (and converted back), and ext3 can be converted to ext4 (but not back),[40]and both ext3 and ext4 can be converted tobtrfs, and converted back until the undo information is deleted.[41]These conversions are possible due to using the same format for the file data itself, and relocating the metadata into empty space, in some cases usingsparse filesupport.[41] Migration has the disadvantage of requiring additional space although it may be faster. The best case is if there is unused space on media which will contain the final file system. For example, to migrate a FAT32 file system to an ext2 file system, a new ext2 file system is created. Then the data from the FAT32 file system is copied to the ext2 one, and the old file system is deleted. An alternative, when there is not sufficient space to retain the original file system until the new one is created, is to use a work area (such as a removable media). This takes longer but has the benefit of producing a backup. 
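A minimal sketch of the copy-based migration described above, using Python's standard library; the mount points are placeholders, and in-place conversion tools such as convert.exe or btrfs-convert work very differently, rewriting metadata on the device rather than copying files.

import shutil

SOURCE = "/mnt/old_fat32"   # hypothetical mount point of the old file system
TARGET = "/mnt/new_ext2"    # hypothetical mount point of the new file system

def migrate(source: str, target: str) -> None:
    # copy2 preserves timestamps and permission bits where the target
    # file system supports them; ownership and ACLs may need extra steps.
    shutil.copytree(source, target, copy_function=shutil.copy2,
                    dirs_exist_ok=True)

if __name__ == "__main__":
    migrate(SOURCE, TARGET)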
Inhierarchical file systems, files are accessed by means of apaththat is a branching list of directories containing the file. Different file systems have different limits on the depth of the path. File systems also have a limit on the length of an individual file name. Copying files with long names or located in paths of significant depth from one file system to another may cause undesirable results. This depends on how the utility doing the copying handles the discrepancy.
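An illustrative pre-flight check of the kind a copying utility might perform before moving files between file systems with different limits; the limits used here (255-byte names, depth 40) are placeholders rather than values taken from any particular file system.

MAX_NAME_BYTES = 255
MAX_DEPTH = 40

def check_path(path: str) -> list[str]:
    """Report which limits a destination path would exceed."""
    problems = []
    parts = [p for p in path.split("/") if p]
    if len(parts) > MAX_DEPTH:
        problems.append(f"path depth {len(parts)} exceeds {MAX_DEPTH}")
    for part in parts:
        if len(part.encode("utf-8")) > MAX_NAME_BYTES:
            problems.append(f"name too long: {part[:20]}...")
    return problems

print(check_path("/data/" + "x" * 300 + "/file.txt"))  # one "name too long" problem
print(check_path("/home/user/notes.txt"))              # [] -> no problems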
https://en.wikipedia.org/wiki/File_system#Allocation_methods
A modern computeroperating systemusually usesvirtual memoryto provide separate address spaces or regions of a single address space, calleduser space and kernel space.[1][a]This separation primarily providesmemory protectionand hardware protection from malicious or errant software behaviour. Kernel space is strictly reserved for running a privilegedoperating system kernel, kernel extensions, and mostdevice drivers. In contrast, user space is the memory area whereapplication softwareand some drivers execute, typically one address space per process. The termuser space(oruserland) refers to all code that runs outside the operating system's kernel.[2]User space usually refers to the various programs andlibrariesthat the operating system uses to interact with the kernel: software that performsinput/output, manipulatesfile systemobjects,application software, etc. Each user spaceprocessusually runs in its ownvirtual memoryspace, and, unless explicitly allowed, cannot access the memory of other processes. This is the basis formemory protectionin today's mainstream operating systems, and a building block forprivilege separation. A separate user mode can also be used to build efficient virtual machines – seePopek and Goldberg's virtualization requirements. With enough privileges, processes can request the kernel to map part of another process's memory space to their own, as is the case fordebuggers. Programs can also requestshared memoryregions with other processes, although other techniques are also available to allowinter-process communication. The most common way of implementing auser modeseparate fromkernel modeinvolves operating systemprotection rings. Protection rings, in turn, are implemented usingCPU modes. Typically, kernel space programs run inkernel mode, also calledsupervisor mode; standard applications in user space run in user mode. Some operating systems aresingle address space operating systems—with a single address space for all user-mode code. (The kernel-mode code may be in the same address space, or it may be in a second address space). Other operating systems have a per-process address space, with a separate address space for each user-mode process. Another approach taken in experimental operating systems is to have a singleaddress spacefor all software, and rely on a programming language's semantics to ensure that arbitrary memory cannot be accessed – applications cannot acquire anyreferencesto the objects that they are not allowed to access.[4][5]This approach has been implemented inJXOS, Unununium and Microsoft'sSingularityresearch project.
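A small sketch of the shared-memory facility mentioned above, using Python's standard multiprocessing.shared_memory module; here both ends run in one script for brevity, whereas in practice the second process would attach to the kernel-backed region by name across a process boundary.

from multiprocessing import shared_memory

# "Process A": create a named region backed by the kernel and write into it.
region = shared_memory.SharedMemory(create=True, size=16)
region.buf[:5] = b"hello"

# "Process B" (simulated here): attach to the same region by name and read.
other = shared_memory.SharedMemory(name=region.name)
print(bytes(other.buf[:5]))   # b'hello'

# Both sides close their mapping; the creator also unlinks the region.
other.close()
region.close()
region.unlink()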
https://en.wikipedia.org/wiki/User_space
https://en.wikipedia.org/wiki/Kernel_space
Inmathematics,division by zero,divisionwhere the divisor (denominator) iszero, is a unique and problematic special case. Usingfractionnotation, the general example can be written asa0{\displaystyle {\tfrac {a}{0}}}, wherea{\displaystyle a}is the dividend (numerator). The usual definition of thequotientinelementary arithmeticis the number which yields the dividend whenmultipliedby the divisor. That is,c=ab{\displaystyle c={\tfrac {a}{b}}}is equivalent toc⋅b=a.{\displaystyle c\cdot b=a.}By this definition, the quotientq=a0{\displaystyle q={\tfrac {a}{0}}}is nonsensical, as the productq⋅0{\displaystyle q\cdot 0}is always0{\displaystyle 0}rather than some other numbera.{\displaystyle a.}Following the ordinary rules ofelementary algebrawhile allowing division by zero can create amathematical fallacy, a subtle mistake leading to absurd results. To prevent this, the arithmetic ofreal numbersand more general numerical structures calledfieldsleaves division by zeroundefined, and situations where division by zero might occur must be treated with care. Since any number multiplied by zero is zero, the expression00{\displaystyle {\tfrac {0}{0}}}is also undefined. Calculusstudies the behavior offunctionsin thelimitas their input tends to some value. When areal functioncan be expressed as a fraction whose denominator tends to zero, the output of the function becomes arbitrarily large, and is said to "tend to infinity", a type ofmathematical singularity. For example, thereciprocal function,f(x)=1x,{\displaystyle f(x)={\tfrac {1}{x}},}tends to infinity asx{\displaystyle x}tends to0.{\displaystyle 0.}When both the numerator and the denominator tend to zero at the same input, the expression is said to take anindeterminate form, as the resulting limit depends on the specific functions forming the fraction and cannot be determined from their separate limits. As an alternative to the common convention of working with fields such as the real numbers and leaving division by zero undefined, it is possible to define the result of division by zero in other ways, resulting in different number systems. For example, the quotienta0{\displaystyle {\tfrac {a}{0}}}can be defined to equal zero; it can be defined to equal a new explicitpoint at infinity, sometimes denoted by theinfinity symbol∞{\displaystyle \infty };or it can be defined to result in signed infinity, with positive or negative sign depending on the sign of the dividend. In these number systems division by zero is no longer a special exception per se, but the point or points at infinity involve their own new types of exceptional behavior. Incomputing, an error may result from an attempt to divide by zero. Depending on the context and the type of number involved, dividing by zero may evaluate topositive or negative infinity, return a specialnot-a-numbervalue, orcrashthe program, among other possibilities. ThedivisionN/D=Q{\displaystyle N/D=Q}can be conceptually interpreted in several ways.[1] Inquotitive division, the dividendN{\displaystyle N}is imagined to be split up into parts of sizeD{\displaystyle D}(the divisor), and the quotientQ{\displaystyle Q}is the number of resulting parts. For example, imagine ten slices of bread are to be made into sandwiches, each requiring two slices of bread. A total of five sandwiches can be made(102=5{\displaystyle {\tfrac {10}{2}}=5}).Now imagine instead that zero slices of bread are required per sandwich (perhaps alettuce wrap). 
Arbitrarily many such sandwiches can be made from ten slices of bread, as the bread is irrelevant.[2] The quotitive concept of division lends itself to calculation by repeatedsubtraction: dividing entails counting how many times the divisor can be subtracted before the dividend runs out. Because no finite number of subtractions of zero will ever exhaust a non-zero dividend, calculating division by zero in this waynever terminates.[3]Such an interminable division-by-zeroalgorithmis physically exhibited by somemechanical calculators.[4] Inpartitive division, the dividendN{\displaystyle N}is imagined to be split intoD{\displaystyle D}parts, and the quotientQ{\displaystyle Q}is the resulting size of each part. For example, imagine ten cookies are to be divided among two friends. Each friend will receive five cookies(102=5{\displaystyle {\tfrac {10}{2}}=5}).Now imagine instead that the ten cookies are to be divided among zero friends. How many cookies will each friend receive? Since there are no friends, this is an absurdity.[5] In another interpretation, the quotientQ{\displaystyle Q}represents theratioN:D.{\displaystyle N:D.}[6]For example, a cake recipe might call for ten cups of flour and two cups of sugar, a ratio of10:2{\displaystyle 10:2}or, proportionally,5:1.{\displaystyle 5:1.}To scale this recipe to larger or smaller quantities of cake, a ratio of flour to sugar proportional to5:1{\displaystyle 5:1}could be maintained, for instance one cup of flour and one-fifth cup of sugar, or fifty cups of flour and ten cups of sugar.[7]Now imagine a sugar-free cake recipe calls for ten cups of flour and zero cups of sugar. The ratio10:0,{\displaystyle 10:0,}or proportionally1:0,{\displaystyle 1:0,}is perfectly sensible:[8]it just means that the cake has no sugar. However, the question "How many parts flour for each part sugar?" still has no meaningful numerical answer. A geometrical appearance of the division-as-ratio interpretation is theslopeof astraight linein theCartesian plane.[9]The slope is defined to be the "rise" (change in vertical coordinate) divided by the "run" (change in horizontal coordinate) along the line. When this is written using the symmetrical ratio notation, a horizontal line has slope0:1{\displaystyle 0:1}and a vertical line has slope1:0.{\displaystyle 1:0.}However, if the slope is taken to be a singlereal numberthen a horizontal line has slope01=0{\displaystyle {\tfrac {0}{1}}=0}while a vertical line has an undefined slope, since in real-number arithmetic the quotient10{\displaystyle {\tfrac {1}{0}}}is undefined.[10]The real-valued slopeyx{\displaystyle {\tfrac {y}{x}}}of a line through the origin is the vertical coordinate of theintersectionbetween the line and a vertical line at horizontal coordinate1,{\displaystyle 1,}dashed black in the figure. The vertical red and dashed black lines areparallel, so they have no intersection in the plane. Sometimes they are said to intersect at apoint at infinity, and the ratio1:0{\displaystyle 1:0}is represented by a new number∞{\displaystyle \infty };[11]see§ Projectively extended real linebelow. Vertical lines are sometimes said to have an "infinitely steep" slope. 
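The quotitive, repeated-subtraction view described above is easy to sketch in code. This is a minimal illustration for non-negative integers: without the explicit guard, a zero divisor would make the loop run forever, which is exactly the non-termination exhibited by the mechanical calculators mentioned in the text.

def quotitive_divide(dividend, divisor):
    """Division by repeated subtraction: count how many times the divisor
    can be subtracted before the dividend runs out."""
    if divisor == 0:
        raise ZeroDivisionError("no finite number of subtractions of 0 exhausts the dividend")
    count = 0
    remaining = dividend
    while remaining >= divisor:
        remaining -= divisor
        count += 1
    return count

print(quotitive_divide(10, 2))   # 5 sandwiches from 10 slices at 2 slices each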
Division is the inverse ofmultiplication, meaning that multiplying and then dividing by the same non-zero quantity, or vice versa, leaves an original quantity unchanged; for example(5×3)/3={\displaystyle (5\times 3)/3={}}(5/3)×3=5{\displaystyle (5/3)\times 3=5}.[12]Thus a division problem such as63=?{\displaystyle {\tfrac {6}{3}}={?}}can be solved by rewriting it as an equivalent equation involving multiplication,?×3=6,{\displaystyle {?}\times 3=6,}where?{\displaystyle {?}}represents the same unknown quantity, and then finding the value for which the statement is true; in this case the unknown quantity is2,{\displaystyle 2,}because2×3=6,{\displaystyle 2\times 3=6,}so therefore63=2.{\displaystyle {\tfrac {6}{3}}=2.}[13] An analogous problem involving division by zero,60=?,{\displaystyle {\tfrac {6}{0}}={?},}requires determining an unknown quantity satisfying?×0=6.{\displaystyle {?}\times 0=6.}However, any number multiplied by zero is zero rather than six, so there exists no number which can substitute for?{\displaystyle {?}}to make a true statement.[14] When the problem is changed to00=?,{\displaystyle {\tfrac {0}{0}}={?},}the equivalent multiplicative statement is?×0=0{\displaystyle {?}\times 0=0};in this caseanyvalue can be substituted for the unknown quantity to yield a true statement, so there is no single number which can be assigned as the quotient00.{\displaystyle {\tfrac {0}{0}}.} Because of these difficulties, quotients where the divisor is zero are traditionally taken to beundefined, and division by zero is not allowed.[15][16] A compelling reason for not allowing division by zero is that allowing it leads tofallacies. When working with numbers, it is easy to identify an illegal division by zero. For example: The fallacy here arises from the assumption that it is legitimate to cancel0like any other number, whereas, in fact, doing so is a form of division by0. Usingalgebra, it is possible to disguise a division by zero[17]to obtain aninvalid proof. For example:[18] This is essentially the same fallacious computation as the previous numerical version, but the division by zero was obfuscated because we wrote0asx− 1. TheBrāhmasphuṭasiddhāntaofBrahmagupta(c. 598–668) is the earliest text to treatzeroas a number in its own right and to define operations involving zero.[17]According to Brahmagupta, A positive or negative number when divided by zero is a fraction with the zero as denominator. Zero divided by a negative or positive number is either zero or is expressed as a fraction with zero as numerator and the finite quantity as denominator. Zero divided by zero is zero. In 830,Mahāvīraunsuccessfully tried to correct the mistake Brahmagupta made in his bookGanita Sara Samgraha: "A number remains unchanged when divided by zero."[17] Bhāskara II'sLīlāvatī(12th century) proposed that division by zero results in an infinite quantity,[19] A quantity divided by zero becomes a fraction the denominator of which is zero. This fraction is termed an infinite quantity. In this quantity consisting of that which has zero for its divisor, there is no alteration, though many may be inserted or extracted; as no change takes place in the infinite and immutable God when worlds are created or destroyed, though numerous orders of beings are absorbed or put forth. 
Historically, one of the earliest recorded references to the mathematical impossibility of assigning a value toa0{\textstyle {\tfrac {a}{0}}}is contained inAnglo-IrishphilosopherGeorge Berkeley's criticism ofinfinitesimal calculusin 1734 inThe Analyst("ghosts of departed quantities").[20] Calculusstudies the behavior offunctionsusing the concept of alimit, the value to which a function's output tends as its input tends to some specific value. The notationlimx→cf(x)=L{\textstyle \lim _{x\to c}f(x)=L}means that the value of the functionf{\displaystyle f}can be made arbitrarily close toL{\displaystyle L}by choosingx{\displaystyle x}sufficiently close toc.{\displaystyle c.} In the case where the limit of thereal functionf{\displaystyle f}increases without bound asx{\displaystyle x}tends toc,{\displaystyle c,}the function is not defined atx,{\displaystyle x,}a type ofmathematical singularity. Instead, the function is said to "tend to infinity", denotedlimx→cf(x)=∞,{\textstyle \lim _{x\to c}f(x)=\infty ,}and itsgraphhas the linex=c{\displaystyle x=c}as a verticalasymptote. While such a function is not formally defined forx=c,{\displaystyle x=c,}and theinfinity symbol∞{\displaystyle \infty }in this case does not represent any specificreal number, such limits are informally said to "equal infinity". If the value of the function decreases without bound, the function is said to "tend to negative infinity",−∞.{\displaystyle -\infty .}In some cases a function tends to two different values whenx{\displaystyle x}tends toc{\displaystyle c}from above(x→c+{\displaystyle x\to c^{+}})and below(x→c−{\displaystyle x\to c^{-}}); such a function has two distinctone-sided limits.[21] A basic example of an infinite singularity is thereciprocal function,f(x)=1/x,{\displaystyle f(x)=1/x,}which tends to positive or negative infinity asx{\displaystyle x}tends to0{\displaystyle 0}: limx→0+1x=+∞,limx→0−1x=−∞.{\displaystyle \lim _{x\to 0^{+}}{\frac {1}{x}}=+\infty ,\qquad \lim _{x\to 0^{-}}{\frac {1}{x}}=-\infty .} In most cases, the limit of a quotient of functions is equal to the quotient of the limits of each function separately, limx→cf(x)g(x)=limx→cf(x)limx→cg(x).{\displaystyle \lim _{x\to c}{\frac {f(x)}{g(x)}}={\frac {\displaystyle \lim _{x\to c}f(x)}{\displaystyle \lim _{x\to c}g(x)}}.} However, when a function is constructed by dividing two functions whose separate limits are both equal to0,{\displaystyle 0,}then the limit of the result cannot be determined from the separate limits, so is said to take anindeterminate form, informally written00.{\displaystyle {\tfrac {0}{0}}.}(Another indeterminate form,∞∞,{\displaystyle {\tfrac {\infty }{\infty }},}results from dividing two functions whose limits both tend to infinity.) Such a limit may equal any real value, may tend to infinity, or may not converge at all, depending on the particular functions. 
For example, in limx→1x2−1x−1,{\displaystyle \lim _{x\to 1}{\dfrac {x^{2}-1}{x-1}},} the separate limits of the numerator and denominator are0{\displaystyle 0}, so we have the indeterminate form00{\displaystyle {\tfrac {0}{0}}}, but simplifying the quotient first shows that the limit exists: limx→1x2−1x−1=limx→1(x−1)(x+1)x−1=limx→1(x+1)=2.{\displaystyle \lim _{x\to 1}{\frac {x^{2}-1}{x-1}}=\lim _{x\to 1}{\frac {(x-1)(x+1)}{x-1}}=\lim _{x\to 1}(x+1)=2.} Theaffinely extended real numbersare obtained from thereal numbersR{\displaystyle \mathbb {R} }by adding two new numbers+∞{\displaystyle +\infty }and−∞,{\displaystyle -\infty ,}read as "positive infinity" and "negative infinity" respectively, and representingpoints at infinity. With the addition of±∞,{\displaystyle \pm \infty ,}the concept of a "limit at infinity" can be made to work like a finite limit. When dealing with both positive and negative extended real numbers, the expression1/0{\displaystyle 1/0}is usually left undefined. However, in contexts where only non-negative values are considered, it is often convenient to define1/0=+∞{\displaystyle 1/0=+\infty }. The setR∪{∞}{\displaystyle \mathbb {R} \cup \{\infty \}}is theprojectively extended real line, which is aone-point compactificationof the real line. Here∞{\displaystyle \infty }means an unsigned infinity orpoint at infinity, an infinite quantity that is neither positive nor negative. This quantity satisfies−∞=∞{\displaystyle -\infty =\infty }, which is necessary in this context. In this structure,a0=∞{\displaystyle {\frac {a}{0}}=\infty }can be defined for nonzeroa, anda∞=0{\displaystyle {\frac {a}{\infty }}=0}whenais not∞{\displaystyle \infty }. It is the natural way to view the range of thetangent functionand cotangent functions oftrigonometry:tan(x)approaches the single point at infinity asxapproaches either+⁠π/2⁠or−⁠π/2⁠from either direction. This definition leads to many interesting results. However, the resulting algebraic structure is not afield, and should not be expected to behave like one. For example,∞+∞{\displaystyle \infty +\infty }is undefined in this extension of the real line. The subject ofcomplex analysisapplies the concepts of calculus in thecomplex numbers. Of major importance in this subject is theextended complex numbersC∪{∞},{\displaystyle \mathbb {C} \cup \{\infty \},}the set of complex numbers with a single additional number appended, usually denoted by theinfinity symbol∞{\displaystyle \infty }and representing apoint at infinity, which is defined to be contained in everyexterior domain, making those itstopologicalneighborhoods. This can intuitively be thought of as wrapping up the infinite edges of the complex plane and pinning them together at the single point∞,{\displaystyle \infty ,}aone-point compactification, making the extended complex numbers topologically equivalent to asphere. This equivalence can be extended to a metrical equivalence by mapping each complex number to a point on the sphere via inversestereographic projection, with the resultingspherical distanceapplied as a new definition of distance between complex numbers; and in general the geometry of the sphere can be studied using complex arithmetic, and conversely complex arithmetic can be interpreted in terms of spherical geometry. As a consequence, the set of extended complex numbers is often called theRiemann sphere. 
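The limit behaviour discussed above can be checked numerically. The sketch below evaluates the quotient at points approaching the singular input rather than at it (where the expression would be 0/0 or 1/0); the step sizes are arbitrary.

def f(x):
    return (x**2 - 1) / (x - 1)        # indeterminate 0/0 at x = 1

for k in range(1, 6):
    h = 10 ** -k
    print(f(1 + h), f(1 - h))          # both columns approach the limit 2

# One-sided behaviour of the reciprocal near 0:
for k in range(1, 6):
    h = 10 ** -k
    print(1 / h, 1 / -h)               # grows toward +infinity and -infinity respectively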
The set is usually denoted by the symbol for the complex numbers decorated by an asterisk, overline, tilde, or circumflex, for exampleC^=C∪{∞}.{\displaystyle {\hat {\mathbb {C} }}=\mathbb {C} \cup \{\infty \}.} In the extended complex numbers, for any nonzero complex numberz,{\displaystyle z,}ordinary complex arithmetic is extended by the additional rulesz0=∞,{\displaystyle {\tfrac {z}{0}}=\infty ,}z∞=0,{\displaystyle {\tfrac {z}{\infty }}=0,}∞+0=∞,{\displaystyle \infty +0=\infty ,}∞+z=∞,{\displaystyle \infty +z=\infty ,}∞⋅z=∞.{\displaystyle \infty \cdot z=\infty .}However,00{\displaystyle {\tfrac {0}{0}}},∞∞{\displaystyle {\tfrac {\infty }{\infty }}}, and0⋅∞{\displaystyle 0\cdot \infty }are left undefined. The four basic operations – addition, subtraction, multiplication and division – as applied to whole numbers (positive integers), with some restrictions, in elementary arithmetic are used as a framework to support the extension of the realm of numbers to which they apply. For instance, to make it possible to subtract any whole number from another, the realm of numbers must be expanded to the entire set ofintegersin order to incorporate the negative integers. Similarly, to support division of any integer by any other, the realm of numbers must expand to therational numbers. During this gradual expansion of the number system, care is taken to ensure that the "extended operations", when applied to the older numbers, do not produce different results. Loosely speaking, since division by zero has no meaning (isundefined) in the whole number setting, this remains true as the setting expands to therealor evencomplex numbers.[22] As the realm of numbers to which these operations can be applied expands there are also changes in how the operations are viewed. For instance, in the realm of integers, subtraction is no longer considered a basic operation since it can be replaced by addition of signed numbers.[23]Similarly, when the realm of numbers expands to include the rational numbers, division is replaced by multiplication by certain rational numbers. In keeping with this change of viewpoint, the question, "Why can't we divide by zero?", becomes "Why can't a rational number have a zero denominator?". Answering this revised question precisely requires close examination of the definition of rational numbers. In the modern approach to constructing the field of real numbers, the rational numbers appear as an intermediate step in the development that is founded onset theory. First, the natural numbers (including zero) are established on an axiomatic basis such asPeano's axiom systemand then this is expanded to thering of integers. The next step is to define the rational numbers keeping in mind that this must be done using only the sets and operations that have already been established, namely, addition, multiplication and the integers. Starting with the set ofordered pairsof integers,{(a,b)}withb≠ 0, define abinary relationon this set by(a,b) ≃ (c,d)if and only ifad=bc. This relation is shown to be anequivalence relationand itsequivalence classesare then defined to be the rational numbers. It is in the formal proof that this relation is an equivalence relation that the requirement that the second coordinate is not zero is needed (for verifyingtransitivity).[24][25][26] Although division by zero cannot be sensibly defined with real numbers and integers, it is possible to consistently define it, or similar operations, in other mathematical structures. 
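The construction of the rationals sketched above is mirrored by Python's standard fractions module, which enforces the nonzero-denominator requirement outright. The equivalence-relation helper below is an illustrative restatement of the definition, not library code.

from fractions import Fraction

try:
    Fraction(1, 0)
except ZeroDivisionError as exc:
    print("rejected:", exc)

# The relation used to construct the rationals:
# (a, b) ~ (c, d)  iff  a*d == b*c, stated only for b, d != 0.
def equivalent(a, b, c, d):
    if b == 0 or d == 0:
        raise ValueError("second coordinate must be nonzero")
    return a * d == b * c

print(equivalent(1, 2, 3, 6))   # True: (1, 2) and (3, 6) name the same rational, 1/2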
In the hyperreal numbers, division by zero is still impossible, but division by non-zero infinitesimals is possible.[27] The same holds true in the surreal numbers.[28]

In distribution theory one can extend the function {\textstyle {\frac {1}{x}}} to a distribution on the whole space of real numbers (in effect by using Cauchy principal values). It does not, however, make sense to ask for a "value" of this distribution at x = 0; a sophisticated answer refers to the singular support of the distribution.

In matrix algebra, square or rectangular blocks of numbers are manipulated as though they were numbers themselves: matrices can be added and multiplied, and in some cases, a version of division also exists. Dividing by a matrix means, more precisely, multiplying by its inverse. Not all matrices have inverses.[29] For example, a matrix containing only zeros is not invertible. One can define a pseudo-division by setting a/b = ab+, in which b+ represents the pseudoinverse of b. It can be proven that if b−1 exists, then b+ = b−1. If b equals 0, then b+ = 0.

In abstract algebra, the integers, the rational numbers, the real numbers, and the complex numbers can be abstracted to more general algebraic structures, such as a commutative ring, which is a mathematical structure where addition, subtraction, and multiplication behave as they do in the more familiar number systems, but division may not be defined. Adjoining multiplicative inverses to a commutative ring is called localization. However, the localization of every commutative ring at zero is the trivial ring, where {\displaystyle 0=1}, so nontrivial commutative rings do not have inverses at zero, and thus division by zero is undefined for nontrivial commutative rings. Nevertheless, any number system that forms a commutative ring can be extended to a structure called a wheel in which division by zero is always possible.[30] However, the resulting mathematical structure is no longer a commutative ring, as multiplication no longer distributes over addition. Furthermore, in a wheel, division of an element by itself no longer results in the multiplicative identity element {\displaystyle 1}, and if the original system was an integral domain, the multiplication in the wheel no longer results in a cancellative semigroup.

The concepts applied to standard arithmetic are similar to those in more general algebraic structures, such as rings and fields. In a field, every nonzero element is invertible under multiplication; as above, division poses problems only when attempting to divide by zero. This is likewise true in a skew field (which for this reason is called a division ring). However, in other rings, division by nonzero elements may also pose problems. Consider, for example, the ring Z/6Z of integers mod 6. The meaning of the expression {\textstyle {\frac {2}{2}}} should be the solution x of the equation {\displaystyle 2x=2}. But in the ring Z/6Z, 2 is a zero divisor. This equation has two distinct solutions, x = 1 and x = 4, so the expression {\textstyle {\frac {2}{2}}} is undefined.

In field theory, the expression {\textstyle {\frac {a}{b}}} is only shorthand for the formal expression ab−1, where b−1 is the multiplicative inverse of b. Since the field axioms only guarantee the existence of such inverses for nonzero elements, this expression has no meaning when b is zero. Modern texts that define fields as a special type of ring include the axiom 0 ≠ 1 for fields (or its equivalent) so that the zero ring is excluded from being a field.
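The Z/6Z example above can be verified by brute force:

# In Z/6Z the equation 2*x = 2 has more than one solution,
# so the expression "2/2" cannot be given a single value there.
solutions = [x for x in range(6) if (2 * x) % 6 == 2]
print(solutions)   # [1, 4]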
In the zero ring, division by zero is possible, which shows that the other field axioms are not sufficient to exclude division by zero in a field. In computing, most numerical calculations are done withfloating-point arithmetic, which since the 1980s has been standardized by theIEEE 754specification. In IEEE floating-point arithmetic, numbers are represented using a sign (positive or negative), a fixed-precisionsignificandand an integerexponent. Numbers whose exponent is too large to represent instead "overflow" to positive or negativeinfinity(+∞ or −∞), while numbers whose exponent is too small to represent instead "underflow" topositive or negative zero(+0 or −0). ANaN(not a number) value represents undefined results. In IEEE arithmetic, division of 0/0 or ∞/∞ results in NaN, but otherwise division always produces a well-defined result. Dividing any non-zero number by positive zero (+0) results in an infinity of the same sign as the dividend. Dividing any non-zero number bynegative zero(−0) results in an infinity of the opposite sign as the dividend. This definition preserves the sign of the result in case ofarithmetic underflow.[31] For example, using single-precision IEEE arithmetic, ifx= −2−149, thenx/2 underflows to −0, and dividing 1 by this result produces 1/(x/2) = −∞. The exact result −2150is too large to represent as a single-precision number, so an infinity of the same sign is used instead to indicate overflow. Integerdivision by zero is usually handled differently from floating point since there is no integer representation for the result.CPUsdiffer in behavior: for instancex86processors trigger ahardware exception, whilePowerPCprocessors silently generate an incorrect result for the division and continue, andARMprocessors can either cause a hardware exception or return zero.[32]Because of this inconsistency between platforms, theCandC++programming languagesconsider the result of dividing by zeroundefined behavior.[33]In typicalhigher-level programming languages, such asPython,[34]anexceptionis raised for attempted division by zero, which can be handled in another part of the program. Manyproof assistants, such asRocq(previously known asCoq) andLean, define 1/0 = 0. This is due to the requirement that all functions aretotal. Such a definition does not create contradictions, as further manipulations (such ascancelling out) still require that the divisor is non-zero.[35][36]
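The language-level and IEEE 754 behaviour described above can be observed directly. The sketch below uses Python, which, as noted, raises a catchable exception for both integer and floating-point division by zero while still exposing the IEEE special values.

import math

for num, den in [(1, 0), (1.0, 0.0)]:
    try:
        num / den
    except ZeroDivisionError as exc:
        print(type(exc).__name__, ":", exc)   # the exception can be handled elsewhere

# IEEE 754 special values behave as described:
inf = math.inf
print(1.0 / inf)              # 0.0   (a finite number divided by infinity)
print(math.isnan(inf - inf))  # True  (an indeterminate combination yields NaN)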
https://en.wikipedia.org/wiki/Divide_by_zero#Operating_systems
Apage tableis adata structureused by avirtual memorysystem in acomputerto store mappings betweenvirtual addressesandphysical addresses. Virtual addresses are used by the program executed by the accessingprocess, while physical addresses are used by the hardware, or more specifically, by therandom-access memory(RAM) subsystem. The page table is a key component ofvirtual address translationthat is necessary to accessdatain memory. The page table is set up by the computer'soperating system, and may be read and written during the virtual address translation process by thememory management unitor by low-level system software or firmware. In operating systems that use virtual memory, every process is given the impression that it is working with large, contiguous sections of memory. Physically, the memory of each process may be dispersed across different areas of physical memory, or may have been moved (paged out) to secondary storage, typically to ahard disk drive(HDD) orsolid-state drive(SSD). When a process requests access to data in its memory, it is the responsibility of the operating system to map the virtual address provided by the process to the physical address of the actual memory where that data is stored. The page table is where mappings of virtual addresses to physical addresses are stored, with each mapping also known as apage table entry(PTE).[1][2] Thememory management unit(MMU) inside the CPU stores a cache of recently used mappings from the operating system's page table. This is called thetranslation lookaside buffer(TLB), which is an associative cache. When a virtual address needs to be translated into a physical address, the TLB is searched first. If a match is found, which is known as aTLB hit, the physical address is returned and memory access can continue. However, if there is no match, which is called aTLB miss, the MMU, the system firmware, or the operating system's TLB miss handler will typically look up the address mapping in the page table to see whether a mapping exists, which is called apage walk. If one exists, it is written back to the TLB, which must be done because the hardware accesses memory through the TLB in a virtual memory system, and the faulting instruction is restarted, which may happen in parallel as well. The subsequent translation will result in a TLB hit, and the memory access will continue. The page table lookup may fail, triggering apage fault, for two reasons: When physical memory is not full this is a simple operation; the page is written back into physical memory, the page table and TLB are updated, and the instruction is restarted. However, when physical memory is full, one or more pages in physical memory will need to be paged out to make room for the requested page. The page table needs to be updated to mark that the pages that were previously in physical memory are no longer there, and to mark that the page that was on disk is now in physical memory. The TLB also needs to be updated, including removal of the paged-out page from it, and the instruction restarted. Which page to page out is the subject ofpage replacement algorithms. Some MMUs trigger a page fault for other reasons, whether or not the page is currently resident in physical memory and mapped into the virtual address space of a process: The simplest page table systems often maintain aframetable and a page table. The frame table holds information about which frames are mapped. 
In more advanced systems, the frame table can also hold information about which address space a page belongs to, statistics information, or other background information. The page table is an array of page table entries. Each page table entry (PTE) holds the mapping between a virtual address of a page and the address of a physical frame. There is also auxiliary information about the page such as a present bit, adirtyor modified bit, address space or process ID information, amongst others. Secondary storage, such as a hard disk drive, can be used to augment physical memory. Pages can be paged in and out of physical memory and the disk. The present bit can indicate what pages are currently present in physical memory or are on disk, and can indicate how to treat these different pages, i.e. whether to load a page from disk and page another page in physical memory out. The dirty bit allows for a performance optimization. A page on disk that is paged in to physical memory, then read from, and subsequently paged out again does not need to be written back to disk, since the page has not changed. However, if the page was written to after it is paged in, its dirty bit will be set, indicating that the page must be written back to the backing store. This strategy requires that the backing store retain a copy of the page after it is paged in to memory. When a dirty bit is not used, the backing store need only be as large as the instantaneous total size of all paged-out pages at any moment. When a dirty bit is used, at all times some pages will exist in both physical memory and the backing store. In operating systems that are notsingle address space operating systems, address space or process ID information is necessary so the virtual memory management system knows what pages to associate to what process. Two processes may use two identical virtual addresses for different purposes. The page table must supply different virtual memory mappings for the two processes. This can be done by assigning the two processes distinct address map identifiers, or by using process IDs. Associating process IDs with virtual memory pages can also aid in selection of pages to page out, as pages associated with inactive processes, particularly processes whose code pages have been paged out, are less likely to be needed immediately than pages belonging to active processes. As an alternative to tagging page table entries with process-unique identifiers, the page table itself may occupy a different virtual-memory page for each process so that the page table becomes a part of the process context. In such an implementation, the process's page table can be paged out whenever the process is no longer resident in memory. There are several types of page tables, which are optimized for different requirements. Essentially, a bare-bones page table must store the virtual address, the physical address that is "under" this virtual address, and possibly some address space information. Aninverted page table(IPT) is best thought of as an off-chip extension of theTLBwhich uses normal system RAM. Unlike a true page table, it is not necessarily able to hold all current mappings. The operating system must be prepared to handle misses, just as it would with a MIPS-style software-filled TLB. The IPT combines a page table and aframe tableinto one data structure. At its core is a fixed-size table with the number of rows equal to the number of frames in memory. If there are 4,000 frames, the inverted page table has 4,000 rows. 
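Before continuing with inverted tables, the entry fields just described (frame number, present bit, dirty bit) can be modelled with a toy page table. The page size, frame numbers and exception type below are illustrative only and do not follow any particular architecture's entry format.

from dataclasses import dataclass

class PageFault(Exception):
    pass

@dataclass
class PageTableEntry:
    frame: int = 0          # physical frame number (meaningful only if present)
    present: bool = False   # is the page currently in physical memory?
    dirty: bool = False     # written to since it was paged in?

PAGE_SIZE = 4096
page_table = {0: PageTableEntry(frame=7, present=True)}   # virtual page number -> PTE

def translate(virtual_addr, write=False):
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    pte = page_table.get(vpn)
    if pte is None or not pte.present:
        raise PageFault("page %d not resident" % vpn)
    if write:
        pte.dirty = True    # must be written back to the backing store before eviction
    return pte.frame * PAGE_SIZE + offset

print(hex(translate(0x123, write=True)))   # 0x7123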
For each row there is an entry for the virtual page number (VPN), the physical page number (not the physical address), some other data and a means for creating acollisionchain, as we will see later. Searching through all entries of the core IPT structure is inefficient, and ahash tablemay be used to map virtual addresses (and address space/PID information if need be) to an index in the IPT - this is where the collision chain is used. This hash table is known as ahash anchor table. The hashing function is not generally optimized for coverage - raw speed is more desirable. Of course, hash tables experience collisions. Due to this chosen hashing function, we may experience a lot of collisions in usage, so for each entry in the table the VPN is provided to check if it is the searched entry or a collision. In searching for a mapping, the hash anchor table is used. If no entry exists, a page fault occurs. Otherwise, the entry is found. Depending on the architecture, the entry may be placed in the TLB again and the memory reference is restarted, or the collision chain may be followed until it has been exhausted and a page fault occurs. A virtual address in this schema could be split into two, the first half being a virtual page number and the second half being the offset in that page. A major problem with this design is poorcache localitycaused by thehash function. Tree-based designs avoid this by placing the page table entries for adjacent pages in adjacent locations, but an inverted page table destroys spatiallocality of referenceby scattering entries all over. An operating system may minimize the size of the hash table to reduce this problem, with the trade-off being an increased miss rate. There is normally one hash table, contiguous in physical memory, shared by all processes. A per-process identifier is used to disambiguate the pages of different processes from each other. It is somewhat slow to remove the page table entries of a given process; the OS may avoid reusing per-process identifier values to delay facing this. Alternatively, per-process hash tables may be used, but they are impractical because ofmemory fragmentation, which requires the tables to be pre-allocated. Inverted page tables are used for example on thePowerPC, theUltraSPARCand theIA-64architecture.[4] The inverted page table keeps a listing of mappings installed for all frames in physical memory. However, this could be quite wasteful. Instead of doing so, we could create a page table structure that contains mappings for virtual pages. It is done by keeping several page tables that cover a certain block of virtual memory. For example, we can create smaller 1024-entry 4 KB pages that cover 4 MB of virtual memory. This is useful since often the top-most parts and bottom-most parts of virtual memory are used in running a process - the top is often used for text and data segments while the bottom for stack, with free memory in between. The multilevel page table may keep a few of the smaller page tables to cover just the top and bottom parts of memory and create new ones only when strictly necessary. Now, each of these smaller page tables are linked together by a master page table, effectively creating atreedata structure. There need not be only two levels, but possibly multiple ones. For example, a virtual address in this schema could be split into three parts: the index in the root page table, the index in the sub-page table, and the offset in that page. 
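A toy hashed inverted page table along the lines described above can be sketched as follows: one row per physical frame, a hash anchor table mapping a hashed (process ID, VPN) pair to a collision chain of candidate frames, and a VPN/PID check on each chain entry. The frame count and hash choice are arbitrary.

PAGE_SIZE = 4096
NUM_FRAMES = 8                       # toy machine: one IPT row per physical frame

ipt = [None] * NUM_FRAMES            # rows: (pid, vpn) currently held by each frame
anchor = {}                          # hash anchor table: bucket -> list of frame numbers

def bucket(pid, vpn):
    return hash((pid, vpn)) % NUM_FRAMES      # speed over coverage, as noted above

def install(pid, vpn, frame):
    ipt[frame] = (pid, vpn)
    anchor.setdefault(bucket(pid, vpn), []).append(frame)   # extend the collision chain

def lookup(pid, virtual_addr):
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    for frame in anchor.get(bucket(pid, vpn), []):          # walk the chain
        if ipt[frame] == (pid, vpn):                        # VPN/PID check per entry
            return frame * PAGE_SIZE + offset
    raise LookupError("page fault: no mapping in the inverted page table")

install(pid=1, vpn=5, frame=3)
print(hex(lookup(1, 5 * PAGE_SIZE + 0x10)))   # 0x3010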
Multilevel page tables are also referred to as "hierarchical page tables". It was mentioned that creating a page table structure that contained mappings for every virtual page in the virtual address space could end up being wasteful. But we can get around the excessive space concerns by putting the page table in virtual memory, and letting the virtual memory system manage the memory for the page table. However, part of this linear page table structure must always stay resident in physical memory, in order to prevent circular page faults, which would occur if the part of the page table needed to resolve a fault were itself not present in memory.

Nested page tables can be implemented to increase the performance of hardware virtualization. By providing hardware support for page-table virtualization, the need to emulate is greatly reduced. For x86 virtualization the current choices are Intel's Extended Page Table feature and AMD's Rapid Virtualization Indexing feature.
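A two-level walk matching the earlier example of 1024-entry tables over 4 KB pages (10-bit indices, 12-bit offset) can be sketched as follows; the mapping installed in the table is arbitrary, and a real implementation would of course store the tables in memory pages rather than dictionaries.

OFFSET_BITS = 12          # 4 KiB pages
INDEX_BITS = 10           # 1024-entry tables, as in the example above

def split(vaddr):
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    level2 = (vaddr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    level1 = vaddr >> (OFFSET_BITS + INDEX_BITS)
    return level1, level2, offset

# Sparse two-level table: the root holds second-level tables only where needed.
root = {0: {3: 42}}       # root index 0 -> {second-level index 3: frame 42}

def walk(vaddr):
    l1, l2, offset = split(vaddr)
    sub = root.get(l1)
    if sub is None or l2 not in sub:
        raise LookupError("page fault")
    return (sub[l2] << OFFSET_BITS) | offset

print(hex(walk((3 << OFFSET_BITS) | 0xABC)))   # frame 42 -> 0x2aabc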
https://en.wikipedia.org/wiki/Page_table_entry#Permissions
In computing, apage faultis anexceptionthat thememory management unit(MMU) raises when aprocessaccesses amemory pagewithout proper preparations. Accessing the page requires a mapping to be added to the process'svirtual address space. Furthermore, the actual page contents may need to be loaded from a back-up, e.g. adisk. The MMU detects the page fault, but the operating system'skernelhandles the exception by making the required page accessible in the physical memory or denying an illegal memory access. Valid page faults are common and necessary to increase the amount of memory available to programs in any operating system that usesvirtual memory, such asWindows,macOS, and theLinux kernel.[1] If the page is loaded in memory at the time the fault is generated, but is not marked in thememory management unitas being loaded in memory, then it is called aminororsoftpage fault. The page fault handler in theoperating systemmerely needs to make the entry for that page in the memory management unit point to the page in memory and indicate that the page is loaded in memory; it does not need to read the page into memory. This could happen if thememory is sharedby different programs and the page is already brought into memory for other programs. This is the mechanism used by an operating system to increase the amount of program memory available on demand. The operating system delays loading parts of the program from disk until the program attempts to use it and the page fault is generated. If the page is not loaded in memory at the time of the fault, then it is called amajororhardpage fault. The page fault handler in the OS needs to find a free location: either a free page in memory, or a non-free page in memory. This latter might be used by another process, in which case the OS needs to write out the data in that page (if it has not been written out since it was last modified) and mark that page as not being loaded in memory in its processpage table. Once the space has been made available, the OS can read the data for the new page into memory, add an entry to its location in the memory management unit, and indicate that the page is loaded. Thus major faults are more expensive than minor faults and add storage access latency to the interrupted program's execution. If a page fault occurs for a reference to an address that is not part of the virtualaddress space, meaning there cannot be a page in memory corresponding to it, then it is called aninvalidpage fault. The page fault handler in the operating system will then generally pass asegmentation faultto the offending process, indicating that the access was invalid; this usually results inabnormal terminationof the code that made the invalid reference. Anull pointeris usually represented as a pointer to address 0 in the address space; many operating systems set up the MMU to indicate that the page that contains that address is not in memory, and do not include that page in the virtual address space, so that attempts to read or write the memory referenced by a null pointer get an invalid page fault. Illegal accesses and invalid page faults can result in asegmentation faultorbus error, resulting in an app or OScrash.Software bugsare often the causes of these problems, but hardware memory errors, such as those caused byoverclocking, may corrupt pointers and cause valid code to fail. 
Operating systems provide differing mechanisms for reporting page fault errors.Microsoft Windowsusesstructured exception handlingto report invalid page faults asaccess violationexceptions.UNIX-likesystems typically usesignals, such asSIGSEGV, to report these error conditions to programs. If the program receiving the error does not handle it, the operating system performs a default action, typically involving the termination of the runningprocessthat caused the error condition, and notifying the user that the program has malfunctioned. Windows often reports such crashes without going to any details. An experienced user can retrieve detailed information usingWinDbgand theminidumpthat Windows creates during the crash. UNIX-like operating systems report these conditions with such error messages as "segmentation violation" or "bus error", and may produce acore dump. Page faults degrade system performance and can causethrashing. Major page faults on a conventional computer usinghard disk drivescan have a significant impact on their performance, as a typical hard disk drive had an averagerotational latencyof 3 ms, aseek timeof 5 ms and a transfer time of 0.05 ms/page. Therefore, the total time for paging is near 8 ms (8,000 μs). If the memory access time is 0.2 μs, then the page fault would make the operation about 40,000 times slower. With a more modern system using a fastsolid-state drivewith a page read latency of 0.030 ms (30 μs)[2]and a memory access latency of 70 ns (0.070 μs),[3]a hard page fault is still over 400 times slower. Performanceoptimizationof programs or operating systems often involves efforts to reduce the number of page faults. Two primary focuses of the optimization are reducing overall memory usage and improvingmemory locality. To reduce the page faults, OS developers must use an appropriatepage replacement algorithmthat maximizes the page hits. Many have been proposed, such as implementingheuristic algorithmsto reduce the incidence of page faults. Larger physical memory also reduces the likelihood of page faults. Chunks ofmemory-mapped filescan remain in memory longer and avoid slow re-reads from storage. Similarly, lower memory pressure lessens the need for frequentswapping outof memory pages to a backing storage device used for swap.
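The slowdown figures quoted above follow from simple arithmetic; the sketch below just reproduces them from the stated latencies.

# Figures quoted above: HDD paging ~8 ms vs. ~0.2 us memory access,
# SSD page read ~30 us vs. ~70 ns memory access.
hdd_paging_us, mem_us = 8_000, 0.2
ssd_paging_us, mem_ns = 30, 70

print(hdd_paging_us / mem_us)              # ~40,000x slower on a hard disk drive
print((ssd_paging_us * 1_000) / mem_ns)    # ~428x slower on an SSD, i.e. "over 400 times"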
https://en.wikipedia.org/wiki/Page_fault
Jos (/ˈdʒɔːs/) is a city in the North-Central region of Nigeria. The city has a population of about 900,000 residents based on the 2006 census.[2] Popularly called "J-Town",[3] it is the administrative capital and largest city of Plateau State. The city is situated on the Jos Plateau, which lies within the Guinea Savannah of North-Central Nigeria. It connects most of the North-Eastern capitals to the Federal Capital Territory, Abuja, by road. Driving in and out of Jos, traffic encounters very steep and winding bends and mountainous scenery typical of the plateau, from which the state derives its name. During the period of British colonial rule, Jos became an important centre for tin mining after large deposits of cassiterite, the main ore for the metal, were discovered. It is also the trading hub of Plateau State, as commercial activities are steadily increasing.

The earliest known settlers of the land that would come to be known as Nigeria were the Nok people (c. 1000 BC), skilled artisans from around the Jos area who mysteriously vanished in the late first millennium.[4] According to the historian Sen Luka Gwom Zangabadt,[5] the area known as Jos today was inhabited by indigenous ethnic groups who were mostly farmers. During the British colonial period, direct rule was introduced for the indigenous ethnic groups on the Jos Plateau since they were not under the Fulani emirates, where indirect rule was used.[6] According to the historian Samuel N. Nwabara, the Fulani empire controlled most of northern Nigeria, except the Plateau province and the Berom, Ngas, Tiv, Jukun and Idoma ethnic groups.[7] It was the discovery of tin by the British that led to the influx of other ethnic groups such as the Hausa from the north, the southeastern Igbo, and the Yoruba from the country's southwest. As such, Jos is often recognised as a cosmopolitan Nigerian city.

According to the white paper of the commission of inquiry into the 1894 crisis, Ames, a British colonial administrator, said that the original name for Jos was Gwosh in the Izere language (spoken by the Afusari, the first settlers in the area), which was a village situated at the current site of the city; according to Ames, the Hausa, who arrived there later, wrongly pronounced Gwosh as "Jos" and it stuck.[8] Another version is that "Jos" came from the word "Jasad", meaning body in Arabic. To distinguish it from the hilltops, it was called "Jas", which was mispronounced by the British as "Jos". The city grew rapidly after the British discovered vast tin deposits in the vicinity. Both tin and columbite were extensively mined in the area up until the 1960s. They were transported by railway to both Port Harcourt and Lagos on the coast, and then exported from those ports. Jos is still often referred to as "Tin City". It was made capital of Benue-Plateau State in 1967 and became the capital of the new Plateau State in 1975. Jos has become an important national administrative, commercial, and tourist centre. Tin mining led to the influx of migrants (mostly Igbos, Yorubas and Europeans) who constitute more than half of the population of Jos. This "melting pot" of race, ethnicity and religion makes Jos one of the most cosmopolitan cities in Nigeria. For this reason, Plateau State is known in Nigeria as the "home of peace and tourism". Excellent footage of Jos in 1936, including the tin mines, local people and the colonial population, is held by the Cinema Museum in London [ref HM0172].

The city is divided into two local government areas, Jos North and Jos South. The city proper lies between Jos North and parts of Jos South, headquartered in Bukuru.
The Local Government Council administration is headquartered here. Jos North is the commercial nerve centre of the state as it houses the state's branch of Nigeria's Central Bank and the headquarters of the commercial banks are mostly located here as well as the currency exchanges along Ahmadu Bello Way. Moreover all basic and essential services can be found in Jos North from the Jos Main market (terminus) to the Kabong or Rukuba Road satellite market. Due to recent communal clashes, however, a lot of commercial activities are shifting to Jos South. The palace and office of the Gbong Gwom Jos (traditional ruler of Jos) are located in an area in Jos North calledJishein the Berom language. In 1956, Her MajestyQueen Elizabeth IItogether with her consortPrince Philiphad a weekend stopover to rest atJisheduring her Nigeria tour.[9]Jishewas known at that time as Tudun Wada cottage. Jos North has a significant slum.[10]Jos North is the location of theUniversity of Josand its teaching hospital at Laminga & theNational Commission for Museums and Monuments. The Nigerian Film Institute is also located in Jos-North at the British America junction alongMurtala Mohammedway. Both theEvangelical Church Winning All(ECWA) and theChurch of Christ in Nations(COCIN) are headquartered in this part of the metropolis. Jos South is the second most populous Local Government Area in Plateau State and has its Council located along Bukuru expressway. Jos South is the seat of the Governor i.e. the old Government House in Rayfield and the New Government House in Little Rayfield and the industrial centre of Plateau State due to the presence of industries like the NASCO group of companies, Standard Biscuits, Grand Cereals and Oil Mills, Zuma steel west Africa, aluminium roofing industries, Jos International Breweries among others. Jos South also houses prestigious institutions like theNational Institute of Policy and Strategic Studies(NIPSS), the highest academic awarding institution in Nigeria, the Police Staff College, theNTAtelevision college,Nigerian Film CorporationandKarl Kumm University. Jos South also houses the prestigious National Centre For Remote Sensing. The city has formed anagglomerationwith the town ofBukuruto form the Jos-Bukuru metropolis (JBM). Jos also is the seat of the famous National Veterinary Research Institute (NVRI), situated in Vom.[11]and the Industrial Training Fund (ITF). Situated almost at the geographical centre of Nigeria and about 179 kilometres (111 miles) fromAbuja, the nation's capital, Jos is linked by road, rail and air to the rest of the country. The city is served byYakubu Gowon Airport, but its rail connections no longer operate as the only currently operational section ofNigeria's rail networkis the western line from Lagos toKano. At an altitude of 1,217 m (3,993 ft) above sea level, Jos' climate is closer to temperate than that of the vast majority of Nigeria. Average monthly temperatures range from 21–25 °C (70–77 °F), and from mid-November to late January, night-time temperatures drop as low as 7 °C (45 °F). 
Hail sometimes falls during the rainy season because of the cooler temperatures at high altitudes.[12] These cooler temperatures have, from colonial times until the present day, made Jos a favourite holiday location for both tourists and expatriates based in Nigeria.[citation needed] Jos receives about 1,400 millimetres (55 inches) of rainfall annually, the precipitation arising from both convectional and orographic sources, owing to the location of the city on the Jos Plateau.[13] According to the Köppen climate classification system, Jos has a tropical savanna climate, abbreviated Aw.[14]

The 330 Nigerian Air Force station is located in the Jos South Local Government Area along the old airport road. The station has blocks of barracks for air personnel, an airstrip, a primary school, a military secondary school and a hospital which is arguably one of the best in the state.

Covering roughly 3 square miles (7.8 km2) of savannah bush, the city's wildlife park was established in 1972 under the administration of the then Governor of Benue-Plateau, Joseph Gomwalk, in line with a mandate by the then Organisation of African Unity that African heads of state earmark one-third of their landmass to establish conservation areas in their countries. It has since become a major attraction in the state, attracting tourists from within and outside the country. The park has become a home to various species of wildlife including lions, rock pythons, marabou storks, baboons, honey badgers and camels, as well as varied flora.

The National Museum in Jos was founded in 1952 by Bernard Fagg,[18] and was recognized as one of the best in the country. It has unfortunately been left to fall to ruin, as is the case with most of the cultural establishments in Nigeria. The Pottery Hall, part of the museum, has an exceptional collection of finely crafted pottery from all over Nigeria and boasts some fine specimens of Nok terracotta heads and artefacts dating from 500 BCE to 200 CE. The museum also incorporates the Museum of Traditional Nigerian Architecture, with life-size replicas of a variety of buildings, from the walls of Kano and the Mosque at Zaria to a Tiv village. Articles of interest from colonial times relating to the railway and tin mining can also be found on display. A School for Museum Technicians is attached to the museum, established with the help of UNESCO. The Jos Museum is also located beside the zoo.

Situated at the end of Joseph Gomwalk Road, the Jos Polo Club is one of the prominent sports institutions in the state. A 40,000-seat stadium located along Farin-Gada road has become home to Plateau United Football Club, current champions of the Nigerian Professional League; the city also has the Rwang Pam township stadium. The golf course located in Rayfield has hosted many golfing competitions with players coming from both within and outside the state.

Other local enterprises include food processing, beer brewing, and the manufacture of cosmetics, soap, rope, jute bags, and furniture. Heavy industry produces cement and asbestos cement, crushed stone, rolled steel, and tyre retreads. Jos is also a centre for the construction industry and has several printing and publishing firms. The Jos-Bukuru dam and reservoir on the Shen River provide water for the city's industries. Jos is a base for exploring Plateau State. The Shere Hills, seen to the east of Jos, offer a prime view of the city below. Assop Falls is a small waterfall which makes a picnic spot on a drive from Jos to Abuja.
Riyom Rock is a dramatic and photogenic pile of rocks balanced precariously on top of one another, with one resembling a clown's hat, observable from the main Jos-Akwanga road.[19] The city is home to the University of Jos (founded in 1975), St Luke's Cathedral, an airport and a railway station. Jos is served by several teaching hospitals, including Bingham University Teaching Hospital and Jos University Teaching Hospital (JUTH), a federal government-funded referral hospital.[20] The Nigerian College of Accountancy, with over 3,000 students in 2011, is based in Kwall, Plateau State.[21]
https://en.wikipedia.org/wiki/Jos
Atranslation lookaside buffer(TLB) is a memorycachethat stores the recent translations ofvirtual memorytophysical memory. It is used to reduce the time taken to access a user memory location.[1]It can be called an address-translation cache. It is a part of the chip'smemory-management unit(MMU). A TLB may reside between theCPUand theCPU cache, between CPU cache and the main memory or between the different levels of the multi-level cache. The majority of desktop, laptop, and server processors include one or more TLBs in the memory-management hardware, and it is nearly always present in any processor that usespagedorsegmentedvirtual memory. The TLB is sometimes implemented ascontent-addressable memory(CAM). The CAM search key is the virtual address, and the search result is aphysical address. If the requested address is present in the TLB, the CAM search yields a match quickly and the retrieved physical address can be used to access memory. This is called a TLB hit. If the requested address is not in the TLB, it is a miss, and the translation proceeds by looking up thepage tablein a process called apage walk. The page walk is time-consuming when compared to the processor speed, as it involves reading the contents of multiple memory locations and using them to compute the physical address. After the physical address is determined by the page walk, the virtual address to physical address mapping is entered into the TLB. ThePowerPC 604, for example, has a two-wayset-associativeTLB for data loads and stores.[2]Some processors have different instruction and data address TLBs. A TLB has a fixed number of slots containingpage-tableentries and segment-table entries; page-table entries map virtual addresses tophysical addressesand intermediate-table addresses, while segment-table entries map virtual addresses to segment addresses, intermediate-table addresses and page-table addresses. Thevirtual memoryis the memory space as seen from a process; this space is often split intopagesof a fixed size (in paged memory), or less commonly intosegmentsof variable sizes (in segmented memory). The page table, generally stored inmain memory, keeps track of where the virtual pages are stored in the physical memory. This method uses two memory accesses (one for the page-table entry, one for the byte) to access a byte. First, the page table is looked up for the frame number. Second, the frame number with the page offset gives the actual address. Thus, any straightforward virtual memory scheme would have the effect of doubling the memory access time. Hence, the TLB is used to reduce the time taken to access the memory locations in the page-table method. The TLB is a cache of the page table, representing only a subset of the page-table contents. Referencing the physical memory addresses, a TLB may reside between the CPU and theCPU cache, between the CPU cache andprimary storagememory, or between levels of a multi-level cache. The placement determines whether the cache uses physical or virtual addressing. If the cache is virtually addressed, requests are sent directly from the CPU to the cache, and the TLB is accessed only on acache miss. If the cache is physically addressed, the CPU does a TLB lookup on every memory operation, and the resulting physical address is sent to the cache. In aHarvard architectureormodified Harvard architecture, a separate virtual address space or memory-access hardware may exist for instructions and data. 
This can lead to distinct TLBs for each access type, aninstruction translation lookaside buffer(ITLB) and adata translation lookaside buffer(DTLB). Various benefits have been demonstrated with separate data and instruction TLBs.[4] The TLB can be used as a fast lookup hardware cache. The figure shows the working of a TLB. Each entry in the TLB consists of two parts: a tag and a value. If the tag of the incoming virtual address matches the tag in the TLB, the corresponding value is returned. Since the TLB lookup is usually a part of the instruction pipeline, searches are fast and cause essentially no performance penalty. However, to be able to search within the instruction pipeline, the TLB has to be small. A common optimization for physically addressed caches is to perform the TLB lookup in parallel with the cache access. Upon each virtual memory reference, the hardware checks the TLB to see whether the page number is held therein. If yes, it is a TLB hit, and the translation is made. The frame number is returned and is used to access the memory. If the page number is not in the TLB, the page table must be checked. Depending on the CPU, this can be done automatically in hardware or using an interrupt to the operating system. When the frame number is obtained, it can be used to access the memory. In addition, we add the page number and frame number to the TLB, so that they will be found quickly on the next reference. If the TLB is already full, a suitable block must be selected for replacement. There are different replacement methods likeleast recently used(LRU),first in, first out(FIFO) etc.; see theaddress translationsection in the cache article for more details about virtual addressing as it pertains to caches and TLBs. The CPU has to access main memory for an instruction-cache miss, data-cache miss, or TLB miss. The third case (the simplest one) is where the desired information itself actuallyisin a cache, but the information for virtual-to-physical translation is not in a TLB. These are all slow, due to the need to access a slower level of the memory hierarchy, so a well-functioning TLB is important. Indeed, a TLB miss can be more expensive than an instruction or data cache miss, due to the need for not just a load from main memory, but a page walk, requiring several memory accesses. The flowchart provided explains the working of a TLB. If it is a TLB miss, then the CPU checks the page table for the page table entry. If thepresent bitis set, then the page is in main memory, and the processor can retrieve the frame number from the page-table entry to form the physical address.[6]The processor also updates the TLB to include the new page-table entry. Finally, if the present bit is not set, then the desired page is not in the main memory, and apage faultis issued. Then a page-fault interrupt is called, which executes the page-fault handling routine. If the pageworking setdoes not fit into the TLB, thenTLB thrashingoccurs, where frequent TLB misses occur, with each newly cached page displacing one that will soon be used again, degrading performance in exactly the same way as thrashing of the instruction or data cache does. TLB thrashing can occur even if instruction-cache or data-cachethrashingare not occurring, because these are cached in different-size units. Instructions and data are cached in small blocks (cache lines), not entire pages, but address lookup is done at the page level. 
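A replacement policy such as the LRU mentioned above might be sketched as follows; the logical-timestamp bookkeeping and the 8-entry fully associative organisation are simplifying assumptions, since real hardware usually approximates LRU with cheaper pseudo-LRU schemes.

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 8

typedef struct {
    bool     valid;
    uint64_t vpn, pfn;
    uint64_t last_used;       /* logical timestamp for LRU bookkeeping */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];
static uint64_t now;          /* incremented on every lookup */

/* Pick a victim slot: any invalid slot first, otherwise the entry whose
   last use is oldest (least recently used). */
int choose_victim(void)
{
    int victim = 0;
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (!tlb[i].valid)
            return i;
        if (tlb[i].last_used < tlb[victim].last_used)
            victim = i;
    }
    return victim;
}

/* On a hit, refresh the timestamp; on a miss the caller walks the page
   table and installs the mapping into the victim slot. */
bool lookup(uint64_t vpn, uint64_t *pfn)
{
    now++;
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            tlb[i].last_used = now;
            *pfn = tlb[i].pfn;
            return true;                       /* TLB hit  */
        }
    return false;                              /* TLB miss */
}

void install(uint64_t vpn, uint64_t pfn)
{
    int v = choose_victim();
    tlb[v] = (tlb_entry_t){ true, vpn, pfn, ++now };
}

int main(void)
{
    uint64_t pfn;
    if (!lookup(0x42, &pfn))        /* first access misses ...              */
        install(0x42, 0x1000);      /* ... so the walked mapping is cached  */
    return lookup(0x42, &pfn) ? 0 : 1;   /* second access hits */
}
```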
Thus, even if the code and data working sets fit into cache, if the working sets are fragmented across many pages, the virtual-address working set may not fit into the TLB, causing TLB thrashing. Appropriate sizing of the TLB thus requires considering not only the size of the corresponding instruction and data caches, but also how these are fragmented across multiple pages. Similar to caches, TLBs may have multiple levels. CPUs can be (and nowadays usually are) built with multiple TLBs, for example a small L1 TLB (potentially fully associative) that is extremely fast, and a larger L2 TLB that is somewhat slower. When instruction-TLB (ITLB) and data-TLB (DTLB) are used, a CPU can have three (ITLB1, DTLB1, TLB2) or four TLBs. For instance, Intel's Nehalem microarchitecture has a four-way set associative L1 DTLB with 64 entries for 4 KiB pages and 32 entries for 2/4 MiB pages, an L1 ITLB with 128 entries for 4 KiB pages using four-way associativity and 14 fully associative entries for 2/4 MiB pages (both parts of the ITLB divided statically between two threads)[7] and a unified 512-entry L2 TLB for 4 KiB pages,[8] both 4-way associative.[9] Some TLBs may have separate sections for small pages and huge pages. For example, the Intel Skylake microarchitecture separates the TLB entries for 1 GiB pages from those for 4 KiB/2 MiB pages.[10] Three schemes for handling TLB misses are found in modern architectures: The MIPS architecture specifies a software-managed TLB.[12] The SPARC V9 architecture allows an implementation of SPARC V9 to have no MMU, an MMU with a software-managed TLB, or an MMU with a hardware-managed TLB,[13] and the UltraSPARC Architecture 2005 specifies a software-managed TLB.[14] The Itanium architecture provides an option of using either software- or hardware-managed TLBs.[15] The Alpha architecture has a firmware-managed TLB, with the TLB miss handling code being in PALcode, rather than in the operating system. As the PALcode for a processor can be processor-specific and operating-system-specific, this allows different versions of PALcode to implement different page-table formats for different operating systems, without requiring that the TLB format, and the instructions to control the TLB, be specified by the architecture.[16] These are typical performance levels of a TLB:[17] The average effective memory cycle rate is defined as m + (1 − p)h + pm cycles, where m is the number of cycles required for a memory read, p is the miss rate, and h is the hit time in cycles. If a TLB hit takes 1 clock cycle, a miss takes 30 clock cycles, a memory read takes 30 clock cycles, and the miss rate is 1%, the effective memory cycle rate is an average of 30 + 0.99 × 1 + 0.01 × 30 = 31.29 clock cycles per memory access.[18] On an address-space switch, as occurs when context switching between processes (but not between threads), some TLB entries can become invalid, since the virtual-to-physical mapping is different. The simplest strategy to deal with this is to completely flush the TLB. This means that after a switch, the TLB is empty, and any memory reference will be a miss, so it will be some time before things are running back at full speed. Newer CPUs use more effective strategies that mark which process each entry belongs to.
This means that if a second process runs for only a short time and jumps back to a first process, the TLB may still have valid entries, saving the time to reload them.[19] Other strategies avoid flushing the TLB on a context switch: (a) Asingle address space operating systemuses the same virtual-to-physical mapping for all processes. (b) Some CPUs have a process ID register, and the hardware uses TLB entries only if they match the current process ID. For example, in theAlpha 21264, each TLB entry is tagged with anaddress space number(ASN), and only TLB entries with an ASN matching the current task are considered valid. For another example, in theIntel Pentium Pro, the page global enable (PGE) flag in the registerCR4and the global (G) flag of a page-directory or page-table entry can be used to prevent frequently used pages from being automatically invalidated in the TLBs on a task switch or a load of register CR3. Since the 2010Westmere microarchitectureIntel 64processors also support 12-bitprocess-context identifiers(PCIDs), which allow retaining TLB entries for multiple linear-address spaces, with only those that match the current PCID being used for address translation.[20][21] While selective flushing of the TLB is an option in software-managed TLBs, the only option in some hardware TLBs (for example, the TLB in theIntel 80386) is the complete flushing of the TLB on an address-space switch. Other hardware TLBs (for example, the TLB in theIntel 80486and later x86 processors, and the TLB inARMprocessors) allow the flushing of individual entries from the TLB indexed by virtual address. Flushing of the TLB can be an important security mechanism for memory isolation between processes to ensure a process can't access data stored in memory pages of another process. Memory isolation is especially critical during switches between the privileged operating system kernel process and the user processes – as was highlighted by theMeltdownsecurity vulnerability. Mitigation strategies such askernel page-table isolation(KPTI) rely heavily on performance-impacting TLB flushes and benefit greatly from hardware-enabled selective TLB entry management such as PCID.[22] With the advent of virtualization for server consolidation, a lot of effort has gone into making the x86 architecture easier to virtualize and to ensure better performance of virtual machines on x86 hardware.[23][24] Normally, entries in the x86 TLBs are not associated with a particular address space; they implicitly refer to the current address space. Hence, every time there is a change in address space, such as a context switch, the entire TLB has to be flushed. Maintaining a tag that associates each TLB entry with an address space in software and comparing this tag during TLB lookup and TLB flush is very expensive, especially since the x86 TLB is designed to operate with very low latency and completely in hardware. In 2008, bothIntel(Nehalem)[25]andAMD(SVM)[26]have introduced tags as part of the TLB entry and dedicated hardware that checks the tag during lookup. Not all operating systems made full use of these tags immediately, but Linux 4.14 started using them to identify recently used address spaces, since the 12-bits PCIDs (4095 different values) are insufficient for all tasks running on a given CPU.[27]
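Address-space tagging of the kind provided by Alpha ASNs or Intel PCIDs can be sketched as follows. The field names, tag width, and global flag here are illustrative stand-ins rather than any architecture's actual layout; the point is only that a context switch changes the current tag instead of flushing the TLB, so entries belonging to other address spaces are ignored rather than discarded.

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 64

typedef struct {
    bool     valid;
    bool     global;      /* e.g. x86 G flag: valid in every address space   */
    uint16_t asid;        /* address-space tag (ASN/PCID-like, illustrative) */
    uint64_t vpn, pfn;
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];
static uint16_t current_asid;     /* updated on context switch, no flush */

/* A context switch just changes the current tag; stale entries belonging
   to other address spaces remain present and become usable again if
   their process runs again soon. */
void context_switch(uint16_t new_asid) { current_asid = new_asid; }

bool tlb_lookup(uint64_t vpn, uint64_t *pfn)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (!tlb[i].valid || tlb[i].vpn != vpn)
            continue;
        if (tlb[i].global || tlb[i].asid == current_asid) {
            *pfn = tlb[i].pfn;
            return true;          /* hit for this address space       */
        }
    }
    return false;                 /* miss: fall back to a page walk   */
}

int main(void)
{
    uint64_t pfn;
    tlb[0] = (tlb_entry_t){ .valid = true, .global = false,
                            .asid = 1, .vpn = 0x10, .pfn = 0x99 };
    context_switch(2);
    int miss = !tlb_lookup(0x10, &pfn);   /* wrong ASID: treated as a miss  */
    context_switch(1);
    int hit  =  tlb_lookup(0x10, &pfn);   /* matching ASID: entry reusable  */
    return (miss && hit) ? 0 : 1;
}
```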
https://en.wikipedia.org/wiki/Translation_lookaside_buffer
Apage tableis adata structureused by avirtual memorysystem in acomputerto store mappings betweenvirtual addressesandphysical addresses. Virtual addresses are used by the program executed by the accessingprocess, while physical addresses are used by the hardware, or more specifically, by therandom-access memory(RAM) subsystem. The page table is a key component ofvirtual address translationthat is necessary to accessdatain memory. The page table is set up by the computer'soperating system, and may be read and written during the virtual address translation process by thememory management unitor by low-level system software or firmware. In operating systems that use virtual memory, every process is given the impression that it is working with large, contiguous sections of memory. Physically, the memory of each process may be dispersed across different areas of physical memory, or may have been moved (paged out) to secondary storage, typically to ahard disk drive(HDD) orsolid-state drive(SSD). When a process requests access to data in its memory, it is the responsibility of the operating system to map the virtual address provided by the process to the physical address of the actual memory where that data is stored. The page table is where mappings of virtual addresses to physical addresses are stored, with each mapping also known as apage table entry(PTE).[1][2] Thememory management unit(MMU) inside the CPU stores a cache of recently used mappings from the operating system's page table. This is called thetranslation lookaside buffer(TLB), which is an associative cache. When a virtual address needs to be translated into a physical address, the TLB is searched first. If a match is found, which is known as aTLB hit, the physical address is returned and memory access can continue. However, if there is no match, which is called aTLB miss, the MMU, the system firmware, or the operating system's TLB miss handler will typically look up the address mapping in the page table to see whether a mapping exists, which is called apage walk. If one exists, it is written back to the TLB, which must be done because the hardware accesses memory through the TLB in a virtual memory system, and the faulting instruction is restarted, which may happen in parallel as well. The subsequent translation will result in a TLB hit, and the memory access will continue. The page table lookup may fail, triggering apage fault, for two reasons: When physical memory is not full this is a simple operation; the page is written back into physical memory, the page table and TLB are updated, and the instruction is restarted. However, when physical memory is full, one or more pages in physical memory will need to be paged out to make room for the requested page. The page table needs to be updated to mark that the pages that were previously in physical memory are no longer there, and to mark that the page that was on disk is now in physical memory. The TLB also needs to be updated, including removal of the paged-out page from it, and the instruction restarted. Which page to page out is the subject ofpage replacement algorithms. Some MMUs trigger a page fault for other reasons, whether or not the page is currently resident in physical memory and mapped into the virtual address space of a process: The simplest page table systems often maintain aframetable and a page table. The frame table holds information about which frames are mapped. 
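A software view of the page walk described above might look like the following sketch. It assumes a single-level table for a 32-bit address space with 4 KiB pages, purely for brevity; real tables are usually hierarchical, as discussed further below, and the field layout is invented for the example.

```c
#include <stdint.h>

#define PT_ENTRIES (1u << 20)     /* 32-bit address space, 4 KiB pages:
                                     2^20 virtual pages (illustrative)  */

typedef struct {
    uint32_t present : 1;         /* page is in physical memory          */
    uint32_t frame   : 20;        /* physical frame number when present  */
} pte_t;

static pte_t page_table[PT_ENTRIES];

/* Outcome of one translation attempt, as seen by a miss handler. */
typedef enum { WALK_OK, WALK_PAGE_FAULT } walk_result_t;

/* Consult the page table on a TLB miss. If the present bit is set, the
   frame number is returned and would be written back into the TLB;
   otherwise a page fault is raised and the OS must bring the page in
   (possibly evicting another page first) before the faulting
   instruction is restarted. */
walk_result_t page_walk(uint32_t vpn, uint32_t *frame)
{
    pte_t pte = page_table[vpn];
    if (!pte.present)
        return WALK_PAGE_FAULT;   /* invoke the OS page-fault handler */
    *frame = pte.frame;
    return WALK_OK;               /* refill the TLB and restart       */
}

int main(void)
{
    uint32_t frame;
    page_table[5] = (pte_t){ .present = 1, .frame = 123 };
    return page_walk(5, &frame) == WALK_OK ? 0 : 1;
}
```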
In more advanced systems, the frame table can also hold information about which address space a page belongs to, statistics information, or other background information. The page table is an array of page table entries. Each page table entry (PTE) holds the mapping between a virtual address of a page and the address of a physical frame. There is also auxiliary information about the page such as a present bit, adirtyor modified bit, address space or process ID information, amongst others. Secondary storage, such as a hard disk drive, can be used to augment physical memory. Pages can be paged in and out of physical memory and the disk. The present bit can indicate what pages are currently present in physical memory or are on disk, and can indicate how to treat these different pages, i.e. whether to load a page from disk and page another page in physical memory out. The dirty bit allows for a performance optimization. A page on disk that is paged in to physical memory, then read from, and subsequently paged out again does not need to be written back to disk, since the page has not changed. However, if the page was written to after it is paged in, its dirty bit will be set, indicating that the page must be written back to the backing store. This strategy requires that the backing store retain a copy of the page after it is paged in to memory. When a dirty bit is not used, the backing store need only be as large as the instantaneous total size of all paged-out pages at any moment. When a dirty bit is used, at all times some pages will exist in both physical memory and the backing store. In operating systems that are notsingle address space operating systems, address space or process ID information is necessary so the virtual memory management system knows what pages to associate to what process. Two processes may use two identical virtual addresses for different purposes. The page table must supply different virtual memory mappings for the two processes. This can be done by assigning the two processes distinct address map identifiers, or by using process IDs. Associating process IDs with virtual memory pages can also aid in selection of pages to page out, as pages associated with inactive processes, particularly processes whose code pages have been paged out, are less likely to be needed immediately than pages belonging to active processes. As an alternative to tagging page table entries with process-unique identifiers, the page table itself may occupy a different virtual-memory page for each process so that the page table becomes a part of the process context. In such an implementation, the process's page table can be paged out whenever the process is no longer resident in memory. There are several types of page tables, which are optimized for different requirements. Essentially, a bare-bones page table must store the virtual address, the physical address that is "under" this virtual address, and possibly some address space information. Aninverted page table(IPT) is best thought of as an off-chip extension of theTLBwhich uses normal system RAM. Unlike a true page table, it is not necessarily able to hold all current mappings. The operating system must be prepared to handle misses, just as it would with a MIPS-style software-filled TLB. The IPT combines a page table and aframe tableinto one data structure. At its core is a fixed-size table with the number of rows equal to the number of frames in memory. If there are 4,000 frames, the inverted page table has 4,000 rows. 
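The role of the present and dirty bits described above can be illustrated with a hypothetical page-table-entry layout; the bit widths and the evict/needs_writeback helpers are invented for the example and not taken from any particular architecture.

```c
#include <stdbool.h>
#include <stdint.h>

/* One page-table entry with the auxiliary bits discussed above. The
   exact layout is architecture-specific; this packing is illustrative. */
typedef struct {
    uint64_t present : 1;    /* page is resident in physical memory        */
    uint64_t dirty   : 1;    /* page was written to since it was paged in  */
    uint64_t asid    : 12;   /* address space / process ID information     */
    uint64_t frame   : 40;   /* physical frame number                      */
} pte_t;

/* When evicting a page, the dirty bit decides whether the backing store
   must be updated: clean pages can simply be discarded, because the copy
   already on disk is still valid. */
bool needs_writeback(const pte_t *pte)
{
    return pte->present && pte->dirty;
}

void evict(pte_t *pte)
{
    if (needs_writeback(pte)) {
        /* write the frame's contents back to the backing store here */
    }
    pte->present = 0;        /* mark the page as no longer resident */
    pte->dirty   = 0;
}

int main(void)
{
    pte_t pte = { .present = 1, .dirty = 1, .asid = 7, .frame = 42 };
    evict(&pte);             /* dirty page: would be written back first */
    return pte.present;      /* 0 after eviction */
}
```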
For each row there is an entry for the virtual page number (VPN), the physical page number (not the physical address), some other data and a means for creating acollisionchain, as we will see later. Searching through all entries of the core IPT structure is inefficient, and ahash tablemay be used to map virtual addresses (and address space/PID information if need be) to an index in the IPT - this is where the collision chain is used. This hash table is known as ahash anchor table. The hashing function is not generally optimized for coverage - raw speed is more desirable. Of course, hash tables experience collisions. Due to this chosen hashing function, we may experience a lot of collisions in usage, so for each entry in the table the VPN is provided to check if it is the searched entry or a collision. In searching for a mapping, the hash anchor table is used. If no entry exists, a page fault occurs. Otherwise, the entry is found. Depending on the architecture, the entry may be placed in the TLB again and the memory reference is restarted, or the collision chain may be followed until it has been exhausted and a page fault occurs. A virtual address in this schema could be split into two, the first half being a virtual page number and the second half being the offset in that page. A major problem with this design is poorcache localitycaused by thehash function. Tree-based designs avoid this by placing the page table entries for adjacent pages in adjacent locations, but an inverted page table destroys spatiallocality of referenceby scattering entries all over. An operating system may minimize the size of the hash table to reduce this problem, with the trade-off being an increased miss rate. There is normally one hash table, contiguous in physical memory, shared by all processes. A per-process identifier is used to disambiguate the pages of different processes from each other. It is somewhat slow to remove the page table entries of a given process; the OS may avoid reusing per-process identifier values to delay facing this. Alternatively, per-process hash tables may be used, but they are impractical because ofmemory fragmentation, which requires the tables to be pre-allocated. Inverted page tables are used for example on thePowerPC, theUltraSPARCand theIA-64architecture.[4] The inverted page table keeps a listing of mappings installed for all frames in physical memory. However, this could be quite wasteful. Instead of doing so, we could create a page table structure that contains mappings for virtual pages. It is done by keeping several page tables that cover a certain block of virtual memory. For example, we can create smaller 1024-entry 4 KB pages that cover 4 MB of virtual memory. This is useful since often the top-most parts and bottom-most parts of virtual memory are used in running a process - the top is often used for text and data segments while the bottom for stack, with free memory in between. The multilevel page table may keep a few of the smaller page tables to cover just the top and bottom parts of memory and create new ones only when strictly necessary. Now, each of these smaller page tables are linked together by a master page table, effectively creating atreedata structure. There need not be only two levels, but possibly multiple ones. For example, a virtual address in this schema could be split into three parts: the index in the root page table, the index in the sub-page table, and the offset in that page. 
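Returning to the inverted page table, a lookup through the hash anchor table described above could be sketched as follows. The table sizes, the hash function, and the chain representation are all illustrative choices rather than any particular implementation; the key point is that the row index found in the IPT is itself the physical frame number.

```c
#include <stdint.h>

#define NUM_FRAMES 4000           /* one IPT row per physical frame          */
#define HASH_SIZE  4096           /* hash anchor table size (illustrative)   */
#define NO_ENTRY   (-1)

typedef struct {
    uint64_t vpn;                 /* virtual page number stored for checking */
    uint16_t asid;                /* address-space / process identifier      */
    int      next;                /* collision chain: next IPT row, or -1    */
} ipt_entry_t;

static ipt_entry_t ipt[NUM_FRAMES];
static int hash_anchor[HASH_SIZE];   /* maps a hash to the first IPT row     */

void ipt_init(void)
{
    for (int i = 0; i < HASH_SIZE; i++)
        hash_anchor[i] = NO_ENTRY;
}

/* Simple hash over (VPN, ASID); real systems favour raw speed over
   coverage, so collisions are expected and resolved via the chain. */
static unsigned hash(uint64_t vpn, uint16_t asid)
{
    return (unsigned)((vpn * 2654435761u) ^ asid) % HASH_SIZE;
}

/* Returns the frame number (the IPT row index), or NO_ENTRY meaning the
   mapping is absent and a page fault must be raised. */
int ipt_lookup(uint64_t vpn, uint16_t asid)
{
    for (int row = hash_anchor[hash(vpn, asid)]; row != NO_ENTRY; row = ipt[row].next)
        if (ipt[row].vpn == vpn && ipt[row].asid == asid)
            return row;
    return NO_ENTRY;
}

int main(void)
{
    ipt_init();
    /* install a mapping for (vpn = 0x42, asid = 3) in frame 7 by
       prepending the row to its collision chain */
    unsigned h = hash(0x42, 3);
    ipt[7] = (ipt_entry_t){ .vpn = 0x42, .asid = 3, .next = hash_anchor[h] };
    hash_anchor[h] = 7;
    return ipt_lookup(0x42, 3) == 7 ? 0 : 1;
}
```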
Multilevel page tables are also referred to as "hierarchical page tables". It was mentioned that creating a page table structure that contained mappings for every virtual page in the virtual address space could end up being wasteful. But we can get around the excessive space concerns by putting the page table in virtual memory and letting the virtual memory system manage the memory for the page table. However, part of this linear page table structure must always stay resident in physical memory in order to prevent circular page faults, which would arise when looking up a part of the page table that is itself not present in memory. Nested page tables can be implemented to increase the performance of hardware virtualization. By providing hardware support for page-table virtualization, the need to emulate it in software is greatly reduced. For x86 virtualization the current choices are Intel's Extended Page Table feature and AMD's Rapid Virtualization Indexing feature.
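The index arithmetic behind the hierarchical splitting discussed above is simple; the sketch below uses the classic 10/10/12-bit split of a 32-bit address with 4 KiB pages as an illustration. Other architectures use more levels and different field widths, so the constants here are an assumption, not a general rule.

```c
#include <stdint.h>

/* Illustrative two-level split for a 32-bit address with 4 KiB pages:
   10 bits of root-table index, 10 bits of sub-table index, and a
   12-bit page offset. */
#define OFFSET_BITS 12
#define LEVEL_BITS  10
#define LEVEL_MASK  ((1u << LEVEL_BITS) - 1)

typedef struct {
    uint32_t root_index;   /* selects an entry in the master (root) table */
    uint32_t sub_index;    /* selects an entry in the second-level table  */
    uint32_t offset;       /* byte offset within the page                 */
} va_parts_t;

va_parts_t split(uint32_t vaddr)
{
    va_parts_t p;
    p.offset     = vaddr & ((1u << OFFSET_BITS) - 1);
    p.sub_index  = (vaddr >> OFFSET_BITS) & LEVEL_MASK;
    p.root_index = (vaddr >> (OFFSET_BITS + LEVEL_BITS)) & LEVEL_MASK;
    return p;
}

int main(void)
{
    va_parts_t p = split(0x00ABCDEF);
    /* root_index = 0x002, sub_index = 0x2BC, offset = 0xDEF */
    return (p.root_index == 0x002 && p.sub_index == 0x2BC && p.offset == 0xDEF) ? 0 : 1;
}
```

Because a sub-table is only allocated when its part of the address space is actually used, the sparse regions between the stack and the text/data segments cost nothing.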
https://en.wikipedia.org/wiki/Page_table
Anoperating systemshellis acomputer programthat provides relatively broad and direct access to the system on which it runs. The termshellrefers to how it is a relatively thinlayeraround an operating system.[1][2] A shell is generally acommand-line interface(CLI) program although somegraphical user interface(GUI) programs are arguably classified as shells too. Operating systems provide various services to their users, includingfile management,processmanagement (running and terminatingapplications),batch processing, and operating system monitoring and configuration. Most operating system shells are notdirectinterfaces to the underlyingkernel, even if a shell communicates with the user viaperipheral devicesattached to the computer directly. Shells are actually special applications that use the kernelAPIin just the same way as it is used by other application programs. A shell manages the user–system interaction by prompting users for input, interpreting their input, and then handling output from the underlying operating system (much like aread–eval–print loop, REPL).[3]Since the operating system shell is actually an application, it may easily be replaced with another similar application, for most operating systems. In addition to shells running on local systems, there are different ways to make remote systems available to local users; such approaches are usually referred to as remote access or remote administration. Initially available onmulti-usermainframes, which provided text-based UIs for each active usersimultaneouslyby means of atext terminalconnected to the mainframe via serial line ormodem, remote access has extended toUnix-likesystems and Microsoft Windows. On Unix-like systems,Secure Shell protocol (SSH)is usually used for text-based shells, whileSSH tunnelingcan be used forX Window System–based graphical user interfaces (GUIs). On Microsoft Windows, Remote Desktop Protocol can be used to provide GUI remote access, sinceWindows Vista,PowerShell Remote, sinceWindows 10build 1809 SSH[4]can also be used for text-based remote access via WMI, RPC, and WS-Management.[5] Most operating system shells fall into one of two categories – command-line and graphical. Command-line shells provide a command-line interface (CLI) to the operating system, while graphical shells provide a graphical user interface (GUI). Other possibilities, although not so common, include a voice user interface and various implementations of a text-based user interface (TUI) that are not CLI, such as text-based menu systems. The relative merits of CLI- and GUI-based shells are often debated. Many computer users use both depending on the task to be performed. Early interactive systems provided a simple command-line interpreter as part of theresident monitor. This interpreter might be called by different names, such asCOMCONon DECTOPS-10systems.[6]The interpreter would execute one of a number of predefined commands, one of which would be to run a user program. Common commands would log the user on and off the system, allocate, free, and manipulate devices and files, and query various pieces of information about the system or a user process.[7] The purpose of such a procedure is to create a medium of exchange into which one could activate any procedure,as if it were called from the inside of another program. Hereafter, for simplification, we shall refer to that procedure as the "SHELL". 
In 1964, for theMulticsoperating system,Louis Pouzinconceived the idea of "using commands somehow like a programming language," and coined the termshellto describe it.[9]In a 1965 document, the shell is defined as "a common procedure called automatically by the supervisor whenever a user types in some message at his console, at a time when he has no other process in active execution under console control. This procedure acts as an interface between console messages and subroutine [in the supervisor]."[10]This system was first implemented byGlenda Schroederand an unnamed man fromGeneral Electric.[11] Multics also introduced theactive function, a key concept in all later shells. This is defined as a string... which is replaced by a character string return value before the command line containing it is executed. Active functions are often used... to implement command-language macros.[12] In 1971,Ken Thompsondeveloped theThompson shellin the first version of Unix. While simpler than the Multics shell, it contained some innovative features, which have been carried forward in modern shells, including the use of < and > for input and outputredirection. The graphical shell first appeared inDouglas Engelbart’sNLSsystem, demonstrated in December, 1968 at theFall Joint Computer Conferencein San Francisco, in what has been calledThe Mother of All Demos. Engelbart’s colleagues atStanford Research Institutebrought the concept to the XeroxPalo Alto Research Center(PARC), where it appeared on theAlto, introduced in 1973. From there the idea spread toNiklaus Wirth’sLilithin 1980, and theAppleLisain 1983, then became ubiquitous. Acommand-line interface(CLI) is an operating system shell that usesalphanumericcharacters typed on a keyboard to provide instructions and data to the operating system, interactively. For example, ateletypewritercan send codes representing keystrokes to a command interpreter program running on the computer; the command interpreter parses the sequence of keystrokes and responds with an error message if it cannot recognize the sequence of characters, or it may carry out some other program action such as loading an application program, listing files, logging in a user and many others. Operating systems such as UNIX have a large variety ofshellprograms with different commands, syntax and capabilities, with thePOSIX shellbeing a baseline. Some operating systems had only a single style of command interface; commodity operating systems such asMS-DOScame with a standard command interface (COMMAND.COM) but third-party interfaces were also often available, providing additional features or functions such as menuing or remote program execution. Application programs may also implement a command-line interface. For example, in Unix-like systems, thetelnetprogram has a number of commands for controlling a link to a remote computer system. Since the commands to the program are made of the same keystrokes as the data being sent to a remote computer, some means of distinguishing the two are required. Anescape sequencecan be defined, using either a special local keystroke that is never passed on but always interpreted by the local system. The program becomes modal, switching between interpreting commands from the keyboard or passing keystrokes on as data to be processed. A feature of many command-line shells is the ability to save sequences of commands for re-use. A data file can contain sequences of commands which the CLI can be made to follow as if typed in by a user. 
Special features in the CLI may apply when it is carrying out these stored instructions. Suchbatch files(script files) can be used repeatedly to automate routine operations such as initializing a set of programs when a system is restarted. Batch mode use of shells usually involves structures, conditionals, variables, and other elements of programming languages; some have the bare essentials needed for such a purpose, others are very sophisticated programming languages in and of themselves. Conversely, some programming languages can be used interactively from an operating system shell or in a purpose-built program. Several command-line shells, such as Nushell, Xonsh,Bash (Unix shell), andZ shell, offercommand-line completion, enabling the interpreter to expand commands based on a few characters input by the user.[13] A command-line interpreter may offer a history function, so that the user can recall earlier commands issued to the system and repeat them, possibly with some editing. Since all commands to the operating system had to be typed by the user, short command names and compact systems for representing program options were common. Short names were sometimes hard for a user to recall, and early systems lacked the storage resources to provide a detailed on-line user instruction guide. A graphical user interface (GUI) provides means for manipulating programs graphically, by allowing for operations such as opening, closing, moving and resizingwindows, as well as switchingfocusbetween windows. Graphical shells may be included withdesktop environmentsor come separately, even as a set of loosely coupled utilities. Most graphical user interfaces develop themetaphor of an "electronic desktop", where data files are represented as if they were paper documents on a desk, and application programs similarly have graphical representations instead of being invoked by command names. Graphical shells typically build on top of awindowing system. In the case ofX Window SystemorWayland, the shell consists of anX window manageror aWayland compositor, respectively, as well as of one or multiple programs providing the functionality to start installed applications, to manage open windows and virtual desktops, and often to support a widget engine. In the case ofmacOS,Quartz Compositoracts as the windowing system, and the shell consists of theFinder,[14]theDock,[14]SystemUIServer,[14]andMission Control.[15] Modern versions of the Microsoft Windows operating system use theWindows shellas their shell. Windows Shell providesdesktop environment,start menu, andtask bar, as well as agraphical user interfacefor accessing the file management functions of the operating system. Older versions also includeProgram Manager, which was the shell for the 3.x series of Microsoft Windows, and which in fact shipped with later versions of Windows of both the 95 and NT types at least through Windows XP. The interfaces of Windows versions 1 and 2 were markedly different. Desktop applications are also considered shells, as long as they use a third-party engine. Likewise, many individuals and developers dissatisfied with the interface of Windows Explorer have developed software that either alters the functioning and appearance of the shell or replaces it entirely.WindowBlindsbyStarDockis a good example of the former sort of application.LiteStepand Emerge Desktop are good examples of the latter. 
Interoperability programmes and purpose-designed software let Windows users use equivalents of many of the various Unix-based GUIs discussed below, as well as Macintosh. An equivalent of the OS/2 Presentation Manager for version 3.0 can run some OS/2 programmes under some conditions using the OS/2 environmental subsystem in versions of Windows NT. "Shell" is also used loosely to describe application software that is "built around" a particular component, such as web browsers and email clients, in analogy to the shells found in nature. Indeed, the (command-line) shell encapsulates the operating system kernel. These are also sometimes referred to as "wrappers".[2] In expert systems, a shell is a piece of software that is an "empty" expert system without the knowledge base for any particular application.[16]
https://en.wikipedia.org/wiki/Shell_(computing)
AUnix shellis acommand-line interpreterorshellthat provides a command lineuser interfaceforUnix-likeoperating systems. The shell is both an interactivecommand languageand ascripting language, and is used by the operating system to control the execution of the system usingshell scripts.[2] Users typically interact with a Unix shell using aterminal emulator; however, direct operation via serial hardware connections orSecure Shellare common for server systems. All Unix shells provide filenamewildcarding,piping,here documents,command substitution,variablesandcontrol structuresforcondition-testinganditeration. Generally, ashellis a program that executes other programs in response to text commands. A sophisticated shell can also change the environment in which other programs execute by passingnamed variables, a parameter list, or an input source. In Unix-like operating systems, users typically have many choices of command-line interpreters for interactive sessions. When a userlogs intothe system interactively, a shell program is automatically executed for the duration of the session. The type of shell, which may be customized for each user, is typically stored in the user's profile, for example in the localpasswdfile or in a distributed configuration system such asNISorLDAP; however, the user may execute any other available shell interactively. On operating systems with awindowing system, such asmacOSand desktopLinux distributions, some users may never use the shell directly. On Unix systems, the shell has historically been the implementation language of system startup scripts, including the program that starts a windowing system, configures networking, and many other essential functions. However, some system vendors have replaced the traditional shell-based startup system (init) with different approaches, such assystemd. The first Unix shell was theThompson shell,sh, written byKen ThompsonatBell Labsand distributed with Versions 1 through 6 of Unix, from 1971 to 1975.[3]Though rudimentary by modern standards, it introduced many of the basic features common to all later Unix shells, including piping, simple control structures usingifandgoto, and filename wildcarding. Though not in current use, it is still available as part of someAncient UNIXsystems. It was modeled after theMulticsshell, developed in 1965 by American software engineerGlenda Schroeder. Schroeder's Multics shell was itself modeled after theRUNCOMprogramLouis Pouzinshowed to the Multics Team. The "rc" suffix on some Unix configuration files (e.g. ''.bashrc" or ".vimrc"), is a remnant of the RUNCOM ancestry of Unix shells.[1][4] ThePWB shellor Mashey shell,sh, was an upward-compatible version of the Thompson shell, augmented byJohn Masheyand others and distributed with theProgrammer's Workbench UNIX, circa 1975–1977. It focused on making shell programming practical, especially in large shared computing centers. It added shell variables (precursors ofenvironment variables, including the search path mechanism that evolved into $PATH), user-executable shell scripts, and interrupt-handling. Control structures were extended from if/goto to if/then/else/endif, switch/breaksw/endsw, and while/end/break/continue. As shell programming became widespread, these external commands were incorporated into the shell itself for performance. But the most widely distributed and influential of the early Unix shells were theBourne shelland theC shell. 
Both shells have been used as the coding base and model for many derivative and work-alike shells with extended feature sets.[5] TheBourne shell,sh, was a new Unix shell byStephen Bourneat Bell Labs.[6]Distributed as the shell for UNIX Version 7 in 1979, it introduced the rest of the basic features considered common to all the later Unix shells, includinghere documents,command substitution, more genericvariablesand more extensive builtincontrol structures. The language, including the use of a reversed keyword to mark the end of a block, was influenced byALGOL 68.[7]Traditionally, the Bourne shell program name isshand its path in the Unix file system hierarchy is/bin/sh. But a number of compatible work-alikes are also available with various improvements and additional features. On many systems, sh may be asymbolic linkorhard linkto one of these alternatives: ThePOSIXstandard specifies its standard shell as a strict subset of theKorn shell, an enhanced version of the Bourne shell. From a user's perspective the Bourne shell was immediately recognized when active by its characteristic default command line prompt character, the dollar sign ($). TheC shell,csh, was modeled on the C programming language, including the control structures and the expression grammar. It was written byBill Joyas a graduate student atUniversity of California, Berkeley, and was widely distributed withBSD Unix.[9][better source needed] The C shell also introduced many features for interactive work, including thehistoryandeditingmechanisms,aliases,directory stacks,tilde notation,cdpath,job controlandpath hashing. On many systems, csh may be asymbolic linkorhard linktoTENEX C shell(tcsh), an improved version of Joy's original version. Although the interactive features of csh have been copied to most other shells, the language structure has not been widely copied. The only work-alike isHamilton C shell, written by Nicole Hamilton, first distributed onOS/2in 1988 and onWindowssince 1992.[10] Shells read configuration files in various circumstances. These files usually contain commands for the shell and are executed when loaded; they are usually used to set important variables used to find executables, like$PATH, and others that control the behavior and appearance of the shell. The table in this section shows the configuration files for popular shells.[11] Explanation: Variations on the Unix shell concept that don't derive from Bourne shell or C shell include the following:[15]
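Stripped of history and features, the core loop that every Unix shell shares — print a prompt, read a command, run it, wait for it to finish — fits in a few lines of C. The sketch below is deliberately bare-bones: it treats each input line as a single program name and omits argument parsing, wildcards, pipes, redirection, variables, and control structures.

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* A deliberately minimal command loop. Real shells add word splitting,
   wildcards, pipes, redirection, variables and control structures. */
int main(void)
{
    char line[256];

    for (;;) {
        printf("$ ");                          /* prompt */
        if (!fgets(line, sizeof line, stdin))
            break;                             /* end of input: exit shell   */
        line[strcspn(line, "\n")] = '\0';
        if (line[0] == '\0')
            continue;

        pid_t pid = fork();
        if (pid == 0) {                        /* child: become the program  */
            execlp(line, line, (char *)NULL);
            perror(line);                      /* only reached if exec fails */
            _exit(127);
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);             /* parent: wait for completion */
        } else {
            perror("fork");
        }
    }
    return 0;
}
```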
https://en.wikipedia.org/wiki/Unix_shell
Inmathematics,division by zero,divisionwhere the divisor (denominator) iszero, is a unique and problematic special case. Usingfractionnotation, the general example can be written asa0{\displaystyle {\tfrac {a}{0}}}, wherea{\displaystyle a}is the dividend (numerator). The usual definition of thequotientinelementary arithmeticis the number which yields the dividend whenmultipliedby the divisor. That is,c=ab{\displaystyle c={\tfrac {a}{b}}}is equivalent toc⋅b=a.{\displaystyle c\cdot b=a.}By this definition, the quotientq=a0{\displaystyle q={\tfrac {a}{0}}}is nonsensical, as the productq⋅0{\displaystyle q\cdot 0}is always0{\displaystyle 0}rather than some other numbera.{\displaystyle a.}Following the ordinary rules ofelementary algebrawhile allowing division by zero can create amathematical fallacy, a subtle mistake leading to absurd results. To prevent this, the arithmetic ofreal numbersand more general numerical structures calledfieldsleaves division by zeroundefined, and situations where division by zero might occur must be treated with care. Since any number multiplied by zero is zero, the expression00{\displaystyle {\tfrac {0}{0}}}is also undefined. Calculusstudies the behavior offunctionsin thelimitas their input tends to some value. When areal functioncan be expressed as a fraction whose denominator tends to zero, the output of the function becomes arbitrarily large, and is said to "tend to infinity", a type ofmathematical singularity. For example, thereciprocal function,f(x)=1x,{\displaystyle f(x)={\tfrac {1}{x}},}tends to infinity asx{\displaystyle x}tends to0.{\displaystyle 0.}When both the numerator and the denominator tend to zero at the same input, the expression is said to take anindeterminate form, as the resulting limit depends on the specific functions forming the fraction and cannot be determined from their separate limits. As an alternative to the common convention of working with fields such as the real numbers and leaving division by zero undefined, it is possible to define the result of division by zero in other ways, resulting in different number systems. For example, the quotienta0{\displaystyle {\tfrac {a}{0}}}can be defined to equal zero; it can be defined to equal a new explicitpoint at infinity, sometimes denoted by theinfinity symbol∞{\displaystyle \infty };or it can be defined to result in signed infinity, with positive or negative sign depending on the sign of the dividend. In these number systems division by zero is no longer a special exception per se, but the point or points at infinity involve their own new types of exceptional behavior. Incomputing, an error may result from an attempt to divide by zero. Depending on the context and the type of number involved, dividing by zero may evaluate topositive or negative infinity, return a specialnot-a-numbervalue, orcrashthe program, among other possibilities. ThedivisionN/D=Q{\displaystyle N/D=Q}can be conceptually interpreted in several ways.[1] Inquotitive division, the dividendN{\displaystyle N}is imagined to be split up into parts of sizeD{\displaystyle D}(the divisor), and the quotientQ{\displaystyle Q}is the number of resulting parts. For example, imagine ten slices of bread are to be made into sandwiches, each requiring two slices of bread. A total of five sandwiches can be made(102=5{\displaystyle {\tfrac {10}{2}}=5}).Now imagine instead that zero slices of bread are required per sandwich (perhaps alettuce wrap). 
Arbitrarily many such sandwiches can be made from ten slices of bread, as the bread is irrelevant.[2] The quotitive concept of division lends itself to calculation by repeatedsubtraction: dividing entails counting how many times the divisor can be subtracted before the dividend runs out. Because no finite number of subtractions of zero will ever exhaust a non-zero dividend, calculating division by zero in this waynever terminates.[3]Such an interminable division-by-zeroalgorithmis physically exhibited by somemechanical calculators.[4] Inpartitive division, the dividendN{\displaystyle N}is imagined to be split intoD{\displaystyle D}parts, and the quotientQ{\displaystyle Q}is the resulting size of each part. For example, imagine ten cookies are to be divided among two friends. Each friend will receive five cookies(102=5{\displaystyle {\tfrac {10}{2}}=5}).Now imagine instead that the ten cookies are to be divided among zero friends. How many cookies will each friend receive? Since there are no friends, this is an absurdity.[5] In another interpretation, the quotientQ{\displaystyle Q}represents theratioN:D.{\displaystyle N:D.}[6]For example, a cake recipe might call for ten cups of flour and two cups of sugar, a ratio of10:2{\displaystyle 10:2}or, proportionally,5:1.{\displaystyle 5:1.}To scale this recipe to larger or smaller quantities of cake, a ratio of flour to sugar proportional to5:1{\displaystyle 5:1}could be maintained, for instance one cup of flour and one-fifth cup of sugar, or fifty cups of flour and ten cups of sugar.[7]Now imagine a sugar-free cake recipe calls for ten cups of flour and zero cups of sugar. The ratio10:0,{\displaystyle 10:0,}or proportionally1:0,{\displaystyle 1:0,}is perfectly sensible:[8]it just means that the cake has no sugar. However, the question "How many parts flour for each part sugar?" still has no meaningful numerical answer. A geometrical appearance of the division-as-ratio interpretation is theslopeof astraight linein theCartesian plane.[9]The slope is defined to be the "rise" (change in vertical coordinate) divided by the "run" (change in horizontal coordinate) along the line. When this is written using the symmetrical ratio notation, a horizontal line has slope0:1{\displaystyle 0:1}and a vertical line has slope1:0.{\displaystyle 1:0.}However, if the slope is taken to be a singlereal numberthen a horizontal line has slope01=0{\displaystyle {\tfrac {0}{1}}=0}while a vertical line has an undefined slope, since in real-number arithmetic the quotient10{\displaystyle {\tfrac {1}{0}}}is undefined.[10]The real-valued slopeyx{\displaystyle {\tfrac {y}{x}}}of a line through the origin is the vertical coordinate of theintersectionbetween the line and a vertical line at horizontal coordinate1,{\displaystyle 1,}dashed black in the figure. The vertical red and dashed black lines areparallel, so they have no intersection in the plane. Sometimes they are said to intersect at apoint at infinity, and the ratio1:0{\displaystyle 1:0}is represented by a new number∞{\displaystyle \infty };[11]see§ Projectively extended real linebelow. Vertical lines are sometimes said to have an "infinitely steep" slope. 
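The never-terminating character of repeated subtraction by zero, noted above, is easy to see when the procedure is written out. The loop below is capped only so the demonstration halts; with a divisor of zero the cap is always reached, however large it is chosen. The function name and the cap are inventions for the sake of the example.

```c
#include <stdio.h>

/* Division by repeated subtraction, as performed by some mechanical
   calculators. The iteration cap exists only so this demonstration
   halts; conceptually, subtracting zero never exhausts the dividend. */
long divide_by_subtraction(long dividend, long divisor, long max_steps)
{
    long quotient = 0;
    while (dividend >= divisor && quotient < max_steps) {
        dividend -= divisor;   /* with divisor 0, dividend never shrinks */
        quotient++;
    }
    return quotient;
}

int main(void)
{
    printf("10 / 2 -> %ld\n", divide_by_subtraction(10, 2, 1000));  /* 5            */
    printf("10 / 0 -> %ld\n", divide_by_subtraction(10, 0, 1000));  /* hits the cap */
    return 0;
}
```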
Division is the inverse ofmultiplication, meaning that multiplying and then dividing by the same non-zero quantity, or vice versa, leaves an original quantity unchanged; for example(5×3)/3={\displaystyle (5\times 3)/3={}}(5/3)×3=5{\displaystyle (5/3)\times 3=5}.[12]Thus a division problem such as63=?{\displaystyle {\tfrac {6}{3}}={?}}can be solved by rewriting it as an equivalent equation involving multiplication,?×3=6,{\displaystyle {?}\times 3=6,}where?{\displaystyle {?}}represents the same unknown quantity, and then finding the value for which the statement is true; in this case the unknown quantity is2,{\displaystyle 2,}because2×3=6,{\displaystyle 2\times 3=6,}so therefore63=2.{\displaystyle {\tfrac {6}{3}}=2.}[13] An analogous problem involving division by zero,60=?,{\displaystyle {\tfrac {6}{0}}={?},}requires determining an unknown quantity satisfying?×0=6.{\displaystyle {?}\times 0=6.}However, any number multiplied by zero is zero rather than six, so there exists no number which can substitute for?{\displaystyle {?}}to make a true statement.[14] When the problem is changed to00=?,{\displaystyle {\tfrac {0}{0}}={?},}the equivalent multiplicative statement is?×0=0{\displaystyle {?}\times 0=0};in this caseanyvalue can be substituted for the unknown quantity to yield a true statement, so there is no single number which can be assigned as the quotient00.{\displaystyle {\tfrac {0}{0}}.} Because of these difficulties, quotients where the divisor is zero are traditionally taken to beundefined, and division by zero is not allowed.[15][16] A compelling reason for not allowing division by zero is that allowing it leads tofallacies. When working with numbers, it is easy to identify an illegal division by zero. For example: The fallacy here arises from the assumption that it is legitimate to cancel0like any other number, whereas, in fact, doing so is a form of division by0. Usingalgebra, it is possible to disguise a division by zero[17]to obtain aninvalid proof. For example:[18] This is essentially the same fallacious computation as the previous numerical version, but the division by zero was obfuscated because we wrote0asx− 1. TheBrāhmasphuṭasiddhāntaofBrahmagupta(c. 598–668) is the earliest text to treatzeroas a number in its own right and to define operations involving zero.[17]According to Brahmagupta, A positive or negative number when divided by zero is a fraction with the zero as denominator. Zero divided by a negative or positive number is either zero or is expressed as a fraction with zero as numerator and the finite quantity as denominator. Zero divided by zero is zero. In 830,Mahāvīraunsuccessfully tried to correct the mistake Brahmagupta made in his bookGanita Sara Samgraha: "A number remains unchanged when divided by zero."[17] Bhāskara II'sLīlāvatī(12th century) proposed that division by zero results in an infinite quantity,[19] A quantity divided by zero becomes a fraction the denominator of which is zero. This fraction is termed an infinite quantity. In this quantity consisting of that which has zero for its divisor, there is no alteration, though many may be inserted or extracted; as no change takes place in the infinite and immutable God when worlds are created or destroyed, though numerous orders of beings are absorbed or put forth. 
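The algebraic disguise referred to above is usually presented along the following lines (a standard reconstruction of this kind of argument rather than a quotation, with x = 1 so that x − 1 is the concealed zero):

```latex
\begin{align*}
\text{Let } x &= 1.\\
x^2 &= x             && \text{multiply both sides by } x\\
x^2 - 1 &= x - 1     && \text{subtract } 1\\
(x-1)(x+1) &= x - 1  && \text{factor the left-hand side}\\
x + 1 &= 1           && \text{divide both sides by } x - 1 \text{, i.e.\ by } 0\\
2 &= 1               && \text{substitute } x = 1.
\end{align*}
```

Every step except the division by x − 1 is legitimate, which is exactly why the error is harder to spot than in the purely numerical version.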
Historically, one of the earliest recorded references to the mathematical impossibility of assigning a value toa0{\textstyle {\tfrac {a}{0}}}is contained inAnglo-IrishphilosopherGeorge Berkeley's criticism ofinfinitesimal calculusin 1734 inThe Analyst("ghosts of departed quantities").[20] Calculusstudies the behavior offunctionsusing the concept of alimit, the value to which a function's output tends as its input tends to some specific value. The notationlimx→cf(x)=L{\textstyle \lim _{x\to c}f(x)=L}means that the value of the functionf{\displaystyle f}can be made arbitrarily close toL{\displaystyle L}by choosingx{\displaystyle x}sufficiently close toc.{\displaystyle c.} In the case where the limit of thereal functionf{\displaystyle f}increases without bound asx{\displaystyle x}tends toc,{\displaystyle c,}the function is not defined atx,{\displaystyle x,}a type ofmathematical singularity. Instead, the function is said to "tend to infinity", denotedlimx→cf(x)=∞,{\textstyle \lim _{x\to c}f(x)=\infty ,}and itsgraphhas the linex=c{\displaystyle x=c}as a verticalasymptote. While such a function is not formally defined forx=c,{\displaystyle x=c,}and theinfinity symbol∞{\displaystyle \infty }in this case does not represent any specificreal number, such limits are informally said to "equal infinity". If the value of the function decreases without bound, the function is said to "tend to negative infinity",−∞.{\displaystyle -\infty .}In some cases a function tends to two different values whenx{\displaystyle x}tends toc{\displaystyle c}from above(x→c+{\displaystyle x\to c^{+}})and below(x→c−{\displaystyle x\to c^{-}}); such a function has two distinctone-sided limits.[21] A basic example of an infinite singularity is thereciprocal function,f(x)=1/x,{\displaystyle f(x)=1/x,}which tends to positive or negative infinity asx{\displaystyle x}tends to0{\displaystyle 0}: limx→0+1x=+∞,limx→0−1x=−∞.{\displaystyle \lim _{x\to 0^{+}}{\frac {1}{x}}=+\infty ,\qquad \lim _{x\to 0^{-}}{\frac {1}{x}}=-\infty .} In most cases, the limit of a quotient of functions is equal to the quotient of the limits of each function separately, limx→cf(x)g(x)=limx→cf(x)limx→cg(x).{\displaystyle \lim _{x\to c}{\frac {f(x)}{g(x)}}={\frac {\displaystyle \lim _{x\to c}f(x)}{\displaystyle \lim _{x\to c}g(x)}}.} However, when a function is constructed by dividing two functions whose separate limits are both equal to0,{\displaystyle 0,}then the limit of the result cannot be determined from the separate limits, so is said to take anindeterminate form, informally written00.{\displaystyle {\tfrac {0}{0}}.}(Another indeterminate form,∞∞,{\displaystyle {\tfrac {\infty }{\infty }},}results from dividing two functions whose limits both tend to infinity.) Such a limit may equal any real value, may tend to infinity, or may not converge at all, depending on the particular functions. 
For example, in limx→1x2−1x−1,{\displaystyle \lim _{x\to 1}{\dfrac {x^{2}-1}{x-1}},} the separate limits of the numerator and denominator are0{\displaystyle 0}, so we have the indeterminate form00{\displaystyle {\tfrac {0}{0}}}, but simplifying the quotient first shows that the limit exists: limx→1x2−1x−1=limx→1(x−1)(x+1)x−1=limx→1(x+1)=2.{\displaystyle \lim _{x\to 1}{\frac {x^{2}-1}{x-1}}=\lim _{x\to 1}{\frac {(x-1)(x+1)}{x-1}}=\lim _{x\to 1}(x+1)=2.} Theaffinely extended real numbersare obtained from thereal numbersR{\displaystyle \mathbb {R} }by adding two new numbers+∞{\displaystyle +\infty }and−∞,{\displaystyle -\infty ,}read as "positive infinity" and "negative infinity" respectively, and representingpoints at infinity. With the addition of±∞,{\displaystyle \pm \infty ,}the concept of a "limit at infinity" can be made to work like a finite limit. When dealing with both positive and negative extended real numbers, the expression1/0{\displaystyle 1/0}is usually left undefined. However, in contexts where only non-negative values are considered, it is often convenient to define1/0=+∞{\displaystyle 1/0=+\infty }. The setR∪{∞}{\displaystyle \mathbb {R} \cup \{\infty \}}is theprojectively extended real line, which is aone-point compactificationof the real line. Here∞{\displaystyle \infty }means an unsigned infinity orpoint at infinity, an infinite quantity that is neither positive nor negative. This quantity satisfies−∞=∞{\displaystyle -\infty =\infty }, which is necessary in this context. In this structure,a0=∞{\displaystyle {\frac {a}{0}}=\infty }can be defined for nonzeroa, anda∞=0{\displaystyle {\frac {a}{\infty }}=0}whenais not∞{\displaystyle \infty }. It is the natural way to view the range of thetangent functionand cotangent functions oftrigonometry:tan(x)approaches the single point at infinity asxapproaches either+⁠π/2⁠or−⁠π/2⁠from either direction. This definition leads to many interesting results. However, the resulting algebraic structure is not afield, and should not be expected to behave like one. For example,∞+∞{\displaystyle \infty +\infty }is undefined in this extension of the real line. The subject ofcomplex analysisapplies the concepts of calculus in thecomplex numbers. Of major importance in this subject is theextended complex numbersC∪{∞},{\displaystyle \mathbb {C} \cup \{\infty \},}the set of complex numbers with a single additional number appended, usually denoted by theinfinity symbol∞{\displaystyle \infty }and representing apoint at infinity, which is defined to be contained in everyexterior domain, making those itstopologicalneighborhoods. This can intuitively be thought of as wrapping up the infinite edges of the complex plane and pinning them together at the single point∞,{\displaystyle \infty ,}aone-point compactification, making the extended complex numbers topologically equivalent to asphere. This equivalence can be extended to a metrical equivalence by mapping each complex number to a point on the sphere via inversestereographic projection, with the resultingspherical distanceapplied as a new definition of distance between complex numbers; and in general the geometry of the sphere can be studied using complex arithmetic, and conversely complex arithmetic can be interpreted in terms of spherical geometry. As a consequence, the set of extended complex numbers is often called theRiemann sphere. 
The set is usually denoted by the symbol for the complex numbers decorated by an asterisk, overline, tilde, or circumflex, for exampleC^=C∪{∞}.{\displaystyle {\hat {\mathbb {C} }}=\mathbb {C} \cup \{\infty \}.} In the extended complex numbers, for any nonzero complex numberz,{\displaystyle z,}ordinary complex arithmetic is extended by the additional rulesz0=∞,{\displaystyle {\tfrac {z}{0}}=\infty ,}z∞=0,{\displaystyle {\tfrac {z}{\infty }}=0,}∞+0=∞,{\displaystyle \infty +0=\infty ,}∞+z=∞,{\displaystyle \infty +z=\infty ,}∞⋅z=∞.{\displaystyle \infty \cdot z=\infty .}However,00{\displaystyle {\tfrac {0}{0}}},∞∞{\displaystyle {\tfrac {\infty }{\infty }}}, and0⋅∞{\displaystyle 0\cdot \infty }are left undefined. The four basic operations – addition, subtraction, multiplication and division – as applied to whole numbers (positive integers), with some restrictions, in elementary arithmetic are used as a framework to support the extension of the realm of numbers to which they apply. For instance, to make it possible to subtract any whole number from another, the realm of numbers must be expanded to the entire set ofintegersin order to incorporate the negative integers. Similarly, to support division of any integer by any other, the realm of numbers must expand to therational numbers. During this gradual expansion of the number system, care is taken to ensure that the "extended operations", when applied to the older numbers, do not produce different results. Loosely speaking, since division by zero has no meaning (isundefined) in the whole number setting, this remains true as the setting expands to therealor evencomplex numbers.[22] As the realm of numbers to which these operations can be applied expands there are also changes in how the operations are viewed. For instance, in the realm of integers, subtraction is no longer considered a basic operation since it can be replaced by addition of signed numbers.[23]Similarly, when the realm of numbers expands to include the rational numbers, division is replaced by multiplication by certain rational numbers. In keeping with this change of viewpoint, the question, "Why can't we divide by zero?", becomes "Why can't a rational number have a zero denominator?". Answering this revised question precisely requires close examination of the definition of rational numbers. In the modern approach to constructing the field of real numbers, the rational numbers appear as an intermediate step in the development that is founded onset theory. First, the natural numbers (including zero) are established on an axiomatic basis such asPeano's axiom systemand then this is expanded to thering of integers. The next step is to define the rational numbers keeping in mind that this must be done using only the sets and operations that have already been established, namely, addition, multiplication and the integers. Starting with the set ofordered pairsof integers,{(a,b)}withb≠ 0, define abinary relationon this set by(a,b) ≃ (c,d)if and only ifad=bc. This relation is shown to be anequivalence relationand itsequivalence classesare then defined to be the rational numbers. It is in the formal proof that this relation is an equivalence relation that the requirement that the second coordinate is not zero is needed (for verifyingtransitivity).[24][25][26] Although division by zero cannot be sensibly defined with real numbers and integers, it is possible to consistently define it, or similar operations, in other mathematical structures. 
In thehyperreal numbers, division by zero is still impossible, but division by non-zeroinfinitesimalsis possible.[27]The same holds true in thesurreal numbers.[28] Indistribution theoryone can extend the function1x{\textstyle {\frac {1}{x}}}to a distribution on the whole space of real numbers (in effect by usingCauchy principal values). It does not, however, make sense to ask for a "value" of this distribution atx= 0; a sophisticated answer refers to thesingular supportof the distribution. Inmatrixalgebra, square or rectangular blocks of numbers are manipulated as though they were numbers themselves: matrices can beaddedandmultiplied, and in some cases, a version of division also exists. Dividing by a matrix means, more precisely, multiplying by itsinverse. Not all matrices have inverses.[29]For example, amatrix containing only zerosis not invertible. One can define a pseudo-division, by settinga/b=ab+, in whichb+represents thepseudoinverseofb. It can be proven that ifb−1exists, thenb+=b−1. Ifbequals 0, then b+= 0. Inabstract algebra, the integers, the rational numbers, the real numbers, and the complex numbers can be abstracted to more general algebraic structures, such as acommutative ring, which is a mathematical structure where addition, subtraction, and multiplication behave as they do in the more familiar number systems, but division may not be defined. Adjoining a multiplicative inverses to a commutative ring is calledlocalization. However, the localization of every commutative ring at zero is thetrivial ring, where0=1{\displaystyle 0=1}, so nontrivial commutative rings do not have inverses at zero, and thus division by zero is undefined for nontrivial commutative rings. Nevertheless, any number system that forms acommutative ringcan be extended to a structure called awheelin which division by zero is always possible.[30]However, the resulting mathematical structure is no longer a commutative ring, as multiplication no longer distributes over addition. Furthermore, in a wheel, division of an element by itself no longer results in the multiplicative identity element1{\displaystyle 1}, and if the original system was anintegral domain, the multiplication in the wheel no longer results in acancellative semigroup. The concepts applied to standard arithmetic are similar to those in more general algebraic structures, such asringsandfields. In a field, every nonzero element is invertible under multiplication; as above, division poses problems only when attempting to divide by zero. This is likewise true in askew field(which for this reason is called adivision ring). However, in other rings, division by nonzero elements may also pose problems. For example, the ringZ/6Zof integers mod 6. The meaning of the expression22{\textstyle {\frac {2}{2}}}should be the solutionxof the equation2x=2{\displaystyle 2x=2}. But in the ringZ/6Z, 2 is azero divisor. This equation has two distinct solutions,x= 1andx= 4, so the expression22{\textstyle {\frac {2}{2}}}isundefined. In field theory, the expressionab{\textstyle {\frac {a}{b}}}is only shorthand for the formal expressionab−1, whereb−1is the multiplicative inverse ofb. Since the field axioms only guarantee the existence of such inverses for nonzero elements, this expression has no meaning whenbis zero. Modern texts, that define fields as a special type of ring, include the axiom0 ≠ 1for fields (or its equivalent) so that thezero ringis excluded from being a field. 
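The Z/6Z computation above is easy to verify by brute force. The following illustrative Python sketch enumerates the solutions of 2x = 2 in the integers mod 6 and shows why no single value can serve as 2/2 there:

```python
# In Z/6Z, "2/2" would have to be the x solving 2*x = 2 (mod 6).
solutions = [x for x in range(6) if (2 * x) % 6 == 2]
print(solutions)  # [1, 4] -- two distinct solutions, so the quotient is undefined

# 2 is a zero divisor in Z/6Z: it is nonzero, yet its product with
# the nonzero element 3 is zero.
print((2 * 3) % 6)  # 0
```

The two solutions, 1 and 4, differ by 3, an element annihilated by the zero divisor 2, which is exactly why cancellation fails.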
In the zero ring, division by zero is possible, which shows that the other field axioms are not sufficient to exclude division by zero in a field. In computing, most numerical calculations are done withfloating-point arithmetic, which since the 1980s has been standardized by theIEEE 754specification. In IEEE floating-point arithmetic, numbers are represented using a sign (positive or negative), a fixed-precisionsignificandand an integerexponent. Numbers whose exponent is too large to represent instead "overflow" to positive or negativeinfinity(+∞ or −∞), while numbers whose exponent is too small to represent instead "underflow" topositive or negative zero(+0 or −0). ANaN(not a number) value represents undefined results. In IEEE arithmetic, division of 0/0 or ∞/∞ results in NaN, but otherwise division always produces a well-defined result. Dividing any non-zero number by positive zero (+0) results in an infinity of the same sign as the dividend. Dividing any non-zero number bynegative zero(−0) results in an infinity of the opposite sign as the dividend. This definition preserves the sign of the result in case ofarithmetic underflow.[31] For example, using single-precision IEEE arithmetic, ifx= −2−149, thenx/2 underflows to −0, and dividing 1 by this result produces 1/(x/2) = −∞. The exact result −2150is too large to represent as a single-precision number, so an infinity of the same sign is used instead to indicate overflow. Integerdivision by zero is usually handled differently from floating point since there is no integer representation for the result.CPUsdiffer in behavior: for instancex86processors trigger ahardware exception, whilePowerPCprocessors silently generate an incorrect result for the division and continue, andARMprocessors can either cause a hardware exception or return zero.[32]Because of this inconsistency between platforms, theCandC++programming languagesconsider the result of dividing by zeroundefined behavior.[33]In typicalhigher-level programming languages, such asPython,[34]anexceptionis raised for attempted division by zero, which can be handled in another part of the program. Manyproof assistants, such asRocq(previously known asCoq) andLean, define 1/0 = 0. This is due to the requirement that all functions aretotal. Such a definition does not create contradictions, as further manipulations (such ascancelling out) still require that the divisor is non-zero.[35][36]
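The floating-point and exception behaviours described above can be observed directly. The sketch below is illustrative Python; it relies on NumPy for the IEEE 754 division semantics, since plain Python raises ZeroDivisionError for both integer and float division by zero rather than returning an infinity.

```python
import numpy as np

# NumPy follows IEEE 754 semantics for floating-point division.
with np.errstate(divide="ignore", invalid="ignore"):
    print(np.float64(1.0) / np.float64(0.0))    # inf  (same sign as the dividend)
    print(np.float64(-1.0) / np.float64(0.0))   # -inf
    print(np.float64(1.0) / np.float64(-0.0))   # -inf (negative zero flips the sign)
    print(np.float64(0.0) / np.float64(0.0))    # nan  (0/0 is left undefined)

# Python's own division instead raises an exception, which can be
# caught and handled in another part of the program.
try:
    1 / 0
except ZeroDivisionError as exc:
    print("caught:", exc)
```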
https://en.wikipedia.org/wiki/Divide_by_zero#Divide_error
Incomputeroperating systems,memory pagingis amemory managementscheme that allows the physical memory used by a program to be non-contiguous.[1]This also helps avoid the problem of memory fragmentation and requiring compaction to reduce fragmentation. It is often combined with the related technique of allocating and freeingpage framesand storing pages on and retrieving them fromsecondary storage[a]in order to allow the aggregate size of the address spaces to exceed the physical memory of the system.[2]For historical reasons, this technique is sometimes referred to as "swapping". When combined withvirtual memory, it is known aspaged virtual memory. In this scheme, the operating system retrieves data from secondary storage inblocksof the same size. These blocks are calledpages. Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory. Hardware support is necessary for efficient translation of logical addresses tophysical addresses. As such, paged memory functionality is usually hardwired into a CPU through itsMemory Management Unit(MMU) orMemory Protection Unit(MPU), and separately enabled by privileged system code in theoperating system'skernel. In CPUs implementing thex86instruction set architecture(ISA) for instance, the memory paging is enabled via the CR0control register. In the 1960s, swapping was an early virtual memory technique. An entire program or entiresegmentwould be "swapped out" (or "rolled out") from RAM to disk or drum, and another one would beswapped in(orrolled in).[3][4]A swapped-out program would be current but its execution would be suspended while its RAM was in use by another program; a program with a swapped-out segment could continue running until it needed that segment, at which point it would be suspended until the segment was swapped in. A program might include multipleoverlaysthat occupy the same memory at different times. Overlays are not a method of paging RAM to secondary storage[a]but merely of minimizing the program's RAM use. Subsequent architectures usedmemory segmentation, and individual program segments became the units exchanged between secondary storage and RAM. A segment was the program's entire code segment or data segment, or sometimes other large data structures. These segments had to becontiguouswhen resident in RAM, requiring additional computation and movement to remedyfragmentation.[5] Ferranti'sAtlas, and theAtlas Supervisordeveloped at theUniversity of Manchester,[6](1962), was the first system to implement memory paging. Subsequent early machines, and their operating systems, supporting paging include theIBM M44/44Xand its MOS operating system (1964),[7]theSDS 940[8]and theBerkeley Timesharing System(1966), a modifiedIBM System/360 Model 40and theCP-40operating system (1967), theIBM System/360 Model 67and operating systems such asTSS/360andCP/CMS(1967), theRCA 70/46and theTime Sharing Operating System(1967), theGE 645andMultics(1969), and thePDP-10with addedBBN-designed paging hardware and theTENEXoperating system (1969). Those machines, and subsequent machines supporting memory paging, use either a set ofpage address registersor in-memorypage tables[d]to allow the processor to operate on arbitrary pages anywhere in RAM as a seemingly contiguouslogical addressspace. These pages became the units exchanged between secondary storage[a]and RAM. 
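The translation performed by the MMU can be sketched in a few lines. The example below is an illustrative Python model, not any particular hardware's page-table format; it assumes 4 KiB pages (a common x86 page size), and the page-table contents are invented for the illustration.

```python
PAGE_SIZE = 4096  # assumed page size (4 KiB)

# Illustrative per-process page table: virtual page number -> physical frame number.
page_table = {0: 7, 1: 3, 2: 9}

def translate(virtual_address):
    """Split a virtual address into (page, offset) and map it to a physical address."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        raise LookupError(f"page fault: virtual page {page} is not resident")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))   # virtual page 1 -> frame 3, same offset: 0x3ABC
try:
    translate(0x5000)           # page 5 is not mapped
except LookupError as exc:
    print(exc)
```

In a real system the unmapped reference would not raise an error in the program; it would transfer control to the operating system as a page fault, as described next.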
When a process tries to reference a page not currently mapped to apage framein RAM, the processor treats this invalid memory reference as apage faultand transfers control from the program to the operating system. The operating system must: When all page frames are in use, the operating system must select a page frame to reuse for the page the program now needs. If the evicted page frame wasdynamically allocatedby a program to hold data, or if a program modified it since it was read into RAM (in other words, if it has become "dirty"), it must be written out to secondary storage before being freed. If a program later references the evicted page, another page fault occurs and the page must be read back into RAM. The method the operating system uses to select the page frame to reuse, which is itspage replacement algorithm, affects efficiency. The operating system predicts the page frame least likely to be needed soon, often through theleast recently used(LRU) algorithm or an algorithm based on the program'sworking set. To further increase responsiveness, paging systems may predict which pages will be needed soon, preemptively loading them into RAM before a program references them, and may steal page frames from pages that have been unreferenced for a long time, making them available. Some systems clear new pages to avoid data leaks that compromise security; some set them to installation defined or random values to aid debugging. When pure demand paging is used, pages are loaded only when they are referenced. A program from a memory mapped file begins execution with none of its pages in RAM. As the program commits page faults, the operating system copies the needed pages from a file, e.g.,memory-mapped file, paging file, or a swap partition containing the page data into RAM. Some systems use onlydemand paging—waiting until a page is actually requested before loading it into RAM. Other systems attempt to reduce latency by guessing which pages not in RAM are likely to be needed soon, and pre-loading such pages into RAM, before that page is requested. (This is often in combination with pre-cleaning, which guesses which pages currently in RAM are not likely to be needed soon, and pre-writing them out to storage). When a page fault occurs, anticipatory paging systems will not only bring in the referenced page, but also other pages that are likely to be referenced soon. A simple anticipatory paging algorithm will bring in the next few consecutive pages even though they are not yet needed (a prediction usinglocality of reference); this is analogous to aprefetch input queuein a CPU. Swap prefetching will prefetch recently swapped-out pages if there are enough free pages for them.[9] If a program ends, the operating system may delay freeing its pages, in case the user runs the same program again. Some systems allow application hints; the application may request that a page be made available and continue without delay. The free page queue is a list of page frames that are available for assignment. Preventing this queue from being empty minimizes the computing necessary to service a page fault. Some operating systems periodically look for pages that have not been recently referenced and then free the page frame and add it to the free page queue, a process known as "page stealing". 
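A page replacement policy such as the least recently used algorithm mentioned above can be simulated in a few lines of illustrative Python. The reference string and frame counts below are invented for the example; the sketch models the policy itself, not any particular operating system.

```python
from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    """Count page faults under least-recently-used replacement."""
    frames = OrderedDict()              # resident pages, least recently used first
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                     # miss: the page must be brought in
            if len(frames) >= num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults

refs = [1, 2, 3, 2, 4, 1, 5, 2, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))
print(lru_faults(refs, 4))  # more frames generally means fewer faults
```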
Some operating systems[e]supportpage reclamation; if a program commits a page fault by referencing a page that was stolen, the operating system detects this and restores the page frame without having to read the contents back into RAM. The operating system may periodically pre-clean dirty pages: write modified pages back to secondary storage[a]even though they might be further modified. This minimizes the amount of cleaning needed to obtain new page frames at the moment a new program starts or a new data file is opened, and improves responsiveness. (Unix operating systems periodically usesyncto pre-clean all dirty pages; Windows operating systems use "modified page writer" threads.) Some systems allow application hints; the application may request that a page be cleared or paged out and continue without delay. After completing initialization, most programs operate on a small number of code and data pages compared to the total memory the program requires. The pages most frequently accessed are called theworking set. When the working set is a small percentage of the system's total number of pages, virtual memory systems work most efficiently and an insignificant amount of computing is spent resolving page faults. As the working set grows, resolving page faults remains manageable until the growth reaches a critical point. Then faults go up dramatically and the time spent resolving them overwhelms time spent on the computing the program was written to do. This condition is referred to asthrashing. Thrashing occurs on a program that works with huge data structures, as its large working set causes continual page faults that drastically slow down the system. Satisfying page faults may require freeing pages that will soon have to be re-read from secondary storage.[a]"Thrashing" is also used in contexts other than virtual memory systems; for example, to describecacheissues in computing orsilly window syndromein networking. A worst case might occur onVAXprocessors. A single MOVL crossing a page boundary could have a source operand using a displacement deferred addressing mode, where the longword containing the operand address crosses a page boundary, and a destination operand using a displacement deferred addressing mode, where the longword containing the operand address crosses a page boundary, and the source and destination could both cross page boundaries. This single instruction references ten pages; if not all are in RAM, each will cause a page fault. As each fault occurs the operating system needs to go through the extensive memory management routines perhaps causing multiple I/Os which might include writing other process pages to disk and reading pages of the active process from disk. If the operating system could not allocate ten pages to this program, then remedying the page fault would discard another page the instruction needs, and any restart of the instruction would fault again. To decrease excessive paging and resolve thrashing problems, a user can increase the number of pages available per program, either by running fewer programs concurrently or increasing the amount of RAM in the computer. Inmulti-programmingor in amulti-userenvironment, many users may execute the same program, written so that its code and data are in separate pages. To minimize RAM use, all users share a single copy of the program. 
Each process'spage tableis set up so that the pages that address code point to the single shared copy, while the pages that address data point to different physical pages for each process. Different programs might also use the same libraries. To save space, only one copy of the shared library is loaded into physical memory. Programs which use the same library have virtual addresses that map to the same pages (which contain the library's code and data). When programs want to modify the library's code, they usecopy-on-write, so memory is only allocated when needed. Shared memory is an efficient means of communication between programs. Programs can share pages in memory, and then write and read to exchange data. The first computer to support paging was the supercomputerAtlas,[10][11][12]jointly developed byFerranti, theUniversity of ManchesterandPlesseyin 1963. The machine had an associative (content-addressable) memory with one entry for each 512 word page. The Supervisor[13]handled non-equivalence interruptions[f]and managed the transfer of pages between core and drum in order to provide a one-level store[14]to programs. Paging has been a feature ofMicrosoft WindowssinceWindows 3.0in 1990. Windows 3.x creates ahidden filenamed386SPART.PARorWIN386.SWPfor use as a swap file. It is generally found in theroot directory, but it may appear elsewhere (typically in the WINDOWS directory). Its size depends on how much swap space the system has (a setting selected by the user underControl Panel→ Enhanced under "Virtual Memory"). If the user moves or deletes this file, ablue screenwill appear the next time Windows is started, with theerror message"The permanent swap file is corrupt". The user will be prompted to choose whether or not to delete the file (even if it does not exist). Windows 95,Windows 98andWindows Meuse a similar file, and the settings for it are located under Control Panel → System → Performance tab → Virtual Memory. Windows automatically sets the size of the page file to start at 1.5× the size of physical memory, and expand up to 3× physical memory if necessary. If a user runs memory-intensive applications on a system with low physical memory, it is preferable to manually set these sizes to a value higher than default. The file used for paging in theWindows NTfamily ispagefile.sys. The default location of the page file is in the root directory of the partition where Windows is installed. Windows can be configured to use free space on any available drives for page files. It is required, however, for the boot partition (i.e., the drive containing the Windows directory) to have a page file on it if the system is configured to write either kernel or full memory dumps after aBlue Screen of Death. Windows uses the paging file as temporary storage for the memory dump. When the system is rebooted, Windows copies the memory dump from the page file to a separate file and frees the space that was used in the page file.[15] In the default configuration of Windows, the page file is allowed to expand beyond its initial allocation when necessary. If this happens gradually, it can become heavilyfragmentedwhich can potentially cause performance problems.[16]The common advice given to avoid this is to set a single "locked" page file size so that Windows will not expand it. 
However, the page file only expands when it has been filled, which, in its default configuration, is 150% of the total amount of physical memory.[17]Thus the total demand for page file-backed virtual memory must exceed 250% of the computer's physical memory before the page file will expand. The fragmentation of the page file that occurs when it expands is temporary. As soon as the expanded regions are no longer in use (at the next reboot, if not sooner) the additional disk space allocations are freed and the page file is back to its original state. Locking a page file size can be problematic if a Windows application requests more memory than the total size of physical memory and the page file, leading to failed requests to allocate memory that may cause applications and system processes to fail. Also, the page file is rarely read or written in sequential order, so the performance advantage of having a completely sequential page file is minimal. However, a large page file generally allows the use of memory-heavy applications, with no penalties besides using more disk space. While a fragmented page file may not be an issue by itself, fragmentation of a variable size page file will over time create several fragmented blocks on the drive, causing other files to become fragmented. For this reason, a fixed-size contiguous page file is better, providing that the size allocated is large enough to accommodate the needs of all applications. The required disk space may be easily allocated on systems with more recent specifications (i.e. a system with 3 GB of memory having a 6 GB fixed-size page file on a 750 GB disk drive, or a system with 6 GB of memory and a 16 GB fixed-size page file and 2 TB of disk space). In both examples, the system uses about 0.8% of the disk space with the page file pre-extended to its maximum. Defragmentingthe page file is also occasionally recommended to improve performance when a Windows system is chronically using much more memory than its total physical memory.[18]This view ignores the fact that, aside from the temporary results of expansion, the page file does not become fragmented over time. In general, performance concerns related to page file access are much more effectively dealt with by adding more physical memory. Unixsystems, and otherUnix-likeoperating systems, use the term "swap" to describe the act of substituting disk space for RAM when physical RAM is full.[19]In some of those systems, it is common to dedicate an entire partition of a hard disk to swapping. These partitions are calledswap partitions. Many systems have an entire hard drive dedicated to swapping, separate from the data drive(s), containing only a swap partition. A hard drive dedicated to swapping is called a "swap drive" or a "scratch drive" or a "scratch disk". Some of those systems only support swapping to a swap partition; others also support swapping to files. The Linux kernel supports a virtually unlimited number of swap backends (devices or files), and also supports assignment of backend priorities. When the kernel swaps pages out of physical memory, it uses the highest-priority backend with available free space. 
If multiple swap backends are assigned the same priority, they are used in around-robinfashion (which is somewhat similar toRAID 0storage layouts), providing improved performance as long as the underlying devices can be efficiently accessed in parallel.[20] From the end-user perspective, swap files in versions 2.6.x and later of the Linux kernel are virtually as fast as swap partitions; the limitation is that swap files should be contiguously allocated on their underlying file systems. To increase performance of swap files, the kernel keeps a map of where they are placed on underlying devices and accesses them directly, thus bypassing the cache and avoiding filesystem overhead.[21][22]When residing on HDDs, which are rotational magnetic media devices, one benefit of using swap partitions is the ability to place them on contiguous HDD areas that provide higher data throughput or faster seek time. However, the administrative flexibility of swap files can outweigh certain advantages of swap partitions. For example, a swap file can be placed on any mounted file system, can be set to any desired size, and can be added or changed as needed. Swap partitions are not as flexible; they cannot be enlarged without using partitioning orvolume managementtools, which introduce various complexities and potential downtimes. Swappinessis aLinux kernelparameter that controls the relative weight given toswapping outofruntime memory, as opposed to droppingpagesfrom the systempage cache, whenever a memory allocation request cannot be met from free memory. Swappiness can be set to a value from 0 to 200.[23]A low value causes the kernel to prefer to evict pages from the page cache while a higher value causes the kernel to prefer to swap out "cold" memory pages. Thedefault valueis60; setting it higher can cause high latency if cold pages need to be swapped back in (when interacting with a program that had been idle for example), while setting it lower (even 0) may cause high latency when files that had been evicted from the cache need to be read again, but will make interactive programs more responsive as they will be less likely to need to swap back cold pages. Swapping can also slow downHDDsfurther because it involves a lot of random writes, whileSSDsdo not have this problem. Certainly the default values work well in most workloads, but desktops and interactive systems for any expected task may want to lower the setting while batch processing and less interactive systems may want to increase it.[24] When the system memory is highly insufficient for the current tasks and a large portion of memory activity goes through a slow swap, the system can become practically unable to execute any task, even if the CPU is idle. When every process is waiting on the swap, the system is considered to be inswap death.[25][26] Swap death can happen due to incorrectly configuredmemory overcommitment.[27][28][29] The original description of the "swapping to death" problem relates to theX server. If code or data used by the X server to respond to a keystroke is not in main memory, then if the user enters a keystroke, the server will take one or more page faults, requiring those pages to read from swap before the keystroke can be processed, slowing the response to it. If those pages do not remain in memory, they will have to be faulted in again to handle the next keystroke, making the system practically unresponsive even if it's actually executing other tasks normally.[30] macOSuses multiple swap files. 
The default (and Apple-recommended) installation places them on the root partition, though it is possible to place them instead on a separate partition or device.[31] AmigaOS 4.0introduced a new system for allocating RAM and defragmenting physical memory. It still uses flat shared address space that cannot be defragmented. It is based onslab allocationand paging memory that allows swapping. Paging was implemented inAmigaOS 4.1. It can lock up the system if all physical memory is used up.[32]Swap memory could be activated and deactivated, allowing the user to choose to use only physical RAM. The backing store for a virtual memory operating system is typically manyorders of magnitudeslower thanRAM. Additionally, using mechanical storage devices introducesdelay, several milliseconds for a hard disk. Therefore, it is desirable to reduce or eliminate swapping, where practical. Some operating systems offer settings to influence the kernel's decisions. ManyUnix-likeoperating systems (for exampleAIX,Linux, andSolaris) allow using multiple storage devices for swap space in parallel, to increase performance. In some older virtual memory operating systems, space in swap backing store is reserved when programs allocate memory for runtime data. Operating system vendors typically issue guidelines about how much swap space should be allocated. Paging is one way of allowing the size of the addresses used by a process, which is the process's "virtual address space" or "logical address space", to be different from the amount of main memory actually installed on a particular computer, which is the physical address space. In most systems, the size of a process's virtual address space is much larger than the available main memory.[35]For example: A computer with truen-bit addressing may have 2naddressable units of RAM installed. An example is a 32-bitx86processor with 4GBand withoutPhysical Address Extension(PAE). In this case, the processor is able to address all the RAM installed and no more. However, even in this case, paging can be used to support more virtual memory than physical memory. For instance, many programs may be running concurrently. Together, they may require more physical memory than can be installed on the system, but not all of it will have to be in RAM at once. A paging system makes efficient decisions on which memory to relegate to secondary storage, leading to the best use of the installed RAM. In addition the operating system may provide services to programs that envision a larger memory, such as files that can grow beyond the limit of installed RAM. Not all of the file can be concurrently mapped into the address space of a process, but the operating system might allow regions of the file to be mapped into the address space, and unmapped if another region needs to be mapped in. A few computers have a main memory larger than the virtual address space of a process, such as the Magic-1,[35]somePDP-11machines, and some systems using 32-bitx86processors withPhysical Address Extension. This nullifies a significant advantage of paging, since a single process cannot use more main memory than the amount of its virtual address space. Such systems often use paging techniques to obtain secondary benefits: The size of the cumulative total of virtual address spaces is still limited by the amount of secondary storage available.
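The relationship between address-space size and installed memory in the 32-bit example above is simple arithmetic. A small illustrative calculation in Python, assuming the common 4 KiB page size and an arbitrarily chosen 1 GiB of installed RAM:

```python
PAGE_SIZE = 4 * 1024                 # assumed 4 KiB pages

virtual_space = 2 ** 32              # a full 32-bit virtual address space
print(virtual_space // 2 ** 30)      # 4 GiB addressable per process

print(virtual_space // PAGE_SIZE)    # 1,048,576 virtual pages to map (2**20)

installed_ram = 1 * 2 ** 30          # e.g. a machine with only 1 GiB of RAM
print(installed_ram // PAGE_SIZE)    # 262,144 physical page frames available
```

Paging is what lets each process see the full 4 GiB address space even though far fewer physical page frames exist.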
https://en.wikipedia.org/wiki/Paging
CR3 or CR-3 may refer to:
https://en.wikipedia.org/wiki/CR3
Concurrency refers to the ability of a system to execute multiple tasks through simultaneous execution or time-sharing (context switching), sharing resources and managing interactions. Concurrency improves responsiveness, throughput, and scalability in modern computing, including:[1][2][3][4][5] Concurrency is a broader concept that encompasses several related ideas, including:[1][2][3][4][5] Because computations in a concurrent system can interact with each other while being executed, the number of possible execution paths in the system can be extremely large, and the resulting outcome can beindeterminate. Concurrent use of sharedresourcescan be a source of indeterminacy leading to issues such asdeadlocks, andresource starvation.[7] Design of concurrent systems often entails finding reliable techniques for coordinating their execution, data exchange,memory allocation, and execution scheduling to minimizeresponse timeand maximisethroughput.[8] Concurrency theory has been an active field of research intheoretical computer science. One of the first proposals wasCarl Adam Petri's seminal work onPetri netsin the early 1960s. In the years since, a wide variety of formalisms have been developed for modeling and reasoning about concurrency. A number of formalisms for modeling and understanding concurrent systems have been developed, including:[9] Some of these models of concurrency are primarily intended to support reasoning and specification, while others can be used through the entire development cycle, including design, implementation, proof, testing and simulation of concurrent systems. Some of these are based onmessage passing, while others have different mechanisms for concurrency. The proliferation of different models of concurrency has motivated some researchers to develop ways to unify these different theoretical models. For example, Lee and Sangiovanni-Vincentelli have demonstrated that a so-called "tagged-signal" model can be used to provide a common framework for defining thedenotational semanticsof a variety of different models of concurrency,[11]while Nielsen, Sassone, and Winskel have demonstrated thatcategory theorycan be used to provide a similar unified understanding of different models.[12] The Concurrency Representation Theorem in the actor model provides a fairly general way to represent concurrent systems that are closed in the sense that they do not receive communications from outside. (Other concurrency systems, e.g.,process calculican be modeled in the actor model using atwo-phase commit protocol.[13]) The mathematical denotation denoted by a closed systemSis constructed increasingly better approximations from an initial behavior called⊥Susing a behavior approximating functionprogressionSto construct a denotation (meaning ) forSas follows:[14] In this way,Scan be mathematically characterized in terms of all its possible behaviors. Various types oftemporal logic[15]can be used to help reason about concurrent systems. Some of these logics, such aslinear temporal logicandcomputation tree logic, allow assertions to be made about the sequences of states that a concurrent system can pass through. Others, such asaction computational tree logic,Hennessy–Milner logic, andLamport'stemporal logic of actions, build their assertions from sequences ofactions(changes in state). The principal application of these logics is in writing specifications for concurrent systems.[7] Concurrent programmingencompasses programming languages and algorithms used to implement concurrent systems. 
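Deadlock, mentioned above as one of the risks of concurrent use of shared resources, is easy to reproduce. The following illustrative Python sketch has two threads acquire the same two locks in opposite orders; with the deliberately inserted delay, each thread ends up holding one lock while waiting for the other, and a timeout is used only so the demonstration terminates.

```python
import threading, time

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker(first, second, name):
    with first:
        time.sleep(0.1)                       # give both threads time to take their first lock
        if second.acquire(timeout=1):         # try the second lock, but give up instead of hanging
            print(name, "got both locks")
            second.release()
        else:
            print(name, "gave up: potential deadlock")

# Thread 1 locks A then B; thread 2 locks B then A -- the classic deadlock pattern.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "thread 1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "thread 2"))
t1.start(); t2.start(); t1.join(); t2.join()
```

Acquiring locks in a single agreed-upon global order is the usual design choice for avoiding this pattern.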
Concurrent programming is usually considered to be more general than parallel programming because it can involve arbitrary and dynamic patterns of communication and interaction, whereas parallel systems generally have a predefined and well-structured communication pattern. The base goals of concurrent programming include correctness, performance and robustness. Concurrent systems such as operating systems and database management systems are generally designed to operate indefinitely, including automatic recovery from failure, and not to terminate unexpectedly (see concurrency control). Some concurrent systems implement a form of transparent concurrency, in which concurrent computational entities may compete for and share a single resource, but the complexities of this competition and sharing are shielded from the programmer. Because they use shared resources, concurrent systems in general require the inclusion of some kind of arbiter somewhere in their implementation (often in the underlying hardware) to control access to those resources. The use of arbiters introduces the possibility of indeterminacy in concurrent computation, which has major implications for practice, including correctness and performance. For example, arbitration introduces unbounded nondeterminism, which raises issues with model checking because it causes an explosion in the state space and can even cause models to have an infinite number of states. Some concurrent programming models include coprocesses and deterministic concurrency. In these models, threads of control explicitly yield their timeslices, either to the system or to another process.
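Explicit yielding of control, as in the coprocess and deterministic-concurrency models just described, can be sketched with Python generators: each task runs until it chooses to yield, and a trivial round-robin scheduler decides who runs next. This is an illustrative model, not a production scheduler; the task names and step counts are invented for the example.

```python
from collections import deque

def task(name, steps):
    """A cooperative task: it runs one step, then explicitly yields control."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # hand the timeslice back to the scheduler

def run(tasks):
    """Deterministic round-robin scheduling of cooperative tasks."""
    ready = deque(tasks)
    while ready:
        current = ready.popleft()
        try:
            next(current)          # resume the task until its next yield
            ready.append(current)  # still has work: requeue it
        except StopIteration:
            pass                   # task finished

run([task("A", 3), task("B", 2)])
```

Because the interleaving is fixed entirely by the scheduler and the yield points, every run produces the same output, which is the point of deterministic concurrency.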
https://en.wikipedia.org/wiki/Concurrency_(computer_science)
Parallel computingis a type ofcomputationin which many calculations orprocessesare carried out simultaneously.[1]Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing:bit-level,instruction-level,data, andtask parallelism. Parallelism has long been employed inhigh-performance computing, but has gained broader interest due to the physical constraints preventingfrequency scaling.[2]As power consumption (and consequently heat generation) by computers has become a concern in recent years,[3]parallel computing has become the dominant paradigm incomputer architecture, mainly in the form ofmulti-core processors.[4] Incomputer science,parallelismand concurrency are two different things: a parallel program usesmultiple CPU cores, each core performing a task independently. On the other hand, concurrency enables a program to deal with multiple tasks even on a single CPU core; the core switches between tasks (i.e.threads) without necessarily completing each one. A program can have both, neither or a combination of parallelism and concurrency characteristics.[5] Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core andmulti-processorcomputers having multipleprocessing elementswithin a single machine, whileclusters,MPPs, andgridsuse multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors, for accelerating specific tasks. In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitlyparallel algorithms, particularly those that use concurrency, are more difficult to write thansequentialones,[6]because concurrency introduces several new classes of potentialsoftware bugs, of whichrace conditionsare the most common.Communicationandsynchronizationbetween the different subtasks are typically some of the greatest obstacles to getting optimal parallel program performance. A theoreticalupper boundon thespeed-upof a single program as a result of parallelization is given byAmdahl's law, which states that it is limited by the fraction of time for which the parallelization can be utilised. Traditionally,computer softwarehas been written forserial computation. To solve a problem, analgorithmis constructed and implemented as a serial stream of instructions. These instructions are executed on acentral processing uniton one computer. Only one instruction may execute at a time—after that instruction is finished, the next one is executed.[7] Parallel computing, on the other hand, uses multiple processing elements simultaneously to solve a problem. This is accomplished by breaking the problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. The processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, specialized hardware, or any combination of the above.[7]Historically parallel computing was used for scientific computing and the simulation of scientific problems, particularly in the natural andengineering sciences, such asmeteorology. This led to the design of parallel hardware and software, as well ashigh performance computing.[8] Frequency scalingwas the dominant reason for improvements incomputer performancefrom the mid-1980s until 2004. 
Theruntimeof a program is equal to the number of instructions multiplied by the average time per instruction. Maintaining everything else constant, increasing the clock frequency decreases the average time it takes to execute an instruction. An increase in frequency thus decreases runtime for allcompute-boundprograms.[9]However, power consumptionPby a chip is given by the equationP=C×V2×F, whereCis thecapacitancebeing switched per clock cycle (proportional to the number of transistors whose inputs change),Visvoltage, andFis the processor frequency (cycles per second).[10]Increases in frequency increase the amount of power used in a processor. Increasing processor power consumption led ultimately toIntel's May 8, 2004 cancellation of itsTejas and Jayhawkprocessors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm.[11] To deal with the problem of power consumption and overheating the majorcentral processing unit(CPU or processor) manufacturers started to produce power efficient processors with multiple cores. The core is the computing unit of the processor and in multi-core processors each core is independent and can access the same memory concurrently.Multi-core processorshave brought parallel computing todesktop computers. Thus parallelization of serial programs has become a mainstream programming task. In 2012 quad-core processors became standard fordesktop computers, whileservershad 10+ core processors. By 2023 some processors had over hundred cores. Some designs having a mix of performance and efficiency cores (such asARM's big.LITTLEdesign) due to thermal and design constraints.[12][citation needed]FromMoore's lawit can be predicted that the number of cores per processor will double every 18–24 months. Anoperating systemcan ensure that different tasks and user programs are run in parallel on the available cores. However, for a serial software program to take full advantage of the multi-core architecture the programmer needs to restructure and parallelize the code. A speed-up of application software runtime will no longer be achieved through frequency scaling, instead programmers will need to parallelize their software code to take advantage of the increasing computing power of multicore architectures.[13] Main article:Amdahl's law Optimally, thespeedupfrom parallelization would be linear—doubling the number of processing elements should halve the runtime, and doubling it a second time should again halve the runtime. However, very few parallel algorithms achieve optimal speedup. Most of them have a near-linear speedup for small numbers of processing elements, which flattens out into a constant value for large numbers of processing elements. The maximum potential speedup of an overall system can be calculated byAmdahl's law.[14]Amdahl's Law indicates that optimal performance improvement is achieved by balancing enhancements to both parallelizable and non-parallelizable components of a task. 
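For reference, the usual textbook statement of Amdahl's law (not quoted from this passage) bounds the speedup S obtainable with N processing elements when a fraction p of the work can be parallelized: S(N) = 1/((1 − p) + p/N){\displaystyle S(N)={\frac {1}{(1-p)+{\frac {p}{N}}}}}, so that even as N grows without bound the speedup approaches 1/(1 − p){\displaystyle {\tfrac {1}{1-p}}}. For example, if 90% of a program can be parallelized, no number of processors can speed it up by more than a factor of 10.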
Furthermore, it reveals that increasing the number of processors yields diminishing returns, with negligible speedup gains beyond a certain point.[15][16] Amdahl's Law has limitations, including assumptions of fixed workload, neglectinginter-process communicationandsynchronizationoverheads, primarily focusing on computational aspect and ignoring extrinsic factors such as data persistence, I/O operations, and memory access overheads.[17][18][19] Gustafson's lawandUniversal Scalability Lawgive a more realistic assessment of the parallel performance.[20][21] Understandingdata dependenciesis fundamental in implementingparallel algorithms. No program can run more quickly than the longest chain of dependent calculations (known as thecritical path), since calculations that depend upon prior calculations in the chain must be executed in order. However, most algorithms do not consist of just a long chain of dependent calculations; there are usually opportunities to execute independent calculations in parallel. LetPiandPjbe two program segments. Bernstein's conditions[22]describe when the two are independent and can be executed in parallel. ForPi, letIibe all of the input variables andOithe output variables, and likewise forPj.PiandPjare independent if they satisfy Violation of the first condition introduces a flow dependency, corresponding to the first segment producing a result used by the second segment. The second condition represents an anti-dependency, when the second segment produces a variable needed by the first segment. The third and final condition represents an output dependency: when two segments write to the same location, the result comes from the logically last executed segment.[23] Consider the following functions, which demonstrate several kinds of dependencies: In this example, instruction 3 cannot be executed before (or even in parallel with) instruction 2, because instruction 3 uses a result from instruction 2. It violates condition 1, and thus introduces a flow dependency. In this example, there are no dependencies between the instructions, so they can all be run in parallel. Bernstein's conditions do not allow memory to be shared between different processes. For that, some means of enforcing an ordering between accesses is necessary, such assemaphores,barriersor some othersynchronization method. Subtasks in a parallel program are often calledthreads. Some parallel computer architectures use smaller, lightweight versions of threads known asfibers, while others use bigger versions known asprocesses. However, "threads" is generally accepted as a generic term for subtasks.[24]Threads will often needsynchronizedaccess to anobjector otherresource, for example when they must update avariablethat is shared between them. Without synchronization, the instructions between the two threads may be interleaved in any order. For example, consider the following program: If instruction 1B is executed between 1A and 3A, or if instruction 1A is executed between 1B and 3B, the program will produce incorrect data. This is known as arace condition. The programmer must use alockto providemutual exclusion. A lock is a programming language construct that allows one thread to take control of a variable and prevent other threads from reading or writing it, until that variable is unlocked. The thread holding the lock is free to execute itscritical section(the section of a program that requires exclusive access to some variable), and to unlock the data when it is finished. 
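In their usual statement, the Bernstein conditions referenced above require Oi ∩ Ij = ∅ (no flow dependency), Ii ∩ Oj = ∅ (no anti-dependency), and Oi ∩ Oj = ∅ (no output dependency); when segments are not independent in this sense, access to the shared state must be serialized, for example with locks. A minimal illustrative Python sketch of the shared-variable program described above, together with the lock-based rewrite discussed next, might look as follows (the variable name V and the iteration counts are chosen only for illustration):

```python
import threading

V = 0
lock = threading.Lock()

def unsafe_increment(n):
    global V
    for _ in range(n):
        v = V          # 1: read V
        v = v + 1      # 2: add 1 to the local copy
        V = v          # 3: write back -- another thread may have updated V in between

def safe_increment(n):
    global V
    for _ in range(n):
        with lock:     # the read-modify-write is now a critical section
            V = V + 1

for target in (unsafe_increment, safe_increment):
    V = 0
    threads = [threading.Thread(target=target, args=(100_000,)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(target.__name__, V)   # the unsafe version can lose updates and print less than 200000
```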
Therefore, to guarantee correct program execution, the above program can be rewritten to use locks: One thread will successfully lock variable V, while the other thread will belocked out—unable to proceed until V is unlocked again. This guarantees correct execution of the program. Locks may be necessary to ensure correct program execution when threads must serialize access to resources, but their use can greatly slow a program and may affect itsreliability.[25] Locking multiple variables usingnon-atomiclocks introduces the possibility of programdeadlock. Anatomic locklocks multiple variables all at once. If it cannot lock all of them, it does not lock any of them. If two threads each need to lock the same two variables using non-atomic locks, it is possible that one thread will lock one of them and the second thread will lock the second variable. In such a case, neither thread can complete, and deadlock results.[26] Many parallel programs require that their subtasksact in synchrony. This requires the use of abarrier. Barriers are typically implemented using a lock or asemaphore.[27]One class of algorithms, known aslock-free and wait-free algorithms, altogether avoids the use of locks and barriers. However, this approach is generally difficult to implement and requires correctly designed data structures.[28] Not all parallelization results in speed-up. Generally, as a task is split up into more and more threads, those threads spend an ever-increasing portion of their time communicating with each other or waiting on each other for access to resources.[29][30]Once the overhead from resource contention or communication dominates the time spent on other computation, further parallelization (that is, splitting the workload over even more threads) increases rather than decreases the amount of time required to finish. This problem, known asparallel slowdown,[31]can be improved in some cases by software analysis and redesign.[32] Applications are often classified according to how often their subtasks need to synchronize or communicate with each other. An application exhibits fine-grained parallelism if its subtasks must communicate many times per second; it exhibits coarse-grained parallelism if they do not communicate many times per second, and it exhibitsembarrassing parallelismif they rarely or never have to communicate. Embarrassingly parallel applications are considered the easiest to parallelize. Michael J. Flynncreated one of the earliest classification systems for parallel (and sequential) computers and programs, now known asFlynn's taxonomy. Flynn classified programs and computers by whether they were operating using a single set or multiple sets of instructions, and whether or not those instructions were using a single set or multiple sets of data. The single-instruction-single-data (SISD) classification is equivalent to an entirely sequential program. The single-instruction-multiple-data (SIMD) classification is analogous to doing the same operation repeatedly over a large data set. This is commonly done insignal processingapplications. Multiple-instruction-single-data (MISD) is a rarely used classification. While computer architectures to deal with this were devised (such assystolic arrays), few applications that fit this class materialized. Multiple-instruction-multiple-data (MIMD) programs are by far the most common type of parallel programs. According toDavid A. PattersonandJohn L. 
Hennessy, "Some machines are hybrids of these categories, of course, but this classic model has survived because it is simple, easy to understand, and gives a good first approximation. It is also—perhaps because of its understandability—the most widely used scheme."[34] Parallel computing can incur significant overhead in practice, primarily due to the costs associated with merging data from multiple processes. Specifically, inter-process communication and synchronization can lead to overheads that are substantially higher—often by two or more orders of magnitude—compared to processing the same data on a single thread.[35][36][37]Therefore, the overall improvement should be carefully evaluated. From the advent ofvery-large-scale integration(VLSI) computer-chip fabrication technology in the 1970s until about 1986, speed-up in computer architecture was driven by doublingcomputer word size—the amount of information the processor can manipulate per cycle.[38]Increasing the word size reduces the number of instructions the processor must execute to perform an operation on variables whose sizes are greater than the length of the word. For example, where an8-bitprocessor must add two16-bitintegers, the processor must first add the 8 lower-order bits from each integer using the standard addition instruction, then add the 8 higher-order bits using an add-with-carry instruction and thecarry bitfrom the lower order addition; thus, an 8-bit processor requires two instructions to complete a single operation, where a 16-bit processor would be able to complete the operation with a single instruction. Historically,4-bitmicroprocessors were replaced with 8-bit, then 16-bit, then 32-bit microprocessors. This trend generally came to an end with the introduction of 32-bit processors, which has been a standard in general-purpose computing for two decades. Not until the early 2000s, with the advent ofx86-64architectures, did64-bitprocessors become commonplace. A computer program is, in essence, a stream of instructions executed by a processor. Without instruction-level parallelism, a processor can only issue less than oneinstruction per clock cycle(IPC < 1). These processors are known assubscalarprocessors. These instructions can bere-orderedand combined into groups which are then executed in parallel without changing the result of the program. This is known as instruction-level parallelism. Advances in instruction-level parallelism dominated computer architecture from the mid-1980s until the mid-1990s.[39] All modern processors have multi-stageinstruction pipelines. Each stage in the pipeline corresponds to a different action the processor performs on that instruction in that stage; a processor with anN-stage pipeline can have up toNdifferent instructions at different stages of completion and thus can issue one instruction per clock cycle (IPC = 1). These processors are known asscalarprocessors. The canonical example of a pipelined processor is aRISCprocessor, with five stages: instruction fetch (IF), instruction decode (ID), execute (EX), memory access (MEM), and register write back (WB). ThePentium 4processor had a 35-stage pipeline.[40] Most modern processors also have multipleexecution units. They usually combine this feature with pipelining and thus can issue more than one instruction per clock cycle (IPC > 1). These processors are known assuperscalarprocessors. Superscalar processors differ frommulti-core processorsin that the several execution units are not entire processors (i.e. processing units). 
Instructions can be grouped together only if there is nodata dependencybetween them.Scoreboardingand theTomasulo algorithm(which is similar to scoreboarding but makes use ofregister renaming) are two of the most common techniques for implementing out-of-order execution and instruction-level parallelism. Task parallelisms is the characteristic of a parallel program that "entirely different calculations can be performed on either the same or different sets of data".[41]This contrasts with data parallelism, where the same calculation is performed on the same or different sets of data. Task parallelism involves the decomposition of a task into sub-tasks and then allocating each sub-task to a processor for execution. The processors would then execute these sub-tasks concurrently and often cooperatively. Task parallelism does not usually scale with the size of a problem.[42] Superword level parallelism is avectorizationtechnique based onloop unrollingand basic block vectorization. It is distinct from loop vectorization algorithms in that it can exploitparallelismofinline code, such as manipulating coordinates, color channels or in loops unrolled by hand.[43] Main memory in a parallel computer is eithershared memory(shared between all processing elements in a singleaddress space), ordistributed memory(in which each processing element has its own local address space).[44]Distributed memory refers to the fact that the memory is logically distributed, but often implies that it is physically distributed as well.Distributed shared memoryandmemory virtualizationcombine the two approaches, where the processing element has its own local memory and access to the memory on non-local processors. Accesses to local memory are typically faster than accesses to non-local memory. On thesupercomputers, distributed shared memory space can be implemented using the programming model such asPGAS. This model allows processes on one compute node to transparently access the remote memory of another compute node. All compute nodes are also connected to an external shared memory system via high-speed interconnect, such asInfiniband, this external shared memory system is known asburst buffer, which is typically built from arrays ofnon-volatile memoryphysically distributed across multiple I/O nodes. Computer architectures in which each element of main memory can be accessed with equallatencyandbandwidthare known asuniform memory access(UMA) systems. Typically, that can be achieved only by ashared memorysystem, in which the memory is not physically distributed. A system that does not have this property is known as anon-uniform memory access(NUMA) architecture. Distributed memory systems have non-uniform memory access. Computer systems make use ofcaches—small and fast memories located close to the processor which store temporary copies of memory values (nearby in both the physical and logical sense). Parallel computer systems have difficulties with caches that may store the same value in more than one location, with the possibility of incorrect program execution. These computers require acache coherencysystem, which keeps track of cached values and strategically purges them, thus ensuring correct program execution.Bus snoopingis one of the most common methods for keeping track of which values are being accessed (and thus should be purged). Designing large, high-performance cache coherence systems is a very difficult problem in computer architecture. 
As a result, shared memory computer architectures do not scale as well as distributed memory systems do.[44] Processor–processor and processor–memory communication can be implemented in hardware in several ways, including via shared (either multiported ormultiplexed) memory, acrossbar switch, a sharedbusor an interconnect network of a myriad oftopologiesincludingstar,ring,tree,hypercube, fat hypercube (a hypercube with more than one processor at a node), orn-dimensional mesh. Parallel computers based on interconnected networks need to have some kind ofroutingto enable the passing of messages between nodes that are not directly connected. The medium used for communication between the processors is likely to be hierarchical in large multiprocessor machines. Parallel computers can be roughly classified according to the level at which the hardware supports parallelism. This classification is broadly analogous to the distance between basic computing nodes. These are not mutually exclusive; for example, clusters of symmetric multiprocessors are relatively common. A multi-core processor is a processor that includes multipleprocessing units(called "cores") on the same chip. This processor differs from asuperscalarprocessor, which includes multipleexecution unitsand can issue multiple instructions per clock cycle from one instruction stream (thread); in contrast, a multi-core processor can issue multiple instructions per clock cycle from multiple instruction streams.IBM'sCell microprocessor, designed for use in theSonyPlayStation 3, is a prominent multi-core processor. Each core in a multi-core processor can potentially be superscalar as well—that is, on every clock cycle, each core can issue multiple instructions from one thread. Simultaneous multithreading(of which Intel'sHyper-Threadingis the best known) was an early form of pseudo-multi-coreism. A processor capable of concurrent multithreading includes multiple execution units in the same processing unit—that is it has a superscalar architecture—and can issue multiple instructions per clock cycle frommultiplethreads.Temporal multithreadingon the other hand includes a single execution unit in the same processing unit and can issue one instruction at a time frommultiplethreads. A symmetric multiprocessor (SMP) is a computer system with multiple identical processors that share memory and connect via abus.[45]Bus contentionprevents bus architectures from scaling. As a result, SMPs generally do not comprise more than 32 processors.[46]Because of the small size of the processors and the significant reduction in the requirements for bus bandwidth achieved by large caches, such symmetric multiprocessors are extremely cost-effective, provided that a sufficient amount of memory bandwidth exists.[45] A distributed computer (also known as a distributed memory multiprocessor) is a distributed memory computer system in which the processing elements are connected by a network. Distributed computers are highly scalable. The terms "concurrent computing", "parallel computing", and "distributed computing" have a lot of overlap, and no clear distinction exists between them.[47]The same system may be characterized both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel.[48] A cluster is a group of loosely coupled computers that work together closely, so that in some respects they can be regarded as a single computer.[49]Clusters are composed of multiple standalone machines connected by a network. 
While machines in a cluster do not have to be symmetric, load balancing is more difficult if they are not. The most common type of cluster is the Beowulf cluster, which is a cluster implemented on multiple identical commercial off-the-shelf computers connected with a TCP/IP Ethernet local area network.[50] Beowulf technology was originally developed by Thomas Sterling and Donald Becker. 87% of all Top500 supercomputers are clusters.[51] The remainder are massively parallel processors, explained below.

Because grid computing systems (described below) can easily handle embarrassingly parallel problems, modern clusters are typically designed to handle more difficult problems—problems that require nodes to share intermediate results with each other more often. This requires a high bandwidth and, more importantly, a low-latency interconnection network. Many historic and current supercomputers use customized high-performance network hardware specifically designed for cluster computing, such as the Cray Gemini network.[52] As of 2014, most current supercomputers use some off-the-shelf standard network hardware, often Myrinet, InfiniBand, or Gigabit Ethernet.

A massively parallel processor (MPP) is a single computer with many networked processors. MPPs have many of the same characteristics as clusters, but MPPs have specialized interconnect networks (whereas clusters use commodity hardware for networking). MPPs also tend to be larger than clusters, typically having "far more" than 100 processors.[53] In an MPP, "each CPU contains its own memory and copy of the operating system and application. Each subsystem communicates with the others via a high-speed interconnect."[54]

IBM's Blue Gene/L, the fifth fastest supercomputer in the world according to the June 2009 TOP500 ranking, is an MPP.

Grid computing is the most distributed form of parallel computing. It makes use of computers communicating over the Internet to work on a given problem. Because of the low bandwidth and extremely high latency available on the Internet, distributed computing typically deals only with embarrassingly parallel problems. Most grid computing applications use middleware (software that sits between the operating system and the application to manage network resources and standardize the software interface). The most common grid computing middleware is the Berkeley Open Infrastructure for Network Computing (BOINC). Often volunteer computing software makes use of "spare cycles", performing computations at times when a computer is idling.[55] The ubiquity of the Internet brought the possibility of large-scale cloud computing.

Within parallel computing, there are specialized parallel devices that remain niche areas of interest. While not domain-specific, they tend to be applicable to only a few classes of parallel problems.

Reconfigurable computing is the use of a field-programmable gate array (FPGA) as a co-processor to a general-purpose computer. An FPGA is, in essence, a computer chip that can rewire itself for a given task. FPGAs can be programmed with hardware description languages such as VHDL[56] or Verilog.[57] Several vendors have created C to HDL languages that attempt to emulate the syntax and semantics of the C programming language, with which most programmers are familiar. The best known C to HDL languages are Mitrion-C, Impulse C, and Handel-C. Specific subsets of SystemC based on C++ can also be used for this purpose.
AMD's decision to open itsHyperTransporttechnology to third-party vendors has become the enabling technology for high-performance reconfigurable computing.[58]According to Michael R. D'Amour, Chief Operating Officer of DRC Computer Corporation, "when we first walked into AMD, they called us 'thesocketstealers.' Now they call us their partners."[58] General-purpose computing ongraphics processing units(GPGPU) is a fairly recent trend in computer engineering research. GPUs are co-processors that have been heavily optimized forcomputer graphicsprocessing.[59]Computer graphics processing is a field dominated by data parallel operations—particularlylinear algebramatrixoperations. In the early days, GPGPU programs used the normal graphics APIs for executing programs. However, several new programming languages and platforms have been built to do general purpose computation on GPUs with bothNvidiaandAMDreleasing programming environments withCUDAandStream SDKrespectively. Other GPU programming languages includeBrookGPU,PeakStream, andRapidMind. Nvidia has also released specific products for computation in theirTesla series. The technology consortium Khronos Group has released theOpenCLspecification, which is a framework for writing programs that execute across platforms consisting of CPUs and GPUs.AMD,Apple,Intel,Nvidiaand others are supportingOpenCL. Severalapplication-specific integrated circuit(ASIC) approaches have been devised for dealing with parallel applications.[60][61][62] Because an ASIC is (by definition) specific to a given application, it can be fully optimized for that application. As a result, for a given application, an ASIC tends to outperform a general-purpose computer. However, ASICs are created byUV photolithography. This process requires a mask set, which can be extremely expensive. A mask set can cost over a million US dollars.[63](The smaller the transistors required for the chip, the more expensive the mask will be.) Meanwhile, performance increases in general-purpose computing over time (as described byMoore's law) tend to wipe out these gains in only one or two chip generations.[58]High initial cost, and the tendency to be overtaken by Moore's-law-driven general-purpose computing, has rendered ASICs unfeasible for most parallel computing applications. However, some have been built. One example is the PFLOPSRIKEN MDGRAPE-3machine which uses custom ASICs formolecular dynamicssimulation. A vector processor is a CPU or computer system that can execute the same instruction on large sets of data. Vector processors have high-level operations that work on linear arrays of numbers or vectors. An example vector operation isA=B×C, whereA,B, andCare each 64-element vectors of 64-bitfloating-pointnumbers.[64]They are closely related to Flynn's SIMD classification.[64] Craycomputers became famous for their vector-processing computers in the 1970s and 1980s. However, vector processors—both as CPUs and as full computer systems—have generally disappeared. Modernprocessor instruction setsdo include some vector processing instructions, such as withFreescale Semiconductor'sAltiVecandIntel'sStreaming SIMD Extensions(SSE). Concurrent programming languages,libraries,APIs, andparallel programming models(such asalgorithmic skeletons) have been created for programming parallel computers. These can generally be divided into classes based on the assumptions they make about the underlying memory architecture—shared memory, distributed memory, or shared distributed memory. 
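Since SIMD extensions such as Intel's SSE are mentioned just above, here is a minimal sketch of a vector operation using SSE intrinsics; the array contents are arbitrary:

#include <xmmintrin.h>   /* SSE intrinsics */
#include <stdio.h>

int main(void) {
    float a[4] = {1, 2, 3, 4};
    float b[4] = {10, 20, 30, 40};
    float c[4];

    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb);   /* four additions in one instruction */
    _mm_storeu_ps(c, vc);

    printf("%.0f %.0f %.0f %.0f\n", c[0], c[1], c[2], c[3]);
    return 0;
}

A single _mm_add_ps performs four single-precision additions at once—the same idea as the 64-element operations of classic vector processors, only much narrower.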
Shared memory programming languages communicate by manipulating shared memory variables. Distributed memory usesmessage passing.POSIX ThreadsandOpenMPare two of the most widely used shared memory APIs, whereasMessage Passing Interface(MPI) is the most widely used message-passing system API.[65]One concept used in programming parallel programs is thefuture concept, where one part of a program promises to deliver a required datum to another part of a program at some future time. Efforts to standardize parallel programming include an open standard calledOpenHMPPfor hybrid multi-core parallel programming. The OpenHMPP directive-based programming model offers a syntax to efficiently offload computations on hardware accelerators and to optimize data movement to/from the hardware memory usingremote procedure calls. The rise of consumer GPUs has led to support forcompute kernels, either in graphics APIs (referred to ascompute shaders), in dedicated APIs (such asOpenCL), or in other language extensions. Automatic parallelizationof a sequential program by acompileris the "holy grail" of parallel computing, especially with the aforementioned limit of processor frequency. Despite decades of work by compiler researchers, automatic parallelization has had only limited success.[66] Mainstream parallel programming languages remain eitherexplicitly parallelor (at best)partially implicit, in which a programmer gives the compilerdirectivesfor parallelization. A few fully implicit parallel programming languages exist—SISAL, ParallelHaskell,SequenceL,SystemC(forFPGAs),Mitrion-C,VHDL, andVerilog. As a computer system grows in complexity, themean time between failuresusually decreases.Application checkpointingis a technique whereby the computer system takes a "snapshot" of the application—a record of all current resource allocations and variable states, akin to acore dump—; this information can be used to restore the program if the computer should fail. Application checkpointing means that the program has to restart from only its last checkpoint rather than the beginning. While checkpointing provides benefits in a variety of situations, it is especially useful in highly parallel systems with a large number of processors used inhigh performance computing.[67] As parallel computers become larger and faster, we are now able to solve problems that had previously taken too long to run. Fields as varied asbioinformatics(forprotein foldingandsequence analysis) and economics have taken advantage of parallel computing. Common types of problems in parallel computing applications include:[68] Parallel computing can also be applied to the design offault-tolerant computer systems, particularly vialockstepsystems performing the same operation in parallel. This providesredundancyin case one component fails, and also allows automaticerror detectionanderror correctionif the results differ. These methods can be used to help prevent single-event upsets caused by transient errors.[70]Although additional measures may be required in embedded or specialized systems, this method can provide a cost-effective approach to achieve n-modular redundancy in commercial off-the-shelf systems. 
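Since MPI is named above as the most widely used message-passing API, the following is a minimal sketch of distributed-memory programming with it; the per-process value being summed is arbitrary:

#include <mpi.h>
#include <stdio.h>

/* Each process (typically one per cluster node or core) has its own
   private memory; data is exchanged only through explicit messages. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int local = rank + 1;   /* some per-process partial result */
    int total = 0;

    /* Combine the partial results from all processes on rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum over %d processes = %d\n", size, total);

    MPI_Finalize();
    return 0;
}

Typically built with mpicc and launched with, e.g., mpirun -np 4 ./a.out; no memory is shared between the processes, and only the explicit MPI_Reduce message combines their partial results.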
The origins of true (MIMD) parallelism go back toLuigi Federico Menabreaand hisSketch of theAnalytic EngineInvented byCharles Babbage.[72][73][74] In 1957,Compagnie des Machines Bullannounced the first computer architecture specifically designed for parallelism, theGamma 60.[75]It utilized afork-join modeland a "Program Distributor" to dispatch and collect data to and from independent processing units connected to a central memory.[76][77] In April 1958, Stanley Gill (Ferranti) discussed parallel programming and the need for branching and waiting.[78]Also in 1958, IBM researchersJohn CockeandDaniel Slotnickdiscussed the use of parallelism in numerical calculations for the first time.[79]Burroughs Corporationintroduced the D825 in 1962, a four-processor computer that accessed up to 16 memory modules through acrossbar switch.[80]In 1967, Amdahl and Slotnick published a debate about the feasibility of parallel processing at American Federation of Information Processing Societies Conference.[79]It was during this debate thatAmdahl's lawwas coined to define the limit of speed-up due to parallelism. In 1969,Honeywellintroduced its firstMulticssystem, a symmetric multiprocessor system capable of running up to eight processors in parallel.[79]C.mmp, a multi-processor project atCarnegie Mellon Universityin the 1970s, was among the first multiprocessors with more than a few processors. The first bus-connected multiprocessor with snooping caches was the Synapse N+1 in 1984.[73] SIMD parallel computers can be traced back to the 1970s. The motivation behind early SIMD computers was to amortize thegate delayof the processor'scontrol unitover multiple instructions.[81]In 1964, Slotnick had proposed building a massively parallel computer for theLawrence Livermore National Laboratory.[79]His design was funded by theUS Air Force, which was the earliest SIMD parallel-computing effort,ILLIAC IV.[79]The key to its design was a fairly high parallelism, with up to 256 processors, which allowed the machine to work on large datasets in what would later be known asvector processing. However, ILLIAC IV was called "the most infamous of supercomputers", because the project was only one-fourth completed, but took 11 years and cost almost four times the original estimate.[71]When it was finally ready to run its first real application in 1976, it was outperformed by existing commercial supercomputers such as theCray-1. In the early 1970s, at theMIT Computer Science and Artificial Intelligence Laboratory,Marvin MinskyandSeymour Papertstarted developing theSociety of Mindtheory, which views the biological brain asmassively parallel computer. In 1986, Minsky publishedThe Society of Mind, which claims that "mind is formed from many little agents, each mindless by itself".[82]The theory attempts to explain how what we call intelligence could be a product of the interaction of non-intelligent parts. Minsky says that the biggest source of ideas about the theory came from his work in trying to create a machine that uses a robotic arm, a video camera, and a computer to build with children's blocks.[83] Similar models (which also view the biological brain as a massively parallel computer, i.e., the brain is made up of a constellation of independent or semi-independent agents) were also described by:
https://en.wikipedia.org/wiki/Parallel_computing
Incomputer science, alockormutex(frommutual exclusion) is asynchronization primitivethat prevents state from being modified or accessed by multiplethreads of executionat once. Locks enforce mutual exclusionconcurrency controlpolicies, and with a variety of possible methods there exist multiple unique implementations for different applications. Generally, locks areadvisory locks, where each thread cooperates by acquiring the lock before accessing the corresponding data. Some systems also implementmandatory locks, where attempting unauthorized access to a locked resource will force anexceptionin the entity attempting to make the access. The simplest type of lock is a binarysemaphore. It provides exclusive access to the locked data. Other schemes also provide shared access for reading data. Other widely implemented access modes are exclusive, intend-to-exclude and intend-to-upgrade. Another way to classify locks is by what happens when thelock strategyprevents the progress of a thread. Most locking designsblocktheexecutionof thethreadrequesting the lock until it is allowed to access the locked resource. With aspinlock, the thread simply waits ("spins") until the lock becomes available. This is efficient if threads are blocked for a short time, because it avoids the overhead of operating system process rescheduling. It is inefficient if the lock is held for a long time, or if the progress of the thread that is holding the lock depends on preemption of the locked thread. Locks typically require hardware support for efficient implementation. This support usually takes the form of one or moreatomicinstructions such as "test-and-set", "fetch-and-add" or "compare-and-swap". These instructions allow a single process to test if the lock is free, and if free, acquire the lock in a single atomic operation. Uniprocessorarchitectures have the option of usinguninterruptible sequencesof instructions—using special instructions or instruction prefixes to disableinterruptstemporarily—but this technique does not work formultiprocessorshared-memory machines. Proper support for locks in a multiprocessor environment can require quite complex hardware or software support, with substantialsynchronizationissues. The reason anatomic operationis required is because of concurrency, where more than one task executes the same logic. For example, consider the followingCcode: The above example does not guarantee that the task has the lock, since more than one task can be testing the lock at the same time. Since both tasks will detect that the lock is free, both tasks will attempt to set the lock, not knowing that the other task is also setting the lock.Dekker'sorPeterson's algorithmare possible substitutes if atomic locking operations are not available. Careless use of locks can result indeadlockorlivelock. A number of strategies can be used to avoid or recover from deadlocks or livelocks, both at design-time and atrun-time. (The most common strategy is to standardize the lock acquisition sequences so that combinations of inter-dependent locks are always acquired in a specifically defined "cascade" order.) Some languages do support locks syntactically. An example inC#follows: C# introducedSystem.Threading.Lockin C# 13 on.NET9. 
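The check-then-set problem described above disappears when the test and the set are a single atomic operation. Below is a minimal sketch of a spinlock built on C11's atomic test-and-set; the type and function names are illustrative:

#include <stdatomic.h>

/* A spinlock: atomic_flag_test_and_set is one indivisible operation, so
   two threads can never both observe the lock as free and both take it. */
typedef struct { atomic_flag held; } spinlock_t;

static void spin_lock(spinlock_t *l) {
    /* "Spin" (busy-wait) until the previous value was clear, i.e. the
       lock was free and this thread has just atomically acquired it. */
    while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire))
        ;   /* efficient only when the lock is held briefly */
}

static void spin_unlock(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->held, memory_order_release);
}

/* usage sketch:
     static spinlock_t lk = { ATOMIC_FLAG_INIT };
     spin_lock(&lk);  ... critical section ...  spin_unlock(&lk);         */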
The codelock(this)can lead to problems if the instance can be accessed publicly.[1] Similar toJava, C# can also synchronize entire methods, by using the MethodImplOptions.Synchronized attribute.[2][3] Before being introduced to lock granularity, one needs to understand three concepts about locks: There is a tradeoff between decreasing lock overhead and decreasing lock contention when choosing the number of locks in synchronization. An important property of a lock is itsgranularity. The granularity is a measure of the amount of data the lock is protecting. In general, choosing a coarse granularity (a small number of locks, each protecting a large segment of data) results in lesslock overheadwhen a single process is accessing the protected data, but worse performance when multiple processes are running concurrently. This is because of increasedlock contention. The more coarse the lock, the higher the likelihood that the lock will stop an unrelated process from proceeding. Conversely, using a fine granularity (a larger number of locks, each protecting a fairly small amount of data) increases the overhead of the locks themselves but reduces lock contention. Granular locking where each process must hold multiple locks from a common set of locks can create subtle lock dependencies. This subtlety can increase the chance that a programmer will unknowingly introduce adeadlock.[citation needed] In adatabase management system, for example, a lock could protect, in order of decreasing granularity, part of a field, a field, a record, a data page, or an entire table. Coarse granularity, such as using table locks, tends to give the best performance for a single user, whereas fine granularity, such as record locks, tends to give the best performance for multiple users. Database lockscan be used as a means of ensuring transaction synchronicity. i.e. when making transaction processing concurrent (interleaving transactions), using2-phased locksensures that the concurrent execution of the transaction turns out equivalent to some serial ordering of the transaction. However, deadlocks become an unfortunate side-effect of locking in databases. Deadlocks are either prevented by pre-determining the locking order between transactions or are detected usingwaits-for graphs. An alternate to locking for database synchronicity while avoiding deadlocks involves the use of totally ordered global timestamps. There are mechanisms employed to manage the actions of multipleconcurrent userson a database—the purpose is to prevent lost updates and dirty reads. The two types of locking arepessimistic lockingandoptimistic locking: Several variations and refinements of these major lock types exist, with respective variations of blocking behavior. If a first lock blocks another lock, the two locks are calledincompatible; otherwise the locks arecompatible. Often, lock types blocking interactions are presented in the technical literature by aLock compatibility table. The following is an example with the common, major lock types: Comment:In some publications, the table entries are simply marked "compatible" or "incompatible", or respectively "yes" or "no".[5] Lock-based resource protection and thread/process synchronization have many disadvantages: Someconcurrency controlstrategies avoid some or all of these problems. For example, afunnelorserializing tokenscan avoid the biggest problem: deadlocks. Alternatives to locking includenon-blocking synchronizationmethods, likelock-freeprogramming techniques andtransactional memory. 
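As a sketch of the granularity trade-off described above, the two layouts below protect the same array either with one coarse lock or with one lock per bucket; the structures are illustrative, and each mutex must still be initialized (pthread_mutex_init or PTHREAD_MUTEX_INITIALIZER) before use:

#include <pthread.h>

#define NBUCKETS 64

/* Coarse granularity: one lock for the whole table -- low lock overhead,
   but every access contends for the same lock. */
struct coarse_table {
    pthread_mutex_t lock;
    int value[NBUCKETS];
};

/* Fine granularity: one lock per bucket -- more locks to manage, but two
   threads touching different buckets no longer block each other. */
struct fine_table {
    pthread_mutex_t lock[NBUCKETS];
    int value[NBUCKETS];
};

void coarse_add(struct coarse_table *t, int bucket, int delta) {
    pthread_mutex_lock(&t->lock);
    t->value[bucket] += delta;
    pthread_mutex_unlock(&t->lock);
}

void fine_add(struct fine_table *t, int bucket, int delta) {
    pthread_mutex_lock(&t->lock[bucket]);
    t->value[bucket] += delta;
    pthread_mutex_unlock(&t->lock[bucket]);
}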
However, such alternative methods often require that the actual lock mechanisms be implemented at a more fundamental level of the operating software. Therefore, they may only relieve theapplicationlevel from the details of implementing locks, with the problems listed above still needing to be dealt with beneath the application. In most cases, proper locking depends on the CPU providing a method of atomic instruction stream synchronization (for example, the addition or deletion of an item into a pipeline requires that all contemporaneous operations needing to add or delete other items in the pipe be suspended during the manipulation of the memory content required to add or delete the specific item). Therefore, an application can often be more robust when it recognizes the burdens it places upon an operating system and is capable of graciously recognizing the reporting of impossible demands.[citation needed] One of lock-based programming's biggest problems is that "locks don'tcompose": it is hard to combine small, correct lock-based modules into equally correct larger programs without modifying the modules or at least knowing about their internals.Simon Peyton Jones(an advocate ofsoftware transactional memory) gives the following example of a banking application:[6]design a classAccountthat allows multiple concurrent clients to deposit or withdraw money to an account, and give an algorithm to transfer money from one account to another. The lock-based solution to the first part of the problem is: The second part of the problem is much more complicated. Atransferroutine that is correctfor sequential programswould be In a concurrent program, this algorithm is incorrect because when one thread is halfway throughtransfer, another might observe a state whereamounthas been withdrawn from the first account, but not yet deposited into the other account: money has gone missing from the system. This problem can only be fixed completely by putting locks on both accounts prior to changing either one, but then the locks have to be placed according to some arbitrary, global ordering to prevent deadlock: This solution gets more complicated when more locks are involved, and thetransferfunction needs to know about all of the locks, so they cannot behidden. Programming languages vary in their support for synchronization: Amutexis alocking mechanismthat sometimes uses the same basic implementation as the binary semaphore. However, they differ in how they are used. While a binary semaphore may be colloquially referred to as a mutex, a true mutex has a more specific use-case and definition, in that only thetaskthat locked the mutex is supposed to unlock it. This constraint aims to handle some potential problems of using semaphores:
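The banking example above is usually presented in a higher-level language; a C sketch of the same idea, taking the arbitrary but consistent global lock order to be the accounts' addresses, might look like the following (names are illustrative):

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    long balance;
} account_t;

/* First part of the problem: each account's lock protects its balance. */
void deposit(account_t *a, long amount) {
    pthread_mutex_lock(&a->lock);
    a->balance += amount;
    pthread_mutex_unlock(&a->lock);
}

/* Transfer: both locks are held before either balance changes, and they
   are always acquired in the same global order (here, by address), so two
   opposite transfers cannot deadlock. */
void transfer(account_t *from, account_t *to, long amount) {
    account_t *first  = from < to ? from : to;
    account_t *second = from < to ? to : from;

    pthread_mutex_lock(&first->lock);
    pthread_mutex_lock(&second->lock);
    from->balance -= amount;
    to->balance   += amount;
    pthread_mutex_unlock(&second->lock);
    pthread_mutex_unlock(&first->lock);
}

As the surrounding text notes, this is exactly where composability breaks down: transfer must know about every lock involved, so the locks cannot stay hidden inside the account abstraction.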
https://en.wikipedia.org/wiki/Lock_(computer_science)
Anoperating system(OS) issystem softwarethat managescomputer hardwareandsoftwareresources, and provides commonservicesforcomputer programs. Time-sharingoperating systemsschedule tasksfor efficient use of the system and may also include accounting software for cost allocation ofprocessor time,mass storage, peripherals, and other resources. For hardware functions such asinput and outputandmemory allocation, the operating system acts as an intermediary between programs and the computer hardware,[1][2]although the application code is usually executed directly by the hardware and frequently makessystem callsto an OS function or isinterruptedby it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles toweb serversandsupercomputers. As of September 2024[update],Androidis the most popular operating system with a 46% market share, followed byMicrosoft Windowsat 26%,iOSandiPadOSat 18%,macOSat 5%, andLinuxat 1%. Android, iOS, and iPadOS are mobile operating systems, while Windows, macOS, and Linux are desktop operating systems.[3]Linux distributionsare dominant in the server and supercomputing sectors. Other specialized classes of operating systems (special-purpose operating systems),[4][5]such asembeddedand real-time systems, exist for many applications.Security-focused operating systemsalso exist. Some operating systems have low system requirements (e.g.light-weight Linux distribution). Others may have higher system requirements. Some operating systems require installation or may come pre-installed with purchased computers (OEM-installation), whereas others may run directly from media (i.e.live CD) or flash memory (i.e. a LiveUSB from aUSBstick). An operating system is difficult to define,[6]but has been called "thelayer of softwarethat manages a computer's resources for its users and theirapplications".[7]Operating systems include the software that is always running, called akernel—but can include other software as well.[6][8]The two other types of programs that can run on a computer aresystem programs—which are associated with the operating system, but may not be part of the kernel—and applications—all other software.[8] There are three main purposes that an operating system fulfills:[9] Withmultiprocessorsmultiple CPUs share memory. Amulticomputerorcluster computerhas multiple CPUs, each of whichhas its own memory. Multicomputers were developed because large multiprocessors are difficult to engineer and prohibitively expensive;[17]they are universal incloud computingbecause of the size of the machine needed.[18]The different CPUs often need to send and receive messages to each other;[19]to ensure good performance, the operating systems for these machines need to minimize this copying ofpackets.[20]Newer systems are oftenmultiqueue—separating groups of users into separatequeues—to reduce the need for packet copying and support more concurrent users.[21]Another technique isremote direct memory access, which enables each CPU to access memory belonging to other CPUs.[19]Multicomputer operating systems often supportremote procedure callswhere a CPU can call aprocedureon another CPU,[22]ordistributed shared memory, in which the operating system usesvirtualizationto generate shared memory that does not physically exist.[23] Adistributed systemis a group of distinct,networkedcomputers—each of which might have their own operating system and file system. 
Unlike multicomputers, they may be dispersed anywhere in the world.[24]Middleware, an additional software layer between the operating system and applications, is often used to improve consistency. Although it functions similarly to an operating system, it is not a true operating system.[25] Embedded operating systemsare designed to be used inembedded computer systems, whether they areinternet of thingsobjects or not connected to a network. Embedded systems include many household appliances. The distinguishing factor is that they do not load user-installed software. Consequently, they do not need protection between different applications, enabling simpler designs. Very small operating systems might run in less than 10kilobytes,[26]and the smallest are forsmart cards.[27]Examples includeEmbedded Linux,QNX,VxWorks, and the extra-small systemsRIOTandTinyOS.[28] Areal-time operating systemis an operating system that guarantees to processeventsor data by or at a specific moment in time. Hard real-time systems require exact timing and are common inmanufacturing,avionics, military, and other similar uses.[28]With soft real-time systems, the occasional missed event is acceptable; this category often includes audio or multimedia systems, as well as smartphones.[28]In order for hard real-time systems be sufficiently exact in their timing, often they are just a library with no protection between applications, such aseCos.[28] Ahypervisoris an operating system that runs avirtual machine. The virtual machine is unaware that it is an application and operates as if it had its own hardware.[14][29]Virtual machines can be paused, saved, and resumed, making them useful for operating systems research, development,[30]and debugging.[31]They also enhance portability by enabling applications to be run on a computer even if they are not compatible with the base operating system.[14] Alibrary operating system(libOS) is one in which the services that a typical operating system provides, such as networking, are provided in the form oflibrariesand composed with a single application and configuration code to construct aunikernel:[32]a specialized (only the absolute necessary pieces of code are extracted from libraries and bound together[33]),single address space, machine image that can be deployed to cloud or embedded environments. The operating system code and application code are not executed in separatedprotection domains(there is only a single application running, at least conceptually, so there is no need to prevent interference between applications) and OS services are accessed via simple library calls (potentiallyinliningthem based on compiler thresholds), without the usual overhead ofcontext switches,[34]in a way similarly to embedded and real-time OSes. Note that this overhead is not negligible: to the direct cost of mode switching it's necessary to add the indirect pollution of important processor structures (likeCPU caches, theinstruction pipeline, and so on) which affects both user-mode and kernel-mode performance.[35] The first computers in the late 1940s and 1950s were directly programmed either withplugboardsor withmachine codeinputted on media such aspunch cards, withoutprogramming languagesor operating systems.[36]After the introduction of thetransistorin the mid-1950s,mainframesbegan to be built. 
These still needed professional operators[36]who manually do what a modern operating system would do, such as scheduling programs to run,[37]but mainframes still had rudimentary operating systems such asFortran Monitor System(FMS) andIBSYS.[38]In the 1960s,IBMintroduced the first series of intercompatible computers (System/360). All of them ran the same operating system—OS/360—which consisted of millions of lines ofassembly languagethat had thousands ofbugs. The OS/360 also was the first popular operating system to supportmultiprogramming, such that the CPU could be put to use on one job while another was waiting oninput/output(I/O). Holding multiple jobs inmemorynecessitated memory partitioning and safeguards against one job accessing the memory allocated to a different one.[39] Around the same time,teleprintersbegan to be used asterminalsso multiple users could access the computer simultaneously. The operating systemMULTICSwas intended to allow hundreds of users to access a large computer. Despite its limited adoption, it can be considered the precursor tocloud computing. TheUNIXoperating system originated as a development of MULTICS for a single user.[40]Because UNIX'ssource codewas available, it became the basis of other, incompatible operating systems, of which the most successful wereAT&T'sSystem Vand theUniversity of California'sBerkeley Software Distribution(BSD).[41]To increase compatibility, theIEEEreleased thePOSIXstandard for operating systemapplication programming interfaces(APIs), which is supported by most UNIX systems.MINIXwas a stripped-down version of UNIX, developed in 1987 for educational uses, that inspired the commercially available,free softwareLinux. Since 2008, MINIX is used in controllers of mostIntelmicrochips, while Linux is widespread indata centersandAndroidsmartphones.[42] The invention oflarge scale integrationenabled the production ofpersonal computers(initially calledmicrocomputers) from around 1980.[43]For around five years, theCP/M(Control Program for Microcomputers) was the most popular operating system for microcomputers.[44]Later, IBM bought theDOS(Disk Operating System) fromMicrosoft. After modifications requested by IBM, the resulting system was calledMS-DOS(MicroSoft Disk Operating System) and was widely used on IBM microcomputers. Later versions increased their sophistication, in part by borrowing features from UNIX.[44] Apple'sMacintoshwas the first popular computer to use agraphical user interface(GUI). The GUI proved much moreuser friendlythan the text-onlycommand-line interfaceearlier operating systems had used. Following the success of Macintosh, MS-DOS was updated with a GUI overlay calledWindows. Windows later was rewritten as a stand-alone operating system, borrowing so many features from another (VAX VMS) that a largelegal settlementwas paid.[45]In the twenty-first century, Windows continues to be popular on personal computers but has lessmarket shareof servers. UNIX operating systems, especially Linux, are the most popular onenterprise systemsand servers but are also used onmobile devicesand many other computer systems.[46] On mobile devices,Symbian OSwas dominant at first, being usurped byBlackBerry OS(introduced 2002) andiOSforiPhones(from 2007). Later on, the open-sourceAndroidoperating system (introduced 2008), with a Linux kernel and a C library (Bionic) partially based on BSD code, became most popular.[47] The components of an operating system are designed to ensure that various parts of a computer function cohesively. 
With the de facto obsoletion ofDOS, all usersoftwaremust interact with the operating system to access hardware. The kernel is the part of the operating system that providesprotectionbetween different applications and users. This protection is key to improving reliability by keeping errors isolated to one program, as well as security by limiting the power ofmalicious softwareand protecting private data, and ensuring that one program cannot monopolize the computer's resources.[48]Most operating systems have two modes of operation:[49]inuser mode, the hardware checks that the software is only executing legal instructions, whereas the kernel hasunrestricted powersand is not subject to these checks.[50]The kernel also managesmemoryfor other processes and controls access toinput/outputdevices.[51] The operating system provides an interface between an application program and the computer hardware, so that an application program can interact with the hardware only by obeying rules and procedures programmed into the operating system. The operating system is also a set of services which simplify development and execution of application programs. Executing an application program typically involves the creation of aprocessby the operating systemkernel, which assigns memory space and other resources, establishes a priority for the process in multi-tasking systems, loads program binary code into memory, and initiates execution of the application program, which then interacts with the user and with hardware devices. However, in some systems an application can request that the operating system execute another application within the same process, either as a subroutine or in a separate thread, e.g., theLINKandATTACHfacilities ofOS/360 and successors. Aninterrupt(also known as anabort,exception,fault,signal,[52]ortrap)[53]provides an efficient way for most operating systems to react to the environment. Interrupts cause thecentral processing unit(CPU) to have acontrol flowchange away from the currently running program to aninterrupt handler, also known as an interrupt service routine (ISR).[54][55]An interrupt service routine may cause thecentral processing unit(CPU) to have acontext switch.[56][a]The details of how a computer processes an interrupt vary from architecture to architecture, and the details of how interrupt service routines behave vary from operating system to operating system.[57]However, several interrupt functions are common.[57]The architecture and operating system must:[57] A software interrupt is a message to aprocessthat an event has occurred.[52]This contrasts with ahardware interrupt— which is a message to thecentral processing unit(CPU) that an event has occurred.[58]Software interrupts are similar to hardware interrupts — there is a change away from the currently running process.[59]Similarly, both hardware and software interrupts execute aninterrupt service routine. Software interrupts may be normally occurring events. 
It is expected that atime slicewill occur, so the kernel will have to perform acontext switch.[60]Acomputer programmay set a timer to go off after a few seconds in case too much data causes an algorithm to take too long.[61] Software interrupts may be error conditions, such as a malformedmachine instruction.[61]However, the most common error conditions aredivision by zeroandaccessing an invalid memory address.[61] Userscan send messages to the kernel to modify the behavior of a currently running process.[61]For example, in thecommand-line environment, pressing theinterrupt character(usuallyControl-C) might terminate the currently running process.[61] To generatesoftware interruptsforx86CPUs, theINTassembly languageinstruction is available.[62]The syntax isINT X, whereXis the offset number (inhexadecimalformat) to theinterrupt vector table. To generatesoftware interruptsinUnix-likeoperating systems, thekill(pid,signum)system callwill send asignalto another process.[63]pidis theprocess identifierof the receiving process.signumis the signal number (inmnemonicformat)[b]to be sent. (The abrasive name ofkillwas chosen because early implementations only terminated the process.)[64] In Unix-like operating systems,signalsinform processes of the occurrence of asynchronous events.[63]To communicate asynchronously, interrupts are required.[65]One reason a process needs to asynchronously communicate to another process solves a variation of the classicreader/writer problem.[66]The writer receives a pipe from theshellfor its output to be sent to the reader's input stream.[67]Thecommand-linesyntax isalpha | bravo.alphawill write to the pipe when its computation is ready and then sleep in the wait queue.[68]bravowill then be moved to theready queueand soon will read from its input stream.[69]The kernel will generatesoftware interruptsto coordinate the piping.[69] Signalsmay be classified into 7 categories.[63]The categories are: Input/output(I/O)devicesare slower than the CPU. Therefore, it would slow down the computer if the CPU had towaitfor each I/O to finish. Instead, a computer may implement interrupts for I/O completion, avoiding the need forpollingor busy waiting.[70] Some computers require an interrupt for each character or word, costing a significant amount of CPU time.Direct memory access(DMA) is an architecture feature to allow devices to bypass the CPU and accessmain memorydirectly.[71](Separate from the architecture, a device may perform direct memory access[c]to and from main memory either directly or via a bus.)[72][d] When acomputer usertypes a key on the keyboard, typically the character appears immediately on the screen. Likewise, when a user moves amouse, thecursorimmediately moves across the screen. Each keystroke and mouse movement generates aninterruptcalledInterrupt-driven I/O. An interrupt-driven I/O occurs when a process causes an interrupt for every character[72]or word[73]transmitted. Devices such ashard disk drives,solid-state drives, andmagnetic tapedrives can transfer data at a rate high enough that interrupting the CPU for every byte or word transferred, and having the CPU transfer the byte or word between the device and memory, would require too much CPU time. 
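As a small sketch of the signal mechanism described above (the choice of SIGUSR1 is arbitrary), one process installs a handler and another delivers the software interrupt with kill(pid, SIGUSR1):

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

/* The handler plays the role of an interrupt service routine for the
   process: control is diverted here when the signal is delivered. */
static void on_sigusr1(int signum) {
    (void)signum;
    got_signal = 1;
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigusr1;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);

    printf("pid=%d; deliver the signal with kill(pid, SIGUSR1)\n", getpid());
    while (!got_signal)
        pause();                  /* sleep until a signal arrives */
    printf("received SIGUSR1\n");
    return 0;
}

From a shell, kill -USR1 <pid> has the same effect as the kill() system call shown in the comment.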
Data is, instead, transferred between the device and memory independently of the CPU by hardware such as achannelor adirect memory accesscontroller; an interrupt is delivered only when all the data is transferred.[74] If acomputer programexecutes asystem callto perform a block I/Owriteoperation, then the system call might execute the following instructions: While the writing takes place, the operating system will context switch to other processes as normal. When the device finishes writing, the device willinterruptthe currently running process byassertinganinterrupt request. The device will also place an integer onto the data bus.[78]Upon accepting the interrupt request, the operating system will: When the writing process has itstime sliceexpired, the operating system will:[79] With the program counter now reset, the interrupted process will resume its time slice.[57] Among other things, a multiprogramming operating systemkernelmust be responsible for managing all system memory which is currently in use by the programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time share, each program must have independent access to memory. Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of thekernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen anymore, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaved program tocrashthe system. Memory protectionenables thekernelto limit a process' access to the computer's memory. Various methods of memory protection exist, includingmemory segmentationandpaging. All methods require some level of hardware support (such as the80286MMU), which does not exist in all computers. In both segmentation and paging, certainprotected moderegisters specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt, which causes the CPU to re-entersupervisor mode, placing thekernelin charge. This is called asegmentation violationor Seg-V for short, and since it is both difficult to assign a meaningful result to such an operation, and because it is usually a sign of a misbehaving program, thekernelgenerally resorts to terminating the offending program, and reports the error. Windows versions 3.1 through ME had some level of memory protection, but programs could easily circumvent the need to use it. Ageneral protection faultwould be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway. The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks. If a program tries to access memory that is not accessible[e]memory, but nonetheless has been allocated to it, the kernel is interrupted(see§ Memory management). This kind of interrupt is typically apage fault. 
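The block I/O write sequence described above can be driven from user space with nothing more than the basic file system calls; in this sketch the file name is illustrative:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* open(), write() and close() are system calls: each traps into the
       kernel, which queues the device operation and may put this process
       to sleep until the device's completion interrupt arrives. */
    int fd = open("output.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    const char *buf = "hello, block I/O\n";
    ssize_t written = write(fd, buf, strlen(buf));  /* data moved to the device,
                                                       e.g. via a DMA controller */
    close(fd);
    return written < 0;
}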
When the kernel detects a page fault it generally adjusts the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has been allocated yet. In modern operating systems, memory which is accessed less frequently can be temporarily stored on a disk or other media to make that space available for use by other programs. This is calledswapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand. Virtual memory provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there.[80] Concurrencyrefers to the operating system's ability to carry out multiple tasks simultaneously.[81]Virtually all modern operating systems support concurrency.[82] Threadsenable splitting a process' work into multiple parts that can run simultaneously.[83]The number of threads is not limited by the number of processors available. If there are more threads than processors, the operating systemkernelschedules, suspends, and resumes threads, controlling when each thread runs and how much CPU time it receives.[84]During acontext switcha running thread is suspended, its state is saved into thethread control blockand stack, and the state of the new thread is loaded in.[85]Historically, on many systems a thread could run until it relinquished control (cooperative multitasking). Because this model can allow a single thread to monopolize the processor, most operating systems now caninterrupta thread (preemptive multitasking).[86] Threads have their own thread ID,program counter(PC), aregisterset, and astack, but share code,heapdata, and other resources with other threads of the same process.[87][88]Thus, there is less overhead to create a thread than a new process.[89]On single-CPU systems, concurrency is switching between processes. Many computers have multiple CPUs.[90]Parallelismwith multiple threads running on different CPUs can speed up a program, depending on how much of it can be executed concurrently.[91] Permanent storage devices used in twenty-first century computers, unlikevolatiledynamic random-access memory(DRAM), are still accessible after acrashorpower failure. Permanent (non-volatile) storage is much cheaper per byte, but takes several orders of magnitude longer to access, read, and write.[92][93]The two main technologies are ahard driveconsisting ofmagnetic disks, andflash memory(asolid-state drivethat stores data in electrical circuits). The latter is more expensive but faster and more durable.[94][95] File systemsare anabstractionused by the operating system to simplify access to permanent storage. 
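Demand paging as just described can be observed from an ordinary program. In this sketch (Linux/POSIX mmap; the sizes are illustrative), the kernel assigns physical pages only when each page is first touched, transparently servicing a page fault each time:

#include <sys/mman.h>
#include <stdio.h>

int main(void) {
    size_t len = 16 * 4096;   /* sixteen pages */

    /* The kernel only reserves a virtual address range here; no physical
       memory has been assigned yet. */
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    /* The first write to each page triggers a page fault; the kernel's
       fault handler allocates a physical page and resumes the program. */
    for (size_t i = 0; i < len; i += 4096)
        p[i] = 1;

    /* Revoking write permission means a later store to this range would
       fault again -- this time as a protection error (SIGSEGV). */
    mprotect(p, len, PROT_READ);

    printf("touched %zu pages\n", len / 4096);
    munmap(p, len);
    return 0;
}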
They provide human-readablefilenamesand othermetadata, increase performance viaamortizationof accesses, prevent multiple threads from accessing the same section of memory, and includechecksumsto identifycorruption.[96]File systems are composed of files (named collections of data, of an arbitrary size) anddirectories(also called folders) that list human-readable filenames and other directories.[97]An absolutefile pathbegins at theroot directoryand listssubdirectoriesdivided by punctuation, while a relative path defines the location of a file from a directory.[98][99] System calls(which are sometimeswrappedby libraries) enable applications to create, delete, open, and close files, as well as link, read, and write to them. All these operations are carried out by the operating system on behalf of the application.[100]The operating system's efforts to reduce latency include storing recently requested blocks of memory in acacheandprefetchingdata that the application has not asked for, but might need next.[101]Device driversare software specific to eachinput/output(I/O) device that enables the operating system to work without modification over different hardware.[102][103] Another component of file systems is adictionarythat maps a file's name and metadata to thedata blockwhere its contents are stored.[104]Most file systems use directories to convert file names to file numbers. To find the block number, the operating system uses anindex(often implemented as atree).[105]Separately, there is a free spacemapto track free blocks, commonly implemented as abitmap.[105]Although any free block can be used to store a new file, many operating systems try to group together files in the same directory to maximize performance, or periodically reorganize files to reducefragmentation.[106] Maintaining data reliability in the face of a computer crash or hardware failure is another concern.[107]File writing protocols are designed with atomic operations so as not to leave permanent storage in a partially written, inconsistent state in the event of a crash at any point during writing.[108]Data corruption is addressed by redundant storage (for example, RAID—redundant array of inexpensive disks)[109][110]andchecksumsto detect when data has been corrupted. With multiple layers of checksums and backups of a file, a system can recover from multiple hardware failures. Background processes are often used to detect and recover from data corruption.[110] Security means protecting users from other users of the same computer, as well as from those who seeking remote access to it over a network.[111]Operating systems security rests on achieving theCIA triad: confidentiality (unauthorized users cannot access data), integrity (unauthorized users cannot modify data), and availability (ensuring that the system remains available to authorized users, even in the event of adenial of service attack).[112]As with other computer systems, isolatingsecurity domains—in the case of operating systems, the kernel, processes, andvirtual machines—is key to achieving security.[113]Other ways to increase security include simplicity to minimize theattack surface, locking access to resources by default, checking all requests for authorization,principle of least authority(granting the minimum privilege essential for performing a task),privilege separation, and reducing shared data.[114] Some operating system designs are more secure than others. 
Those with no isolation between the kernel and applications are least secure, while those with amonolithic kernellike most general-purpose operating systems are still vulnerable if any part of the kernel is compromised. A more secure design featuresmicrokernelsthat separate the kernel's privileges into many separate security domains and reduce the consequences of a single kernel breach.[115]Unikernelsare another approach that improves security by minimizing the kernel and separating out other operating systems functionality by application.[115] Most operating systems are written inCorC++, which create potential vulnerabilities for exploitation. Despite attempts to protect against them, vulnerabilities are caused bybuffer overflowattacks, which are enabled by the lack ofbounds checking.[116]Hardware vulnerabilities, some of themcaused by CPU optimizations, can also be used to compromise the operating system.[117]There are known instances of operating system programmers deliberately implanting vulnerabilities, such asback doors.[118] Operating systems security is hampered by their increasing complexity and the resulting inevitability of bugs.[119]Becauseformal verificationof operating systems may not be feasible, developers use operating systemhardeningto reduce vulnerabilities,[120]e.g.address space layout randomization,control-flow integrity,[121]access restrictions,[122]and other techniques.[123]There are no restrictions on who can contribute code to open source operating systems; such operating systems have transparent change histories and distributed governance structures.[124]Open source developers strive to work collaboratively to find and eliminate security vulnerabilities, usingcode reviewandtype checkingto expunge malicious code.[125][126]Andrew S. Tanenbaumadvises releasing thesource codeof all operating systems, arguing that it prevents developers from placing trust in secrecy and thus relying on the unreliable practice ofsecurity by obscurity.[127] Auser interface(UI) is essential to support human interaction with a computer. The two most common user interface types for any computer are For personal computers, includingsmartphonesandtablet computers, and forworkstations, user input is typically from a combination ofkeyboard,mouse, andtrackpadortouchscreen, all of which are connected to the operating system with specialized software.[128]Personal computer users who are not software developers or coders often prefer GUIs for both input and output; GUIs are supported by most personal computers.[129]The software to support GUIs is more complex than a command line for input and plain text output. Plain text output is often preferred by programmers, and is easy to support.[130] A hobby operating system may be classified as one whose code has not been directly derived from an existing operating system, and has few users and active developers.[131] In some cases, hobby development is in support of a "homebrew" computing device, for example, a simplesingle-board computerpowered by a6502 microprocessor. Or, development may be for an architecture already in widespread use. Operating system development may come from entirely new concepts, or may commence by modeling an existing operating system. In either case, the hobbyist is her/his own developer, or may interact with a small and sometimes unstructured group of individuals who have like interests. Examples of hobby operating systems includeSyllableandTempleOS. 
If an application is written for use on a specific operating system, and isportedto another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, meaning of arguments, etc.) requiring the application to be adapted, changed, or otherwisemaintained. This cost in supporting operating systems diversity can be avoided by instead writing applications againstsoftware platformssuch asJavaorQt. These abstractions have already borne the cost of adaptation to specific operating systems and theirsystem libraries. Another approach is for operating system vendors to adopt standards. For example,POSIXandOS abstraction layersprovide commonalities that reduce porting costs. As of September 2024[update],Android(based on the Linux kernel) is the most popular operating system with a 46% market share, followed byMicrosoft Windowsat 26%,iOSandiPadOSat 18%,macOSat 5%, andLinuxat 1%. Android, iOS, and iPadOS aremobile operating systems, while Windows, macOS, and Linux are desktop operating systems.[3] Linuxis afree softwaredistributed under theGNU General Public License(GPL), which means that all of its derivatives are legally required to release theirsource code.[132]Linux was designed by programmers for their own use, thus emphasizing simplicity and consistency, with a small number of basic elements that can be combined in nearly unlimited ways, and avoiding redundancy.[133] Its design is similar to other UNIX systems not using amicrokernel.[134]It is written inC[135]and usesUNIX System Vsyntax, but also supportsBSDsyntax. Linux supports standard UNIX networking features, as well as the full suite of UNIX tools, whilesupporting multiple usersand employingpreemptive multitasking. Initially of a minimalist design, Linux is a flexible system that can work in under 16MBofRAM, but still is used on largemultiprocessorsystems.[134]Similar to other UNIX systems, Linuxdistributionsare composed of akernel,system libraries, andsystem utilities.[136]Linux has agraphical user interface(GUI) with a desktop, folder and file icons, as well as the option to access the operating system via acommand line.[137] Androidis a partially open-source operating system closely based on Linux and has become the most widely used operating system by users, due to its popularity onsmartphonesand, to a lesser extent,embedded systemsneeding a GUI, such as "smart watches,automotive dashboards, airplane seatbacks,medical devices, andhome appliances".[138]Unlike Linux, much of Android is written inJavaand usesobject-oriented design.[139] Windows is aproprietaryoperating system that is widely used on desktop computers, laptops, tablets, phones,workstations,enterprise servers, andXboxconsoles.[141]The operating system was designed for "security, reliability, compatibility, high performance, extensibility, portability, and international support"—later on,energy efficiencyand support fordynamic devicesalso became priorities.[142] Windows Executiveworks viakernel-mode objectsfor important data structures like processes, threads, and sections (memory objects, for example files).[143]The operating system supportsdemand pagingofvirtual memory, which speeds up I/O for many applications. I/Odevice driversuse theWindows Driver Model.[143]TheNTFSfile system has a master table and each file is represented as arecordwithmetadata.[144]The scheduling includespreemptive multitasking.[145]Windows has many security features;[146]especially important are the use ofaccess-control listsandintegrity levels. 
Every process has an authentication token and each object is given a security descriptor. Later releases have added even more security features.[144]
https://en.wikipedia.org/wiki/Operating_system#Multitasking
straceis a diagnostic,debuggingand instructionaluserspaceutility forLinux. It is used to monitor and tamper with interactions betweenprocessesand theLinux kernel, which includesystem calls,signaldeliveries, and changes of process state. The operation of strace is made possible by the kernel feature known asptrace. SomeUnix-likesystems provide other diagnostic tools similar to strace, such as truss. Strace was originally written forSunOSby Paul Kranenburg in 1991, according to its copyright notice, and published early in 1992, in volume three of comp.sources.sun. The initialREADMEfile contained the following:[5] strace(1)is a system call tracer for Sun(tm) systems much like the Sun supplied programtrace(1).strace(1)is a useful utility to sort of debug programs for which no source is available which unfortunately includes almost all of the Sun supplied system software. Later, Branko Lankester ported this version toLinux, releasing his version in November 1992 with the second release following in 1993.[6][7]Richard Sladkey combined these separate versions of strace in 1993, and ported the program toSVR4andSolarisin 1994,[8]resulting in strace 3.0 that was announced in comp.sources.misc in mid-1994.[9] Beginning in 1996, strace was maintained by Wichert Akkerman. During his tenure, strace development migrated toCVS; ports toFreeBSDand many architectures on Linux (including ARM, IA-64, MIPS, PA-RISC, PowerPC, s390, SPARC) were introduced. In 2002, the burden of strace maintainership was transferred to Roland McGrath. Since then, strace gained support for several new Linux architectures (AMD64, s390x, SuperH), bi-architecture support for some of them, and received numerous additions and improvements in syscalls decoders on Linux; strace development migrated togitduring that period. Since 2009, strace is actively maintained by Dmitry Levin. strace gained support for AArch64, ARC, AVR32, Blackfin, Meta, Nios II, OpenSISC 1000, RISC-V, Tile/TileGx, Xtensa architectures since that time. The last version of strace that had some (evidently dead)[10]code for non-Linuxoperating systems was 4.6, released in March 2011.[11]In strace version 4.7, released in May 2012,[12]all non-Linux code had been removed;[13]since strace 4.13,[14]the project follows Linux kernel's release schedule, and as of version 5.0,[15]it follows Linux's versioning scheme as well. In 2012 strace also gained support for path tracing and file descriptor path decoding.[16]In August 2014, strace 4.9 was released,[17][18]where support for stack traces printing was added. In December 2016,[19][20]syscallfault injectionfeature was implemented. The most common use is to start a program using strace, which prints a list of system calls made by the program. This is useful if the program continually crashes, or does not behave as expected; for example using strace may reveal that the program is attempting to access a file which does not exist or cannot be read. An alternative application is to use the-pflag to attach to a running process. This is useful if a process has stopped responding, and might reveal, for example, that the process is blocking whilst attempting to make a network connection. Among other features, strace allows the following: strace supports decoding of arguments of some classes ofioctlcommands, such asBTRFS_*,V4L2_*,DM_*,NSFS_*,MEM*,EVIO*,KVM_*, and several others; it also supports decoding of variousnetlinkprotocols. 
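As an illustration of the typical use described above, the short C program below (its path and file names are made up) performs a few easily recognized system calls; running it under strace, or attaching to it with the -p flag, lists each call with its arguments and return value, including the failing open of a file that does not exist:

#include <fcntl.h>
#include <unistd.h>

int main(void) {
    /* A failed open appears in the trace with its error code (e.g. a
       "no such file" error), exactly the kind of problem strace is used
       to diagnose. */
    int fd = open("/nonexistent/config.conf", O_RDONLY);  /* illustrative path */
    if (fd >= 0)
        close(fd);

    /* A write to standard output appears as a write() on descriptor 1. */
    write(STDOUT_FILENO, "hello\n", 6);
    return 0;
}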
As strace only details system calls, it cannot be used to detect as many problems as a code debugger such as GNU Debugger (gdb). It is, however, easier to use than a code debugger, and is a very useful tool for system administrators. It is also used by researchers to generate system call traces for later system call replay.[66][67][68]

A typical run of strace on the 'ls' command shows that the current working directory is opened, inspected and its contents retrieved; the resulting list of file names is then written to standard output.

Different operating systems feature other similar or related instrumentation tools offering similar or more advanced features; some of these tools (although using the same or a similar name) may use completely different working mechanisms, resulting in different feature sets or results.
https://en.wikipedia.org/wiki/Strace
Inmathematics, amultiset(orbag, ormset) is a modification of the concept of asetthat, unlike a set,[1]allows for multiple instances for each of itselements. The number of instances given for each element is called themultiplicityof that element in the multiset. As a consequence, an infinite number of multisets exist that contain only elementsaandb, but vary in the multiplicities of their elements: These objects are all different when viewed as multisets, although they are the same set, since they all consist of the same elements. As with sets, and in contrast totuples, the order in which elements are listed does not matter in discriminating multisets, so{a,a,b}and{a,b,a}denote the same multiset. To distinguish between sets and multisets, a notation that incorporates square brackets is sometimes used: the multiset{a,a,b}can be denoted by[a,a,b].[2] Thecardinalityor "size" of a multiset is the sum of the multiplicities of all its elements. For example, in the multiset{a,a,b,b,b,c}the multiplicities of the membersa,b, andcare respectively 2, 3, and 1, and therefore the cardinality of this multiset is 6. Nicolaas Govert de Bruijncoined the wordmultisetin the 1970s, according toDonald Knuth.[3]: 694However, the concept of multisets predates the coinage of the wordmultisetby many centuries. Knuth himself attributes the first study of multisets to the Indian mathematicianBhāskarāchārya, who describedpermutations of multisetsaround 1150. Other names have been proposed or used for this concept, includinglist,bunch,bag,heap,sample,weighted set,collection, andsuite.[3]: 694 Wayne Blizard traced multisets back to the very origin of numbers, arguing that "in ancient times, the numbernwas often represented by a collection ofnstrokes,tally marks, or units."[4]These and similar collections of objects can be regarded as multisets, because strokes, tally marks, or units are considered indistinguishable. This shows that people implicitly used multisets even before mathematics emerged. Practical needs for this structure have caused multisets to be rediscovered several times, appearing in literature under different names.[5]: 323For instance, they were important in earlyAIlanguages, such as QA4, where they were referred to asbags,a term attributed toPeter Deutsch.[6]A multiset has been also called an aggregate, heap, bunch, sample, weighted set, occurrence set, and fireset (finitely repeated element set).[5]: 320[7] Although multisets were used implicitly from ancient times, their explicit exploration happened much later. The first known study of multisets is attributed to the Indian mathematicianBhāskarāchāryacirca 1150, who described permutations of multisets.[3]: 694The work ofMarius Nizolius(1498–1576) contains another early reference to the concept of multisets.[8]Athanasius Kircherfound the number of multiset permutations when one element can be repeated.[9]Jean Prestetpublished a general rule for multiset permutations in 1675.[10]John Wallisexplained this rule in more detail in 1685.[11] Multisets appeared explicitly in the work ofRichard Dedekind.[12][13] Other mathematicians formalized multisets and began to study them as precise mathematical structures in the 20th century. 
For example,Hassler Whitney(1933) describedgeneralized sets("sets" whosecharacteristic functionsmay take anyintegervalue: positive, negative or zero).[5]: 326[14]: 405Monro (1987) investigated thecategoryMulof multisets and theirmorphisms, defining amultisetas a set with anequivalence relationbetween elements "of the samesort", and amorphismbetween multisets as afunctionthat respectssorts. He also introduced amultinumber: a functionf(x) from a multiset to thenatural numbers, giving themultiplicityof elementxin the multiset. Monro argued that the concepts of multiset and multinumber are often mixed indiscriminately, though both are useful.[5]: 327–328[15] One of the simplest and most natural examples is the multiset ofprime factorsof a natural numbern. Here the underlying set of elements is the set of prime factors ofn. For example, the number120has theprime factorization120=233151,{\displaystyle 120=2^{3}3^{1}5^{1},}which gives the multiset{2, 2, 2, 3, 5}. A related example is the multiset of solutions of analgebraic equation. Aquadratic equation, for example, has two solutions. However, in some cases they are both the same number. Thus the multiset of solutions of the equation could be{3, 5}, or it could be{4, 4}. In the latter case it has a solution of multiplicity 2. More generally, thefundamental theorem of algebraasserts that thecomplexsolutions of apolynomial equationofdegreedalways form a multiset of cardinalityd. A special case of the above are theeigenvaluesof amatrix, whose multiplicity is usually defined as their multiplicity asrootsof thecharacteristic polynomial. However two other multiplicities are naturally defined for eigenvalues, their multiplicities as roots of theminimal polynomial, and thegeometric multiplicity, which is defined as thedimensionof thekernelofA−λI(whereλis an eigenvalue of the matrixA). These three multiplicities define three multisets of eigenvalues, which may be all different: LetAbe an×nmatrix inJordan normal formthat has a single eigenvalue. Its multiplicity isn, its multiplicity as a root of the minimal polynomial is the size of the largest Jordan block, and its geometric multiplicity is the number of Jordan blocks. Amultisetmay be formally defined as anordered pair(U,m)whereUis asetcalled auniverseor theunderlying set, andm:U→Z≥0{\displaystyle m\colon U\to \mathbb {Z} _{\geq 0}}is a function fromUto thenonnegative integers. The value⁠m(a){\displaystyle m(a)}⁠for an element⁠a∈U{\displaystyle a\in U}⁠is called themultiplicityof⁠a{\displaystyle a}⁠in the multiset and intepreted as the number of occurences of⁠a{\displaystyle a}⁠in the multiset. Thesupportof a multiset is the subset of⁠U{\displaystyle U}⁠formed by the elements⁠a∈U{\displaystyle a\in U}⁠such that⁠m(a)>0{\displaystyle m(a)>0}⁠. Afinite multisetis a multiset with afinitesupport. Most authors definemultisetsas finite multisets. This is the case in this article, where, unless otherwise stated, all multisets are finite multisets. Some authors[who?]define multisets with the additional constraint that⁠m(a)>0{\displaystyle m(a)>0}⁠for every⁠a{\displaystyle a}⁠, or, equivalently, the support equals the underlying set. Multisets with infinite multiplicities have also been studied;[16]they are not considered in this article. 
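As an illustration (not part of the original article), the formal definition above maps directly onto Python's collections.Counter: the Counter plays the role of the multiplicity function, its keys with positive counts form the support, and the sum of the counts is the cardinality. The sketch below also rebuilds the prime-factor multiset {2, 2, 2, 3, 5} of 120 mentioned earlier.

    from collections import Counter

    # The multiplicity function of the multiset {a, a, b, b, b, c}.
    m = Counter({'a': 2, 'b': 3, 'c': 1})
    print(m['a'], m['c'], m['z'])                      # multiplicities 2, 1 and 0 ('z' is outside the support)
    print(sorted(k for k, v in m.items() if v > 0))    # support: ['a', 'b', 'c']
    print(sum(m.values()))                             # cardinality: 6

    def prime_factor_multiset(n):
        """Multiset of prime factors of n, returned as a multiplicity function."""
        factors, d = Counter(), 2
        while d * d <= n:
            while n % d == 0:
                factors[d] += 1
                n //= d
            d += 1
        if n > 1:
            factors[n] += 1
        return factors

    print(prime_factor_multiset(120))    # Counter({2: 3, 3: 1, 5: 1}), i.e. the multiset {2, 2, 2, 3, 5}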
Some authors[who?]define a multiset in terms of a finite index set⁠I{\displaystyle I}⁠and a function⁠f:I→U{\displaystyle f\colon I\rightarrow U}⁠where the multiplicity of an element⁠a∈U{\displaystyle a\in U}⁠is⁠|f−1(a)|{\displaystyle |f^{-1}(a)|}⁠, the number of elements of⁠I{\displaystyle I}⁠that get mapped to⁠a{\displaystyle a}⁠by⁠f{\displaystyle f}⁠. Multisets may be represented as sets, with some elements repeated. For example, the multiset with support⁠{a,b}{\displaystyle \{a,b\}}⁠and multiplicity function such that⁠m(a)=2,m(b)=1{\displaystyle m(a)=2,\;m(b)=1}⁠can be represented as{a,a,b}. A more compact notation, in case of high multiplicities is⁠{(a,2),(b,1)}{\displaystyle \{(a,2),(b,1)\}}⁠for the same multiset. IfA={a1,…,an},{\displaystyle A=\{a_{1},\ldots ,a_{n}\},}a multiset with support included in⁠A{\displaystyle A}⁠is often represented asa1m(a1)⋯anm(an),{\displaystyle a_{1}^{m(a_{1})}\cdots a_{n}^{m(a_{n})},}to which the computation rules ofindeterminatescan be applied; that is, exponents 1 and factors with exponent 0 can be removed, and the multiset does not depend on the order of the factors. This allows extending the notation to infinite underlying sets as∏a∈Uam(a).{\displaystyle \prod _{a\in U}a^{m(a)}.}An advantage of notation is that it allows using the notation without knowing the exact support. For example, theprime factorsof anatural number⁠n{\displaystyle n}⁠form a multiset such thatn=∏pprimepm(p)=2m(2)3m(3)5m(5)⋯.{\displaystyle n=\prod _{p\;{\text{prime}}}p^{m(p)}=2^{m(2)}3^{m(3)}5^{m(5)}\cdots .} The finite subsets of a set⁠U{\displaystyle U}⁠are exactly the multisets with underlying set⁠U{\displaystyle U}⁠, such that⁠m(a)≤1{\displaystyle m(a)\leq 1}⁠for every⁠a∈U{\displaystyle a\in U}⁠. Elements of a multiset are generally taken in a fixed setU, sometimes called auniverse, which is often the set ofnatural numbers. An element ofUthat does not belong to a given multiset is said to have a multiplicity 0 in this multiset. This extends the multiplicity function of the multiset to a function fromUto the setN{\displaystyle \mathbb {N} }of non-negative integers. This defines aone-to-one correspondencebetween these functions and the multisets that have their elements inU. This extended multiplicity function is commonly called simply themultiplicity function, and suffices for defining multisets when the universe containing the elements has been fixed. This multiplicity function is a generalization of theindicator functionof asubset, and shares some properties with it. Thesupportof a multisetA{\displaystyle A}in a universeUis the underlying set of the multiset. Using the multiplicity functionm{\displaystyle m}, it is characterized asSupp⁡(A):={x∈U∣mA(x)>0}.{\displaystyle \operatorname {Supp} (A):=\{x\in U\mid m_{A}(x)>0\}.} A multiset isfiniteif its support is finite, or, equivalently, if its cardinality|A|=∑x∈Supp⁡(A)mA(x)=∑x∈UmA(x){\displaystyle |A|=\sum _{x\in \operatorname {Supp} (A)}m_{A}(x)=\sum _{x\in U}m_{A}(x)}is finite. Theempty multisetis the unique multiset with anemptysupport (underlying set), and thus a cardinality 0. The usual operations of sets may be extended to multisets by using the multiplicity function, in a similar way to using the indicator function for subsets. In the following,AandBare multisets in a given universeU, with multiplicity functionsmA{\displaystyle m_{A}}andmB.{\displaystyle m_{B}.} Two multisets aredisjointif their supports aredisjoint sets. 
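Continuing the illustration, the usual set operations extended to multisets by the multiplicity function behave like the operators of collections.Counter (a sketch under that identification, not part of the original article): the sum adds multiplicities, the union keeps the larger multiplicity, the intersection the smaller, and two multisets are disjoint exactly when their intersection is empty.

    from collections import Counter

    A = Counter({'a': 2, 'b': 1})    # {a, a, b}
    B = Counter({'a': 1, 'c': 2})    # {a, c, c}

    print(A + B)    # sum: multiplicities add                 -> Counter({'a': 3, 'c': 2, 'b': 1})
    print(A | B)    # union: maximum of multiplicities        -> Counter({'a': 2, 'c': 2, 'b': 1})
    print(A & B)    # intersection: minimum of multiplicities -> Counter({'a': 1})
    print(A - B)    # difference: subtract, dropping non-positive counts -> Counter({'a': 1, 'b': 1})

    # Disjointness: empty intersection, equivalently sum equals union.
    C = Counter({'d': 1})
    print(not (A & C), A + C == A | C)    # True True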
This is equivalent to saying that their intersection is the empty multiset or that their sum equals their union. There is an inclusion–exclusion principle for finite multisets (similar tothe one for sets), stating that a finite union of finite multisets is the difference of two sums of multisets: in the first sum we consider all possible intersections of anoddnumber of the given multisets, while in the second sum we consider all possible intersections of anevennumber of the given multisets.[citation needed] The number of multisets of cardinalityk, with elements taken from a finite set of cardinalityn, is sometimes called themultiset coefficientormultiset number. This number is written by some authors as((nk)){\displaystyle \textstyle \left(\!\!{n \choose k}\!\!\right)}, a notation that is meant to resemble that ofbinomial coefficients; it is used for instance in (Stanley, 1997), and could be pronounced "nmultichoosek" to resemble "nchoosek" for(nk).{\displaystyle {\tbinom {n}{k}}.}Like thebinomial distributionthat involves binomial coefficients, there is anegative binomial distributionin which the multiset coefficients occur. Multiset coefficients should not be confused with the unrelatedmultinomial coefficientsthat occur in themultinomial theorem. The value of multiset coefficients can be given explicitly as((nk))=(n+k−1k)=(n+k−1)!k!(n−1)!=n(n+1)(n+2)⋯(n+k−1)k!,{\displaystyle \left(\!\!{n \choose k}\!\!\right)={n+k-1 \choose k}={\frac {(n+k-1)!}{k!\,(n-1)!}}={n(n+1)(n+2)\cdots (n+k-1) \over k!},}where the second expression is as a binomial coefficient;[a]many authors in fact avoid separate notation and just write binomial coefficients. So, the number of such multisets is the same as the number of subsets of cardinalitykof a set of cardinalityn+k− 1. The analogy with binomial coefficients can be stressed by writing the numerator in the above expression as arising factorial power((nk))=nk¯k!,{\displaystyle \left(\!\!{n \choose k}\!\!\right)={n^{\overline {k}} \over k!},}to match the expression of binomial coefficients using a falling factorial power:(nk)=nk_k!.{\displaystyle {n \choose k}={n^{\underline {k}} \over k!}.} For example, there are 4 multisets of cardinality 3 with elements taken from the set{1, 2}of cardinality 2 (n= 2,k= 3), namely{1, 1, 1},{1, 1, 2},{1, 2, 2},{2, 2, 2}. There are also 4subsetsof cardinality 3 in the set{1, 2, 3, 4}of cardinality 4 (n+k− 1), namely{1, 2, 3},{1, 2, 4},{1, 3, 4},{2, 3, 4}. One simple way toprovethe equality of multiset coefficients and binomial coefficients given above involves representing multisets in the following way. First, consider the notation for multisets that would represent{a,a,a,a,a,a,b,b,c,c,c,d,d,d,d,d,d,d}(6as, 2bs, 3cs, 7ds) in this form: This is a multiset of cardinalityk= 18made of elements of a set of cardinalityn= 4. The number of characters including both dots and vertical lines used in this notation is18 + 4 − 1. The number of vertical lines is 4 − 1. The number of multisets of cardinality 18 is then the number of ways to arrange the4 − 1vertical lines among the 18 + 4 − 1 characters, and is thus the number of subsets of cardinality 4 − 1 of a set of cardinality18 + 4 − 1. Equivalently, it is the number of ways to arrange the 18 dots among the18 + 4 − 1characters, which is the number of subsets of cardinality 18 of a set of cardinality18 + 4 − 1. 
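The count of multisets can also be checked numerically; the following Python sketch (illustrative, not part of the original article) compares the closed formula C(n + k − 1, k) with direct enumeration using itertools.combinations_with_replacement, reproducing the n = 2, k = 3 example above.

    from itertools import combinations_with_replacement
    from math import comb

    def multiset_coefficient(n, k):
        """Number of multisets of cardinality k with elements from a set of cardinality n."""
        return comb(n + k - 1, k)

    # The n = 2, k = 3 example: four multisets {1,1,1}, {1,1,2}, {1,2,2}, {2,2,2}.
    print(multiset_coefficient(2, 3))                        # 4
    print(list(combinations_with_replacement([1, 2], 3)))    # [(1, 1, 1), (1, 1, 2), (1, 2, 2), (2, 2, 2)]

    # The formula agrees with direct enumeration for small n and k.
    for n in range(1, 6):
        for k in range(0, 6):
            enumerated = sum(1 for _ in combinations_with_replacement(range(n), k))
            assert multiset_coefficient(n, k) == enumerated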
This is(4+18−14−1)=(4+18−118)=1330,{\displaystyle {4+18-1 \choose 4-1}={4+18-1 \choose 18}=1330,}thus is the value of the multiset coefficient and its equivalencies:((418))=(2118)=21!18!3!=(213),=4⋅5⋅6⋅7⋅8⋅9⋅10⋅11⋅12⋅13⋅14⋅15⋅16⋅17⋅18⋅19⋅20⋅211⋅2⋅3⋅4⋅5⋅6⋅7⋅8⋅9⋅10⋅11⋅12⋅13⋅14⋅15⋅16⋅17⋅18,=1⋅2⋅3⋅4⋅5⋯16⋅17⋅18⋅19⋅20⋅211⋅2⋅3⋅4⋅5⋯16⋅17⋅18⋅1⋅2⋅3,=19⋅20⋅211⋅2⋅3.{\displaystyle {\begin{aligned}\left(\!\!{4 \choose 18}\!\!\right)&={21 \choose 18}={\frac {21!}{18!\,3!}}={21 \choose 3},\\[1ex]&={\frac {{\color {red}{\mathfrak {4\cdot 5\cdot 6\cdot 7\cdot 8\cdot 9\cdot 10\cdot 11\cdot 12\cdot 13\cdot 14\cdot 15\cdot 16\cdot 17\cdot 18}}}\cdot \mathbf {19\cdot 20\cdot 21} }{\mathbf {1\cdot 2\cdot 3} \cdot {\color {red}{\mathfrak {4\cdot 5\cdot 6\cdot 7\cdot 8\cdot 9\cdot 10\cdot 11\cdot 12\cdot 13\cdot 14\cdot 15\cdot 16\cdot 17\cdot 18}}}}},\\[1ex]&={\frac {1\cdot 2\cdot 3\cdot 4\cdot 5\cdots 16\cdot 17\cdot 18\;\mathbf {\cdot \;19\cdot 20\cdot 21} }{\,1\cdot 2\cdot 3\cdot 4\cdot 5\cdots 16\cdot 17\cdot 18\;\mathbf {\cdot \;1\cdot 2\cdot 3\quad } }},\\[1ex]&={\frac {19\cdot 20\cdot 21}{1\cdot 2\cdot 3}}.\end{aligned}}} From the relation between binomial coefficients and multiset coefficients, it follows that the number of multisets of cardinalitykin a set of cardinalityncan be written((nk))=(−1)k(−nk).{\displaystyle \left(\!\!{n \choose k}\!\!\right)=(-1)^{k}{-n \choose k}.}Additionally,((nk))=((k+1n−1)).{\displaystyle \left(\!\!{n \choose k}\!\!\right)=\left(\!\!{k+1 \choose n-1}\!\!\right).} Arecurrence relationfor multiset coefficients may be given as((nk))=((nk−1))+((n−1k))forn,k>0{\displaystyle \left(\!\!{n \choose k}\!\!\right)=\left(\!\!{n \choose k-1}\!\!\right)+\left(\!\!{n-1 \choose k}\!\!\right)\quad {\mbox{for }}n,k>0}with((n0))=1,n∈N,and((0k))=0,k>0.{\displaystyle \left(\!\!{n \choose 0}\!\!\right)=1,\quad n\in \mathbb {N} ,\quad {\mbox{and}}\quad \left(\!\!{0 \choose k}\!\!\right)=0,\quad k>0.} The above recurrence may be interpreted as follows. Let[n]:={1,…,n}{\displaystyle [n]:=\{1,\dots ,n\}}be the source set. There is always exactly one (empty) multiset of size 0, and ifn= 0there are no larger multisets, which gives the initial conditions. Now, consider the case in whichn,k> 0. A multiset of cardinalitykwith elements from[n]might or might not contain any instance of the final elementn. If it does appear, then by removingnonce, one is left with a multiset of cardinalityk− 1of elements from[n], and every such multiset can arise, which gives a total of((nk−1)){\displaystyle \left(\!\!{n \choose k-1}\!\!\right)}possibilities. Ifndoes not appear, then our original multiset is equal to a multiset of cardinalitykwith elements from[n− 1], of which there are((n−1k)).{\displaystyle \left(\!\!{n-1 \choose k}\!\!\right).} Thus,((nk))=((nk−1))+((n−1k)).{\displaystyle \left(\!\!{n \choose k}\!\!\right)=\left(\!\!{n \choose k-1}\!\!\right)+\left(\!\!{n-1 \choose k}\!\!\right).} Thegenerating functionof the multiset coefficients is very simple, being∑d=0∞((nd))td=1(1−t)n.{\displaystyle \sum _{d=0}^{\infty }\left(\!\!{n \choose d}\!\!\right)t^{d}={\frac {1}{(1-t)^{n}}}.}As multisets are in one-to-one correspondence withmonomials,((nd)){\displaystyle \left(\!\!{n \choose d}\!\!\right)}is also the number of monomials ofdegreedinnindeterminates. 
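The recurrence above translates directly into code; a small illustrative Python sketch (not part of the original article) evaluates it and checks it against the closed formula and the value 1330 computed above.

    from functools import lru_cache
    from math import comb

    @lru_cache(maxsize=None)
    def msc(n, k):
        """Multiset coefficient computed from the recurrence above."""
        if k == 0:
            return 1    # exactly one empty multiset, for every n
        if n == 0:
            return 0    # no non-empty multiset over an empty source set
        return msc(n, k - 1) + msc(n - 1, k)

    # Agreement with the closed formula C(n + k - 1, k) for n > 0.
    for n in range(1, 8):
        for k in range(0, 8):
            assert msc(n, k) == comb(n + k - 1, k)

    print(msc(4, 18))    # 1330, matching the worked example above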
Thus, the above series is also theHilbert seriesof thepolynomial ringk[x1,…,xn].{\displaystyle k[x_{1},\ldots ,x_{n}].} As((nd)){\displaystyle \left(\!\!{n \choose d}\!\!\right)}is a polynomial inn, it and the generating function are well defined for anycomplexvalue ofn. The multiplicative formula allows the definition of multiset coefficients to be extended by replacingnby an arbitrary numberα(negative,real, or complex):((αk))=αk¯k!=α(α+1)(α+2)⋯(α+k−1)k(k−1)(k−2)⋯1fork∈Nand arbitraryα.{\displaystyle \left(\!\!{\alpha \choose k}\!\!\right)={\frac {\alpha ^{\overline {k}}}{k!}}={\frac {\alpha (\alpha +1)(\alpha +2)\cdots (\alpha +k-1)}{k(k-1)(k-2)\cdots 1}}\quad {\text{for }}k\in \mathbb {N} {\text{ and arbitrary }}\alpha .} With this definition one has a generalization of the negative binomial formula (with one of the variables set to 1), which justifies calling the((αk)){\displaystyle \left(\!\!{\alpha \choose k}\!\!\right)}negative binomial coefficients:(1−X)−α=∑k=0∞((αk))Xk.{\displaystyle (1-X)^{-\alpha }=\sum _{k=0}^{\infty }\left(\!\!{\alpha \choose k}\!\!\right)X^{k}.} ThisTaylor seriesformula is valid for all complex numbersαandXwith|X| < 1. It can also be interpreted as anidentityofformal power seriesinX, where it actually can serve as definition of arbitrary powers of series with constant coefficient equal to 1; the point is that with this definition all identities hold that one expects forexponentiation, notably (1−X)−α(1−X)−β=(1−X)−(α+β)and((1−X)−α)−β=(1−X)−(−αβ),{\displaystyle (1-X)^{-\alpha }(1-X)^{-\beta }=(1-X)^{-(\alpha +\beta )}\quad {\text{and}}\quad ((1-X)^{-\alpha })^{-\beta }=(1-X)^{-(-\alpha \beta )},}and formulas such as these can be used to prove identities for the multiset coefficients. Ifαis a nonpositive integern, then all terms withk> −nare zero, and the infinite series becomes a finite sum. However, for other values ofα, including positive integers andrational numbers, the series is infinite. Multisets have various applications.[7]They are becoming fundamental incombinatorics.[17][18][19][20]Multisets have become an important tool in the theory ofrelational databases, which often uses the synonymbag.[21][22][23]For instance, multisets are often used to implement relations in database systems. In particular, a table (without a primary key) works as a multiset, because it can have multiple identical records. Similarly,SQLoperates on multisets and returns identical records. For instance, consider "SELECT name from Student". In the case that there are multiple records with name "Sara" in the student table, all of them are shown. That means the result of an SQL query is a multiset; if the result were instead a set, the repetitive records in the result set would have been eliminated. Another application of multisets is in modelingmultigraphs. In multigraphs there can be multiple edges between any two givenvertices. As such, the entity that specifies the edges is a multiset, and not a set. There are also other applications. For instance,Richard Radoused multisets as a device to investigate the properties of families of sets. He wrote, "The notion of a set takes no account of multiple occurrence of any one of its members, and yet it is just this kind of information that is frequently of importance. We need only think of the set of roots of a polynomialf(x) or thespectrumof alinear operator."[5]: 328–329 Different generalizations of multisets have been introduced, studied and applied to solving problems.
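The extension of the multiset coefficient to an arbitrary argument α via the rising factorial can be sketched as follows (an illustration, not part of the original article; Fraction is used so that a rational α gives exact values).

    from fractions import Fraction
    from math import factorial

    def generalized_multiset_coefficient(alpha, k):
        """((alpha, k)) = alpha (alpha + 1) ... (alpha + k - 1) / k!, for a natural number k."""
        rising = Fraction(1)
        for i in range(k):
            rising *= Fraction(alpha) + i
        return rising / factorial(k)

    print(generalized_multiset_coefficient(4, 18))               # 1330: the ordinary integer case
    print(generalized_multiset_coefficient(Fraction(1, 2), 3))   # 5/16, i.e. (1/2)(3/2)(5/2)/3!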
https://en.wikipedia.org/wiki/Multiset
Inmathematics, asetis a collection of different things; these things are calledelementsormembersof the set and are typicallymathematical objectsof any kind: numbers, symbols, points in space, lines, othergeometric shapes, variables, or even other sets. A set may befiniteorinfinite, depending whether the number of its elements is finite or not. There is a unique set with no elements, called theempty set; a set with a single element is asingleton. Sets are ubiquitous in modern mathematics. Indeed,set theory, more specificallyZermelo–Fraenkel set theory, has been the standard way to provide rigorousfoundationsfor all branches of mathematics since the first half of the 20th century. Before the end of the 19th century, sets were not studied specifically, and were not clearly distinguished fromsequences. Most mathematicians consideredinfinityaspotential—meaning that it is the result of an endless process—and were reluctant to considerinfinite sets, that is sets whose number of members is not anatural number. Specifically, alinewas not considered as the set of its points, but as alocuswhere points may be located. The mathematical study of infinite sets began withGeorg Cantor(1845–1918). This provided some counterintuitive facts and paradoxes. For example, thenumber linehas aninfinite numberof elements that is strictly larger than the infinite number ofnatural numbers, and anyline segmenthas the same number of elements as the whole space. Also,Russell's paradoximplies that the phrase "the set of all sets" is self-contradictory. Together with other counterintuitive results, this led to thefoundational crisis of mathematics, which was eventually resolved with the general adoption ofZermelo–Fraenkel set theoryas a robust foundation ofset theoryand all mathematics. Meanwhile, sets started to be widely used in all mathematics. In particular,algebraic structuresandmathematical spacesare typically defined in terms of sets. Also, many older mathematical results are restated in terms of sets. For example,Euclid's theoremis often stated as "thesetof theprime numbersis infinite". This wide use of sets in mathematics was prophesied byDavid Hilbertwhen saying: "No one will drive us from theparadise which Cantor created for us."[1] Generally, the common usage of sets in mathematics does not require the full power of Zermelo–Fraenkel set theory. In mathematical practice, sets can be manipulated independently of thelogical frameworkof this theory. The object of this article is to summarize the manipulation rules and properties of sets that are commonly used in mathematics, without reference to any logical framework. For the branch of mathematics that studies sets, seeSet theory; for an informal presentation of the corresponding logical framework, seeNaive set theory; for a more formal presentation, seeAxiomatic set theoryandZermelo–Fraenkel set theory. In mathematics, a set is a collection of different things.[2][3][4][5]These things are calledelementsormembersof the set and are typicallymathematical objectsof any kind such as numbers, symbols,points in space,lines, othergeometrical shapes,variables,functions, or even other sets.[6][7]A set may also be called acollectionor family, especially when its elements are themselves sets; this may avoid the confusion between the set and its members, and may make reading easier. 
A set may be specified either by listing its elements or by a property that characterizes its elements, such as for the set of theprime numbersor the set of all students in a given class.[8][9][10] If⁠x{\displaystyle x}⁠is an element of a set⁠S{\displaystyle S}⁠, one says that⁠x{\displaystyle x}⁠belongsto⁠S{\displaystyle S}⁠oris in⁠S{\displaystyle S}⁠, and this is written as⁠x∈S{\displaystyle x\in S}⁠.[11]The statement "⁠y{\displaystyle y}⁠is not in⁠S{\displaystyle S\,}⁠" is written as⁠y∉S{\displaystyle y\not \in S}⁠, which can also be read as "yis not inB".[12][13]For example, if⁠Z{\displaystyle \mathbb {Z} }⁠is the set of theintegers, one has⁠−3∈Z{\displaystyle -3\in \mathbb {Z} }⁠and⁠1.5∉Z{\displaystyle 1.5\not \in \mathbb {Z} }⁠. Each set is uniquely characterized by its elements. In particular, two sets that have precisely the same elements areequal(they are the same set).[14]This property, calledextensionality, can be written in formula asA=B⟺∀x(x∈A⟺x∈B).{\displaystyle A=B\iff \forall x\;(x\in A\iff x\in B).}This implies that there is only one set with no element, theempty set(ornull set) that is denoted⁠∅,∅{\displaystyle \varnothing ,\emptyset }⁠,[a]or⁠{}.{\displaystyle \{\,\}.}⁠[17][18]Asingletonis a set with exactly one element.[b]If⁠x{\displaystyle x}⁠is this element, the singleton is denoted⁠{x}.{\displaystyle \{x\}.}⁠If⁠x{\displaystyle x}⁠is itself a set, it must not be confused with⁠{x}.{\displaystyle \{x\}.}⁠For example,⁠∅{\displaystyle \emptyset }⁠is a set with no elements, while⁠{∅}{\displaystyle \{\emptyset \}}⁠is a singleton with⁠∅{\displaystyle \emptyset }⁠as its unique element. A set isfiniteif there exists anatural number⁠n{\displaystyle n}⁠such that the⁠n{\displaystyle n}⁠first natural numbers can be put inone to one correspondencewith the elements of the set. In this case, one says that⁠n{\displaystyle n}⁠is the number of elements of the set. A set isinfiniteif such an⁠n{\displaystyle n}⁠does not exist. Theempty setis a finite set with⁠0{\displaystyle 0}⁠elements. The natural numbers form an infinite set, commonly denoted⁠N{\displaystyle \mathbb {N} }⁠. Other examples of infinite sets includenumber setsthat contain the natural numbers,real vector spaces,curvesand most sorts ofspaces. Extensionality implies that for specifying a set, one has either to list its elements or to provide a property that uniquely characterizes the set elements. Rosterorenumeration notationis a notation introduced byErnst Zermeloin 1908 that specifies a set by listing its elements betweenbraces, separated by commas.[19][20][21][22][23]For example, one knows that{4,2,1,3}{\displaystyle \{4,2,1,3\}}and{blue, white, red}{\displaystyle \{{\text{blue, white, red}}\}}denote sets and nottuplesbecause of the enclosing braces. Above notations⁠{}{\displaystyle \{\,\}}⁠and⁠{x}{\displaystyle \{x\}}⁠for the empty set and for a singleton are examples of roster notation. When specifying sets, it only matters whether each distinct element is in the set or not; this means a set does not change if elements are repeated or arranged in a different order. For example,[24][25][26] {1,2,3,4}={4,2,1,3}={4,2,4,3,1,3}.{\displaystyle \{1,2,3,4\}=\{4,2,1,3\}=\{4,2,4,3,1,3\}.} When there is a clear pattern for generating all set elements, one can useellipsesfor abbreviating the notation,[27][28]such as in{1,2,3,…,1000}{\displaystyle \{1,2,3,\ldots ,1000\}}for the positive integers not greater than⁠1000{\displaystyle 1000}⁠. Ellipses allow also expanding roster notation to some infinite sets. 
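A brief Python illustration (not part of the original article) of membership and of the fact that only the elements, not their listing order or repetition, determine a set:

    Z = {-3, -2, -1, 0, 1, 2, 3}    # a finite set given in roster notation

    print(-3 in Z)     # True: -3 belongs to Z
    print(1.5 in Z)    # False: 1.5 is not in Z

    # Extensionality: a set is determined by its elements only,
    # so repetition and ordering in the listing are irrelevant.
    print({1, 2, 3, 4} == {4, 2, 1, 3} == {4, 2, 4, 3, 1, 3})    # True

    # The empty set and the singleton containing the empty set are different objects.
    empty = set()
    singleton = {frozenset()}
    print(len(empty), len(singleton))    # 0 1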
For example, the set of all integers can be denoted as {…,−3,−2,−1,0,1,2,3,…}{\displaystyle \{\ldots ,-3,-2,-1,0,1,2,3,\ldots \}} or {0,1,−1,2,−2,3,−3,…}.{\displaystyle \{0,1,-1,2,-2,3,-3,\ldots \}.} Set-builder notation specifies a set as being the set of all elements that satisfy somelogical formula.[29][30][31]More precisely, if⁠P(x){\displaystyle P(x)}⁠is a logical formula depending on avariable⁠x{\displaystyle x}⁠, which evaluates totrueorfalsedepending on the value of⁠x{\displaystyle x}⁠, then{x∣P(x)}{\displaystyle \{x\mid P(x)\}}or[32]{x:P(x)}{\displaystyle \{x:P(x)\}}denotes the set of all⁠x{\displaystyle x}⁠for which⁠P(x){\displaystyle P(x)}⁠is true.[8]For example, a setFcan be specified as follows:F={n∣nis an integer, and0≤n≤19}.{\displaystyle F=\{n\mid n{\text{ is an integer, and }}0\leq n\leq 19\}.}In this notation, thevertical bar"|" is read as "such that", and the whole formula can be read as "Fis the set of allnsuch thatnis an integer in the range from 0 to 19 inclusive". Some logical formulas, such as⁠Sis a set{\displaystyle \color {red}{S{\text{ is a set}}}}⁠or⁠Sis a set andS∉S{\displaystyle \color {red}{S{\text{ is a set and }}S\not \in S}}⁠cannot be used in set-builder notation because there is no set for which the elements are characterized by the formula. There are several ways for avoiding the problem. One may prove that the formula defines a set; this is often almost immediate, but may be very difficult. One may also introduce a larger set⁠U{\displaystyle U}⁠that must contain all elements of the specified set, and write the notation as{x∣x∈Uand ...}{\displaystyle \{x\mid x\in U{\text{ and ...}}\}}or{x∈U∣...}.{\displaystyle \{x\in U\mid {\text{ ...}}\}.} One may also define⁠U{\displaystyle U}⁠once for all and take the convention that every variable that appears on the left of the vertical bar of the notation represents an element of⁠U{\displaystyle U}⁠. This amounts to say that⁠x∈U{\displaystyle x\in U}⁠is implicit in set-builder notation. In this case,⁠U{\displaystyle U}⁠is often calledthedomain of discourseor auniverse. For example, with the convention that a lower case Latin letter may represent areal numberand nothing else, theexpression{x∣x∉Q}{\displaystyle \{x\mid x\not \in \mathbb {Q} \}}is an abbreviation of{x∈R∣x∉Q},{\displaystyle \{x\in \mathbb {R} \mid x\not \in \mathbb {Q} \},}which defines theirrational numbers. Asubsetof a set⁠B{\displaystyle B}⁠is a set⁠A{\displaystyle A}⁠such that every element of⁠A{\displaystyle A}⁠is also an element of⁠B{\displaystyle B}⁠.[33]If⁠A{\displaystyle A}⁠is a subset of⁠B{\displaystyle B}⁠, one says commonly that⁠A{\displaystyle A}⁠iscontainedin⁠B{\displaystyle B}⁠,⁠B{\displaystyle B}⁠contains⁠A{\displaystyle A}⁠, or⁠B{\displaystyle B}⁠is asupersetof⁠A{\displaystyle A}⁠. This denoted⁠A⊆B{\displaystyle A\subseteq B}⁠and⁠B⊇A{\displaystyle B\supseteq A}⁠. However many authors use⁠A⊂B{\displaystyle A\subset B}⁠and⁠B⊃A{\displaystyle B\supset A}⁠instead. The definition of a subset can be expressed in notation asA⊆Bif and only if∀x(x∈A⟹x∈B).{\displaystyle A\subseteq B\quad {\text{if and only if}}\quad \forall x\;(x\in A\implies x\in B).} A set⁠A{\displaystyle A}⁠is aproper subsetof a set⁠B{\displaystyle B}⁠if⁠A⊆B{\displaystyle A\subseteq B}⁠and⁠A≠B{\displaystyle A\neq B}⁠. This is denoted⁠A⊂B{\displaystyle A\subset B}⁠and⁠B⊃A{\displaystyle B\supset A}⁠. 
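Set-builder notation corresponds closely to Python set comprehensions; the sketch below (illustrative, not part of the original article) rebuilds the set F defined above and also checks subset and proper-subset relations.

    # F = {n | n is an integer and 0 <= n <= 19}, built over a domain of discourse.
    U = set(range(-100, 100))
    F = {n for n in U if 0 <= n <= 19}
    print(F == set(range(20)))    # True

    # Subset and proper-subset relations.
    A = {1, 2}
    B = {1, 2, 3}
    print(A <= B)          # True:  A is a subset of B
    print(A < B)           # True:  A is a proper subset of B
    print(B <= B, B < B)   # True False: every set is a subset of itself, but not a proper one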
When⁠A⊂B{\displaystyle A\subset B}⁠is used for the subset relation, or in case of possible ambiguity, one uses commonly⁠A⊊B{\displaystyle A\subsetneq B}⁠and⁠B⊋A{\displaystyle B\supsetneq A}⁠.[34] Therelationshipbetween sets established by ⊆ is calledinclusionorcontainment. Equality between sets can be expressed in terms of subsets. Two sets are equal if and only if they contain each other: that is,A⊆BandB⊆Ais equivalent toA=B.[30][8]The empty set is a subset of every set:∅ ⊆A.[17] Examples: There are several standardoperationsthat produce new sets from given sets, in the same way asadditionandmultiplicationproduce new numbers from given numbers. The operations that are considered in this section are those such that all elements of the produced sets belong to a previously defined set. These operations are commonly illustrated withEuler diagramsandVenn diagrams.[35] The main basic operations on sets are the following ones. Theintersectionof two sets⁠A{\displaystyle A}⁠and⁠B{\displaystyle B}⁠is a set denoted⁠A∩B{\displaystyle A\cap B}⁠whose elements are those elements that belong to both⁠A{\displaystyle A}⁠and⁠B{\displaystyle B}⁠. That is,A∩B={x∣x∈A∧x∈B},{\displaystyle A\cap B=\{x\mid x\in A\land x\in B\},}where⁠∧{\displaystyle \land }⁠denotes thelogical and. Intersection isassociativeandcommutative; this means that for proceeding a sequence of intersections, one may proceed in any order, without the need of parentheses for specifying theorder of operations. Intersection has no generalidentity element. However, if one restricts intersection to the subsets of a given set⁠U{\displaystyle U}⁠, intersection has⁠U{\displaystyle U}⁠as identity element. If⁠S{\displaystyle {\mathcal {S}}}⁠is a nonempty set of sets, its intersection, denoted⋂A∈SA,{\textstyle \bigcap _{A\in {\mathcal {S}}}A,}is the set whose elements are those elements that belong to all sets in⁠S{\displaystyle {\mathcal {S}}}⁠. That is,⋂A∈SA={x∣(∀A∈S)x∈A}.{\displaystyle \bigcap _{A\in {\mathcal {S}}}A=\{x\mid (\forall A\in {\mathcal {S}})\;x\in A\}.} These two definitions of the intersection coincide when⁠S{\displaystyle {\mathcal {S}}}⁠has two elements. Theunionof two sets⁠A{\displaystyle A}⁠and⁠B{\displaystyle B}⁠is a set denoted⁠A∪B{\displaystyle A\cup B}⁠whose elements are those elements that belong to⁠A{\displaystyle A}⁠or⁠B{\displaystyle B}⁠or both. That is,A∪B={x∣x∈A∨x∈B},{\displaystyle A\cup B=\{x\mid x\in A\lor x\in B\},}where⁠∨{\displaystyle \lor }⁠denotes thelogical or. Union isassociativeandcommutative; this means that for proceeding a sequence of intersections, one may proceed in any order, without the need of parentheses for specifying theorder of operations. The empty set is anidentity elementfor the union operation. If⁠S{\displaystyle {\mathcal {S}}}⁠is a set of sets, its union, denoted⋃A∈SA,{\textstyle \bigcup _{A\in {\mathcal {S}}}A,}is the set whose elements are those elements that belong to at least one set in⁠S{\displaystyle {\mathcal {S}}}⁠. That is,⋃A∈SA={x∣(∃A∈S)x∈A}.{\displaystyle \bigcup _{A\in {\mathcal {S}}}A=\{x\mid (\exists A\in {\mathcal {S}})\;x\in A\}.} These two definitions of the union coincide when⁠S{\displaystyle {\mathcal {S}}}⁠has two elements. Theset differenceof two sets⁠A{\displaystyle A}⁠and⁠B{\displaystyle B}⁠, is a set, denoted⁠A∖B{\displaystyle A\setminus B}⁠or⁠A−B{\displaystyle A-B}⁠, whose elements are those elements that belong to⁠A{\displaystyle A}⁠, but not to⁠B{\displaystyle B}⁠. 
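The intersection and union just described, including their forms over a whole family of sets, can be illustrated as follows (a Python sketch, not part of the original article):

    A = {1, 2, 3}
    B = {2, 3, 4}
    C = {3, 4, 5}

    print(A & B)    # intersection: {2, 3}
    print(A | B)    # union: {1, 2, 3, 4}

    # Associativity and commutativity: grouping and order do not matter.
    print((A & B) & C == A & (B & C), A | B == B | A)    # True True

    # Intersection and union of a nonempty family of sets.
    family = [{1, 2, 3}, {2, 3, 4}, {2, 5}]
    print(set.intersection(*family))    # {2}
    print(set.union(*family))           # {1, 2, 3, 4, 5}

    # The empty set is an identity element for the union.
    print(A | set() == A)               # True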
That is,A∖B={x∣x∈A∧x∉B},{\displaystyle A\setminus B=\{x\mid x\in A\land x\not \in B\},}where⁠∧{\displaystyle \land }⁠denotes thelogical and. When⁠B⊆A{\displaystyle B\subseteq A}⁠the difference⁠A∖B{\displaystyle A\setminus B}⁠is also called thecomplementof⁠B{\displaystyle B}⁠in⁠A{\displaystyle A}⁠. When all sets that are considered are subsets of a fixeduniversal set⁠U{\displaystyle U}⁠, the complement⁠U∖A{\displaystyle U\setminus A}⁠is often called theabsolute complementof⁠A{\displaystyle A}⁠. Thesymmetric differenceof two sets⁠A{\displaystyle A}⁠and⁠B{\displaystyle B}⁠, denoted⁠AΔB{\displaystyle A\,\Delta \,B}⁠, is the set of those elements that belong toAorBbut not to both:AΔB=(A∖B)∪(B∖A).{\displaystyle A\,\Delta \,B=(A\setminus B)\cup (B\setminus A).} The set of all subsets of a set⁠U{\displaystyle U}⁠is called thepowersetof⁠U{\displaystyle U}⁠, often denoted⁠P(U){\displaystyle {\mathcal {P}}(U)}⁠. The powerset is an algebraic structure whose main operations are union, intersection, set difference, symmetric difference and absolute complement (complement in⁠U{\displaystyle U}⁠). The powerset is aBoolean ringthat has the symmetric difference as addition, the intersection as multiplication, the empty set asadditive identity,⁠U{\displaystyle U}⁠asmultiplicative identity, and complement as additive inverse. The powerset is also aBoolean algebrafor which thejoin⁠∨{\displaystyle \lor }⁠is the union⁠∪{\displaystyle \cup }⁠, themeet⁠∧{\displaystyle \land }⁠is the intersection⁠∩{\displaystyle \cap }⁠, and the negation is the set complement. As every Boolean algebra, the power set is also apartially ordered setfor set inclusion. It is also acomplete lattice. The axioms of these structures induce manyidentitiesrelating subsets, which are detailed in the linked articles. Afunctionfrom a setA—thedomain—to a setB—thecodomain—is a rule that assigns to each element ofAa unique element ofB. For example, thesquare functionmaps every real numberxtox2. Functions can be formally defined in terms of sets by means of theirgraph, which are subsets of theCartesian product(see below) of the domain and the codomain. Functions are fundamental for set theory, and examples are given in following sections. Intuitively, anindexed familyis a set whose elements are labelled with the elements of another set, the index set. These labels allow the same element to occur several times in the family. Formally, an indexed family is a function that has the index set as its domain. Generally, the usualfunctional notation⁠f(x){\displaystyle f(x)}⁠is not used for indexed families. Instead, the element of the index set is written as a subscript of the name of the family, such as in⁠ai{\displaystyle a_{i}}⁠. When the index set is⁠{1,2}{\displaystyle \{1,2\}}⁠, an indexed family is called anordered pair. When the index set is the set of the⁠n{\displaystyle n}⁠first natural numbers, an indexed family is called an⁠n{\displaystyle n}⁠-tuple. When the index set is the set of all natural numbers an indexed family is called asequence. In all these cases, the natural order of the natural numbers allows omitting indices for explicit indexed families. For example,⁠(b,2,b){\displaystyle (b,2,b)}⁠denotes the 3-tuple⁠A{\displaystyle A}⁠such that⁠A1=b,A2=2,A3=b{\displaystyle A_{1}=b,A_{2}=2,A_{3}=b}⁠. 
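Set difference, the symmetric difference and the power set can be illustrated in the same way (a Python sketch, not part of the original article; the power set is built with itertools):

    from itertools import chain, combinations

    A = {1, 2, 3}
    B = {2, 3, 4}

    print(A - B)    # set difference A \ B: {1}
    print(A ^ B)    # symmetric difference: {1, 4}
    print(A ^ B == (A - B) | (B - A))    # True, matching the definition above

    def power_set(s):
        """All subsets of s, as a set of frozensets."""
        s = list(s)
        subsets = chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
        return {frozenset(c) for c in subsets}

    P = power_set({1, 2, 3})
    print(len(P))    # 8 = 2**3 subsets, including the empty set and {1, 2, 3} itself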
The above notations⋃A∈SA{\textstyle \bigcup _{A\in {\mathcal {S}}}A}and⋂A∈SA{\textstyle \bigcap _{A\in {\mathcal {S}}}A}are commonly replaced with a notation involving indexed families, namely⋃i∈IAi={x∣(∃i∈I)x∈Ai}{\displaystyle \bigcup _{i\in {\mathcal {I}}}A_{i}=\{x\mid (\exists i\in {\mathcal {I}})\;x\in A_{i}\}}and⋂i∈IAi={x∣(∀i∈I)x∈Ai}.{\displaystyle \bigcap _{i\in {\mathcal {I}}}A_{i}=\{x\mid (\forall i\in {\mathcal {I}})\;x\in A_{i}\}.} The formulas of the above sections are special cases of the formulas for indexed families, where⁠S=I{\displaystyle {\mathcal {S}}={\mathcal {I}}}⁠and⁠i=A=Ai{\displaystyle i=A=A_{i}}⁠. The formulas remain correct, even in the case where⁠Ai=Aj{\displaystyle A_{i}=A_{j}}⁠for some⁠i≠j{\displaystyle i\neq j}⁠, since⁠A=A∪A=A∩A.{\displaystyle A=A\cup A=A\cap A.}⁠ In§ Basic operations, all elements of sets produced by set operations belong to previously defined sets. In this section, other set operations are considered, which produce sets whose elements can be outside all previously considered sets. These operations areCartesian product,disjoint union,set exponentiationandpower set. The Cartesian product of two sets has already be used for defining functions. Given two sets⁠A1{\displaystyle A_{1}}⁠and⁠A2{\displaystyle A_{2}}⁠, theirCartesian product, denoted⁠A1×A2{\displaystyle A_{1}\times A_{2}}⁠is the set formed by all ordered pairs⁠(a1,a2){\displaystyle (a_{1},a_{2})}⁠such that⁠a1∈A1{\displaystyle a_{1}\in A_{1}}⁠and⁠ai∈A1{\displaystyle a_{i}\in A_{1}}⁠; that is,A1×A2={(a1,a2)∣a1∈A1∧a2∈A2}.{\displaystyle A_{1}\times A_{2}=\{(a_{1},a_{2})\mid a_{1}\in A_{1}\land a_{2}\in A_{2}\}.} This definition does not supposes that the two sets are different. In particular,A×A={(a1,a2)∣a1∈A∧a2∈A}.{\displaystyle A\times A=\{(a_{1},a_{2})\mid a_{1}\in A\land a_{2}\in A\}.} Since this definition involves a pair of indices (1,2), it generalizes straightforwardly to the Cartesian product ordirect productof any indexed family of sets:∏i∈IAi={(ai)i∈I∣(∀i∈I)ai∈Ai}.{\displaystyle \prod _{i\in {\mathcal {I}}}A_{i}=\{(a_{i})_{i\in {\mathcal {I}}}\mid (\forall i\in {\mathcal {I}})\;a_{i}\in A_{i}\}.}That is, the elements of the Cartesian product of a family of sets are all families of elements such that each one belongs to the set of the same index. The fact that, for every indexed family of nonempty sets, the Cartesian product is a nonempty set is insured by theaxiom of choice. Given two sets⁠E{\displaystyle E}⁠and⁠F{\displaystyle F}⁠, theset exponentiation, denoted⁠FE{\displaystyle F^{E}}⁠, is the set that has as elements all functions from⁠E{\displaystyle E}⁠to⁠F{\displaystyle F}⁠. Equivalently,⁠FE{\displaystyle F^{E}}⁠can be viewed as the Cartesian product of a family, indexed by⁠E{\displaystyle E}⁠, of sets that are all equal to⁠F{\displaystyle F}⁠. This explains the terminology and the notation, sinceexponentiationwith integer exponents is a product where all factors are equal to the base. Thepower setof a set⁠E{\displaystyle E}⁠is the set that has all subsets of⁠E{\displaystyle E}⁠as elements, including theempty setand⁠E{\displaystyle E}⁠itself.[30]It is often denoted⁠P(E){\displaystyle {\mathcal {P}}(E)}⁠. 
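The Cartesian product and set exponentiation described above have direct counterparts in itertools.product; a minimal, illustrative Python sketch (not part of the original article):

    from itertools import product

    A1 = {1, 2}
    A2 = {'x', 'y', 'z'}

    # Cartesian product: all ordered pairs (a1, a2) with a1 in A1 and a2 in A2.
    pairs = set(product(A1, A2))
    print(len(pairs))    # 6 = |A1| * |A2|

    # Set exponentiation F^E: all functions from E to F,
    # each represented here as a dictionary of values indexed by E.
    E = ['p', 'q']       # a 2-element domain, in a fixed order
    F = {0, 1, 2}
    functions = [dict(zip(E, values)) for values in product(F, repeat=len(E))]
    print(len(functions))    # 9 = |F| ** |E|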
For example, P({1,2,3}) = {∅,{1},{2},{3},{1,2},{1,3},{2,3},{1,2,3}}.{\displaystyle {\mathcal {P}}(\{1,2,3\})=\{\emptyset ,\{1\},\{2\},\{3\},\{1,2\},\{1,3\},\{2,3\},\{1,2,3\}\}.} There is a natural one-to-one correspondence (bijection) between the subsets of E{\displaystyle E} and the functions from E{\displaystyle E} to {0,1}{\displaystyle \{0,1\}}; this correspondence associates to each subset the function that takes the value 1{\displaystyle 1} on the subset and 0{\displaystyle 0} elsewhere. Because of this correspondence, the power set of E{\displaystyle E} is commonly identified with a set exponentiation: P(E)={0,1}E.{\displaystyle {\mathcal {P}}(E)=\{0,1\}^{E}.} In this notation, {0,1}{\displaystyle \{0,1\}} is often abbreviated as 2{\displaystyle 2}, which gives[30][36] P(E)=2E.{\displaystyle {\mathcal {P}}(E)=2^{E}.} In particular, if E{\displaystyle E} has n{\displaystyle n} elements, then 2E{\displaystyle 2^{E}} has 2n{\displaystyle 2^{n}} elements.[37]

The disjoint union of two or more sets is similar to the union, but, if two sets have elements in common, these elements are considered distinct in the disjoint union. This is obtained by labelling each element with the index of the set it comes from. The disjoint union of two sets A{\displaystyle A} and B{\displaystyle B} is commonly denoted A⊔B{\displaystyle A\sqcup B} and is thus defined as A⊔B={(a,i)∣(i=1∧a∈A)∨(i=2∧a∈B)}.{\displaystyle A\sqcup B=\{(a,i)\mid (i=1\land a\in A)\lor (i=2\land a\in B)\}.} If A=B{\displaystyle A=B} is a set with n{\displaystyle n} elements, then A∪A=A{\displaystyle A\cup A=A} has n{\displaystyle n} elements, while A⊔A{\displaystyle A\sqcup A} has 2n{\displaystyle 2n} elements.

The disjoint union of two sets is a particular case of the disjoint union of an indexed family of sets, which is defined as ⨆i∈IAi={(a,i)∣i∈I∧a∈Ai}.{\displaystyle \bigsqcup _{i\in {\mathcal {I}}}A_{i}=\{(a,i)\mid i\in {\mathcal {I}}\land a\in A_{i}\}.} The disjoint union is the coproduct in the category of sets. Therefore, the notation ∐i∈IAi={(a,i)∣i∈I∧a∈Ai}{\displaystyle \coprod _{i\in {\mathcal {I}}}A_{i}=\{(a,i)\mid i\in {\mathcal {I}}\land a\in A_{i}\}} is commonly used. Given an indexed family of sets (Ai)i∈I{\displaystyle (A_{i})_{i\in {\mathcal {I}}}}, there is a natural map ⨆i∈IAi→⋃i∈IAi, (a,i)↦a,{\displaystyle {\begin{aligned}\bigsqcup _{i\in {\mathcal {I}}}A_{i}&\to \bigcup _{i\in {\mathcal {I}}}A_{i}\\(a,i)&\mapsto a,\end{aligned}}} which consists in "forgetting" the indices. This map is always surjective; it is bijective if and only if the Ai{\displaystyle A_{i}} are pairwise disjoint, that is, all intersections of two sets of the family are empty. In this case, ⋃i∈IAi{\textstyle \bigcup _{i\in {\mathcal {I}}}A_{i}} and ⨆i∈IAi{\textstyle \bigsqcup _{i\in {\mathcal {I}}}A_{i}} are commonly identified, and one says that their union is the disjoint union of the members of the family. If a set is the disjoint union of a family of subsets, one also says that the family is a partition of the set.

Informally, the cardinality of a set S, often denoted |S|, is the number of its members.[38] This number is the natural number n{\displaystyle n} when there is a bijection between the set that is considered and the set {1,2,…,n}{\displaystyle \{1,2,\ldots ,n\}} of the n{\displaystyle n} first natural numbers. The cardinality of the empty set is 0{\displaystyle 0}.[39] A set whose cardinality is a natural number is called a finite set.
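A small Python sketch (illustrative, not part of the original article) of the disjoint union obtained by tagging each element with the index of the set it comes from:

    def disjoint_union(A, B):
        """A disjoint-union sketch: pair each element with the index of the set it comes from."""
        return {(a, 1) for a in A} | {(b, 2) for b in B}

    A = {1, 2, 3}

    # The ordinary union of A with itself is just A, but the disjoint union
    # keeps both copies apart, so it has 2n elements for an n-element set.
    print(len(A | A))                                   # 3
    print(len(disjoint_union(A, A)))                    # 6
    print(sorted(disjoint_union({'a'}, {'a', 'b'})))    # [('a', 1), ('a', 2), ('b', 2)]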
Otherwise, one has an infinite set.[40]

The fact that natural numbers measure the cardinality of finite sets is the basis of the concept of natural number, and predates the concept of sets by several thousand years. A large part of combinatorics is devoted to the computation or estimation of the cardinality of finite sets.

The cardinality of an infinite set is commonly represented by a cardinal number, exactly as the number of elements of a finite set is represented by a natural number. The definition of cardinal numbers is too technical for this article; however, many properties of cardinalities can be dealt with without referring to cardinal numbers, as follows.

Two sets S{\displaystyle S} and T{\displaystyle T} have the same cardinality if there exists a one-to-one correspondence (bijection) between them. This is denoted |S|=|T|,{\displaystyle |S|=|T|,} and would be an equivalence relation on sets if a set of all sets existed. For example, the natural numbers and the even natural numbers have the same cardinality, since multiplication by two provides such a bijection. Similarly, the interval (−1,1){\displaystyle (-1,1)} and the set of all real numbers have the same cardinality, a bijection being provided by the function x↦tan⁡(πx/2){\displaystyle x\mapsto \tan(\pi x/2)}.

Having the same cardinality as a proper subset is a characteristic property of infinite sets: a set is infinite if and only if it has the same cardinality as one of its proper subsets. So, by the above example, the natural numbers form an infinite set.[30]

Besides equality, there is a natural inequality between cardinalities: a set S{\displaystyle S} has a cardinality smaller than or equal to the cardinality of another set T{\displaystyle T} if there is an injection from S{\displaystyle S} to T{\displaystyle T}. This is denoted |S|≤|T|.{\displaystyle |S|\leq |T|.} The Schröder–Bernstein theorem implies that |S|≤|T|{\displaystyle |S|\leq |T|} and |T|≤|S|{\displaystyle |T|\leq |S|} imply |S|=|T|.{\displaystyle |S|=|T|.} Also, one has |S|≤|T|{\displaystyle |S|\leq |T|} if and only if there is a surjection from T{\displaystyle T} to S{\displaystyle S}. For every two sets S{\displaystyle S} and T{\displaystyle T}, one has either |S|≤|T|{\displaystyle |S|\leq |T|} or |T|≤|S|.{\displaystyle |T|\leq |S|.}[c] So, inequality of cardinalities is a total order.

The cardinality of the set N{\displaystyle \mathbb {N} } of the natural numbers, denoted |N|=ℵ0,{\displaystyle |\mathbb {N} |=\aleph _{0},} is the smallest infinite cardinality. This means that if S{\displaystyle S} is a set of natural numbers, then either S{\displaystyle S} is finite or |S|=|N|.{\displaystyle |S|=|\mathbb {N} |.}

Sets with cardinality less than or equal to |N|=ℵ0{\displaystyle |\mathbb {N} |=\aleph _{0}} are called countable sets; these are either finite sets or countably infinite sets (sets of cardinality ℵ0{\displaystyle \aleph _{0}}); some authors use "countable" to mean "countably infinite". Sets with cardinality strictly greater than ℵ0{\displaystyle \aleph _{0}} are called uncountable sets. Cantor's diagonal argument shows that, for every set S{\displaystyle S}, its power set (the set of its subsets) 2S{\displaystyle 2^{S}} has a greater cardinality: |S|<|2S|.{\displaystyle |S|<\left|2^{S}\right|.} This implies that there is no greatest cardinality.

The cardinality of the set of the real numbers is called the cardinality of the continuum and denoted c{\displaystyle {\mathfrak {c}}}.
(The term "continuum" referred to thereal linebefore the 20th century, when the real line was not commonly viewed as a set of numbers.) Since, as seen above, the real line⁠R{\displaystyle \mathbb {R} }⁠has the same cardinality of anopen interval, every subset of⁠R{\displaystyle \mathbb {R} }⁠that contains a nonemptyopen intervalhas also the cardinality⁠c{\displaystyle {\mathfrak {c}}}⁠. One hasc=2ℵ0,{\displaystyle {\mathfrak {c}}=2^{\aleph _{0}},}meaning that the cardinality of the real numbers equals the cardinality of thepower setof the natural numbers. In particular,[41]c>ℵ0.{\displaystyle {\mathfrak {c}}>\aleph _{0}.} When published in 1878 byGeorg Cantor,[42]this result was so astonishing that it was refused by mathematicians, and several tens years were needed before its common acceptance. It can be shown that⁠c{\displaystyle {\mathfrak {c}}}⁠is also the cardinality of the entireplane, and of anyfinite-dimensionalEuclidean space.[43] Thecontinuum hypothesis, was a conjecture formulated by Georg Cantor in 1878 that there is no set with cardinality strictly between⁠ℵ0{\displaystyle \aleph _{0}}⁠and⁠c{\displaystyle {\mathfrak {c}}}⁠.[42]In 1963,Paul Cohenproved that the continuum hypothesis isindependentof theaxiomsofZermelo–Fraenkel set theorywith theaxiom of choice.[44]This means that if the most widely usedset theoryisconsistent(that is not self-contradictory),[d]then the same is true for both the set theory with the continuum hypothesis added as a further axiom, and the set theory with the negation of the continuum hypothesis added. Informally, the axiom of choice says that, given any family of nonempty sets, one can choose simultaneously an element in each of them.[e]Formulated this way, acceptability of this axiom sets a foundational logical question, because of the difficulty of conceiving an infinite instantaneous action. However, there are several equivalent formulations that are much less controversial and have strong consequences in many areas of mathematics. In the present days, the axiom of choice is thus commonly accepted in mainstream mathematics. A more formal statement of the axiom of choice is:the Cartesian product of every indexed family of nonempty sets is non empty. Other equivalent forms are described in the following subsections. Zorn's lemma is an assertion that is equivalent to the axiom of choice under the other axioms of set theory, and is easier to use in usual mathematics. Let⁠S{\displaystyle S}⁠be a partial ordered set. Achainin⁠S{\displaystyle S}⁠is a subset that istotally orderedunder the induced order. Zorn's lemma states that, if every chain in⁠S{\displaystyle S}⁠has anupper boundin⁠S{\displaystyle S}⁠, then⁠S{\displaystyle S}⁠has (at least) amaximal element, that is, an element that is not smaller than another element of⁠S{\displaystyle S}⁠. In most uses of Zorn's lemma,⁠S{\displaystyle S}⁠is a set of sets, the order is set inclusion, and the upperbound of a chain is taken as the union of its members. An example of use of Zorn's lemma, is the proof that everyvector spacehas abasis. Here the elements of⁠S{\displaystyle S}⁠arelinearly independentsubsets of the vector space. The union of a chain of elements of⁠S{\displaystyle S}⁠is linearly independent, since an infinite set is linearly independent if and only if each finite subset is, and every finite subset of the union of a chain must be included in a member of the chain. So, there exist a maximal linearly independent set. 
This linearly independent set must span the vector space because of maximality, and is therefore a basis.

Another classical use of Zorn's lemma is the proof that every proper ideal—that is, an ideal that is not the whole ring—of a ring is contained in a maximal ideal. Here, S{\displaystyle S} is the set of the proper ideals containing the given ideal. The union of a chain of ideals is an ideal, since the axioms of an ideal involve only a finite number of elements. The union of a chain of proper ideals is a proper ideal, since otherwise 1{\displaystyle 1} would belong to the union, and this implies that it would belong to a member of the chain.

The axiom of choice is equivalent to the fact that a well-order can be defined on every set, where a well-order is a total order such that every nonempty subset has a least element. Simple examples of well-ordered sets are the natural numbers (with the natural order), and, for every n, the set of the n-tuples of natural numbers, with the lexicographic order.

Well-orders allow a generalization of mathematical induction, which is called transfinite induction. Given a property (predicate) P(n){\displaystyle P(n)} depending on a natural number, mathematical induction is the fact that, for proving that P(n){\displaystyle P(n)} is always true, it suffices to prove that P(0) is true and that, for every n{\displaystyle n}, P(n) implies P(n + 1). Transfinite induction is the same, replacing natural numbers by the elements of a well-ordered set. Often, a proof by transfinite induction is easier if three cases are proved separately, the first two cases being the same as for usual induction: the case of the least element, the case of a successor element, and the case of a limit element, that is, an element that is not the successor of any element. Transfinite induction is fundamental for defining ordinal numbers and cardinal numbers.
https://en.wikipedia.org/wiki/Set_(mathematics)
Recursionoccurs when the definition of a concept or process depends on a simpler or previous version of itself.[1]Recursion is used in a variety of disciplines ranging fromlinguisticstologic. The most common application of recursion is inmathematicsandcomputer science, where afunctionbeing defined is applied within its own definition. While this apparently defines an infinite number of instances (function values), it is often done in such a way that no infinite loop or infinite chain of references can occur. A process that exhibits recursion isrecursive.Video feedbackdisplays recursive images, as does aninfinity mirror. In mathematics and computer science, a class of objects or methods exhibits recursive behavior when it can be defined by two properties: For example, the following is a recursive definition of a person'sancestor. One's ancestor is either: TheFibonacci sequenceis another classic example of recursion: Many mathematical axioms are based upon recursive rules. For example, the formal definition of thenatural numbersby thePeano axiomscan be described as: "Zero is a natural number, and each natural number has a successor, which is also a natural number."[2]By this base case and recursive rule, one can generate the set of all natural numbers. Other recursively defined mathematical objects includefactorials,functions(e.g.,recurrence relations),sets(e.g.,Cantor ternary set), andfractals. There are various more tongue-in-cheek definitions of recursion; seerecursive humor. Recursion is the process a procedure goes through when one of the steps of the procedure involves invoking the procedure itself. A procedure that goes through recursion is said to be 'recursive'.[3] To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps based on a set of rules, while the running of a procedure involves actually following the rules and performing the steps. Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure. When a procedure is thus defined, this immediately creates the possibility of an endless loop; recursion can only be properly used in a definition if the step in question is skipped in certain cases so that the procedure can complete. Even if it is properly defined, a recursive procedure is not easy for humans to perform, as it requires distinguishing the new from the old, partially executed invocation of the procedure; this requires some administration as to how far various simultaneous instances of the procedures have progressed. For this reason, recursive definitions are very rare in everyday situations. LinguistNoam Chomsky, among many others, has argued that the lack of an upper bound on the number of grammatical sentences in a language, and the lack of an upper bound on grammatical sentence length (beyond practical constraints such as the time available to utter one), can be explained as the consequence of recursion in natural language.[4][5] This can be understood in terms of a recursive definition of a syntactic category, such as a sentence. A sentence can have a structure in which what follows the verb is another sentence:Dorothy thinks witches are dangerous, in which the sentencewitches are dangerousoccurs in the larger one. So a sentence can be defined recursively (very roughly) as something with a structure that includes a noun phrase, a verb, and optionally another sentence. 
This is really just a special case of the mathematical definition of recursion. This provides a way of understanding the creativity of language—the unbounded number of grammatical sentences—because it immediately predicts that sentences can be of arbitrary length:Dorothy thinks that Toto suspects that Tin Man said that.... There are many structures apart from sentences that can be defined recursively, and therefore many ways in which a sentence can embed instances of one category inside another.[6]Over the years, languages in general have proved amenable to this kind of analysis. The generally accepted idea that recursion is an essential property of human language has been challenged byDaniel Everetton the basis of his claims about thePirahã language. Andrew Nevins, David Pesetsky and Cilene Rodrigues are among many who have argued against this.[7]Literaryself-referencecan in any case be argued to be different in kind from mathematical or logical recursion.[8] Recursion plays a crucial role not only in syntax, but also innatural language semantics. The wordand, for example, can be construed as a function that can apply to sentence meanings to create new sentences, and likewise for noun phrase meanings, verb phrase meanings, and others. It can also apply to intransitive verbs, transitive verbs, or ditransitive verbs. In order to provide a single denotation for it that is suitably flexible,andis typically defined so that it can take any of these different types of meanings as arguments. This can be done by defining it for a simple case in which it combines sentences, and then defining the other cases recursively in terms of the simple one.[9] Arecursive grammaris aformal grammarthat contains recursiveproduction rules.[10] Recursion is sometimes used humorously in computer science, programming, philosophy, or mathematics textbooks, generally by giving acircular definitionorself-reference, in which the putative recursive step does not get closer to a base case, but instead leads to aninfinite regress. It is not unusual for such books to include a joke entry in their glossary along the lines of: A variation is found on page 269 in theindexof some editions ofBrian KernighanandDennis Ritchie's bookThe C Programming Language; the index entry recursively references itself ("recursion 86, 139, 141, 182, 202, 269"). Early versions of this joke can be found inLet's talk Lispby Laurent Siklóssy (published by Prentice Hall PTR on December 1, 1975, with a copyright date of 1976) and inSoftware Toolsby Kernighan and Plauger (published by Addison-Wesley Professional on January 11, 1976). The joke also appears inThe UNIX Programming Environmentby Kernighan and Pike. It did not appear in the first edition ofThe C Programming Language. The joke is part of thefunctional programmingfolklore and was already widespread in the functional programming community before the publication of the aforementioned books.[12][13] Another joke is that "To understand recursion, you must understand recursion."[11]In the English-language version of the Google web search engine, when a search for "recursion" is made, the site suggests "Did you mean:recursion."[14]An alternative form is the following, fromAndrew Plotkin:"If you already know what recursion is, just remember the answer. Otherwise, find someone who is standing closer toDouglas Hofstadterthan you are; then ask him or her what recursion is." 
Recursive acronymsare other examples of recursive humor.PHP, for example, stands for "PHP Hypertext Preprocessor",WINEstands for "WINE Is Not an Emulator",GNUstands for "GNU's not Unix", andSPARQLdenotes the "SPARQL Protocol and RDF Query Language". The canonical example of a recursively defined set is given by thenatural numbers: In mathematical logic, thePeano axioms(or Peano postulates or Dedekind–Peano axioms), are axioms for the natural numbers presented in the 19th century by the German mathematicianRichard Dedekindand by the Italian mathematicianGiuseppe Peano. The Peano Axioms define the natural numbers referring to a recursive successor function and addition and multiplication as recursive functions. Another interesting example is the set of all "provable" propositions in anaxiomatic systemthat are defined in terms of aproof procedurewhich is inductively (or recursively) defined as follows: Finite subdivision rules are a geometric form of recursion, which can be used to create fractal-like images. A subdivision rule starts with a collection of polygons labelled by finitely many labels, and then each polygon is subdivided into smaller labelled polygons in a way that depends only on the labels of the original polygon. This process can be iterated. The standard `middle thirds' technique for creating theCantor setis a subdivision rule, as isbarycentric subdivision. Afunctionmay be recursively defined in terms of itself. A familiar example is theFibonacci numbersequence:F(n) =F(n− 1) +F(n− 2). For such a definition to be useful, it must be reducible to non-recursively defined values: in this caseF(0) = 0 andF(1) = 1. Applying the standard technique ofproof by casesto recursively defined sets or functions, as in the preceding sections, yieldsstructural induction— a powerful generalization ofmathematical inductionwidely used to derive proofs inmathematical logicand computer science. Dynamic programmingis an approach tooptimizationthat restates a multiperiod or multistep optimization problem in recursive form. The key result in dynamic programming is theBellman equation, which writes the value of the optimization problem at an earlier time (or earlier step) in terms of its value at a later time (or later step). Inset theory, this is a theorem guaranteeing that recursively defined functions exist. Given a setX, an elementaofXand a functionf:X→X, the theorem states that there is a unique functionF:N→X{\displaystyle F:\mathbb {N} \to X}(whereN{\displaystyle \mathbb {N} }denotes the set of natural numbers including zero) such that for any natural numbern. Dedekind was the first to pose the problem of unique definition of set-theoretical functions onN{\displaystyle \mathbb {N} }by recursion, and gave a sketch of an argument in the 1888 essay "Was sind und was sollen die Zahlen?"[15] Take two functionsF:N→X{\displaystyle F:\mathbb {N} \to X}andG:N→X{\displaystyle G:\mathbb {N} \to X}such that: whereais an element ofX. It can be proved bymathematical inductionthatF(n) =G(n)for all natural numbersn: By induction,F(n) =G(n)for alln∈N{\displaystyle n\in \mathbb {N} }. A common method of simplification is to divide a problem into subproblems of the same type. As acomputer programmingtechnique, this is calleddivide and conquerand is key to the design of many important algorithms. Divide and conquer serves as a top-down approach to problem solving, where problems are solved by solving smaller and smaller instances. A contrary approach isdynamic programming. 
This approach serves as a bottom-up approach, where problems are solved by solving larger and larger instances, until the desired size is reached. A classic example of recursion is the definition of thefactorialfunction, given here inPythoncode: The function calls itself recursively on a smaller version of the input(n - 1)and multiplies the result of the recursive call byn, until reaching thebase case, analogously to the mathematical definition of factorial. Recursion in computer programming is exemplified when a function is defined in terms of simpler, often smaller versions of itself. The solution to the problem is then devised by combining the solutions obtained from the simpler versions of the problem. One example application of recursion is inparsersfor programming languages. The great advantage of recursion is that an infinite set of possible sentences, designs or other data can be defined, parsed or produced by a finite computer program. Recurrence relationsare equations which define one or more sequences recursively. Some specific kinds of recurrence relation can be "solved" to obtain a non-recursive definition (e.g., aclosed-form expression). Use of recursion in an algorithm has both advantages and disadvantages. The main advantage is usually the simplicity of instructions. The main disadvantage is that the memory usage of recursive algorithms may grow very quickly, rendering them impractical for larger instances. Shapes that seem to have been created by recursive processes sometimes appear in plants and animals, such as in branching structures in which one large part branches out into two or more similar smaller parts. One example isRomanesco broccoli.[16] Authors use the concept ofrecursivityto foreground the situation in which specificallysocialscientists find themselves when producing knowledge about the world they are always already part of.[17][18]According to Audrey Alejandro, “as social scientists, the recursivity of our condition deals with the fact that we are both subjects (as discourses are the medium through which we analyse) and objects of the academic discourses we produce (as we are social agents belonging to the world we analyse).”[19]From this basis, she identifies in recursivity a fundamental challenge in the production of emancipatory knowledge which calls for the exercise ofreflexiveefforts: we are socialised into discourses and dispositions produced by the socio-political order we aim to challenge, a socio-political order that we may, therefore, reproduce unconsciously while aiming to do the contrary. The recursivity of our situation as scholars – and, more precisely, the fact that the dispositional tools we use to produce knowledge about the world are themselves produced by this world – both evinces the vital necessity of implementing reflexivity in practice and poses the main challenge in doing so. Recursion is sometimes referred to inmanagement scienceas the process of iterating through levels of abstraction in large business entities.[20]A common example is the recursive nature of managementhierarchies, ranging fromline managementtosenior managementviamiddle management. It also encompasses the larger issue ofcapital structureincorporate governance.[21] TheMatryoshka dollis a physical artistic example of the recursive concept.[22] Recursion has been used in paintings sinceGiotto'sStefaneschi Triptych, made in 1320. 
Its central panel contains the kneeling figure of Cardinal Stefaneschi, holding up the triptych itself as an offering.[23][24] This practice is more generally known as the Droste effect, an example of the Mise en abyme technique. M. C. Escher's Print Gallery (1956) is a print which depicts a distorted city containing a gallery which recursively contains the picture, and so ad infinitum.[25] The film Inception has colloquialized the appending of the suffix -ception to a noun to jokingly indicate the recursion of something.[26]
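As a concrete illustration of the recursively defined functions discussed earlier in the article (the factorial and the Fibonacci sequence), here is a minimal sketch in Haskell, showing the base cases that terminate the recursion (the article presents the factorial in Python; the rendering here is only an illustration):

    -- Factorial: base case 0! = 1, recursive case n! = n * (n - 1)!
    factorial :: Integer -> Integer
    factorial 0 = 1
    factorial n = n * factorial (n - 1)

    -- Fibonacci: base cases F(0) = 0 and F(1) = 1, then F(n) = F(n-1) + F(n-2)
    fibonacci :: Integer -> Integer
    fibonacci 0 = 0
    fibonacci 1 = 1
    fibonacci n = fibonacci (n - 1) + fibonacci (n - 2)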
https://en.wikipedia.org/wiki/Recursion
Infunctional programming,fold(also termedreduce,accumulate,aggregate,compress, orinject) refers to a family ofhigher-order functionsthatanalyzearecursivedata structure and through use of a given combining operation, recombine the results ofrecursivelyprocessing its constituent parts, building up a return value. Typically, a fold is presented with a combiningfunction, a topnodeof adata structure, and possibly some default values to be used under certain conditions. The fold then proceeds to combine elements of the data structure'shierarchy, using the function in a systematic way. Folds are in a sense dual tounfolds, which take aseedvalue and apply a functioncorecursivelyto decide how to progressively construct a corecursive data structure, whereas a fold recursively breaks that structure down, replacing it with the results of applying a combining function at each node on itsterminalvalues and the recursive results (catamorphism, versusanamorphismof unfolds). Folds can be regarded as consistently replacing the structural components of a data structure with functions and values.Lists, for example, are built up in many functional languages from two primitives: any list is either an empty list, commonly callednil([]), or is constructed by prefixing an element in front of another list, creating what is called aconsnode(Cons(X1,Cons(X2,Cons(...(Cons(Xn,nil)))))), resulting from application of aconsfunction (written down as a colon(:)inHaskell). One can view a fold on lists asreplacingthenilat the end of the list with a specific value, andreplacingeachconswith a specific function. These replacements can be viewed as a diagram: There's another way to perform the structural transformation in a consistent manner, with the order of the two links of each node flipped when fed into the combining function: These pictures illustraterightandleftfold of a listvisually. They also highlight the fact thatfoldr (:) []is the identity function on lists (ashallow copyinLispparlance), as replacingconswithconsandnilwithnilwill not change the result. The left fold diagram suggests an easy way to reverse a list,foldl (flip (:)) []. Note that the parameters to cons must be flipped, because the element to add is now the right hand parameter of the combining function. Another easy result to see from this vantage-point is to write the higher-ordermap functionin terms offoldr, by composing the function to act on the elements withcons, as: where the period (.) is an operator denotingfunction composition. This way of looking at things provides a simple route to designing fold-like functions on otheralgebraic data typesand structures, like various sorts of trees. One writes a function which recursively replaces the constructors of the datatype with provided functions, and any constant values of the type with provided values. Such a function is generally referred to as acatamorphism. The folding of the list[1,2,3,4,5]with the addition operator would result in 15, the sum of the elements of the list[1,2,3,4,5]. To a rough approximation, one can think of this fold as replacing the commas in the list with the + operation, giving1 + 2 + 3 + 4 + 5.[1] In the example above, + is anassociative operation, so the final result will be the same regardless of parenthesization, although the specific way in which it is calculated will be different. In the general case of non-associative binary functions, the order in which the elements are combined may influence the final result's value. 
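The identities described above can be written out directly as a short Haskell sketch (the names copyList, reverseList, and mapViaFoldr are illustrative):

    -- foldr (:) [] leaves a list unchanged: cons is replaced by cons, nil by nil.
    copyList :: [a] -> [a]
    copyList = foldr (:) []

    -- A left fold with flipped cons reverses a list.
    reverseList :: [a] -> [a]
    reverseList = foldl (flip (:)) []

    -- map written in terms of foldr, composing the element function with cons.
    mapViaFoldr :: (a -> b) -> [a] -> [b]
    mapViaFoldr f = foldr ((:) . f) []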
On lists, there are two obvious ways to carry this out: either by combining the first element with the result of recursively combining the rest (called aright fold), or by combining the result of recursively combining all elements but the last one, with the last element (called aleft fold). This corresponds to a binaryoperatorbeing either right-associative or left-associative, inHaskell's orProlog's terminology. With a right fold, the sum would be parenthesized as1 + (2 + (3 + (4 + 5))), whereas with a left fold it would be parenthesized as(((1 + 2) + 3) + 4) + 5. In practice, it is convenient and natural to have an initial value which in the case of a right fold is used when one reaches the end of the list, and in the case of a left fold is what is initially combined with the first element of the list. In the example above, the value 0 (theadditive identity) would be chosen as an initial value, giving1 + (2 + (3 + (4 + (5 + 0))))for the right fold, and((((0 + 1) + 2) + 3) + 4) + 5for the left fold. For multiplication, an initial choice of 0 wouldn't work:0 * 1 * 2 * 3 * 4 * 5 = 0. Theidentity elementfor multiplication is 1. This would give us the outcome1 * 1 * 2 * 3 * 4 * 5 = 120 = 5!. The use of an initial value is necessary when the combining functionfis asymmetrical in its types (e.g.a → b → b), i.e. when the type of its result is different from the type of the list's elements. Then an initial value must be used, with the same type as that off's result, for alinearchain of applications to be possible. Whether it will be left- or right-oriented will be determined by the types expected of its arguments by the combining function. If it is the second argument that must be of the same type as the result, thenfcould be seen as a binary operation thatassociates on the right, and vice versa. When the function is amagma, i.e. symmetrical in its types (a → a → a), and the result type is the same as the list elements' type, the parentheses may be placed in arbitrary fashion thus creating abinary treeof nested sub-expressions, e.g.,((1 + 2) + (3 + 4)) + 5. If the binary operationfis associative this value will be well-defined, i.e., same for any parenthesization, although the operational details of how it is calculated will be different. This can have significant impact on efficiency iffisnon-strict. Whereas linear folds arenode-orientedand operate in a consistent manner for eachnodeof alist, tree-like folds are whole-list oriented and operate in a consistent manner acrossgroupsof nodes. One often wants to choose theidentity elementof the operationfas the initial valuez. When no initial value seems appropriate, for example, when one wants to fold the function which computes the maximum of its two parameters over a non-empty list to get the maximum element of the list, there are variants offoldrandfoldlwhich use the last and first element of the list respectively as the initial value. In Haskell and several other languages, these are calledfoldr1andfoldl1, the 1 making reference to the automatic provision of an initial element, and the fact that the lists they are applied to must have at least one element. These folds use type-symmetrical binary operation: the types of both its arguments, and its result, must be the same. 
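These choices can be made concrete in a short Haskell sketch (names such as sumR are illustrative; the values in the comments follow from the parenthesizations just described):

    -- Right fold: 1 + (2 + (3 + (4 + (5 + 0)))) = 15
    sumR :: Integer
    sumR = foldr (+) 0 [1, 2, 3, 4, 5]

    -- Left fold: ((((0 + 1) + 2) + 3) + 4) + 5 = 15
    sumL :: Integer
    sumL = foldl (+) 0 [1, 2, 3, 4, 5]

    -- Multiplication needs 1, its identity element, as the initial value.
    productR :: Integer
    productR = foldr (*) 1 [1, 2, 3, 4, 5]   -- 120

    -- foldr1 / foldl1 take the initial value from the (non-empty) list itself,
    -- e.g. the maximum of a list without a separate starting value.
    maximumOf :: [Integer] -> Integer
    maximumOf = foldr1 max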
Richard Bird in his 2010 book proposes[2]"a general fold function on non-empty lists"foldrnwhich transforms its last element, by applying an additional argument function to it, into a value of the result type before starting the folding itself, and is thus able to use type-asymmetrical binary operation like the regularfoldrto produce a result of type different from the list's elements type. Using Haskell as an example,foldlandfoldrcan be formulated in a few equations. If the list is empty, the result is the initial value. If not, fold the tail of the list using as new initial value the result of applying f to the old initial value and the first element. If the list is empty, the result is the initial value z. If not, apply f to the first element and the result of folding the rest. Lists can be folded over in a tree-like fashion, both for finite and for indefinitely defined lists: In the case offoldifunction, to avoid its runaway evaluation onindefinitelydefined lists the functionfmustnot alwaysdemand its second argument's value, at least not all of it, or not immediately (seeexamplebelow). In the presence oflazy, ornon-strictevaluation,foldrwill immediately return the application offto the head of the list and the recursive case of folding over the rest of the list. Thus, iffis able to produce some part of its result without reference to the recursive case on its "right" i.e., in itssecondargument, and the rest of the result is never demanded, then the recursion will stop (e.g.,head==foldr(\ab->a)(error"empty list")). This allows right folds to operate on infinite lists. By contrast,foldlwill immediately call itself with new parameters until it reaches the end of the list. Thistail recursioncan be efficiently compiled as a loop, but can't deal with infinite lists at all — it will recurse forever in aninfinite loop. Having reached the end of the list, anexpressionis in effect built byfoldlof nested left-deepeningf-applications, which is then presented to the caller to be evaluated. Were the functionfto refer to its second argument first here, and be able to produce some part of its result without reference to the recursive case (here, on itslefti.e., in itsfirstargument), then the recursion would stop. This means that whilefoldrrecurseson the right, it allows for a lazy combining function to inspect list's elements from the left; and conversely, whilefoldlrecurseson the left, it allows for a lazy combining function to inspect list's elements from the right, if it so chooses (e.g.,last==foldl(\ab->b)(error"empty list")). Reversing a list is also tail-recursive (it can be implemented usingrev=foldl(\ysx->x:ys)[]). Onfinitelists, that means that left-fold and reverse can be composed to perform a right fold in a tail-recursive way (cf.1+>(2+>(3+>0))==((0<+3)<+2)<+1), with a modification to the functionfso it reverses the order of its arguments (i.e.,foldrfz==foldl(flipf)z.foldl(flip(:))[]), tail-recursively building a representation of expression that right-fold would build. The extraneous intermediate list structure can be eliminated with thecontinuation-passing styletechnique,foldrfzxs==foldl(\kx->k.fx)idxsz; similarly,foldlfzxs==foldr(\xk->k.flipfx)idxsz(flipis only needed in languages like Haskell with its flipped order of arguments to the combining function offoldlunlike e.g., in Scheme where the same order of arguments is used for combining functions to bothfoldlandfoldr). 
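For reference, the foldr and foldl equations described in words earlier in this section take the following standard form on lists (a sketch; the versions in GHC's Prelude are generalized to the Foldable class):

    foldr :: (a -> b -> b) -> b -> [a] -> b
    foldr f z []     = z                    -- empty list: the initial value
    foldr f z (x:xs) = f x (foldr f z xs)   -- apply f to the head and the folded rest

    foldl :: (b -> a -> b) -> b -> [a] -> b
    foldl f z []     = z                    -- empty list: the initial value
    foldl f z (x:xs) = foldl f (f z x) xs   -- fold the tail with a new initial value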
Another technical point is that, in the case of left folds using lazy evaluation, the new initial parameter is not being evaluated before the recursive call is made. This can lead to stack overflows when one reaches the end of the list and tries to evaluate the resulting potentially gigantic expression. For this reason, such languages often provide a stricter variant of left folding which forces the evaluation of the initial parameter before making the recursive call. In Haskell this is thefoldl'(note the apostrophe, pronounced 'prime') function in theData.Listlibrary (one needs to be aware of the fact though that forcing a value built with a lazy data constructor won't force its constituents automatically by itself). Combined with tail recursion, such folds approach the efficiency of loops, ensuring constant space operation, when lazy evaluation of the final result is impossible or undesirable. Using aHaskellinterpreter, the structural transformations which fold functions perform can be illustrated by constructing a string: Infinite tree-like folding is demonstrated e.g., inrecursiveprimes production byunbounded sieve of EratosthenesinHaskell: where the functionunionoperates on ordered lists in a local manner to efficiently produce theirset union, andminustheirset difference. A finite prefix of primes is concisely defined as a folding of set difference operation over the lists of enumerated multiples of integers, as For finite lists, e.g.,merge sort(and its duplicates-removing variety,nubsort) could be easily defined using tree-like folding as with the functionmergea duplicates-preserving variant ofunion. Functionsheadandlastcould have been defined through folding as Scala also features the tree-like folds using the methodlist.fold(z)(op).[11] Fold is apolymorphicfunction. For anyghaving a definition thengcan be expressed as[12] Also, in alazy languagewith infinite lists, afixed point combinatorcan be implemented via fold,[13]proving that iterations can be reduced to folds:
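One common formulation is the following sketch (fixViaFold is an illustrative name); it relies on lazy evaluation and an infinite list, so the combining function keeps re-applying f and the initial value is never reached:

    -- fix f = f (f (f ...)), obtained by folding f over an infinite list;
    -- the initial value is never demanded, so it can be left undefined.
    fixViaFold :: (a -> a) -> a
    fixViaFold f = foldr (\_ x -> f x) undefined (repeat ())

For example, fixViaFold (\rec n -> if n == 0 then 1 else n * rec (n - 1)) 5 evaluates to 120, the factorial of 5 computed by iterating through the fold.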
https://en.wikipedia.org/wiki/Fold_(higher-order_function)
In manyprogramming languages,mapis ahigher-order functionthat applies agiven functionto each element of acollection, e.g. alistorset, returning the results in a collection of the same type. It is often calledapply-to-allwhen considered infunctional form. The concept of a map is not limited to lists: it works for sequentialcontainers, tree-like containers, or even abstract containers such asfutures and promises. Suppose there is list of integers[1, 2, 3, 4, 5]and would like to calculate thesquareof each integer. To do this, first define a function tosquarea single number (shown here inHaskell): Afterwards, call: which yields[1, 4, 9, 16, 25], demonstrating thatmaphas gone through the entire list and applied the functionsquareto each element. Below, there is view of each step of the mapping process for a list of integersX = [0, 5, 8, 3, 2, 1]mapping into a new listX'according to the functionf(x)=x+1{\displaystyle f(x)=x+1}: Themapis provided as part of the Haskell's base prelude (i.e. "standard library") and is implemented as: In Haskell, thepolymorphic functionmap :: (a -> b) -> [a] -> [b]is generalized to apolytypic functionfmap :: Functor f => (a -> b) -> f a -> f b, which applies to any type belonging theFunctortype class. Thetype constructorof lists[]can be defined as an instance of theFunctortype class using themapfunction from the previous example: Other examples ofFunctorinstances include trees: Mapping over a tree yields: For every instance of theFunctortype class,fmapiscontractually obligedto obey the functor laws: where.denotesfunction compositionin Haskell. Among other uses, this allows defining element-wise operations for various kinds ofcollections. Incategory theory, afunctorF:C→D{\displaystyle F:C\rightarrow D}consists of two maps: one that sends each objectA{\displaystyle A}of the category to another objectFA{\displaystyle FA}, and one that sends each morphismf:A→B{\displaystyle f:A\rightarrow B}to another morphismFf:FA→FB{\displaystyle Ff:FA\rightarrow FB}, which acts as ahomomorphismon categories (i.e. it respects the category axioms). Interpreting the universe of data types as a categoryType{\displaystyle Type}, with morphisms being functions, then a type constructorFthat is a member of theFunctortype class is the object part of such a functor, andfmap :: (a -> b) -> F a -> F bis the morphism part. The functor laws described above are precisely the category-theoretic functor axioms for this functor. Functors can also be objects in categories, with "morphisms" callednatural transformations. Given two functorsF,G:C→D{\displaystyle F,G:C\rightarrow D}, a natural transformationη:F→G{\displaystyle \eta :F\rightarrow G}consists of a collection of morphismsηA:FA→GA{\displaystyle \eta _{A}:FA\rightarrow GA}, one for each objectA{\displaystyle A}of the categoryD{\displaystyle D}, which are 'natural' in the sense that they act as a 'conversion' between the two functors, taking no account of the objects that the functors are applied to. Natural transformations correspond to functions of the formeta :: F a -> G a, whereais a universally quantified type variable –etaknows nothing about the type which inhabitsa. 
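For example (a small sketch; safeHead is an illustrative name for what Haskell's standard libraries call listToMaybe), taking the head of a list, when it exists, is a natural transformation from the list functor to the Maybe functor:

    -- A natural transformation between the [] and Maybe functors.
    safeHead :: [a] -> Maybe a
    safeHead []      = Nothing
    safeHead (x : _) = Just x

    -- Naturality: applying a function inside before or after the transformation
    -- gives the same result:  fmap f . safeHead == safeHead . fmap f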
The naturality axiom of such functions is automatically satisfied because it is a so-called free theorem, depending on the fact that it isparametrically polymorphic.[1]For example,reverse :: List a -> List a, which reverses a list, is a natural transformation, as isflattenInorder :: Tree a -> List a, which flattens a tree from left to right, and evensortBy :: (a -> a -> Bool) -> List a -> List a, which sorts a list based on a provided comparison function. The mathematical basis of maps allow for a number ofoptimizations. The composition law ensures that both lead to the same result; that is,map⁡(f)∘map⁡(g)=map⁡(f∘g){\displaystyle \operatorname {map} (f)\circ \operatorname {map} (g)=\operatorname {map} (f\circ g)}. However, the second form is more efficient to compute than the first form, because eachmaprequires rebuilding an entire list from scratch. Therefore, compilers will attempt to transform the first form into the second; this type of optimization is known asmap fusionand is thefunctionalanalog ofloop fusion.[2] Map functions can be and often are defined in terms of afoldsuch asfoldr, which means one can do amap-fold fusion:foldr f z . map gis equivalent tofoldr (f . g) z. The implementation of map above on singly linked lists is nottail-recursive, so it may build up a lot of frames on the stack when called with a large list. Many languages alternately provide a "reverse map" function, which is equivalent to reversing a mapped list, but is tail-recursive. Here is an implementation which utilizes thefold-left function. Since reversing a singly linked list is also tail-recursive, reverse and reverse-map can be composed to perform normal map in a tail-recursive way, though it requires performing two passes over the list. The map function originated infunctional programminglanguages. The languageLispintroduced a map function calledmaplist[3]in 1959, with slightly different versions already appearing in 1958.[4]This is the original definition formaplist, mapping a function over successive rest lists: The functionmaplistis still available in newer Lisps likeCommon Lisp,[5]though functions likemapcaror the more genericmapwould be preferred. Squaring the elements of a list usingmaplistwould be written inS-expressionnotation like this: Using the functionmapcar, above example would be written like this: Today mapping functions are supported (or may be defined) in manyprocedural,object-oriented, andmulti-paradigmlanguages as well: InC++'sStandard Library, it is calledstd::transform, inC#(3.0)'s LINQ library, it is provided as an extension method calledSelect. Map is also a frequently used operation in high level languages such asColdFusion Markup Language(CFML),Perl,Python, andRuby; the operation is calledmapin all four of these languages. Acollectalias formapis also provided in Ruby (fromSmalltalk).Common Lispprovides a family of map-like functions; the one corresponding to the behavior described here is calledmapcar(-carindicating access using theCAR operation). There are also languages with syntactic constructs providing the same functionality as the map function. Map is sometimes generalized to accept dyadic (2-argument) functions that can apply a user-supplied function to corresponding elements from two lists. Some languages use special names for this, such asmap2orzipWith. Languages using explicitvariadic functionsmay have versions of map with variablearityto supportvariable-arityfunctions. Map with 2 or more lists encounters the issue of handling when the lists are of different lengths. 
Various languages differ on this. Some raise an exception. Some stop after the length of the shortest list and ignore extra items on the other lists. Some continue on to the length of the longest list, and for the lists that have already ended, pass some placeholder value to the function indicating no value. In languages which support first-class functions and currying, map may be partially applied to lift a function that works on only one value to an element-wise equivalent that works on an entire container; for example, map square is a Haskell function which squares each element of a list.
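A brief Haskell sketch of this partial application, together with the dyadic map mentioned above (square, squareAll, and pairwiseSums are illustrative names; Haskell's zipWith stops at the shorter list):

    square :: Integer -> Integer
    square x = x * x

    -- Partially applying map lifts square to work on a whole list.
    squareAll :: [Integer] -> [Integer]
    squareAll = map square          -- squareAll [1, 2, 3] == [1, 4, 9]

    -- A dyadic map over two lists, ignoring the extra element of the longer one.
    pairwiseSums :: [Integer]
    pairwiseSums = zipWith (+) [1, 2, 3] [10, 20, 30, 40]   -- [11, 22, 33]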
https://en.wikipedia.org/wiki/Map_(higher-order_function)
Infunctional programming,filteris ahigher-order functionthat processes adata structure(usually alist) in some order to produce a new data structure containing exactly those elements of the original data structure for which a givenpredicatereturns theBoolean valuetrue. InHaskell, the code example evaluates to the list 2, 4, …, 10 by applying the predicateevento every element of the list of integers 1, 2, …, 10 in that order and creating a new list of those elements for which the predicate returns the Boolean value true, thereby giving a list containing only the even members of that list. Conversely, the code example evaluates to the list 1, 3, …, 9 by collecting those elements of the list of integers 1, 2, …, 10 for which the predicateevenreturns the Boolean value false (with.being thefunction composition operator). Below, you can see a view of each step of the filter process for a list of integersX = [0, 5, 8, 3, 2, 1]according to the function :f(x)={Trueifx≡0(mod2)Falseifx≡1(mod2).{\displaystyle f(x)={\begin{cases}\mathrm {True} &{\text{ if }}x\equiv 0{\pmod {2}}\\\mathrm {False} &{\text{ if }}x\equiv 1{\pmod {2}}.\end{cases}}} This function express that ifx{\displaystyle x}is even the return value isTrue{\displaystyle \mathrm {True} }, otherwise it'sFalse{\displaystyle \mathrm {False} }. This is the predicate. Filter is a standard function for manyprogramming languages, e.g., Haskell,[1]OCaml,[2]Standard ML,[3]orErlang.[4]Common Lispprovides the functionsremove-ifandremove-if-not.[5]Scheme Requests for Implementation(SRFI) 1 provides an implementation of filter for the languageScheme.[6]C++provides thealgorithmsremove_if(mutating) andremove_copy_if(non-mutating);C++11additionally providescopy_if(non-mutating).[7]Smalltalkprovides theselect:method for collections. Filter can also be realized usinglist comprehensionsin languages that support them. In Haskell,filtercan be implemented like this: Here,[]denotes the empty list,++the list concatenation operation, and[x | p x]denotes a list conditionally holding a value,x, if the conditionp xholds (evaluates toTrue). Filter creates its result without modifying the original list. Many programming languages also provide variants that destructively modify the list argument instead for faster performance. Other variants of filter (e.g., HaskelldropWhile[14]andpartition[15]) are also common. A commonmemory optimizationforpurely functional programming languagesis to have the input list and filtered result share the longest common tail (tail-sharing).
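The implementation described above (with [] for the empty list, ++ for concatenation, and [x | p x] for the conditional singleton) can be written out as the following sketch, named filter' here only to avoid shadowing the standard filter, whose behaviour it matches:

    filter' :: (a -> Bool) -> [a] -> [a]
    filter' p []       = []
    filter' p (x : xs) = [x | p x] ++ filter' p xs

    -- For example, filter' even [1..10]         == [2,4,6,8,10]
    -- and          filter' (not . even) [1..10] == [1,3,5,7,9]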
https://en.wikipedia.org/wiki/Filter_(higher-order_function)
Infunctional programming,monadsare a way to structure computations as a sequence of steps, where each step not only produces a value but also some extra information about the computation, such as a potential failure, non-determinism, or side effect. More formally, a monad is atype constructorM equipped with two operations,return:<A>(a:A)->M(A)which lifts a value into the monadic context, andbind:<A,B>(m_a:M(A),f:A->M(B))->M(B)which chains monadic computations. In simpler terms, monads can be thought of asinterfacesimplemented on type constructors, that allow for functions to abstract over various type constructor variants that implement monad (e.g.Option,List, etc.).[1][2] Both the concept of a monad and the term originally come fromcategory theory, where a monad is defined as anendofunctorwith additional structure.[a][b]Research beginning in the late 1980s and early 1990s established that monads could bring seemingly disparate computer-science problems under a unified, functional model. Category theory also provides a few formal requirements, known as themonad laws, which should be satisfied by any monad and can be used toverifymonadic code.[3][4] Since monads makesemanticsexplicit for a kind of computation, they can also be used to implement convenient language features. Some languages, such asHaskell, even offer pre-built definitions in their corelibrariesfor the general monad structure and common instances.[1][5] "For a monadm, a value of typem arepresents having access to a value of typeawithin the context of the monad." —C. A. McCann[6] More exactly, a monad can be used where unrestricted access to a value is inappropriate for reasons specific to the scenario. In the case of the Maybe monad, it is because the value may not exist. In the case of the IO monad, it is because the value may not be known yet, such as when the monad represents user input that will only be provided after a prompt is displayed. In all cases the scenarios in which access makes sense are captured by the bind operation defined for the monad; for the Maybe monad a value is bound only if it exists, and for the IO monad a value is bound only after the previous operations in the sequence have been performed. A monad can be created by defining atype constructorMand two operations: (An alternative butequivalent constructusing thejoinfunction instead of thebindoperator can be found in the later section§ Derivation from functors.) With these elements, the programmer composes a sequence of function calls (a "pipeline") with severalbindoperators chained together in an expression. Each function call transforms its input plain-type value, and the bind operator handles the returned monadic value, which is fed into the next step in the sequence. Typically, the bind operator>>=may contain code unique to the monad that performs additional computation steps not available in the function received as a parameter. Between each pair of composed function calls, the bind operator can inject into the monadic valuem asome additional information that is not accessible within the functionf, and pass it along down the pipeline. It can also exert finer control of the flow of execution, for example by calling the function only under some conditions, or executing the function calls in a particular order. One example of a monad is theMaybetype. 
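In Haskell, for instance, these two operations appear as the Monad type class; the following is a simplified sketch (in modern GHC the class additionally has Applicative as a superclass, with return defaulting to pure):

    class Monad m where
      return :: a -> m a                  -- lift a value into the monadic context
      (>>=)  :: m a -> (a -> m b) -> m b  -- bind: feed a monadic value into the next step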
Undefined null results are one particular pain point that many procedural languages don't provide specific tools for dealing with, requiring use of thenull object patternor checks to test for invalid values at each operation to handle undefined values. This causes bugs and makes it harder to build robust software that gracefully handles errors. TheMaybetype forces the programmer to deal with these potentially undefined results by explicitly defining the two states of a result:Just ⌑result⌑, orNothing. For example the programmer might be constructing a parser, which is to return an intermediate result, or else signal a condition which the parser has detected, and which the programmer must also handle. With just a little extra functional spice on top, thisMaybetype transforms into a fully-featured monad.[c]: 12.3 pages 148–151 In most languages, the Maybe monad is also known as anoption type, which is just a type that marks whether or not it contains a value. Typically they are expressed as some kind ofenumerated type. In theRustprogramming language it is calledOption<T>and variants of this type can either be a value ofgeneric typeT, or the empty variant:None. Option<T>can also be understood as a "wrapping" type, and this is where its connection to monads comes in. In languages with some form of the Maybe type, there are functions that aid in their use such as composingmonadic functionswith each other and testing if a Maybe contains a value. In the following hard-coded example, a Maybe type is used as a result of functions that may fail, in this case the type returns nothing if there is adivide-by-zero. One such way to test whether or not a Maybe contains a value is to useifstatements. Other languages may havepattern matching Monads can compose functions that return Maybe, putting them together. A concrete example might have one function take in several Maybe parameters, and return a single Maybe whose value is Nothing when any of the parameters are Nothing, as in the following: Instead of repeatingSomeexpressions, we can use something called abindoperator. (also known as "map", "flatmap", or "shove"[8]: 2205s). This operation takes a monad and a function that returns a monad and runs the function on the inner value of the passed monad, returning the monad from the function. In Haskell, there is an operatorbind, or (>>=) that allows for this monadic composition in a more elegant form similar tofunction composition.[d]: 150–151 With>>=available,chainable_divisioncan be expressed much more succinctly with the help ofanonymous functions(i.e. lambdas). Notice in the expression below how the two nested lambdas each operate on the wrapped value in the passedMaybemonad using the bind operator.[e]: 93 What has been shown so far is basically a monad, but to be more concise, the following is a strict list of qualities necessary for a monad as defined by the following section. These are the 3 things necessary to form a monad. Other monads may embody different logical processes, and some may have additional properties, but all of them will have these three similar components.[1][9] The more common definition for a monad in functional programming, used in the above example, is actually based on aKleisli triple⟨T, η, μ⟩ rather than category theory's standard definition. The two constructs turn out to be mathematically equivalent, however, so either definition will yield a valid monad. 
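A sketch of the kind of division pipeline described above (safeDiv and divideTwice are illustrative names, not the article's exact chainable_division code):

    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing             -- signal the divide-by-zero case
    safeDiv x y = Just (x `div` y)

    -- Chain two divisions with the bind operator; any Nothing short-circuits.
    divideTwice :: Int -> Int -> Int -> Maybe Int
    divideTwice x y z = safeDiv x y >>= \r -> safeDiv r z

    -- divideTwice 100 5 2 == Just 10
    -- divideTwice 100 0 2 == Nothing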
Given any well-defined basic typesTandU, a monad consists of three parts: To fully qualify as a monad though, these three parts must also respect a few laws: Algebraically, this means any monad both gives rise to a category (called theKleisli category)andamonoidin the category of functors (from values to computations), with monadic composition as a binary operator in the monoid[8]: 2450sandunitas identity in the monoid. The value of the monad pattern goes beyond merely condensing code and providing a link to mathematical reasoning. Whatever language or defaultprogramming paradigma developer uses, following the monad pattern brings many of the benefits ofpurely functional programming. Byreifyinga specific kind of computation, a monad not onlyencapsulatesthe tedious details of that computational pattern, but it does so in adeclarativeway, improving the code's clarity. As monadic values explicitly represent not only computed values, but computedeffects, a monadic expression can be replaced with its value inreferentially transparent positions, much like pure expressions can be, allowing for many techniques and optimizations based onrewriting.[4] Typically, programmers will usebindto chain monadic functions into a sequence, which has led some to describe monads as "programmable semicolons", a reference to how manyimperativelanguages use semicolons to separatestatements.[1][5]However, monads do not actually order computations; even in languages that use them as central features, simpler function composition can arrange steps within a program. A monad's general utility rather lies in simplifying a program's structure and improvingseparation of concernsthrough abstraction.[4][11] The monad structure can also be seen as a uniquely mathematical andcompile timevariation on thedecorator pattern. Some monads can pass along extra data that is inaccessible to functions, and some even exert finer control over execution, for example only calling a function under certain conditions. Because they let application programmers implementdomain logicwhile offloading boilerplate code onto pre-developed modules, monads can even be considered a tool foraspect-oriented programming.[12] One other noteworthy use for monads is isolating side-effects, likeinput/outputor mutablestate, in otherwise purely functional code. Even purely functional languagescanstill implement these "impure" computations without monads, via an intricate mix of function composition andcontinuation-passing style(CPS) in particular.[2]With monads though, much of this scaffolding can be abstracted away, essentially by taking each recurring pattern in CPS code and bundling it into a distinct monad.[4] If a language does not support monads by default, it is still possible to implement the pattern, often without much difficulty. When translated from category-theory to programming terms, the monad structure is ageneric conceptand can be defined directly in any language that supports an equivalent feature forbounded polymorphism. A concept's ability to remain agnostic about operational details while working on underlying types is powerful, but the unique features and stringent behavior of monads set them apart from other concepts.[13] Discussions of specific monads will typically focus on solving a narrow implementation problem since a given monad represents a specific computational form. In some situations though, an application can even meet its high-level goals by using appropriate monads within its core logic. 
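For reference, the laws mentioned above are commonly stated in terms of return and the bind operator as follows (a sketch in Haskell-style notation):

    -- Left identity:   return a >>= f    behaves the same as   f a
    -- Right identity:  m >>= return      behaves the same as   m
    -- Associativity:   (m >>= f) >>= g   behaves the same as   m >>= (\x -> f x >>= g)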
Here are just a few applications that have monads at the heart of their designs: The term "monad" in programming dates to theAPLandJprogramming languages, which do tend toward being purely functional. However, in those languages, "monad" is only shorthand for a function taking one parameter (a function with two parameters being a "dyad", and so on).[19] The mathematicianRoger Godementwas the first to formulate the concept of a monad (dubbing it a "standard construction") in the late 1950s, though the term "monad" that came to dominate was popularized by category-theoristSaunders Mac Lane.[citation needed]The form defined above usingbind, however, was originally described in 1965 by mathematicianHeinrich Kleisliin order to prove that any monad could be characterized as anadjunctionbetween two (covariant) functors.[20] Starting in the 1980s, a vague notion of the monad pattern began to surface in the computer science community. According to programming language researcherPhilip Wadler, computer scientistJohn C. Reynoldsanticipated several facets of it in the 1970s and early 1980s, when he discussed the value ofcontinuation-passing style, of category theory as a rich source for formal semantics, and of the type distinction between values and computations.[4]The research languageOpal, which was actively designed up until 1990, also effectively based I/O on a monadic type, but the connection was not realized at the time.[21] The computer scientistEugenio Moggiwas the first to explicitly link the monad of category theory to functional programming, in a conference paper in 1989,[22]followed by a more refined journal submission in 1991. In earlier work, several computer scientists had advanced using category theory to provide semantics for thelambda calculus. Moggi's key insight was that a real-world program is not just a function from values to other values, but rather a transformation that formscomputationson those values. When formalized in category-theoretic terms, this leads to the conclusion that monads are the structure to represent these computations.[3] Several others popularized and built on this idea, including Philip Wadler andSimon Peyton Jones, both of whom were involved in the specification of Haskell. In particular, Haskell used a problematic "lazy stream" model up through v1.2 to reconcile I/O withlazy evaluation, until switching over to a more flexible monadic interface.[23]The Haskell community would go on to apply monads to many problems in functional programming, and in the 2010s, researchers working with Haskell eventually recognized that monads areapplicative functors;[24][j]and that both monads andarrowsaremonoids.[26] At first, programming with monads was largely confined to Haskell and its derivatives, but as functional programming has influenced other paradigms, many languages have incorporated a monad pattern (in spirit if not in name). Formulations now exist inScheme,Perl,Python,Racket,Clojure,Scala,F#, and have also been considered for a newMLstandard.[citation needed] One benefit of the monad pattern is bringing mathematical precision on the composition of computations. Not only can the monad laws be used to check an instance's validity, but features from related structures (like functors) can be used throughsubtyping. Returning to theMaybeexample, its components were declared to make up a monad, but no proof was given that it satisfies the monad laws. 
This can be rectified by plugging the specifics ofMaybeinto one side of the general laws, then algebraically building a chain of equalities to reach the other side: Though rarer in computer science, one can use category theory directly, which defines a monad as afunctorwith two additionalnatural transformations.[k]So to begin, a structure requires ahigher-order function(or "functional") namedmapto qualify as a functor: This is not always a major issue, however, especially when a monad is derived from a pre-existing functor, whereupon the monad inheritsmapautomatically. (For historical reasons, thismapis instead calledfmapin Haskell.) A monad's first transformation is actually the sameunitfrom the Kleisli triple, but following the hierarchy of structures closely, it turns outunitcharacterizes anapplicative functor, an intermediate structure between a monad and a basic functor. In the applicative context,unitis sometimes referred to aspurebut is still the same function. What does differ in this construction is the lawunitmust satisfy; asbindis not defined, the constraint is given in terms ofmapinstead: The final leap from applicative functor to monad comes with the second transformation, thejoinfunction (in category theory this is a natural transformation usually calledμ), which "flattens" nested applications of the monad: As the characteristic function,joinmust also satisfy three variations on the monad laws: Regardless of whether a developer defines a direct monad or a Kleisli triple, the underlying structure will be the same, and the forms can be derived from each other easily: TheList monadnaturally demonstrates how deriving a monad from a simpler functor can come in handy. In many languages, a list structure comes pre-defined along with some basic features, so aListtype constructor andappendoperator (represented with++for infix notation) are assumed as already given here. Embedding a plain value in a list is also trivial in most languages: From here, applying a function iteratively with alist comprehensionmay seem like an easy choice forbindand converting lists to a full monad. The difficulty with this approach is thatbindexpects monadic functions, which in this case will output lists themselves; as more functions are applied, layers of nested lists will accumulate, requiring more than a basic comprehension. However, a procedure to apply anysimplefunction over the whole list, in other wordsmap, is straightforward: Now, these two procedures already promoteListto an applicative functor. To fully qualify as a monad, only a correct notion ofjointo flatten repeated structure is needed, but for lists, that just means unwrapping an outer list to append the inner ones that contain values: The resulting monad is not only a list, but one that automatically resizes and condenses itself as functions are applied.bindcan now also be derived with just a formula, then used to feedListvalues through a pipeline of monadic functions: One application for this monadic list is representingnondeterministic computation.Listcan hold results for all execution paths in an algorithm, then condense itself at each step to "forget" which paths led to which results (a sometimes important distinction from deterministic, exhaustive algorithms).[citation needed]Another benefit is that checks can be embedded in the monad; specific paths can be pruned transparently at their first point of failure, with no need to rewrite functions in the pipeline.[28] A second situation whereListshines is composingmultivalued functions. 
For instance, thenthcomplex rootof a number should yieldndistinct complex numbers, but if anothermth root is then taken of those results, the finalm•nvalues should be identical to the output of them•nth root.Listcompletely automates this issue away, condensing the results from each step into a flat, mathematically correct list.[29] Monads present opportunities for interesting techniques beyond just organizing program logic. Monads can lay the groundwork for useful syntactic features while their high-level and mathematical nature enable significant abstraction. Although usingbindopenly often makes sense, many programmers prefer a syntax that mimics imperative statements (calleddo-notationin Haskell,perform-notationinOCaml,computation expressionsinF#,[30]andfor comprehensioninScala). This is onlysyntactic sugarthat disguises a monadic pipeline as acode block; the compiler will then quietly translate these expressions into underlying functional code. Translating theaddfunction from theMaybeinto Haskell can show this feature in action. A non-monadic version ofaddin Haskell looks like this: In monadic Haskell,returnis the standard name forunit, plus lambda expressions must be handled explicitly, but even with these technicalities, theMaybemonad makes for a cleaner definition: With do-notation though, this can be distilled even further into a very intuitive sequence: A second example shows howMaybecan be used in an entirely different language: F#. With computation expressions, a "safe division" function that returnsNonefor an undefined operandordivision by zero can be written as: At build-time, the compiler will internally "de-sugar" this function into a denser chain ofbindcalls: For a last example, even the general monad laws themselves can be expressed in do-notation: Every monad needs a specific implementation that meets the monad laws, but other aspects like the relation to other structures or standard idioms within a language are shared by all monads. As a result, a language or library may provide a generalMonadinterface withfunction prototypes, subtyping relationships, and other general facts. Besides providing a head-start to development and guaranteeing a new monad inherits features from a supertype (such as functors), checking a monad's design against the interface adds another layer of quality control.[citation needed] Monadic code can often be simplified even further through the judicious use of operators. Themapfunctional can be especially helpful since it works on more than just ad-hoc monadic functions; so long as a monadic function should work analogously to a predefined operator,mapcan be used to instantly "lift" the simpler operator into a monadic one.[l]With this technique, the definition ofaddfrom theMaybeexample could be distilled into: The process could be taken even one step further by definingaddnot just forMaybe, but for the wholeMonadinterface. By doing this, any new monad that matches the structure interface and implements its ownmapwill immediately inherit a lifted version ofaddtoo. 
The only change to the function needed is generalizing the type signature: Another monadic operator that is also useful for analysis is monadic composition (represented as infix>=>here), which allows chaining monadic functions in a more mathematical style: With this operator, the monad laws can be written in terms of functions alone, highlighting the correspondence to associativity and existence of an identity: In turn, the above shows the meaning of the "do" block in Haskell: The simplest monad is theIdentity monad, which just annotates plain values and functions to satisfy the monad laws: Identitydoes actually have valid uses though, such as providing abase casefor recursivemonad transformers. It can also be used to perform basic variable assignment within an imperative-style block.[m][citation needed] Any collection with a properappendis already a monoid, but it turns out thatListis not the onlycollectionthat also has a well-definedjoinand qualifies as a monad. One can even mutateListinto these other monadic collections by simply imposing special properties onappend:[n][o] As already mentioned, pure code should not have unmanaged side effects, but that does not preclude a program fromexplicitlydescribing and managing effects. This idea is central to Haskell'sIO monad, where an object of typeIO acan be seen as describing an action to be performed in the world, optionally providing information about the world of typea. An action that provides no information about the world has the typeIO (), "providing" the dummy value(). When a programmer binds anIOvalue to a function, the function computes the next action to be performed based on the information about the world provided by the previous action (input from users, files, etc.).[23]Most significantly, because the value of the IO monad can only be bound to a function that computes another IO monad, the bind function imposes a discipline of a sequence of actions where the result of an action can only be provided to a function that will compute the next action to perform. This means that actions which do not need to be performed never are, and actions that do need to be performed have a well defined sequence. For example, Haskell has several functions for acting on the widerfile system, including one that checks whether a file exists and another that deletes a file. Their two type signatures are: The first is interested in whether a given file really exists, and as a result, outputs aBoolean valuewithin theIOmonad. The second function, on the other hand, is only concerned with acting on the file system so theIOcontainer it outputs is empty. IOis not limited just to file I/O though; it even allows for user I/O, and along with imperative syntax sugar, can mimic a typical"Hello, World!" program: Desugared, this translates into the following monadic pipeline (>>in Haskell is just a variant ofbindfor when only monadic effects matter and the underlying result can be discarded): Another common situation is keeping alog fileor otherwise reporting a program's progress. Sometimes, a programmer may want to log even more specific, technical data for laterprofilingordebugging. TheWriter monadcan handle these tasks by generating auxiliary output that accumulates step-by-step. To show how the monad pattern is not restricted to primarily functional languages, this example implements aWritermonad inJavaScript. First, an array (with nested tails) allows constructing theWritertype as alinked list. 
The underlying output value will live in position 0 of the array, and position 1 will implicitly hold a chain of auxiliary notes: Definingunitis also very simple: Onlyunitis needed to define simple functions that outputWriterobjects with debugging notes: A true monad still requiresbind, but forWriter, this amounts simply to concatenating a function's output to the monad's linked list: The sample functions can now be chained together usingbind, but defining a version of monadic composition (calledpipeloghere) allows applying these functions even more succinctly: The final result is a clean separation of concerns between stepping through computations and logging them to audit later: An environment monad (also called areader monadand afunction monad) allows a computation to depend on values from a shared environment. The monad type constructor maps a typeTto functions of typeE→T, whereEis the type of the shared environment. The monad functions are:return:T→E→T=t↦e↦tbind:(E→T)→(T→E→T′)→E→T′=r↦f↦e↦f(re)e{\displaystyle {\begin{array}{ll}{\text{return}}\colon &T\rightarrow E\rightarrow T=t\mapsto e\mapsto t\\{\text{bind}}\colon &(E\rightarrow T)\rightarrow (T\rightarrow E\rightarrow T')\rightarrow E\rightarrow T'=r\mapsto f\mapsto e\mapsto f\,(r\,e)\,e\end{array}}} The following monadic operations are useful:ask:E→E=idElocal:(E→E)→(E→T)→E→T=f↦c↦e↦c(fe){\displaystyle {\begin{array}{ll}{\text{ask}}\colon &E\rightarrow E={\text{id}}_{E}\\{\text{local}}\colon &(E\rightarrow E)\rightarrow (E\rightarrow T)\rightarrow E\rightarrow T=f\mapsto c\mapsto e\mapsto c\,(f\,e)\end{array}}} Theaskoperation is used to retrieve the current context, whilelocalexecutes a computation in a modified subcontext. As in a state monad, computations in the environment monad may be invoked by simply providing an environment value and applying it to an instance of the monad. Formally, a value in an environment monad is equivalent to a function with an additional, anonymous argument;returnandbindare equivalent to theKandScombinators, respectively, in theSKI combinator calculus. A state monad allows a programmer to attach state information of any type to a calculation. Given any value type, the corresponding type in the state monad is a function which accepts a state, then outputs a new state (of types) along with a return value (of typet). This is similar to an environment monad, except that it also returns a new state, and thus allows modeling amutableenvironment. Note that this monad takes a type parameter, the type of the state information. The monad operations are defined as follows: Useful state operations include: Another operation applies a state monad to a given initial state: do-blocks in a state monad are sequences of operations that can examine and update the state data. Informally, a state monad of state typeSmaps the type of return valuesTinto functions of typeS→T×S{\displaystyle S\rightarrow T\times S}, whereSis the underlying state. Thereturnandbindfunction are: From the category theory point of view, a state monad is derived from the adjunction between the product functor and the exponential functor, which exists in anycartesian closed categoryby definition. Acontinuationmonad[p]with return typeRmaps typeTinto functions of type(T→R)→R{\displaystyle \left(T\rightarrow R\right)\rightarrow R}. It is used to modelcontinuation-passing style. 
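A minimal Scala sketch of such a continuation monad, assuming a value of type T is encoded as a function (T => R) => R; the names Cont, unit and flatMap are illustrative rather than taken from any particular library.

    // A continuation-style computation: given a continuation T => R, it produces an R.
    final case class Cont[R, T](run: (T => R) => R) {
      def flatMap[U](f: T => Cont[R, U]): Cont[R, U] =
        Cont(k => run(t => f(t).run(k)))
      def map[U](f: T => U): Cont[R, U] =
        flatMap(t => Cont.unit[R, U](f(t)))
    }

    object Cont {
      // unit (return) passes the value straight to the continuation.
      def unit[R, T](t: T): Cont[R, T] = Cont(k => k(t))

      def main(args: Array[String]): Unit = {
        val program: Cont[Int, Int] =
          unit[Int, Int](1).flatMap(x => unit[Int, Int](x + 41))
        println(program.run(identity))  // 42
      }
    }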
The return and bind functions are as follows: Thecall-with-current-continuationfunction is defined as follows: The following code is pseudocode.Suppose we have two functionsfooandbar, with types That is, both functions take in an integer and return another integer. Then we can apply the functions in succession like so: Where the result is the result offooapplied to the result ofbarapplied tox. But suppose we are debugging our program, and we would like to add logging messages tofooandbar. So we change the types as so: So that both functions return a tuple, with the result of the application as the integer, and a logging message with information about the applied function and all the previously applied functions as the string. Unfortunately, this means we can no longercomposefooandbar, as their input typeintis not compatible with their output typeint * string. And although we can again gain composability by modifying the types of each function to beint * string -> int * string, this would require us to add boilerplate code to each function to extract the integer from the tuple, which would get tedious as the number of such functions increases. Instead, let us define a helper function to abstract away this boilerplate for us: bindtakes in an integer and string tuple, then takes in a function (likefoo) that maps from an integer to an integer and string tuple. Its output is an integer and string tuple, which is the result of applying the input function to the integer within the input integer and string tuple. In this way, we only need to write boilerplate code to extract the integer from the tuple once, inbind. Now we have regained some composability. For example: Where(x,s)is an integer and string tuple.[q] To make the benefits even clearer, let us define an infix operator as an alias forbind: So thatt >>= fis the same asbind t f. Then the above example becomes: Finally, we define a new function to avoid writing(x, "")every time we wish to create an empty logging message, where""is the empty string. Which wrapsxin the tuple described above. The result is a pipeline for logging messages: That allows us to more easily log the effects ofbarandfooonx. int * stringdenotes a pseudo-codedmonadic value.[q]bindandreturnare analogous to the corresponding functions of the same name. In fact,int * string,bind, andreturnform a monad. Anadditive monadis a monad endowed with an additional closed, associative, binary operatormplusand anidentity elementundermplus, calledmzero. TheMaybemonad can be considered additive, withNothingasmzeroand a variation on theORoperator asmplus.Listis also an additive monad, with the empty list[]acting asmzeroand the concatenation operator++asmplus. Intuitively,mzerorepresents a monadic wrapper with no value from an underlying type, but is also considered a "zero" (rather than a "one") since it acts as anabsorberforbind, returningmzerowhenever bound to a monadic function. This property is two-sided, andbindwill also returnmzerowhen any value is bound to a monadiczero function. In category-theoretic terms, an additive monad qualifies once as a monoid over monadic functions withbind(as all monads do), and again over monadic values viamplus.[32][r] Sometimes, the general outline of a monad may be useful, but no simple pattern recommends one monad or another. This is where afree monadcomes in; as afree objectin the category of monads, it can represent monadic structure without any specific constraints beyond the monad laws themselves. 
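The logging example sketched in pseudocode above translates almost directly into Scala; the sketch below assumes a pair (Int, String) as the monadic value, with foo and bar as hypothetical logging functions.

    object WriterExample {
      type Logged = (Int, String)

      // return: wrap a plain value with an empty log.
      def ret(x: Int): Logged = (x, "")

      // bind: feed the wrapped value to f and concatenate the log messages.
      def bind(t: Logged, f: Int => Logged): Logged = {
        val (x, log)    = t
        val (y, newLog) = f(x)
        (y, log + newLog)
      }

      // Hypothetical logging versions of foo and bar.
      def foo(x: Int): Logged = (x + 1, "foo was called. ")
      def bar(x: Int): Logged = (x * 2, "bar was called. ")

      def main(args: Array[String]): Unit = {
        // Equivalent to ret(5) >>= bar >>= foo in the infix notation above.
        println(bind(bind(ret(5), bar), foo))  // (11, "bar was called. foo was called. ")
      }
    }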
Just as afree monoidconcatenates elements without evaluation, a free monad allows chaining computations with markers to satisfy the type system, but otherwise imposes no deeper semantics itself. For example, by working entirely through theJustandNothingmarkers, theMaybemonad is in fact a free monad. TheListmonad, on the other hand, is not a free monad since it brings extra, specific facts about lists (likeappend) into its definition. One last example is an abstract free monad: Free monads, however, arenotrestricted to a linked-list like in this example, and can be built around other structures liketrees. Using free monads intentionally may seem impractical at first, but their formal nature is particularly well-suited for syntactic problems. A free monad can be used to track syntax and type while leaving semantics for later, and has found use in parsers andinterpretersas a result.[33]Others have applied them to more dynamic, operational problems too, such as providingiterateeswithin a language.[34] Besides generating monads with extra properties, for any given monad, one can also define acomonad. Conceptually, if monads represent computations built up from underlying values, then comonads can be seen as reductions back down to values. Monadic code, in a sense, cannot be fully "unpacked"; once a value is wrapped within a monad, it remains quarantined there along with any side-effects (a good thing in purely functional programming). Sometimes though, a problem is more about consuming contextual data, which comonads can model explicitly. Technically, a comonad is thecategorical dualof a monad, which loosely means that it will have the same required components, only with the direction of the type signaturesreversed. Starting from thebind-centric monad definition, a comonad consists of: extendandcounitmust also satisfy duals of the monad laws: Analogous to monads, comonads can also be derived from functors using a dual ofjoin: While operations likeextendare reversed, however, a comonad doesnotreverse functions it acts on, and consequently, comonads are still functors withmap, notcofunctors. The alternate definition withduplicate,counit, andmapmust also respect its own comonad laws: And as with monads, the two forms can be converted automatically: A simple example is theProduct comonad, which outputs values based on an input value and shared environment data. In fact, theProductcomonad is just the dual of theWritermonad and effectively the same as theReadermonad (both discussed below).ProductandReaderdiffer only in which function signatures they accept, and how they complement those functions by wrapping or unwrapping values. A less trivial example is theStream comonad, which can be used to representdata streamsand attach filters to the incoming signals withextend. In fact, while not as popular as monads, researchers have found comonads particularly useful forstream processingand modelingdataflow programming.[35][36] Due to their strict definitions, however, one cannot simply move objects back and forth between monads and comonads. As an even higher abstraction,arrowscan subsume both structures, but finding more granular ways to combine monadic and comonadic code is an active area of research.[37][38] Alternatives for modeling computations: Related design concepts: Generalizations of monads: HaskellWiki references: Tutorials: Interesting cases:
https://en.wikipedia.org/wiki/Monad_(functional_programming)
Incomputer science, alistorsequenceis acollectionof items that are finite in number and in a particularorder. Aninstanceof a list is a computer representation of themathematicalconcept of atupleor finitesequence. A list may contain the same value more than once, and each occurrence is considered a distinct item. The termlistis also used for several concretedata structuresthat can be used to implementabstractlists, especiallylinked listsandarrays. In some contexts, such as inLispprogramming, the termlistmay refer specifically to a linked list rather than an array. Inclass-based programming, lists are usually provided asinstancesof subclasses of a generic "list" class, and traversed via separateiterators. Manyprogramming languagesprovide support forlist data types, and have special syntax and semantics for lists and list operations. A list can often be constructed by writing the items in sequence, separated bycommas,semicolons, and/orspaces, within a pair of delimiters such asparentheses'()',brackets'[]',braces'{}', orangle brackets'<>'. Some languages may allow list types to beindexedorslicedlikearray types, in which case the data type is more accurately described as an array. Intype theoryandfunctional programming, abstract lists are usually definedinductivelyby two operations:nilthat yields the empty list, andcons, which adds an item at the beginning of a list.[1] Astreamis the potentially infinite analog of a list.[2]: §3.5 Implementation of the list data structure may provide some of the followingoperations: Lists are typically implemented either aslinked lists(either singly or doubly linked) or asarrays, usually variable length ordynamic arrays. The standard way of implementing lists, originating with the programming languageLisp, is to have each element of the list contain both its value and a pointer indicating the location of the next element in the list. This results in either alinked listor atree, depending on whether the list has nested sublists. Some older Lisp implementations (such as the Lisp implementation of theSymbolics3600) also supported "compressed lists" (usingCDR coding) which had a special internal representation (invisible to the user). Lists can be manipulated usingiterationorrecursion. The former is often preferred inimperative programming languages, while the latter is the norm infunctional languages. Lists can be implemented asself-balancing binary search treesholding index-value pairs, providing equal-time access to any element (e.g. all residing in the fringe, and internal nodes storing the right-most child's index, used to guide the search), taking the time logarithmic in the list's size, but as long as it doesn't change much will provide the illusion ofrandom accessand enable swap, prefix and append operations in logarithmic time as well.[3] Somelanguagesdo not offer a listdata structure, but offer the use ofassociative arraysor some kind of table to emulate lists. For example,Luaprovides tables. Although Lua stores lists that have numerical indices as arrays internally, they still appear as dictionaries.[4] InLisp, lists are the fundamental data type and can represent both program code and data. In most dialects, the list of the first three prime numbers could be written as(list 2 3 5). In several dialects of Lisp, includingScheme, a list is a collection of pairs, consisting of a value and a pointer to the next pair (or null value), making a singly linked list.[5] Unlike in anarray, a list can expand and shrink. 
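A Scala sketch of such an inductively defined, immutable list, using the nil and cons constructors described above together with first and rest defined by pattern matching; the type and constructor names are illustrative.

    sealed trait MyList[+A]
    case object MyNil extends MyList[Nothing]
    final case class Cons[+A](head: A, tail: MyList[A]) extends MyList[A]

    object MyList {
      // first and rest are obtained by pattern matching on the constructors.
      def first[A](l: MyList[A]): A = l match {
        case Cons(h, _) => h
        case MyNil      => sys.error("first of the empty list is not defined")
      }

      def rest[A](l: MyList[A]): MyList[A] = l match {
        case Cons(_, t) => t
        case MyNil      => sys.error("rest of the empty list is not defined")
      }

      def main(args: Array[String]): Unit = {
        val primes = Cons(2, Cons(3, Cons(5, MyNil)))  // the Lisp list (list 2 3 5)
        println(first(primes))        // 2
        println(first(rest(primes)))  // 3
      }
    }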
In computing, lists are easier to implement than sets. A finite set in the mathematical sense can be realized as a list with additional restrictions: duplicate elements are disallowed and order is irrelevant. Keeping the list sorted speeds up checking whether a given item is already a member, but maintaining the sorted order makes adding a new entry slower. In efficient implementations, however, sets are implemented using self-balancing binary search trees or hash tables rather than lists. Lists also form the basis for other abstract data types, including the queue, the stack, and their variations.

The abstract list type L with elements of some type E (a monomorphic list) is defined by the constructors nil and cons together with the observers first and rest, subject to the axioms first (cons (e, l)) = e and rest (cons (e, l)) = l for any element e and any list l; it is implicit that a cons cell is distinct from both the element and the list from which it was built. Note that first (nil ()) and rest (nil ()) are not defined. These axioms are equivalent to those of the abstract stack data type.

In type theory, the above definition is more simply regarded as an inductive type defined in terms of constructors: nil and cons. In algebraic terms, this can be represented as the transformation 1 + E × L → L. first and rest are then obtained by pattern matching on the cons constructor and separately handling the nil case.

The list type forms a monad (using E* rather than L to represent monomorphic lists with elements of type E): unit wraps an element in a one-element list, and bind applies a list-producing function to every element and appends the results, where append concatenates two lists (sketched in Scala below). Alternatively, the monad may be defined in terms of the operations return, fmap and join. Note that fmap, join, append and bind are well-defined, since they are applied to progressively deeper arguments at each recursive call.

The list type is an additive monad, with nil as the monadic zero and append as the monadic sum. Lists form a monoid under the append operation. The identity element of the monoid is the empty list, nil. In fact, this is the free monoid over the set of list elements.
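A Scala sketch of the list monad operations just described, using the built-in List type; the names unit, append, bind and join mirror the text, and the definitions are the standard ones rather than a transcription of the article's original formulas.

    object ListMonad {
      def unit[A](a: A): List[A] = List(a)

      // append concatenates two lists.
      def append[A](xs: List[A], ys: List[A]): List[A] = xs match {
        case Nil    => ys
        case h :: t => h :: append(t, ys)
      }

      // bind applies a list-producing function to every element and appends the results.
      def bind[A, B](xs: List[A], f: A => List[B]): List[B] = xs match {
        case Nil    => Nil
        case h :: t => append(f(h), bind(t, f))
      }

      // join flattens one level of nesting; with unit and fmap it gives the alternative definition.
      def join[A](xss: List[List[A]]): List[A] = bind(xss, (xs: List[A]) => xs)

      def main(args: Array[String]): Unit = {
        println(bind(List(1, 2, 3), (n: Int) => List(n, n * 10)))  // List(1, 10, 2, 20, 3, 30)
        println(append(Nil, List(1)))                              // List(1): nil is the identity
      }
    }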
https://en.wikipedia.org/wiki/List_(abstract_data_type)
Scala(/ˈskɑːlɑː/SKAH-lah)[7][8]is astrongstatically typedhigh-levelgeneral-purpose programming languagethat supports bothobject-oriented programmingandfunctional programming. Designed to be concise,[9]many of Scala's design decisions are intended to addresscriticisms of Java.[6] Scala source code can be compiled toJava bytecodeand run on aJava virtual machine(JVM). Scala can also be transpiled toJavaScriptto run in a browser, or compiled directly to a native executable. When running on the JVM, Scala provideslanguage interoperabilitywithJavaso that libraries written in either language may be referenced directly in Scala or Java code.[10]Like Java, Scala isobject-oriented, and uses asyntaxtermedcurly-bracewhich is similar to the languageC. Since Scala 3, there is also an option to use theoff-side rule(indenting) to structureblocks, and its use is advised.Martin Oderskyhas said that this turned out to be the most productive change introduced in Scala 3.[11] Unlike Java, Scala has many features offunctional programminglanguages (likeScheme,Standard ML, andHaskell), includingcurrying,immutability,lazy evaluation, andpattern matching. It also has an advanced type system supportingalgebraic data types,covariance and contravariance,higher-order types(but nothigher-rank types),anonymous types,operator overloading,optional parameters,named parameters,raw strings, and an experimental exception-only version of algebraic effects that can be seen as a more powerful version of Java'schecked exceptions.[12] The name Scala is a portmanteau ofscalableandlanguage, signifying that it is designed to grow with the demands of its users.[13] The design of Scala started in 2001 at theÉcole Polytechnique Fédérale de Lausanne(EPFL) (inLausanne,Switzerland) byMartin Odersky. It followed on from work on Funnel, a programming language combining ideas from functional programming andPetri nets.[14]Odersky formerly worked onGeneric Java, andjavac, Sun's Java compiler.[14] After an internal release in late 2003, Scala was released publicly in early 2004 on theJava platform,[15][6][14][16]A second version (v2.0) followed in March 2006.[6] On 17 January 2011, the Scala team won a five-year research grant of over €2.3 million from theEuropean Research Council.[17]On 12 May 2011, Odersky and collaborators launched Typesafe Inc. (later renamedLightbend Inc.), a company to provide commercial support, training, and services for Scala. Typesafe received a $3 million investment in 2011 fromGreylock Partners.[18][19][20][21] Scala runs on theJava platform(Java virtual machine) and is compatible with existingJavaprograms.[15]AsAndroidapplications are typically written in Java and translated from Java bytecode intoDalvikbytecode (which may be further translated to native machine code during installation) when packaged, Scala's Java compatibility makes it well-suited to Android development, the more so when a functional approach is preferred.[22] The reference Scala software distribution, including compiler and libraries, is released under theApache license.[23] Scala.jsis a Scala compiler that compiles to JavaScript, making it possible to write Scala programs that can run in web browsers orNode.js.[24]The compiler, in development since 2013, was announced as no longer experimental in 2015 (v0.6). 
Version v1.0.0-M1 was released in June 2018 and version 1.1.1 in September 2020.[25] Scala Native is a Scalacompilerthat targets theLLVMcompiler infrastructure to create executable code that uses a lightweight managed runtime, which uses theBoehm garbage collector. The project is led by Denys Shabalin and had its first release, 0.1, on 14 March 2017. Development of Scala Native began in 2015 with a goal of being faster thanjust-in-time compilationfor the JVM by eliminating the initial runtime compilation of code and also providing the ability to call native routines directly.[26][27] A reference Scala compiler targeting the.NET Frameworkand itsCommon Language Runtimewas released in June 2004,[14]but was officially dropped in 2012.[28] TheHello World programwritten in Scala 3 has this form: Unlike thestand-alone Hello World application for Java, there is no class declaration and nothing is declared to be static. When the program is stored in fileHelloWorld.scala, the user compiles it with the command: and runs it with This is analogous to the process for compiling and running Java code. Indeed, Scala's compiling and executing model is identical to that of Java, making it compatible with Java build tools such asApache Ant. A shorter version of the "Hello World" Scala program is: Scala includes an interactive shell and scripting support.[29]Saved in a file namedHelloWorld2.scala, this can be run as a script using the command: Commands can also be entered directly into the Scala interpreter, using the option-e: Expressions can be entered interactively in theREPL: The following example shows the differences between Java and Scala syntax. The function mathFunction takes an integer, squares it, and then adds the cube root of that number to the natural log of that number, returning the result (i.e.,n2/3+ln⁡(n2){\displaystyle n^{2/3}+\ln(n^{2})}): Some syntactic differences in this code are: These syntactic relaxations are designed to allow support fordomain-specific languages. Some other basic syntactic differences: The following example contrasts the definition of classes in Java and Scala. The code above shows some of the conceptual differences between Java and Scala's handling of classes: Scala has the same compiling model asJavaandC#, namely separate compiling anddynamic class loading, so that Scala code can call Java libraries. Scala's operational characteristics are the same as Java's. The Scala compiler generates byte code that is nearly identical to that generated by the Java compiler.[15]In fact, Scala code can bedecompiledto readable Java code, with the exception of certain constructor operations. To theJava virtual machine(JVM), Scala code and Java code are indistinguishable. The only difference is one extra runtime library,scala-library.jar.[30] Scala adds a large number of features compared with Java, and has some fundamental differences in its underlying model of expressions and types, which make the language theoretically cleaner and eliminate severalcorner casesin Java. From the Scala perspective, this is practically important because several added features in Scala are also available in C#. As mentioned above, Scala has a good deal of syntactic flexibility, compared with Java. The following are some examples: By themselves, these may seem like questionable choices, but collectively they serve the purpose of allowingdomain-specific languagesto be defined in Scala without needing to extend the compiler. 
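A small sketch of that flexibility: because operator-like names are ordinary methods and single-argument methods can be written infix, a library class can make client code read like new syntax. The MessageQueue class and its ! method below are hypothetical, not taken from any particular library.

    class MessageQueue(name: String) {
      // An ordinary method whose name happens to be the symbol "!".
      def !(message: String): Unit =
        println(s"$name received: $message")
    }

    object DslDemo {
      def main(args: Array[String]): Unit = {
        val mailbox = new MessageQueue("echo")
        // Infix notation: equivalent to mailbox.!("hello"), but reads like built-in syntax.
        mailbox ! "hello"
      }
    }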
For example,Erlang's special syntax for sending a message to an actor, i.e.actor ! messagecan be (and is) implemented in a Scala library without needing language extensions. Java makes a sharp distinction between primitive types (e.g.intandboolean) and reference types (anyclass). Only reference types are part of the inheritance scheme, deriving fromjava.lang.Object. In Scala, all types inherit from a top-level classAny, whose immediate children areAnyVal(value types, such asIntandBoolean) andAnyRef(reference types, as in Java). This means that the Java distinction between primitive types and boxed types (e.g.intvs.Integer) is not present in Scala;boxingand unboxing is completely transparent to the user. Scala 2.10 allows for new value types to be defined by the user. Instead of the Java "foreach" loops for looping through an iterator, Scala hasfor-expressions, which are similar tolist comprehensionsin languages such asHaskell, or a combination of list comprehensions andgenerator expressionsinPython. For-expressions using theyieldkeyword allow a newcollectionto be generated by iterating over an existing one, returning a new collection of the same type. They are translated by the compiler into a series ofmap,flatMapandfiltercalls. Whereyieldis not used, the code approximates to an imperative-style loop, by translating toforeach. A simple example is: The result of running it is the following vector: (Note that the expression1 to 25is not special syntax. The methodtois rather defined in the standard Scala library as an extension method on integers, using a technique known as implicit conversions[32]that allows new methods to be added to existing types.) A more complex example of iterating over a map is: Expression(mention, times) <- mentionsis an example ofpattern matching(see below). Iterating over a map returns a set of key-valuetuples, and pattern-matching allows the tuples to easily be destructured into separate variables for the key and value. Similarly, the result of the comprehension also returns key-value tuples, which are automatically built back up into a map because the source object (from the variablementions) is a map. Note that ifmentionsinstead held a list, set, array or other collection of tuples, exactly the same code above would yield a new collection of the same type. While supporting all of the object-oriented features available in Java (and in fact, augmenting them in various ways), Scala also provides a large number of capabilities that are normally found only infunctional programminglanguages. Together, these features allow Scala programs to be written in an almost completely functional style and also allow functional and object-oriented styles to be mixed. Examples are: UnlikeCorJava, but similar to languages such asLisp, Scala makes no distinction between statements andexpressions. All statements are in fact expressions that evaluate to some value. Functions that would be declared as returningvoidin C or Java, and statements likewhilethat logically do not return a value, are in Scala considered to return the typeUnit, which is asingleton type, with only one object of that type. Functions and operators that never return at all (e.g. thethrowoperator or a function that always exitsnon-locallyusing an exception) logically have return typeNothing, a special type containing no objects; that is, abottom type, i.e. a subclass of every possible type. 
(This in turn makes typeNothingcompatible with every type, allowingtype inferenceto function correctly.)[33] Similarly, anif-then-else"statement" is actually an expression, which produces a value, i.e. the result of evaluating one of the two branches. This means that such a block of code can be inserted wherever an expression is desired, obviating the need for aternary operatorin Scala: For similar reasons,returnstatements are unnecessary in Scala, and in fact are discouraged. As inLisp, the last expression in a block of code is the value of that block of code, and if the block of code is the body of a function, it will be returned by the function. To make it clear that all functions are expressions, even methods that returnUnitare written with an equals sign or equivalently (with type inference, and omitting the unnecessary newline): Due totype inference, the type of variables, function return values, and many other expressions can typically be omitted, as the compiler can deduce it. Examples areval x = "foo"(for an immutableconstantorimmutable object) orvar x = 1.5(for a variable whose value can later be changed). Type inference in Scala is essentially local, in contrast to the more globalHindley-Milneralgorithm used in Haskell,MLand other more purely functional languages. This is done to facilitate object-oriented programming. The result is that certain types still need to be declared (most notably, function parameters, and the return types ofrecursive functions), e.g. or (with a return type declared for a recursive function) In Scala, functions are objects, and a convenient syntax exists for specifyinganonymous functions. An example is the expressionx => x < 2, which specifies a function with one parameter, that compares its argument to see if it is less than 2. It is equivalent to the Lisp form(lambda (x) (< x 2)). Note that neither the type ofxnor the return type need be explicitly specified, and can generally be inferred bytype inference; but they can be explicitly specified, e.g. as(x: Int) => x < 2or even(x: Int) => (x < 2): Boolean. Anonymous functions behave as trueclosuresin that they automatically capture any variables that are lexically available in the environment of the enclosing function. Those variables will be available even after the enclosing function returns, and unlike in the case of Java'sanonymous inner classesdo not need to be declared as final. (It is even possible to modify such variables if they are mutable, and the modified value will be available the next time the anonymous function is called.) An even shorter form of anonymous function usesplaceholdervariables: For example, the following: can be written more concisely as or even Scala enforces a distinction between immutable and mutable variables. Mutable variables are declared using thevarkeyword and immutable values are declared using thevalkeyword. A variable declared using thevalkeyword cannot be reassigned in the same way that a variable declared using thefinalkeyword can't be reassigned in Java.vals are only shallowly immutable, that is, an object referenced by a val is not guaranteed to itself be immutable. Immutable classes are encouraged by convention however, and the Scala standard library provides a rich set of immutablecollectionclasses. 
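A brief Scala sketch of the points above, illustrating anonymous functions, the placeholder shorthand, and the val/var distinction; the values used are arbitrary.

    object FunctionsAndVals {
      def main(args: Array[String]): Unit = {
        val lessThanTwo: Int => Boolean      = x => x < 2   // explicit anonymous function
        val lessThanTwoShort: Int => Boolean = _ < 2        // same function, placeholder form
        println(lessThanTwo(1) == lessThanTwoShort(1))      // true

        val numbers = List(1, 2, 3, 4)
        println(numbers.map(_ * 2))     // List(2, 4, 6, 8)
        println(numbers.filter(_ < 3))  // List(1, 2)

        var counter = 0   // mutable: may be reassigned
        counter += 1
        val limit = 10    // immutable: reassigning limit would not compile
        println(s"counter=$counter, limit=$limit")
      }
    }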
Scala provides mutable and immutable variants of most collection classes, and the immutable version is always used unless the mutable version is explicitly imported.[34]The immutable variants arepersistent data structuresthat always return an updated copy of an old object instead of updating the old object destructively in place. An example of this isimmutable linked listswhere prepending an element to a list is done by returning a new list node consisting of the element and a reference to the list tail. Appending an element to a list can only be done by prepending all elements in the old list to a new list with only the new element. In the same way, inserting an element in the middle of a list will copy the first half of the list, but keep a reference to the second half of the list. This is called structural sharing. This allows for very easy concurrency — no locks are needed as no shared objects are ever modified.[35] Evaluation is strict ("eager") by default. In other words, Scala evaluates expressions as soon as they are available, rather than as needed. However, it is possible to declare a variable non-strict ("lazy") with thelazykeyword, meaning that the code to produce the variable's value will not be evaluated until the first time the variable is referenced. Non-strict collections of various types also exist (such as the typeStream, a non-strict linked list), and any collection can be made non-strict with theviewmethod. Non-strict collections provide a good semantic fit to things like server-produced data, where the evaluation of the code to generate later elements of a list (that in turn triggers a request to a server, possibly located somewhere else on the web) only happens when the elements are actually needed. Functional programming languages commonly providetail calloptimization to allow for extensive use ofrecursionwithoutstack overflowproblems. Limitations in Java bytecode complicate tail call optimization on the JVM. In general, a function that calls itself with a tail call can be optimized, but mutually recursive functions cannot.Trampolineshave been suggested as a workaround.[36]Trampoline support has been provided by the Scala library with the objectscala.util.control.TailCallssince Scala 2.8.0 (released 14 July 2010). A function may optionally be annotated with@tailrec, in which case it will not compile unless it is tail recursive.[37] An example of this optimization could be implemented using thefactorialdefinition. For instance, the recursive version of the factorial: Could be optimized to the tail recursive version like this: However, this could compromise composability with other functions because of the new argument on its definition, so it is common to useclosuresto preserve its original signature: This ensures tail call optimization and thus prevents a stack overflow error. Scala has built-in support forpattern matching, which can be thought of as a more sophisticated, extensible version of aswitch statement, where arbitrary data types can be matched (rather than just simple types like integers, Booleans and strings), including arbitrary nesting. A special type of class known as acase classis provided, which includes automatic support for pattern matching and can be used to model thealgebraic data typesused in many functional programming languages. 
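A sketch of the factorial optimisation described above, assuming BigInt values; the article's exact listings are not reproduced, but the shape, with an inner accumulator function that preserves the original one-argument signature, is the standard one.

    import scala.annotation.tailrec

    object Factorial {
      // Plain recursive version: not tail recursive, so large inputs can overflow the stack.
      def naive(n: BigInt): BigInt =
        if (n == 0) 1 else n * naive(n - 1)

      // Tail-recursive version: the inner helper carries an accumulator, while the outer
      // function keeps the original signature, preserving composability.
      def factorial(n: BigInt): BigInt = {
        @tailrec
        def loop(k: BigInt, acc: BigInt): BigInt =
          if (k == 0) acc else loop(k - 1, k * acc)
        loop(n, 1)
      }

      def main(args: Array[String]): Unit = {
        println(factorial(5))                      // 120
        println(factorial(10000).toString.length)  // runs without a stack overflow
      }
    }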
(From the perspective of Scala, a case class is simply a normal class for which the compiler automatically adds certain behaviors that could also be provided manually, e.g., definitions of methods providing for deep comparisons and hashing, and destructuring a case class on its constructor parameters during pattern matching.) An example of a definition of thequicksortalgorithm using pattern matching is this: The idea here is that we partition a list into the elements less than a pivot and the elements not less, recursively sort each part, and paste the results together with the pivot in between. This uses the samedivide-and-conquerstrategy ofmergesortand other fast sorting algorithms. Thematchoperator is used to do pattern matching on the object stored inlist. Eachcaseexpression is tried in turn to see if it will match, and the first match determines the result. In this case,Nilonly matches the literal objectNil, butpivot :: tailmatches a non-empty list, and simultaneouslydestructuresthe list according to the pattern given. In this case, the associated code will have access to a local variable namedpivotholding the head of the list, and another variabletailholding the tail of the list. Note that these variables are read-only, and are semantically very similar to variablebindingsestablished using theletoperator in Lisp and Scheme. Pattern matching also happens in local variable declarations. In this case, the return value of the call totail.partitionis atuple— in this case, two lists. (Tuples differ from other types of containers, e.g. lists, in that they are always of fixed size and the elements can be of differing types — although here they are both the same.) Pattern matching is the easiest way of fetching the two parts of the tuple. The form_ < pivotis a declaration of ananonymous functionwith a placeholder variable; see the section above on anonymous functions. The list operators::(which adds an element onto the beginning of a list, similar toconsin Lisp and Scheme) and:::(which appends two lists together, similar toappendin Lisp and Scheme) both appear. Despite appearances, there is nothing "built-in" about either of these operators. As specified above, any string of symbols can serve as function name, and a method applied to an object can be written "infix"-style without the period or parentheses. The line above as written: could also be written thus: in more standard method-call notation. (Methods that end with a colon are right-associative and bind to the object to the right.) In the pattern-matching example above, the body of thematchoperator is apartial function, which consists of a series ofcaseexpressions, with the first matching expression prevailing, similar to the body of aswitch statement. Partial functions are also used in the exception-handling portion of atrystatement: Finally, a partial function can be used alone, and the result of calling it is equivalent to doing amatchover it. For example, the prior code forquicksortcan be written thus: Here a read-onlyvariableis declared whose type is a function from lists of integers to lists of integers, and bind it to a partial function. (Note that the single parameter of the partial function is never explicitly declared or named.) However, we can still call this variable exactly as if it were a normal function: Scala is a pureobject-oriented languagein the sense that every value is anobject.Data typesand behaviors of objects are described byclassesandtraits. 
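The quicksort definition discussed above can be reconstructed from the surrounding prose (the Nil and pivot :: tail patterns, tail.partition(_ < pivot), and the :: and ::: operators); a hedged sketch:

    object QuickSort {
      def qsort(list: List[Int]): List[Int] = list match {
        case Nil => Nil
        case pivot :: tail =>
          val (smaller, rest) = tail.partition(_ < pivot)
          qsort(smaller) ::: pivot :: qsort(rest)
      }

      def main(args: Array[String]): Unit =
        println(qsort(List(3, 1, 4, 1, 5, 9, 2, 6)))  // List(1, 1, 2, 3, 4, 5, 6, 9)
    }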
Class abstractions are extended bysubclassingand by a flexiblemixin-based composition mechanism to avoid the problems ofmultiple inheritance. Traits are Scala's replacement for Java'sinterfaces. Interfaces in Java versions under 8 are highly restricted, able only to contain abstract function declarations. This has led to criticism that providing convenience methods in interfaces is awkward (the same methods must be reimplemented in every implementation), and extending a published interface in a backwards-compatible way is impossible. Traits are similar tomixinclasses in that they have nearly all the power of a regular abstract class, lacking only class parameters (Scala's equivalent to Java's constructor parameters), since traits are always mixed in with a class. Thesuperoperator behaves specially in traits, allowing traits to be chained using composition in addition to inheritance. The following example is a simple window system: A variable may be declared thus: The result of callingmywin.draw()is: In other words, the call todrawfirst executed the code inTitleDecoration(the last trait mixed in), then (through thesuper()calls) threaded back through the other mixed-in traits and eventually to the code inWindow,even though none of the traits inherited from one another. This is similar to thedecorator pattern, but is more concise and less error-prone, as it doesn't require explicitly encapsulating the parent window, explicitly forwarding functions whose implementation isn't changed, or relying on run-time initialization of entity relationships. In other languages, a similar effect could be achieved at compile-time with a long linear chain ofimplementation inheritance, but with the disadvantage compared to Scala that one linear inheritance chain would have to be declared for each possible combination of the mix-ins. Scala is equipped with an expressive static type system that mostly enforces the safe and coherent use of abstractions. The type system is, however, notsound.[38]In particular, the type system supports: Scala is able toinfer typesby use. This makes most static type declarations optional. Static types need not be explicitly declared unless a compiler error indicates the need. In practice, some static type declarations are included for the sake of code clarity. A common technique in Scala, known as "enrich my library"[39](originally termed "pimp my library" by Martin Odersky in 2006;[32]concerns were raised about this phrasing due to its negative connotations[40]and immaturity[41]), allows new methods to be used as if they were added to existing types. This is similar to the C# concept ofextension methodsbut more powerful, because the technique is not limited to adding methods and can, for instance, be used to implement new interfaces. In Scala, this technique involves declaring animplicit conversionfrom the type "receiving" the method to a new type (typically, a class) that wraps the original type and provides the additional method. If a method cannot be found for a given type, the compiler automatically searches for any applicable implicit conversions to types that provide the method in question. This technique allows new methods to be added to an existing class using an add-on library such that only code thatimportsthe add-on library gets the new functionality, and all other code is unaffected. 
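A sketch of this "enrich my library" technique, written to be consistent with the MyExtensions and IntPredicates names used in the following passage; the article's original listing may have differed in detail.

    object MyExtensions {
      // Wrapper class providing the extra predicates; the implicit modifier makes the
      // conversion Int => IntPredicates available wherever MyExtensions._ is imported.
      implicit class IntPredicates(i: Int) {
        def isEven: Boolean = i % 2 == 0
        def isOdd: Boolean  = !isEven
      }
    }

    object EnrichmentDemo {
      import MyExtensions._   // brings the implicit conversion into scope

      def main(args: Array[String]): Unit = {
        println(4.isEven)  // true: 4 is implicitly converted to IntPredicates
        println(7.isOdd)   // true
      }
    }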
The following example shows the enrichment of typeIntwith methodsisEvenandisOdd: Importing the members ofMyExtensionsbrings the implicit conversion to extension classIntPredicatesinto scope.[42] Scala's standard library includes support forfutures and promises, in addition to the standard Java concurrency APIs. Originally, it also included support for theactor model, which is now available as a separatesource-availableplatformAkka[43]licensed byLightbend Inc.Akka actors may bedistributedor combined withsoftware transactional memory(transactors). Alternativecommunicating sequential processes(CSP) implementations for channel-based message passing are Communicating Scala Objects,[44]or simply viaJCSP. An Actor is like a thread instance with a mailbox. It can be created bysystem.actorOf, overriding thereceivemethod to receive messages and using the!(exclamation point) method to send a message.[45]The following example shows an EchoServer that can receive messages and then print them. Scala also comes with built-in support for data-parallel programming in the form of Parallel Collections[46]integrated into its Standard Library since version 2.9.0. The following example shows how to use Parallel Collections to improve performance.[47] Besides futures and promises, actor support, anddata parallelism, Scala also supports asynchronous programming with software transactional memory, and event streams.[48] The most well-known open-source cluster-computing solution written in Scala isApache Spark. Additionally,Apache Kafka, thepublish–subscribemessage queuepopular with Spark and other stream processing technologies, is written in Scala. There are several ways to test code in Scala. ScalaTest supports multiple testing styles and can integrate with Java-based testing frameworks.[49]ScalaCheck is a library similar to Haskell'sQuickCheck.[50]specs2 is a library for writing executable software specifications.[51]ScalaMock provides support for testing high-order and curried functions.[52]JUnitandTestNGare popular testing frameworks written in Java. Experimental features Scala is often compared withGroovyandClojure, two other programming languages also using the JVM. Substantial differences between these languages exist in the type system, in the extent to which each language supports object-oriented and functional programming, and in the similarity of their syntax to that of Java. Scala isstatically typed, while both Groovy and Clojure aredynamically typed. This makes the type system more complex and difficult to understand but allows almost all[38]type errors to be caught at compile-time and can result in significantly faster execution. By contrast, dynamic typing requires more testing to ensure program correctness, and thus is generally slower, to allow greater programming flexibility and simplicity. Regarding speed differences, current versions of Groovy and Clojure allow optional type annotations to help programs avoid the overhead of dynamic typing in cases where types are practically static. This overhead is further reduced when using recent versions of the JVM, which has been enhanced with aninvoke dynamicinstruction for methods that are defined with dynamically typed arguments. These advances reduce the speed gap between static and dynamic typing, although a statically typed language, like Scala, is still the preferred choice when execution efficiency is very important. Regarding programming paradigms, Scala inherits the object-oriented model of Java and extends it in various ways. 
Groovy, while also strongly object-oriented, is more focused in reducing verbosity. In Clojure, object-oriented programming is deemphasised with functional programming being the main strength of the language. Scala also has many functional programming facilities, including features found in advanced functional languages like Haskell, and tries to be agnostic between the two paradigms, letting the developer choose between the two paradigms or, more frequently, some combination thereof. Regarding syntax similarity with Java, Scala inherits much of Java's syntax, as is the case with Groovy. Clojure on the other hand follows theLispsyntax, which is different in both appearance and philosophy.[citation needed] Back in 2013, when Scala was in version 2.10, theThoughtWorksTechnology Radar, which is an opinion based biannual report of a group of senior technologists,[139]recommended Scala adoption in its languages and frameworks category.[140] In July 2014, this assessment was made more specific and now refers to a “Scala, the good parts”, which is described as “To successfully use Scala, you need to research the language and have a very strong opinion on which parts are right for you, creating your own definition of Scala, the good parts.”.[141] In the 2018 edition of theState ofJavasurvey,[142]which collected data from 5160 developers on various Java-related topics, Scala places third in terms of use of alternative languages on theJVM. Relative to the prior year's edition of the survey, Scala's use among alternativeJVM languagesfell from 28.4% to 21.5%, overtaken byKotlin, which rose from 11.4% in 2017 to 28.8% in 2018. The Popularity of Programming Language Index,[143]which tracks searches for language tutorials, ranked Scala 15th in April 2018 with a small downward trend, and 17th in Jan 2021. This makes Scala the 3rd most popular JVM-based language afterJavaandKotlin, ranked 12th. TheRedMonk Programming LanguageRankings, which establishes rankings based on the number ofGitHubprojects and questions asked onStack Overflow, in January 2021 ranked Scala 14th.[144]Here, Scala was placed inside a second-tier group of languages–ahead ofGo,PowerShell, and Haskell, and behindSwift,Objective-C,Typescript, andR. TheTIOBE index[145]of programming language popularity employs internet search engine rankings and similar publication counting to determine language popularity. In September 2021, it showed Scala in 31st place. In this ranking, Scala was ahead of Haskell (38th) andErlang, but belowGo(14th),Swift(15th), andPerl(19th). As of 2022[update], JVM-based languages such as Clojure, Groovy, and Scala are highly ranked, but still significantly less popular than the originalJavalanguage, which is usually ranked in the top three places.[144][145] In November 2011,Yammermoved away from Scala for reasons that included the learning curve for new team members and incompatibility from one version of the Scala compiler to the next.[175]In March 2015, former VP of the Platform Engineering group atTwitterRaffi Krikorian, stated that he would not have chosen Scala in 2011 due to itslearning curve.[176]The same month,LinkedInSVPKevin Scottstated their decision to "minimize [their] dependence on Scala".[177]
https://en.wikipedia.org/wiki/Scala_(programming_language)
In programming languages (especially functional programming languages) and type theory, an option type or maybe type is a polymorphic type that represents encapsulation of an optional value; e.g., it is used as the return type of functions which may or may not return a meaningful value when they are applied. It consists of a constructor which either is empty (often named None or Nothing), or which encapsulates the original data type A (often written Just A or Some A).

A distinct but related concept outside of functional programming, popular in object-oriented programming, is called nullable types (often expressed as A?). The core difference between option types and nullable types is that option types support nesting (e.g. Maybe (Maybe String) ≠ Maybe String), while nullable types do not (e.g. String?? = String?).

In type theory, it may be written as A? = A + 1. This expresses the fact that for a given set of values in A, an option type adds exactly one additional value (the empty value) to the set of valid values for A. This is reflected in programming by the fact that in languages having tagged unions, option types can be expressed as the tagged union of the encapsulated type plus a unit type.[1]

In the Curry–Howard correspondence, option types are related to the annihilation law for ∨: x ∨ 1 = 1.[how?]

An option type can also be seen as a collection containing either one or zero elements.[original research?]

The option type is also a monad, where return wraps a value in Some and bind applies the bound function to the contents of a Some while propagating None unchanged.[2] The monadic nature of the option type is useful for efficiently tracking failure and errors.[3]

Ada does not implement option types directly, but it provides discriminated types which can be used to parameterize a record. To implement an option type, a Boolean type is used as the discriminant; a generic can then create an option type from any non-limited constrained type.

In Agda, the option type is named Maybe with variants nothing and just a.

ATS also provides an option type.

Since C++17, the option type is defined in the standard library as template<typename T> std::optional<T>.

In Coq, the option type is defined as Inductive option (A : Type) : Type := | Some : A -> option A | None : option A.

In Elm, the option type is defined as type Maybe a = Just a | Nothing.[4]

In F#, the option type is defined as type 'a option = None | Some of 'a.[5]

In Haskell, the option type is defined as data Maybe a = Nothing | Just a.[6]

In Idris, the option type is defined as data Maybe a = Nothing | Just a.

In Java, the option type is provided by the standard library's java.util.Optional<T> class.

In OCaml, the option type is defined as type 'a option = None | Some of 'a.[7]

In Rust, the option type is defined as enum Option<T> { None, Some(T) }.[8]

In Scala, the option type is defined as sealed abstract class Option[+A], a type extended by final case class Some[+A](value: A) and case object None.

In Standard ML, the option type is defined as datatype 'a option = NONE | SOME of 'a.

In Swift, the option type is defined as enum Optional<T> { case none, some(T) } but is generally written as T?.[9]

In Zig, adding ? before a type name, as in ?i32, makes it an optional type. The payload n can be captured in an if or while statement, such as if (opt) |n| { ... } else { ... }, and the else clause is evaluated if the value is null.
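A short Scala sketch tying these points together: a function that may not return a meaningful value, monadic chaining with flatMap, and the nesting that distinguishes option types from nullable types. The function names are illustrative.

    object OptionDemo {
      // A function that may or may not return a meaningful value.
      def safeDivide(a: Int, b: Int): Option[Int] =
        if (b == 0) None else Some(a / b)

      def main(args: Array[String]): Unit = {
        println(safeDivide(10, 2))  // Some(5)
        println(safeDivide(10, 0))  // None

        // Monadic composition: flatMap sequences computations that may fail.
        val chained: Option[Int] = safeDivide(100, 5).flatMap(x => safeDivide(x, 2))
        println(chained.getOrElse(-1))  // 10

        // Unlike nullable types, option types nest: Option[Option[Int]] is not Option[Int].
        val nested: Option[Option[Int]] = Some(None)
        println(nested)  // Some(None), which is distinct from None
      }
    }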
https://en.wikipedia.org/wiki/Option_type
Inmathematics,Church encodingis a means of representing data and operators in thelambda calculus. TheChurch numeralsare a representation of the natural numbers using lambda notation. The method is named forAlonzo Church, who first encoded data in the lambda calculus this way. Terms that are usually considered primitive in other notations (such as integers, Booleans, pairs, lists, and tagged unions) are mapped tohigher-order functionsunder Church encoding. TheChurch–Turing thesisasserts that any computable operator (and its operands) can be represented under Church encoding.[dubious–discuss]In theuntyped lambda calculusthe only primitive data type is the function. A straightforward implementation of Church encoding slows some access operations fromO(1){\displaystyle O(1)}toO(n){\displaystyle O(n)}, wheren{\displaystyle n}is the size of thedata structure, making Church encoding impractical.[1]Research has shown that this can be addressed by targeted optimizations, but mostfunctional programminglanguages instead expand their intermediate representations to containalgebraic data types.[2]Nonetheless Church encoding is often used in theoretical arguments, as it is a natural representation for partial evaluation and theorem proving.[1]Operations can be typed usinghigher-ranked types,[3]and primitive recursion is easily accessible.[1]The assumption that functions are the only primitive data types streamlines many proofs. Church encoding is complete but only representationally. Additional functions are needed to translate the representation into common data types, for display to people. It is not possible in general to decide if two functions areextensionallyequal due to theundecidability of equivalencefromChurch's theorem. The translation may apply the function in some way to retrieve the value it represents, or look up its value as a literal lambda term. Lambda calculus is usually interpreted as usingintensional equality. There arepotential problemswith the interpretation of results because of the difference between the intensional and extensional definition of equality. Church numerals are the representations ofnatural numbersunder Church encoding. Thehigher-order functionthat represents natural numbernis a function that maps any functionf{\displaystyle f}to itsn-foldcomposition. In simpler terms, the "value" of the numeral is equivalent to the number of times the function encapsulates its argument. All Church numerals are functions that take two parameters. Church numerals0,1,2, ..., are defined as follows in thelambda calculus. Starting with0not applying the function at all, proceed with1applying the function once,2applying the function twice,3applying the function three times, etc.: The Church numeral3represents the action of applying any given function three times to a value. The supplied function is first applied to a supplied parameter and then successively to its own result. The end result is not the numeral 3 (unless the supplied parameter happens to be 0 and the function is asuccessor function). The function itself, and not its end result, is the Church numeral3. The Church numeral3means simply to do anything three times. It is anostensivedemonstration of what is meant by "three times". Arithmeticoperations on numbers may be represented by functions on Church numerals. These functions may be defined inlambda calculus, or implemented in most functional programming languages (seeconverting lambda expressions to functions). 
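As one such implementation, a Scala sketch of Church numerals and a few of the arithmetic operations that follow; the encoding is the standard one (a numeral maps a function to its n-fold composition), and the helper toInt, used only for display, is an assumption of this sketch.

    object ChurchNumerals {
      // A Church numeral maps a function f to its n-fold composition.
      type Church[A] = (A => A) => A => A

      def zero[A]: Church[A]               = f => x => x
      def succ[A](n: Church[A]): Church[A] = f => x => f(n(f)(x))

      def plus[A](m: Church[A], n: Church[A]): Church[A] = f => x => m(f)(n(f)(x))
      def mult[A](m: Church[A], n: Church[A]): Church[A] = f => m(n(f))

      // Translate back to a machine integer for display only.
      def toInt(n: Church[Int]): Int = n(_ + 1)(0)

      def main(args: Array[String]): Unit = {
        val one   = succ(zero[Int])
        val two   = succ(one)
        val three = succ(two)
        println(toInt(three))             // 3
        println(toInt(plus(two, three)))  // 5
        println(toInt(mult(two, three)))  // 6
      }
    }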
The addition functionplus⁡(m,n)=m+n{\displaystyle \operatorname {plus} (m,n)=m+n}uses the identityf∘(m+n)(x)=f∘m(f∘n(x)){\displaystyle f^{\circ (m+n)}(x)=f^{\circ m}(f^{\circ n}(x))}. The successor functionsucc⁡(n)=n+1{\displaystyle \operatorname {succ} (n)=n+1}isβ-equivalentto(plus⁡1){\displaystyle (\operatorname {plus} \ 1)}. The multiplication functionmult⁡(m,n)=m∗n{\displaystyle \operatorname {mult} (m,n)=m*n}uses the identityf∘(m∗n)(x)=(f∘n)∘m(x){\displaystyle f^{\circ (m*n)}(x)=(f^{\circ n})^{\circ m}(x)}. The exponentiation functionexp⁡(b,n)=bn{\displaystyle \operatorname {exp} (b,n)=b^{n}}is given by the definition of Church numerals,nhx=hnx{\displaystyle n\ h\ x=h^{n}\ x}. In the definition substituteh→b,x→f{\displaystyle h\to b,x\to f}to getnbf=bnf{\displaystyle n\ b\ f=b^{n}\ f}and, which gives the lambda expression, Thepred⁡(n){\displaystyle \operatorname {pred} (n)}function is more difficult to understand. A Church numeral applies a functionntimes. The predecessor function must return a function that applies its parametern - 1times. This is achieved by building a container aroundfandx, which is initialized in a way that omits the application of the function the first time. Seepredecessorfor a more detailed explanation. The subtraction function can be written based on the predecessor function. λn.λf.λx.n(λg.λh.h(gf))(λu.x)(λu.u){\displaystyle \lambda n.\lambda f.\lambda x.n\ (\lambda g.\lambda h.h\ (g\ f))\ (\lambda u.x)\ (\lambda u.u)} Notes: The predecessor function used in the Church encoding is, We need a way of applying the function 1 fewer times to build the predecessor. A numeralnapplies the functionfntimes tox. The predecessor function must use the numeralnto apply the functionn-1times. Before implementing the predecessor function, here is a scheme that wraps the value in a container function. We will define new functions to use in place offandx, calledincandinit. The container function is calledvalue. The left-hand side of the table shows a numeralnapplied toincandinit. The general recurrence rule is, If there is also a function to retrieve the value from the container (calledextract), Thenextractmay be used to define thesamenumfunction as, Thesamenumfunction is not intrinsically useful. However, asincdelegates calling offto its container argument, we can arrange that on the first applicationincreceives a special container that ignores its argument allowing to skip the first application off. Call this new initial containerconst. The right-hand side of the above table shows the expansions ofnincconst. Then by replacinginitwithconstin the expression for thesamefunction we get the predecessor function, As explained below the functionsinc,init,const,valueandextractmay be defined as, Which gives the lambda expression forpredas, The value container applies a function to its value. It is defined by, so, Theincfunction should take a value containingv, and return a new value containingf v. Letting g be the value container, then, so, The value may be extracted by applying the identity function, UsingI, so, To implementpredtheinitfunction is replaced with theconstthat does not applyf. We needconstto satisfy, Which is satisfied if, Or as a lambda expression, A much simpler presentation is enabled using combinators notation. Now it is easy enough to see that i.e. by eta-contraction, and then by induction, etc. Pred may also be defined using pairs: This is a simpler definition but leads to a more complex expression for pred. 
The expansion forpred⁡three{\displaystyle \operatorname {pred} \operatorname {three} }: Divisionof natural numbers may be implemented by,[4] Calculatingn−m{\displaystyle n-m}takes many beta reductions. Unless doing the reduction by hand, this doesn't matter that much, but it is preferable to not have to do this calculation twice. The simplest predicate for testing numbers isIsZeroso consider the condition. But this condition is equivalent ton≤m{\displaystyle n\leq m}, notn<m{\displaystyle n<m}. If this expression is used then the mathematical definition of division given above is translated into function on Church numerals as, As desired, this definition has a single call tominus⁡nm{\displaystyle \operatorname {minus} \ n\ m}. However the result is that this formula gives the value of(n−1)/m{\displaystyle (n-1)/m}. This problem may be corrected by adding 1 tonbefore callingdivide. The definition ofdivideis then, divide1is a recursive definition. TheY combinatormay be used to implement the recursion. Create a new function calleddivby; to get, Then, where, Gives, Or as text, using \ forλ, For example, 9/3 is represented by Using a lambda calculus calculator, the above expression reduces to 3, using normal order. One simple approach for extending Church Numerals tosigned numbersis to use a Church pair, containing Church numerals representing a positive and a negative value.[5]The integer value is the difference between the two Church numerals. A natural number is converted to a signed number by, Negation is performed by swapping the values. The integer value is more naturally represented if one of the pair is zero. TheOneZerofunction achieves this condition, The recursion may be implemented using the Y combinator, Addition is defined mathematically on the pair by, The last expression is translated into lambda calculus as, Similarly subtraction is defined, giving, Multiplication may be defined by, The last expression is translated into lambda calculus as, A similar definition is given here for division, except in this definition, one value in each pair must be zero (seeOneZeroabove). ThedivZfunction allows us to ignore the value that has a zero component. divZis then used in the following formula, which is the same as for multiplication, but withmultreplaced bydivZ. Rational andcomputable real numbersmay also be encoded in lambda calculus. Rational numbers may be encoded as a pair of signed numbers. Computable real numbers may be encoded by a limiting process that guarantees that the difference from the real value differs by a number which may be made as small as we need.[6][7]The references given describe software that could, in theory, be translated into lambda calculus. Once real numbers are defined, complex numbers are naturally encoded as a pair of real numbers. The data types and functions described above demonstrate that any data type or calculation may be encoded in lambda calculus. This is theChurch–Turing thesis. Most real-world languages have support for machine-native integers; thechurchandunchurchfunctions convert between nonnegative integers and their corresponding Church numerals. The functions are given here inHaskell, where the\corresponds to the λ of Lambda calculus. Implementations in other languages are similar. Church Booleansare the Church encoding of the Boolean valuestrueandfalse.Some programming languages use these as an implementation model for Boolean arithmetic; examples areSmalltalkandPico. Boolean logic may be considered as a choice. 
The Church encoding oftrueandfalseare functions of two parameters: The two definitions are known as Church Booleans: This definition allows predicates (i.e. functions returninglogical values) to directly act as if-clauses. A function returning a Boolean, which is then applied to two parameters, returns either the first or the second parameter: evaluates tothen-clauseifpredicate-xevaluates totrue, and toelse-clauseifpredicate-xevaluates tofalse. Becausetrueandfalsechoose the first or second parameter they may be combined to provide logic operators. Note that there are multiple possible implementations ofnot. Some examples: Apredicateis a function that returns a Boolean value. The most fundamental predicate isIsZero{\displaystyle \operatorname {IsZero} }, which returnstrue{\displaystyle \operatorname {true} }if its argument is the Church numeral0{\displaystyle 0}, andfalse{\displaystyle \operatorname {false} }if its argument is any other Church numeral: The following predicate tests whether the first argument is less-than-or-equal-to the second: Because of the identity, The test for equality may be implemented as, Church pairs are the Church encoding of thepair(two-tuple) type. The pair is represented as a function that takes a function argument. When given its argument it will apply the argument to the two components of the pair. The definition inlambda calculusis, For example, An (immutable)listis constructed from list nodes. The basic operations on the list are; We give four different representations of lists below: A nonempty list can be implemented by a Church pair; However this does not give a representation of the empty list, because there is no "null" pointer. To represent null, the pair may be wrapped in another pair, giving three values: Using this idea the basic list operations can be defined like this:[8] In anilnodesecondis never accessed, provided thatheadandtailare only applied to nonempty lists. Alternatively, define[9] where the last definition is a special case of the general Other operations for one pair as a list node As an alternative to the encoding using Church pairs, a list can be encoded by identifying it with itsright fold function. For example, a list of three elements x, y and z can be encoded by a higher-order function that when applied to a combinator c and a value n returns c x (c y (c z n)). Equivalently, it is an application of the chain of functional compositions of partial applications, (c x ∘ c y ∘ c z) n. This list representation can be given type inSystem F. The evident correspondence to Church numerals is non-coincidental, as that can be seen as a unary encoding, with natural numbers represented by lists of unit (i.e. non-important) values, e.g. [() () ()], with the list's length serving as the representation of the natural number. Right folding over such lists uses functions which necessarily ignore the element's value, and is equivalent to the chained functional composition, i.e. (c () ∘ c () ∘ c ()) n = (f ∘ f ∘ f) n, as is used in Church numerals. An alternative representation is Scott encoding, which uses the idea ofcontinuationsand can lead to simpler code.[10](see alsoMogensen–Scott encoding). In this approach, we use the fact that lists can be observed usingpattern matchingexpression. 
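Before turning to the Scott-encoded example that follows, here is a hedged Python sketch of the Church Booleans, predicates, and pairs just described. It is illustrative only: the predicates follow the identities above (IsZero via a constant-false function, less-than-or-equal via truncated subtraction), and the numeral helpers are repeated so the snippet stands on its own.

```python
# Sketch of Church Booleans, simple predicates, and Church pairs in Python lambdas.

true  = lambda a: lambda b: a          # choose the first argument
false = lambda a: lambda b: b          # choose the second argument

# Boolean operators built from the "choice" behaviour of true/false.
and_ = lambda p: lambda q: p(q)(p)
or_  = lambda p: lambda q: p(p)(q)
not_ = lambda p: lambda a: lambda b: p(b)(a)

# Just enough Church-numeral machinery for the predicates (see the previous sketch).
church = lambda i: (lambda f: lambda x: x) if i == 0 else (lambda f: lambda x: f(church(i - 1)(f)(x)))
pred   = lambda n: lambda f: lambda x: n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)
minus  = lambda m: lambda n: n(pred)(m)

# IsZero: applying n to (lambda _: false) reaches false unless n never applies its argument.
is_zero = lambda n: n(lambda _: false)(true)

# n <= m  iff  n - m == 0 (truncated subtraction); equality from two <= tests.
leq = lambda n: lambda m: is_zero(minus(n)(m))
eq  = lambda n: lambda m: and_(leq(n)(m))(leq(m)(n))

# Church pairs: a pair holds two values and applies its argument to both.
pair   = lambda a: lambda b: lambda f: f(a)(b)
first  = lambda p: p(true)
second = lambda p: p(false)

to_bool = lambda b: b(True)(False)     # convenience: Church Boolean -> Python bool

assert to_bool(is_zero(church(0))) and not to_bool(is_zero(church(2)))
assert to_bool(leq(church(2))(church(3))) and not to_bool(leq(church(4))(church(3)))
assert to_bool(eq(church(3))(church(3)))
assert first(pair("x")("y")) == "x" and second(pair("x")("y")) == "y"
```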
For example, using Scala notation, if list denotes a value of type List with empty list Nil and constructor Cons(h, t), we can inspect the list and compute nilCode in case the list is empty and consCode(h, t) when the list is not empty: The list is given by how it acts upon nilCode and consCode. We therefore define a list as a function that accepts such nilCode and consCode as arguments, so that instead of the above pattern match we may simply write: Let us denote by n the parameter corresponding to nilCode and by c the parameter corresponding to consCode. The empty list is the one that returns the nil argument: The non-empty list with head h and tail t is given by More generally, an algebraic data type with m alternatives becomes a function with m parameters. When the i-th constructor has n_i arguments, the corresponding parameter of the encoding takes n_i arguments as well. Scott encoding can be done in untyped lambda calculus, whereas its use with types requires a type system with recursion and type polymorphism. A list with element type E in this representation that is used to compute values of type C would have the following recursive type definition, where '=>' denotes function type: A list that can be used to compute arbitrary types would have a type that quantifies over C. A list generic in E would also take E as the type argument.
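As a concrete illustration (in Python rather than the Scala notation above, and assuming the nilCode/consCode convention just described), a Scott-encoded list is simply a function that is handed both branches of the pattern match; the helper names below are chosen for clarity and are not part of the article.

```python
# Sketch of Scott-encoded lists in Python.
# A list is a function of two arguments: what to return for nil, and what to do with (head, tail).

nil  = lambda nil_code: lambda cons_code: nil_code
cons = lambda h: lambda t: lambda nil_code: lambda cons_code: cons_code(h)(t)

# "Pattern matching" is just application: lst(value_if_nil)(function_of_head_and_tail).
is_empty = lambda lst: lst(True)(lambda h: lambda t: False)
head     = lambda lst: lst(None)(lambda h: lambda t: h)
tail     = lambda lst: lst(None)(lambda h: lambda t: t)

def to_python(lst):
    """Convert a Scott-encoded list into a Python list (for inspection only)."""
    out = []
    while not is_empty(lst):
        out.append(head(lst))
        lst = tail(lst)
    return out

xs = cons(1)(cons(2)(cons(3)(nil)))
assert to_python(xs) == [1, 2, 3]
assert is_empty(nil) and not is_empty(xs)
```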
https://en.wikipedia.org/wiki/Church_encoding
Incombinatorics, amatroid/ˈmeɪtrɔɪd/is a structure that abstracts and generalizes the notion oflinear independenceinvector spaces. There are many equivalent ways to define a matroidaxiomatically, the most significant being in terms of: independent sets; bases or circuits; rank functions; closure operators; and closed sets orflats. In the language ofpartially ordered sets, a finite simple matroid is equivalent to ageometric lattice. Matroid theory borrows extensively from the terms used in bothlinear algebraandgraph theory, largely because it is the abstraction of various notions of central importance in these fields. Matroids have found applications ingeometry,topology,combinatorial optimization,network theory, andcoding theory.[1][2] There are manyequivalentways to define a (finite) matroid.[a] In terms of independence, a finite matroidM{\displaystyle M}is a pair(E,I){\displaystyle (E,{\mathcal {I}})}, whereE{\displaystyle E}is afinite set(called theground set) andI{\displaystyle {\mathcal {I}}}is afamilyofsubsetsofE{\displaystyle E}(called theindependent sets) with the following properties:[3] The first two properties define a combinatorial structure known as anindependence system(orabstract simplicial complex). Actually, assuming (I2), property (I1) is equivalent to the fact that at least one subset ofE{\displaystyle E}is independent, i.e.,I≠∅{\displaystyle {\mathcal {I}}\neq \emptyset }. A subset of the ground setE{\displaystyle E}that is not independent is calleddependent. A maximal independent set – that is, an independent set that becomes dependent upon adding any element ofE{\displaystyle E}– is called abasisfor the matroid. Acircuitin a matroidM{\displaystyle M}is a minimal dependent subset ofE{\displaystyle E}– that is, a dependent set whose proper subsets are all independent. The term arises because the circuits ofgraphic matroidsare cycles in the corresponding graphs.[3] The dependent sets, the bases, or the circuits of a matroid characterize the matroid completely: a set is independent if and only if it is not dependent, if and only if it is a subset of a basis, and if and only if it does not contain a circuit. The collections of dependent sets, of bases, and of circuits each have simple properties that may be taken as axioms for a matroid. For instance, one may define a matroidM{\displaystyle M}to be a pair(E,B){\displaystyle (E,{\mathcal {B}})}, whereE{\displaystyle E}is a finite set as before andB{\displaystyle {\mathcal {B}}}is a collection of subsets ofE{\displaystyle E}, calledbases, with the following properties:[3] This property (B2) is called thebasis exchange property. It follows from this property that no member ofB{\displaystyle {\mathcal {B}}}can be a proper subset of any other. It is a basic result of matroid theory, directly analogous to a similar theorem ofbases in linear algebra, that any two bases of a matroidM{\displaystyle M}have the same number of elements. This number is called therankofM{\displaystyle M}.IfM{\displaystyle M}is a matroid onE{\displaystyle E}, andA{\displaystyle A}is a subset ofE{\displaystyle E}, then a matroid onA{\displaystyle A}can be defined by considering a subset ofA{\displaystyle A}to be independent if and only if it is independent inM{\displaystyle M}. This allows us to talk aboutsubmatroidsand about the rank of any subset ofE{\displaystyle E}. 
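The independence axioms can be verified mechanically for small examples. The following Python sketch is an illustrative brute-force check, not part of the article: it tests the hereditary and augmentation properties for an explicitly listed family of independent sets, and only scales to tiny ground sets.

```python
# Brute-force sketch: check the independent-set axioms for a small candidate matroid.
from itertools import combinations

def is_matroid(ground, independents):
    I = {frozenset(s) for s in independents}
    # the empty set must be independent
    if frozenset() not in I:
        return False
    # hereditary property: every subset of an independent set is independent
    for A in I:
        for k in range(len(A)):
            if any(frozenset(B) not in I for B in combinations(A, k)):
                return False
    # augmentation property: if |A| < |B| with A, B independent,
    # some element of B \ A can be added to A keeping it independent
    for A in I:
        for B in I:
            if len(A) < len(B) and not any(A | {x} in I for x in B - A):
                return False
    return True

# Example: the uniform matroid U_{2,4} -- all subsets of size at most 2 are independent.
E = {1, 2, 3, 4}
U24 = [S for k in range(3) for S in combinations(E, k)]
print(is_matroid(E, U24))                           # True

# A non-example: an independence system that is not even hereditary.
print(is_matroid({1, 2, 3}, [(), (1,), (2, 3)]))    # False
```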
The rank of a subsetA{\displaystyle A}is given by therank functionr(A){\displaystyle r(A)}of the matroid, which has the following properties:[3] These properties can be used as one of the alternative definitions of a finite matroid: If(E,r){\displaystyle (E,r)}satisfies these properties, then the independent sets of a matroid overE{\displaystyle E}can be defined as those subsetsA{\displaystyle A}ofE{\displaystyle E}withr(A)=|A|{\displaystyle r(A)=|A|}. In the language ofpartially ordered sets, such a matroid structure is equivalent to thegeometric latticewhose elements are the subsetsA⊂M{\displaystyle A\subset M}, partially ordered by inclusion. The difference|A|−r(A){\displaystyle |A|-r(A)}is called thenullityof the subsetA{\displaystyle A}. It is the minimum number of elements that must be removed fromA{\displaystyle A}to obtain an independent set. The nullity ofE{\displaystyle E}inM{\displaystyle M}is called the nullity ofM{\displaystyle M}. The differencer(E)−r(A){\displaystyle r(E)-r(A)}is sometimes called thecorankof the subsetA{\displaystyle A}. LetM{\displaystyle M}be a matroid on a finite setE{\displaystyle E}, with rank functionr{\displaystyle r}as above. Theclosureorspancl⁡(A){\displaystyle \operatorname {cl} (A)}of a subsetA{\displaystyle A}ofE{\displaystyle E}is the set This defines aclosure operatorcl:P(E)↦P(E){\displaystyle \operatorname {cl} :{\mathcal {P}}(E)\mapsto {\mathcal {P}}(E)}whereP{\displaystyle {\mathcal {P}}}denotes thepower set, with the following properties: The first three of these properties are the defining properties of a closure operator. The fourth is sometimes called theMac Lane–Steinitzexchange property. These properties may be taken as another definition of matroid: every functioncl:P(E)→P(E){\displaystyle \operatorname {cl} :{\mathcal {P}}(E)\to {\mathcal {P}}(E)}that obeys these properties determines a matroid.[3] A set whose closure equals itself is said to beclosed, or aflatorsubspaceof the matroid.[4]A set is closed if it ismaximalfor its rank, meaning that the addition of any other element to the set would increase the rank. The closed sets of a matroid are characterized by a covering partition property: The classL(M){\displaystyle {\mathcal {L}}(M)}of all flats,partially orderedby set inclusion, forms amatroid lattice. Conversely, every matroid latticeL{\displaystyle L}forms a matroid over its setE{\displaystyle E}ofatomsunder the following closure operator: for a setS{\displaystyle S}of atoms with join⋁S{\displaystyle \bigvee S}, The flats of this matroid correspond one-for-one with the elements of the lattice; the flat corresponding to lattice elementy{\displaystyle y}is the set Thus, the lattice of flats of this matroid is naturally isomorphic toL{\displaystyle L}. In a matroid of rankr{\displaystyle r}, a flat of rankr−1{\displaystyle r-1}is called ahyperplane. (Hyperplanes are also calledco-atomsorcopoints.) These are the maximal proper flats; that is, the only superset of a hyperplane that is also a flat is the setE{\displaystyle E}of all the elements of the matroid. 
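A brute-force Python sketch (illustrative only, exponential in the size of the ground set) of the rank function and the closure operator for a small matroid given by its independent sets:

```python
# rank(A) is the size of the largest independent subset of A;
# cl(A) adds every element that does not raise the rank of A.
from itertools import combinations

def make_rank(independents):
    I = {frozenset(s) for s in independents}
    def rank(A):
        A = frozenset(A)
        return max(len(S) for S in I if S <= A)
    return rank

def closure(rank, ground, A):
    A = set(A)
    return {x for x in ground if rank(A | {x}) == rank(A)} | A

# Uniform matroid U_{2,4}: rank of any set is min(|A|, 2); singletons are flats,
# while the closure of any 2-element set is the whole ground set.
E = {1, 2, 3, 4}
I = [S for k in range(3) for S in combinations(E, k)]
r = make_rank(I)

print(r({1}), r({1, 2}), r(E))           # 1 2 2
print(closure(r, E, {1}))                # {1}
print(closure(r, E, {1, 2}))             # {1, 2, 3, 4} (printed order may vary)
print(len(E) - r(E), r(E) - r({1}))      # nullity of E (2) and corank of {1} (1)
```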
An equivalent definition is that a coatom is a subset ofEthat does not spanM, but such that adding any other element to it does make a spanning set.[5] The familyH{\displaystyle {\mathcal {H}}}of hyperplanes of a matroid has the following properties, which may be taken as yet another axiomatization of matroids:[5] Minty(1966) defined agraphoidas a triple(L,C,D){\displaystyle (L,C,D)}in whichC{\displaystyle C}andD{\displaystyle D}are classes of nonempty subsets ofL{\displaystyle L}such that He proved that there is a matroid for whichC{\displaystyle C}is the class of circuits andD{\displaystyle D}is the class of cocircuits. Conversely, ifC{\displaystyle C}andD{\displaystyle D}are the circuit and cocircuit classes of a matroidM{\displaystyle M}with ground setE{\displaystyle E}, then(E,C,D){\displaystyle (E,C,D)}is a graphoid. Thus, graphoids give aself-dual cryptomorphic axiomatizationof matroids. LetE{\displaystyle E}be a finite set. The set of all subsets ofE{\displaystyle E}defines the independent sets of a matroid. It is called thefree matroidoverE{\displaystyle E}. LetE{\displaystyle E}be a finite set andk{\displaystyle k}anatural number. One may define a matroid onE{\displaystyle E}by taking everyk{\displaystyle k}elementsubset ofE{\displaystyle E}to be a basis. This is known as theuniform matroidof rankk{\displaystyle k}. A uniform matroid with rankk{\displaystyle k}and withn{\displaystyle n}elements is denotedUk,n{\displaystyle U_{k,n}}. All uniform matroids of rank at least 2 are simple (see§ Additional terms). The uniform matroid of rank 2 onn{\displaystyle n}points is called then{\displaystyle n}point line. A matroid is uniform if and only if it has no circuits of size less than one plus the rank of the matroid. The direct sums of uniform matroids are calledpartition matroids. In the uniform matroidU0,n{\displaystyle U_{0,n}}, every element is a loop (an element that does not belong to any independent set), and in the uniform matroidUn,n{\displaystyle U_{n,n}}, every element is a coloop (an element that belongs to all bases). The direct sum of matroids of these two types is a partition matroid in which every element is a loop or a coloop; it is called adiscrete matroid. An equivalent definition of a discrete matroid is a matroid in which every proper, non-empty subset of the ground setE{\displaystyle E}is a separator. Matroid theory developed mainly out of a deep examination of the properties of independence and dimension in vector spaces. There are two ways to present the matroids defined in this way: The validity of the independent set axioms for this matroid follows from theSteinitz exchange lemma. An important example of a matroid defined in this way is the Fano matroid, a rank three matroid derived from theFano plane, afinite geometrywith seven points (the seven elements of the matroid) and seven lines (the proper nontrivial flats of the matroid). It is a linear matroid whose elements may be described as the seven nonzero points in a three dimensional vector space over thefinite fieldGF(2). However, it is not possible to provide a similar representation for the Fano matroid using thereal numbersin place of GF(2). AmatrixA{\displaystyle A}with entries in afieldgives rise to a matroidM{\displaystyle M}on its set of columns. The dependent sets of columns in the matroid are those that are linearly dependent as vectors. For instance, the Fano matroid can be represented in this way as a 3 × 7(0,1) matrix. 
Column matroids are just vector matroids under another name, but there are often reasons to favor the matrix representation.[b] A matroid that is equivalent to a vector matroid, although it may be presented differently, is calledrepresentableorlinear. IfM{\displaystyle M}is equivalent to a vector matroid over afieldF{\displaystyle F}, then we sayM{\displaystyle M}isrepresentable overF{\displaystyle F};in particular,M{\displaystyle M}isreal representableif it is representable over the real numbers. For instance, although a graphic matroid (see below) is presented in terms of a graph, it is also representable by vectors over any field. A basic problem in matroid theory is to characterize the matroids that may be represented over a given fieldF{\displaystyle F};Rota's conjecturedescribes a possible characterization for everyfinite field. The main results so far are characterizations ofbinary matroids(those representable over GF(2)) due toTutte(1950s), of ternary matroids (representable over the 3 element field) due to Reid and Bixby, and separately toSeymour(1970s), and of quaternary matroids (representable over the 4 element field) due toGeelen, Gerards & Kapoor (2000). A proof of Rota's conjecture was announced, but not published, in 2014 by Geelen, Gerards, and Whittle.[6] Aregular matroidis a matroid that is representable over all possible fields. TheVámos matroidis the simplest example of a matroid that is not representable over any field. A second original source for the theory of matroids isgraph theory. Every finite graph (ormultigraph)G{\displaystyle G}gives rise to a matroidM(G){\displaystyle M(G)}as follows: take asE{\displaystyle E}the set of all edges inG{\displaystyle G}and consider a set of edges independent if and only if it is aforest; that is, if it does not contain asimple cycle. ThenM(G){\displaystyle M(G)}is called acycle matroid. Matroids derived in this way aregraphic matroids. Not every matroid is graphic, but all matroids on three elements are graphic.[7]Every graphic matroid is regular. Other matroids on graphs were discovered subsequently: A third original source of matroid theory isfield theory. Anextensionof a field gives rise to a matroid: A matroid that is equivalent to a matroid of this kind is called analgebraic matroid.[13]The problem of characterizing algebraic matroids is extremely difficult; little is known about it. TheVámos matroidprovides an example of a matroid that is not algebraic. There are some standard ways to make new matroids out of old ones. IfM{\displaystyle M}is a finite matroid, we can define theorthogonalordual matroidM∗{\displaystyle M^{*}}by taking the same underlying set and calling a set abasisinM∗{\displaystyle M^{*}}if and only if its complement is a basis inM{\displaystyle M}. It is not difficult to verify thatM∗{\displaystyle M^{*}}is a matroid and that the dual ofM∗{\displaystyle M^{*}}isM{\displaystyle M}.[14] The dual can be described equally well in terms of other ways to define a matroid. For instance: According to a matroid version ofKuratowski's theorem, the dual of a graphic matroidM{\displaystyle M}is a graphic matroid if and only ifM{\displaystyle M}is the matroid of aplanar graph. In this case, the dual ofM{\displaystyle M}is the matroid of thedual graphofG{\displaystyle G}.[15]The dual of a vector matroid representable over a particular fieldF{\displaystyle F}is also representable overF{\displaystyle F}. The dual of a transversal matroid is a strict gammoid and vice versa. 
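For the cycle matroid M(G) described above, testing independence of an edge set is just a forest check. The following Python sketch (illustrative, not from the article) implements it with a small union–find structure:

```python
# Graphic (cycle) matroid: a set of edges is independent iff it forms a forest.
def is_forest(edges):
    parent = {}

    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:            # endpoints already connected -> this edge closes a cycle
            return False
        parent[ru] = rv
    return True

# K3 (a triangle): any two edges are independent; all three form a circuit.
triangle = [("a", "b"), ("b", "c"), ("a", "c")]
print(is_forest(triangle[:2]))   # True  -- independent in M(K3)
print(is_forest(triangle))       # False -- the triangle is a circuit
```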
IfMis a matroid with element setE, andSis a subset ofE, therestrictionofMtoS, writtenM|S, is the matroid on the setSwhose independent sets are the independent sets ofMthat are contained inS. Its circuits are the circuits ofMthat are contained inSand its rank function is that ofMrestricted to subsets ofS. In linear algebra, this corresponds to restricting to the subspace generated by the vectors inS. Equivalently ifT=M−Sthis may be termed thedeletionofT, writtenM\TorM−T. The submatroids ofMare precisely the results of a sequence of deletions: the order is irrelevant.[16][17] The dual operation of restriction is contraction.[18]IfTis a subset ofE, thecontractionofMbyT, writtenM/T, is the matroid on the underlying setE − Twhose rank function isr′(A)=r(A∪T)−r(T){\displaystyle r'(A)=r(A\cup T)-r(T)}.[19]In linear algebra, this corresponds to looking at the quotient space by the linear space generated by the vectors inT, together with the images of the vectors inE - T. A matroidNthat is obtained fromMby a sequence of restriction and contraction operations is called aminorofM.[17][20]We sayMcontainsNas a minor. Many important families of matroids may be characterized by theminor-minimalmatroids that do not belong to the family; these are calledforbiddenorexcluded minors.[21] LetMbe a matroid with an underlying set of elementsE, and letNbe another matroid on an underlying setF. Thedirect sumof matroidsMandNis the matroid whose underlying set is thedisjoint unionofEandF, and whose independent sets are the disjoint unions of an independent set ofMwith an independent set ofN. TheunionofMandNis the matroid whose underlying set is the union (not the disjoint union) ofEandF, and whose independent sets are those subsets that are the union of an independent set inMand one inN. Usually the term "union" is applied whenE=F, but that assumption is not essential. IfEandFare disjoint, the union is the direct sum. LetMbe a matroid with an underlying set of elementsE. Several important combinatorial optimization problems can be solved efficiently on every matroid. In particular: Two standalone systems for calculations with matroids are Kingan'sOidand Hlineny'sMacek. Both of them are open-sourced packages. "Oid" is an interactive, extensible software system for experimenting with matroids. "Macek" is a specialized software system with tools and routines for reasonably efficient combinatorial computations with representable matroids. Both open source mathematics software systemsSAGEandMacaulay2contain matroid packages.Maplehas a package for dealing with matroids since the version 2024.[30] There are two especially significant polynomials associated to a finite matroidMon the ground setE. Each is amatroid invariant, which means that isomorphic matroids have the same polynomial. Thecharacteristic polynomialofM– sometimes called thechromatic polynomial,[31]although it does not count colorings – is defined to be or equivalently (as long as the empty set is closed inM) as where μ denotes theMöbius functionof thegeometric latticeof the matroid and the sum is taken over all the flats A of the matroid.[32] Thebeta invariantof a matroid, introduced byCrapo(1967), may be expressed in terms of the characteristic polynomialp{\displaystyle p}as an evaluation of the derivative[33] or directly as[34] The beta invariant is non-negative, and is zero if and only ifM{\displaystyle M}is disconnected, or empty, or a loop. Otherwise it depends only on the lattice of flats ofM{\displaystyle M}. 
IfM{\displaystyle M}has no loops and coloops thenβ(M)=β(M∗){\displaystyle \beta (M)=\beta (M^{*})}.[34] TheWhitney numbers of the first kindofM{\displaystyle M}are the coefficients of the powers ofλ{\displaystyle \lambda }in the characteristic polynomial. Specifically, thei{\displaystyle i}th Whitney numberwi(M){\displaystyle w_{i}(M)}is the coefficient ofλr(M)−i{\displaystyle \lambda ^{r(M)-i}}and is the sum of Möbius function values: summed over flats of the right rank. These numbers alternate in sign, so that(−1)iwi(M)>0{\displaystyle (-1)^{i}w_{i}(M)>0}for0≤i≤r(M){\displaystyle 0\leq i\leq r(M)}. TheWhitney numbers of the second kindofM{\displaystyle M}are the numbers of flats of each rank. That is,Wi(M){\displaystyle W_{i}(M)}is the number of ranki{\displaystyle i}flats. The Whitney numbers of both kinds generalize theStirling numbersof the first and second kind, which are the Whitney numbers of the cycle matroid of thecomplete graph, and equivalently of thepartition lattice. They were named afterHassler Whitney, the (co)founder of matroid theory, byGian-Carlo Rota. The name has been extended to the similar numbers for finite rankedpartially ordered sets. TheTutte polynomialof a matroid,TM(x,y){\displaystyle T_{M}(x,y)}, generalizes the characteristic polynomial to two variables. This gives it more combinatorial interpretations, and also gives it the duality property which implies a number of dualities between properties ofM{\displaystyle M}and properties ofM∗{\displaystyle M^{*}}. One definition of the Tutte polynomial is This expresses the Tutte polynomial as an evaluation of theco-rank-nullityorrank generating polynomial,[35] From this definition it is easy to see that the characteristic polynomial is, up to a simple factor, an evaluation ofTM{\displaystyle T_{M}}, specifically, Another definition is in terms of internal and external activities and a sum over bases, reflecting the fact thatT(1,1){\displaystyle T(1,1)}is the number of bases.[36]This, which sums over fewer subsets but has more complicated terms, was Tutte's original definition. There is a further definition in terms of recursion by deletion and contraction.[37]The deletion-contraction identity is whene{\displaystyle e}is neither a loop nor a coloop. An invariant of matroids (i.e., a function that takes the same value on isomorphic matroids) satisfying this recursion and the multiplicative condition is said to be aTutte-Grothendieck invariant.[35]The Tutte polynomial is the most general such invariant; that is, the Tutte polynomial is a Tutte-Grothendieck invariant and every such invariant is an evaluation of the Tutte polynomial.[31] TheTutte polynomialTG{\displaystyle T_{G}}of a graph is the Tutte polynomialTM(G){\displaystyle T_{M(G)}}of its cycle matroid. The theory of infinite matroids is much more complicated than that of finite matroids and forms a subject of its own. For a long time, one of the difficulties has been that there were many reasonable and useful definitions, none of which appeared to capture all the important aspects of finite matroid theory. For instance, it seemed to be hard to have bases, circuits, and duality together in one notion of infinite matroids. The simplest definition of an infinite matroid is to requirefinite rank; that is, the rank ofEis finite. This theory is similar to that of finite matroids except for the failure of duality due to the fact that the dual of an infinite matroid of finite rank does not have finite rank. 
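The corank–nullity form above makes the Tutte polynomial easy to evaluate by brute force for toy matroids, given a rank oracle. A hedged Python sketch (exponential in |E|, illustrative only):

```python
# Evaluate T(x, y) = sum over A subset of E of (x-1)^(r(E)-r(A)) * (y-1)^(|A|-r(A)).
from itertools import chain, combinations

def tutte(ground, rank, x, y):
    E = tuple(ground)
    rE = rank(E)
    total = 0
    for A in chain.from_iterable(combinations(E, k) for k in range(len(E) + 1)):
        rA = rank(A)
        total += (x - 1) ** (rE - rA) * (y - 1) ** (len(A) - rA)
    return total

# Rank oracle for U_{2,3} (equivalently, the cycle matroid of a triangle): r(A) = min(|A|, 2).
u23_rank = lambda A: min(len(A), 2)

# T(1, 1) counts the bases: U_{2,3} has C(3, 2) = 3 of them.
print(tutte({1, 2, 3}, u23_rank, 1, 1))   # 3
# T(2, 2) = 2^|E| counts all subsets.
print(tutte({1, 2, 3}, u23_rank, 2, 2))   # 8
```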
Finite-rank matroids include any subsets of finite-dimensional vector spaces and offield extensionsof finitetranscendence degree. The next simplest infinite generalization is finitary matroids, also known aspregeometries. A matroid with possibly infinite ground set isfinitaryif it has the property that Equivalently, every dependent set contains a finite dependent set. Examples are linear dependence of arbitrary subsets of infinite-dimensionalvector spaces(but not infinite dependencies as inHilbertandBanach spaces), and algebraic dependence in arbitrary subsets of field extensions of possibly infinite transcendence degree. Again, the class of finitary matroid is not self-dual, because the dual of a finitary matroid is not finitary. Finitary infinite matroids are studied inmodel theory, a branch ofmathematical logicwith strong ties toalgebra. In the late 1960s matroid theorists asked for a more general notion that shares the different aspects of finite matroids and generalizes their duality. Many notions of infinite matroids were defined in response to this challenge, but the question remained open. One of the approaches examined by D.A. Higgs became known asB-matroidsand was studied by Higgs, Oxley, and others in the 1960s and 1970s. According to a recent result byBruhn et al. (2013), it solves the problem: Arriving at the same notion independently, they provided five equivalent systems of axiom—in terms of independence, bases, circuits, closure and rank. The duality of B-matroids generalizes dualities that can be observed in infinite graphs. The independence axioms are as follows: With these axioms, every matroid has a dual. Matroid theory was introduced byWhitney (1935). It was also independently discovered byTakeo Nakasawa, whose work was forgotten for many years (Nishimura & Kuroda (2009)). In his seminal paper, Whitney provided two axioms for independence, and defined any structure adhering to these axioms to be "matroids".[c]His key observation was that these axioms provide an abstraction of "independence" that is common to both graphs and matrices. Because of this, many of the terms used in matroid theory resemble the terms for their analogous concepts inlinear algebraorgraph theory. Almost immediately after Whitney first wrote about matroids, an important article was written byMacLane (1936)on the relation of matroids toprojective geometry. A year later,van der Waerden (1937)noted similarities between algebraic and linear dependence in his classic textbook on Modern Algebra. In the 1940sRichard Radodeveloped further theory under the name "independence systems" with an eye towardstransversal theory, where his name for the subject is still sometimes used. In the 1950sW.T. Tuttebecame the foremost figure in matroid theory, a position he retained for many years. His contributions were plentiful, including and the tools he used to prove many of his results: which are so complicated that later theorists have gone to great trouble to eliminate the need for them in proofs.[d] Crapo (1969)andBrylawski (1972)generalized to matroids Tutte's "dichromate", a graphic polynomial now known as theTutte polynomial(named by Crapo). Their work has recently (especially in the 2000s) been followed by a flood of papers—though not as many as on the Tutte polynomial of a graph. In 1976Dominic Welshpublished the first comprehensive book on matroid theory. Paul Seymour's decomposition theorem for regular matroids (Seymour (1980)) was the most significant and influential work of the late 1970s and the 1980s. 
Another fundamental contribution, by Kahn & Kung (1982), showed why projective geometries and Dowling geometries play such an important role in matroid theory. By the 1980s there were many other important contributors, but one should not omit to mention Geoff Whittle's extension to ternary matroids of Tutte's characterization of binary matroids that are representable over the rationals (Whittle 1995), perhaps the biggest single contribution of the 1990s. In the current period (since around 2000) the Matroid Minors Project of Geelen, Gerards, Whittle, and others[e] has produced substantial advances in the structure theory of matroids. Many others have also contributed to that part of matroid theory, which (in the first and second decades of the 21st century) is flourishing.
https://en.wikipedia.org/wiki/Matroid
In the mathematical discipline ofgraph theory, amatchingorindependent edge setin an undirectedgraphis a set ofedgeswithout commonvertices.[1]In other words, a subset of the edges is a matching if each vertex appears in at most one edge of that matching. Finding a matching in abipartite graphcan be treated as anetwork flowproblem. Given agraphG= (V,E),amatchingMinGis a set of pairwisenon-adjacentedges, none of which areloops; that is, no two edges share common vertices. A vertex ismatched(orsaturated) if it is an endpoint of one of the edges in the matching. Otherwise the vertex isunmatched(orunsaturated). Amaximal matchingis a matchingMof a graphGthat is not a subset of any other matching. A matchingMof a graphGis maximal if every edge inGhas a non-empty intersection with at least one edge inM. The following figure shows examples of maximal matchings (red) in three graphs. Amaximum matching(also known as maximum-cardinality matching[2]) is a matching that contains the largest possible number of edges. There may be many maximum matchings. Thematching numberν(G){\displaystyle \nu (G)}of a graphGis the size of a maximum matching. Every maximum matching is maximal, but not every maximal matching is a maximum matching. The following figure shows examples of maximum matchings in the same three graphs. Aperfect matchingis a matching that matches all vertices of the graph. That is, a matching is perfect if every vertex of the graph isincidentto an edge of the matching. A matching is perfect if|M|=|V|/2{\displaystyle |M|=|V|/2}. Every perfect matching is maximum and hence maximal. In some literature, the termcomplete matchingis used. In the above figure, only part (b) shows a perfect matching. A perfect matching is also a minimum-sizeedge cover. Thus, the size of a maximum matching is no larger than the size of a minimum edge cover:⁠ν(G)≤ρ(G){\displaystyle \nu (G)\leq \rho (G)}⁠. A graph can only contain a perfect matching when the graph has an even number of vertices. Anear-perfect matchingis one in which exactly one vertex is unmatched. Clearly, a graph can only contain a near-perfect matching when the graph has anodd numberof vertices, and near-perfect matchings are maximum matchings. In the above figure, part (c) shows a near-perfect matching. If every vertex is unmatched by some near-perfect matching, then the graph is calledfactor-critical. Given a matchingM, analternating pathis a path that begins with an unmatched vertex[3]and whose edges belong alternately to the matching and not to the matching. Anaugmenting pathis an alternating path that starts from and ends on free (unmatched) vertices.Berge's lemmastates that a matchingMis maximum if and only if there is no augmenting path with respect toM. Aninduced matchingis a matching that is the edge set of aninduced subgraph.[4] In any graph without isolated vertices, the sum of the matching number and theedge covering numberequals the number of vertices.[5]If there is a perfect matching, then both the matching number and the edge cover number are|V| / 2. IfAandBare two maximal matchings, then|A| ≤ 2|B|and|B| ≤ 2|A|. To see this, observe that each edge inB\Acan be adjacent to at most two edges inA\BbecauseAis a matching; moreover each edge inA\Bis adjacent to an edge inB\Aby maximality ofB, hence Further we deduce that In particular, this shows that any maximal matching is a 2-approximation of a maximum matching and also a 2-approximation of a minimum maximal matching. 
This inequality is tight: for example, ifGis a path with 3 edges and 4 vertices, the size of a minimum maximal matching is 1 and the size of a maximum matching is 2. A spectral characterization of the matching number of a graph is given by Hassani Monfared and Mallik as follows: LetG{\displaystyle G}be agraphonn{\displaystyle n}vertices, andλ1>λ2>…>λk>0{\displaystyle \lambda _{1}>\lambda _{2}>\ldots >\lambda _{k}>0}bek{\displaystyle k}distinct nonzeropurely imaginary numberswhere2k≤n{\displaystyle 2k\leq n}. Then thematching numberofG{\displaystyle G}isk{\displaystyle k}if and only if (a) there is a realskew-symmetric matrixA{\displaystyle A}with graphG{\displaystyle G}andeigenvalues±λ1,±λ2,…,±λk{\displaystyle \pm \lambda _{1},\pm \lambda _{2},\ldots ,\pm \lambda _{k}}andn−2k{\displaystyle n-2k}zeros, and (b) all real skew-symmetric matrices with graphG{\displaystyle G}have at most2k{\displaystyle 2k}nonzeroeigenvalues.[6]Note that the (simple) graph of a real symmetric or skew-symmetric matrixA{\displaystyle A}of ordern{\displaystyle n}hasn{\displaystyle n}vertices and edges given by the nonozero off-diagonal entries ofA{\displaystyle A}. Agenerating functionof the number ofk-edge matchings in a graph is called a matching polynomial. LetGbe a graph andmkbe the number ofk-edge matchings. One matching polynomial ofGis Another definition gives the matching polynomial as wherenis the number of vertices in the graph. Each type has its uses; for more information see the article on matching polynomials. A fundamental problem incombinatorial optimizationis finding amaximum matching. This problem has various algorithms for different classes of graphs. In anunweighted bipartite graph, the optimization problem is to find amaximum cardinality matching. The problem is solved by theHopcroft-Karp algorithmin timeO(√VE)time, and there are more efficientrandomized algorithms,approximation algorithms, and algorithms for special classes of graphs such as bipartiteplanar graphs, as described in the main article. In aweightedbipartite graph,the optimization problem is to find a maximum-weight matching; a dual problem is to find a minimum-weight matching. This problem is often calledmaximum weighted bipartite matching, or theassignment problem. TheHungarian algorithmsolves the assignment problem and it was one of the beginnings of combinatorial optimization algorithms. It uses a modifiedshortest pathsearch in the augmenting path algorithm. If theBellman–Ford algorithmis used for this step, the running time of the Hungarian algorithm becomesO(V2E){\displaystyle O(V^{2}E)}, or the edge cost can be shifted with a potential to achieveO(V2log⁡V+VE){\displaystyle O(V^{2}\log {V}+VE)}running time with theDijkstra algorithmandFibonacci heap.[7] In anon-bipartite weighted graph, the problem ofmaximum weight matchingcan be solved in timeO(V2E){\displaystyle O(V^{2}E)}usingEdmonds' blossom algorithm. A maximal matching can be found with a simplegreedy algorithm. A maximum matching is also a maximal matching, and hence it is possible to find alargestmaximal matching in polynomial time. However, no polynomial-time algorithm is known for finding aminimum maximal matching, that is, a maximal matching that contains thesmallestpossible number of edges. A maximal matching withkedges is anedge dominating setwithkedges. Conversely, if we are given a minimum edge dominating set withkedges, we can construct a maximal matching withkedges in polynomial time. 
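These definitions, and the tightness example just given, are easy to check directly. A small illustrative Python sketch (brute force; the function names are chosen here for clarity and are not standard library calls):

```python
# Elementary checks on edge sets: is it a matching, is it maximal, is it perfect?
def is_matching(M):
    seen = set()
    for u, v in M:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def is_maximal(edges, M):
    if not is_matching(M):
        return False
    # maximal: no remaining edge of the graph can be added while keeping a matching
    return all(not is_matching(M | {e}) for e in set(edges) - set(M))

def is_perfect(vertices, M):
    matched = {x for e in M for x in e}
    return is_matching(M) and matched == set(vertices)

# Path with 3 edges and 4 vertices (the example above): {bc} is a maximal matching of
# size 1, while {ab, cd} is a maximum (here also perfect) matching of size 2.
V = {"a", "b", "c", "d"}
E = {("a", "b"), ("b", "c"), ("c", "d")}
print(is_maximal(E, {("b", "c")}))                  # True
print(is_maximal(E, {("a", "b")}))                  # False -- ("c", "d") can still be added
print(is_perfect(V, {("a", "b"), ("c", "d")}))      # True
```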
Therefore, the problem of finding a minimum maximal matching is essentially equal to the problem of finding a minimum edge dominating set.[8]Both of these two optimization problems are known to beNP-hard; the decision versions of these problems are classical examples ofNP-completeproblems.[9]Both problems can beapproximatedwithin factor 2 in polynomial time: simply find an arbitrary maximal matchingM.[10] The number of matchings in a graph is known as theHosoya indexof the graph. It is#P-completeto compute this quantity, even for bipartite graphs.[11]It is also #P-complete to countperfect matchings, even inbipartite graphs, because computing thepermanentof an arbitrary 0–1 matrix (another #P-complete problem) is the same as computing the number of perfect matchings in the bipartite graph having the given matrix as itsbiadjacency matrix. However, there exists a fully polynomial time randomized approximation scheme for counting the number of bipartite matchings.[12]A remarkable theorem ofKasteleynstates that the number of perfect matchings in aplanar graphcan be computed exactly in polynomial time via theFKT algorithm. The number of perfect matchings in acomplete graphKn(withneven) is given by thedouble factorial(n− 1)!!.[13]The numbers of matchings in complete graphs, without constraining the matchings to be perfect, are given by thetelephone numbers.[14] The number of perfect matchings in a graph is also known as thehafnianof itsadjacency matrix. One of the basic problems in matching theory is to find in a given graph all edges that may be extended to a maximum matching in the graph (such edges are calledmaximally matchable edges, orallowededges). Algorithms for this problem include: The problem of developing anonline algorithmfor matching was first considered byRichard M. Karp,Umesh Vazirani, andVijay Vaziraniin 1990.[18] In the online setting, nodes on one side of the bipartite graph arrive one at a time and must either be immediately matched to the other side of the graph or discarded. This is a natural generalization of thesecretary problemand has applications to online ad auctions. The best online algorithm, for the unweighted maximization case with a random arrival model, attains acompetitive ratioof0.696.[19] Kőnig's theoremstates that, in bipartite graphs, the maximum matching is equal in size to the minimumvertex cover. Via this result, the minimum vertex cover,maximum independent set, andmaximum vertex bicliqueproblems may be solved inpolynomial timefor bipartite graphs. Hall's marriage theoremprovides a characterization of bipartite graphs which have a perfect matching and theTutte theoremprovides a characterization for arbitrary graphs.
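In practice, the algorithms discussed in this section are available in standard libraries. The sketch below assumes recent versions of NetworkX and SciPy; the specific function names are the author's best understanding of those libraries' APIs and should be treated as assumptions rather than as part of the article. It shows a greedy maximal matching, Hopcroft–Karp for maximum-cardinality bipartite matching, the blossom-based matcher for general graphs, and a Hungarian-style solver for the assignment problem.

```python
import networkx as nx
import numpy as np
from scipy.optimize import linear_sum_assignment

# Bipartite graph: left nodes 0, 1, 2 and right nodes "a", "b", "c".
G = nx.Graph()
G.add_edges_from([(0, "a"), (0, "b"), (1, "a"), (2, "c")])

greedy  = nx.maximal_matching(G)                                   # some maximal matching
hk      = nx.bipartite.hopcroft_karp_matching(G, top_nodes={0, 1, 2})  # maximum cardinality
blossom = nx.max_weight_matching(G, maxcardinality=True)           # general-graph algorithm

print(len(greedy), len(hk) // 2, len(blossom))   # e.g. 2 3 3 (hk maps both directions)

# Assignment problem (minimum-weight perfect matching in a complete bipartite graph):
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), cost[rows, cols].sum())   # optimal assignment and its cost
```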
https://en.wikipedia.org/wiki/Matching_(graph_theory)
Graph may refer to:
https://en.wikipedia.org/wiki/Graph_(mathematics)
Instatistics,polynomial regressionis a form ofregression analysisin which the relationship between theindependent variablexand thedependent variableyis modeled as apolynomialinx. Polynomial regression fits a nonlinear relationship between the value ofxand the correspondingconditional meanofy, denoted E(y|x). Although polynomial regression fits a nonlinear model to the data, as astatistical estimationproblem it is linear, in the sense that the regression function E(y|x) is linear in the unknownparametersthat are estimated from thedata. Thus, polynomial regression is a special case oflinear regression.[1] The explanatory (independent) variables resulting from the polynomial expansion of the "baseline" variables are known as higher-degree terms. Such variables are also used inclassificationsettings.[2] Polynomial regression models are usually fit using the method ofleast squares. The least-squares method minimizes thevarianceof theunbiasedestimatorsof the coefficients, under the conditions of theGauss–Markov theorem. The least-squares method was published in 1805 byLegendreand in 1809 byGauss. The firstdesignof anexperimentfor polynomial regression appeared in an 1815 paper ofGergonne.[3][4]In the twentieth century, polynomial regression played an important role in the development ofregression analysis, with a greater emphasis on issues ofdesignandinference.[5]More recently, the use of polynomial models has been complemented by other methods, with non-polynomial models having advantages for some classes of problems.[citation needed] The goal of regression analysis is to model the expected value of a dependent variableyin terms of the value of an independent variable (or vector of independent variables)x. In simple linear regression, the model is used, where ε is an unobserved random error with mean zero conditioned on ascalarvariablex. In this model, for each unit increase in the value ofx, the conditional expectation ofyincreases byβ1units. In many settings, such a linear relationship may not hold. For example, if we are modeling the yield of a chemical synthesis in terms of the temperature at which the synthesis takes place, we may find that the yield improves by increasing amounts for each unit increase in temperature. In this case, we might propose a quadratic model of the form In this model, when the temperature is increased fromxtox+ 1 units, the expected yield changes byβ1+β2(2x+1).{\displaystyle \beta _{1}+\beta _{2}(2x+1).}(This can be seen by replacingxin this equation withx+1 and subtracting the equation inxfrom the equation inx+1.) Forinfinitesimalchanges inx, the effect onyis given by thetotal derivativewith respect tox:β1+2β2x.{\displaystyle \beta _{1}+2\beta _{2}x.}The fact that the change in yield depends onxis what makes the relationship betweenxandynonlinear even though the model is linear in the parameters to be estimated. In general, we can model the expected value ofyas annth degree polynomial, yielding the general polynomial regression model Conveniently, these models are all linear from the point of view ofestimation, since the regression function is linear in terms of the unknown parametersβ0,β1, .... Therefore, forleast squaresanalysis, the computational and inferential problems of polynomial regression can be completely addressed using the techniques ofmultiple regression. This is done by treatingx,x2, ... as being distinct independent variables in a multiple regression model. 
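Concretely, "treating x, x2, ... as being distinct independent variables" amounts to building a design matrix whose columns are powers of x and solving an ordinary least-squares problem, as in the illustrative NumPy sketch below (the matrix formulation is spelled out next); the data here are synthetic and the degree is an assumption of the example.

```python
# Degree-m polynomial least-squares fit: columns of the Vandermonde matrix are x^0 ... x^m.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.05, size=x.size)   # noisy quadratic

m = 2                                                   # polynomial degree
X = np.vander(x, N=m + 1, increasing=True)              # columns: x^0, x^1, ..., x^m
beta, *_ = np.linalg.lstsq(X, y, rcond=None)            # least-squares coefficient estimates
print(beta)                                             # approximately [1, 2, -3]

y_hat = X @ beta                                        # fitted values
# Equivalent built-in fit (np.polyfit returns coefficients highest power first, so reverse):
print(np.polyfit(x, y, deg=m)[::-1])
```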
The polynomial regression model can be expressed in matrix form in terms of a design matrixX{\displaystyle \mathbf {X} }, a response vectory→{\displaystyle {\vec {y}}}, a parameter vectorβ→{\displaystyle {\vec {\beta }}}, and a vectorε→{\displaystyle {\vec {\varepsilon }}}of random errors. Thei-th row ofX{\displaystyle \mathbf {X} }andy→{\displaystyle {\vec {y}}}will contain thexandyvalue for thei-th data sample. Then the model can be written as asystem of linear equations: which when using pure matrix notation is written as The vector of estimated polynomial regression coefficients (usingordinary least squaresestimation) is assumingm<nwhich is required for the matrix to be invertible; then sinceX{\displaystyle \mathbf {X} }is aVandermonde matrix, the invertibility condition is guaranteed to hold if all thexi{\displaystyle x_{i}}values are distinct. This is the unique least-squares solution. The above matrix equations explain the behavior of polynomial regression well. However, to physically implement polynomial regression for a set of xy point pairs, more detail is useful. The below matrix equations for polynomial coefficients are expanded from regression theory without derivation and easily implemented.[6][7][8] [∑i=1nxi0∑i=1nxi1∑i=1nxi2⋯∑i=1nxim∑i=1nxi1∑i=1nxi2∑i=1nxi3⋯∑i=1nxim+1∑i=1nxi2∑i=1nxi3∑i=1nxi4⋯∑i=1nxim+2⋮⋮⋮⋱⋮∑i=1nxim∑i=1nxim+1∑i=1nxim+2…∑i=1nxi2m][β0β1β2⋯βm]=[∑i=1nyixi0∑i=1nyixi1∑i=1nyixi2⋯∑i=1nyixim]{\displaystyle {\begin{bmatrix}\sum _{i=1}^{n}x_{i}^{0}&\sum _{i=1}^{n}x_{i}^{1}&\sum _{i=1}^{n}x_{i}^{2}&\cdots &\sum _{i=1}^{n}x_{i}^{m}\\\sum _{i=1}^{n}x_{i}^{1}&\sum _{i=1}^{n}x_{i}^{2}&\sum _{i=1}^{n}x_{i}^{3}&\cdots &\sum _{i=1}^{n}x_{i}^{m+1}\\\sum _{i=1}^{n}x_{i}^{2}&\sum _{i=1}^{n}x_{i}^{3}&\sum _{i=1}^{n}x_{i}^{4}&\cdots &\sum _{i=1}^{n}x_{i}^{m+2}\\\vdots &\vdots &\vdots &\ddots &\vdots \\\sum _{i=1}^{n}x_{i}^{m}&\sum _{i=1}^{n}x_{i}^{m+1}&\sum _{i=1}^{n}x_{i}^{m+2}&\dots &\sum _{i=1}^{n}x_{i}^{2m}\\\end{bmatrix}}{\begin{bmatrix}\beta _{0}\\\beta _{1}\\\beta _{2}\\\cdots \\\beta _{m}\\\end{bmatrix}}={\begin{bmatrix}\sum _{i=1}^{n}y_{i}x_{i}^{0}\\\sum _{i=1}^{n}y_{i}x_{i}^{1}\\\sum _{i=1}^{n}y_{i}x_{i}^{2}\\\cdots \\\sum _{i=1}^{n}y_{i}x_{i}^{m}\\\end{bmatrix}}} After solving the abovesystem of linear equationsforβ0throughβm{\displaystyle \beta _{0}{\text{ through }}\beta _{m}}, the regression polynomial may be constructed as follows: y^=β0x0+β1x1+β2x2+⋯+βmxmWhere:n=number ofxiyivariable pairs in the datam=order of the polynomial to be used for regressionβ(0−m)=polynomial coefficient for each correspondingx(0−m)y^=estimated y variable based on the polynomial regression calculations.{\displaystyle {\begin{aligned}&\qquad {\widehat {y}}=\beta _{0}x^{0}+\beta _{1}x^{1}+\beta _{2}x^{2}+\cdots +\beta _{m}x^{m}\\&\qquad \\&\qquad {\text{Where:}}\\&\qquad n={\text{number of }}x_{i}y_{i}{\text{ variable pairs in the data}}\\&\qquad m={\text{order of the polynomial to be used for regression}}\\&\qquad \beta _{(0-m)}={\text{polynomial coefficient for each corresponding }}x^{(0-m)}\\&\qquad {\widehat {y}}={\text{estimated y variable based on the polynomial regression calculations.}}\end{aligned}}} Although polynomial regression is technically a special case of multiple linear regression, the interpretation of a fitted polynomial regression model requires a somewhat different perspective. It is often difficult to interpret the individual coefficients in a polynomial regression fit, since the underlying monomials can be highly correlated. 
For example,xandx2have correlation around 0.97 when x isuniformly distributedon the interval (0, 1). Although the correlation can be reduced by usingorthogonal polynomials, it is generally more informative to consider the fitted regression function as a whole. Point-wise or simultaneousconfidence bandscan then be used to provide a sense of the uncertainty in the estimate of the regression function. Polynomial regression is one example of regression analysis usingbasis functionsto model a functional relationship between two quantities. More specifically, it replacesx∈Rdx{\displaystyle x\in \mathbb {R} ^{d_{x}}}in linear regression with polynomial basisφ(x)∈Rdφ{\displaystyle \varphi (x)\in \mathbb {R} ^{d_{\varphi }}}, e.g.[1,x]→φ[1,x,x2,…,xd]{\displaystyle [1,x]{\mathbin {\stackrel {\varphi }{\rightarrow }}}[1,x,x^{2},\ldots ,x^{d}]}. A drawback of polynomial bases is that the basis functions are "non-local", meaning that the fitted value ofyat a given valuex=x0depends strongly on data values withxfar fromx0.[9]In modern statistics, polynomial basis-functions are used along with newbasis functions, such assplines,radial basis functions, andwavelets. These families of basis functions offer a more parsimonious fit for many types of data. The goal of polynomial regression is to model a non-linear relationship between the independent and dependent variables (technically, between the independent variable and the conditional mean of the dependent variable). This is similar to the goal ofnonparametric regression, which aims to capture non-linear regression relationships. Therefore, non-parametric regression approaches such assmoothingcan be useful alternatives to polynomial regression. Some of these methods make use of a localized form of classical polynomial regression.[10]An advantage of traditional polynomial regression is that the inferential framework of multiple regression can be used (this also holds when using other families of basis functions such as splines). A final alternative is to usekernelizedmodels such assupport vector regressionwith apolynomial kernel. Ifresidualshaveunequal variance, aweighted least squaresestimator may be used to account for that.[11]
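The correlation figure quoted at the start of this passage is easy to reproduce numerically; as an illustrative check (the exact value for x uniform on (0, 1) works out to about 0.968):

```python
# Sample correlation between x and x^2 for x uniform on (0, 1).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=100_000)
print(np.corrcoef(x, x**2)[0, 1])   # roughly 0.968
```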
https://en.wikipedia.org/wiki/Polynomial_regression
Feature engineeringis a preprocessing step insupervised machine learningandstatistical modeling[1]which transforms raw data into a more effective set of inputs. Each input comprises several attributes, known as features. By providing models with relevant information, feature engineering significantly enhances their predictive accuracy and decision-making capability.[2][3][4] Beyond machine learning, the principles of feature engineering are applied in various scientific fields, including physics. For example, physicists constructdimensionless numberssuch as theReynolds numberinfluid dynamics, theNusselt numberinheat transfer, and theArchimedes numberinsedimentation. They also develop first approximations of solutions, such as analytical solutions for thestrength of materialsin mechanics.[5] One of the applications of feature engineering has been clustering of feature-objects or sample-objects in a dataset. Especially, feature engineering based onmatrix decompositionhas been extensively used for data clustering under non-negativity constraints on the feature coefficients. These includeNon-Negative Matrix Factorization(NMF),[6]Non-Negative Matrix-Tri Factorization(NMTF),[7]Non-Negative Tensor Decomposition/Factorization(NTF/NTD),[8]etc. The non-negativity constraints on coefficients of the feature vectors mined by the above-stated algorithms yields a part-based representation, and different factor matrices exhibit natural clustering properties. Several extensions of the above-stated feature engineering methods have been reported in literature, includingorthogonality-constrained factorizationfor hard clustering, andmanifold learningto overcome inherent issues with these algorithms. Other classes of feature engineering algorithms include leveraging a common hidden structure across multiple inter-related datasets to obtain a consensus (common) clustering scheme. An example isMulti-view Classification based on Consensus Matrix Decomposition(MCMD),[2]which mines a common clustering scheme across multiple datasets. MCMD is designed to output two types of class labels (scale-variant and scale-invariant clustering), and: Coupled matrix and tensor decompositions are popular in multi-view feature engineering.[9] Feature engineering inmachine learningandstatistical modelinginvolves selecting, creating, transforming, and extracting data features. Key components include feature creation from existing data, transforming and imputing missing or invalid features, reducing data dimensionality through methods likePrincipal Components Analysis(PCA),Independent Component Analysis(ICA), andLinear Discriminant Analysis(LDA), and selecting the most relevant features for model training based on importance scores andcorrelation matrices.[10] Features vary in significance.[11]Even relatively insignificant features may contribute to a model.Feature selectioncan reduce the number of features to prevent a model from becoming too specific to the training data set (overfitting).[12] Feature explosion occurs when the number of identified features is too large for effective model estimation or optimization. 
Common causes include: Feature explosion can be limited via techniques such as:regularization,kernel methods, andfeature selection.[13] Automation of feature engineering is a research topic that dates back to the 1990s.[14]Machine learning software that incorporatesautomated feature engineeringhas been commercially available since 2016.[15]Related academic literature can be roughly separated into two types: Multi-relational Decision Tree Learning (MRDTL) extends traditional decision tree methods torelational databases, handling complex data relationships across tables. It innovatively uses selection graphs asdecision nodes, refined systematically until a specific termination criterion is reached.[14] Most MRDTL studies base implementations on relational databases, which results in many redundant operations. These redundancies can be reduced by using techniques such as tuple id propagation.[16][17] There are a number of open-source libraries and tools that automate feature engineering on relational data and time series: [OneBM] helps data scientists reduce data exploration time allowing them to try and error many ideas in short time. On the other hand, it enables non-experts, who are not familiar with data science, to quickly extract value from their data with a little effort, time, and cost.[22] The deep feature synthesis (DFS) algorithm beat 615 of 906 human teams in a competition.[32][33] Thefeature storeis where the features are stored and organized for the explicit purpose of being used to either train models (by data scientists) or make predictions (by applications that have a trained model). It is a central location where you can either create or update groups of features created from multiple different data sources, or create and update new datasets from those feature groups for training models or for use in applications that do not want to compute the features but just retrieve them when it needs them to make predictions.[34] A feature store includes the ability to store code used to generate features, apply the code to raw data, and serve those features to models upon request. Useful capabilities include feature versioning and policies governing the circumstances under which features can be used.[35] Feature stores can be standalone software tools or built into machine learning platforms. Feature engineering can be a time-consuming and error-prone process, as it requires domain expertise and often involves trial and error.[36][37]Deep learning algorithmsmay be used to process a large raw dataset without having to resort to feature engineering.[38]However, deep learning algorithms still require careful preprocessing and cleaning of the input data.[39]In addition, choosing the right architecture, hyperparameters, and optimization algorithm for a deep neural network can be a challenging and iterative process.[40]
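As a small end-to-end illustration of the ideas in this article (assuming scikit-learn is available; the data and parameter choices below are arbitrary stand-ins, not a prescribed workflow), the sketch extracts lower-dimensional features with PCA and derives a part-based representation and cluster labels with NMF:

```python
import numpy as np
from sklearn.decomposition import NMF, PCA

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 20)))        # nonnegative data matrix: 100 samples, 20 features

# PCA: keep the directions explaining most of the variance (dimensionality reduction).
X_pca = PCA(n_components=5).fit_transform(X)

# NMF: X ~ W H with W, H >= 0; rows of H act as parts/basis features, and each sample can be
# assigned to the cluster given by its largest coefficient in W.
nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)
H = nmf.components_
cluster_labels = W.argmax(axis=1)

print(X_pca.shape, W.shape, H.shape, np.bincount(cluster_labels))
```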
https://en.wikipedia.org/wiki/Feature_engineering
Machine learning(ML) is afield of studyinartificial intelligenceconcerned with the development and study ofstatistical algorithmsthat can learn fromdataandgeneraliseto unseen data, and thus performtaskswithout explicitinstructions.[1]Within a subdiscipline in machine learning, advances in the field ofdeep learninghave allowedneural networks, a class of statistical algorithms, to surpass many previous machine learning approaches in performance.[2] ML finds application in many fields, includingnatural language processing,computer vision,speech recognition,email filtering,agriculture, andmedicine.[3][4]The application of ML to business problems is known aspredictive analytics. Statisticsandmathematical optimisation(mathematical programming) methods comprise the foundations of machine learning.Data miningis a related field of study, focusing onexploratory data analysis(EDA) viaunsupervised learning.[6][7] From a theoretical viewpoint,probably approximately correct learningprovides a framework for describing machine learning. The termmachine learningwas coined in 1959 byArthur Samuel, anIBMemployee and pioneer in the field ofcomputer gamingandartificial intelligence.[8][9]The synonymself-teaching computerswas also used in this time period.[10][11] Although the earliest machine learning model was introduced in the 1950s whenArthur Samuelinvented aprogramthat calculated the winning chance in checkers for each side, the history of machine learning roots back to decades of human desire and effort to study human cognitive processes.[12]In 1949,CanadianpsychologistDonald Hebbpublished the bookThe Organization of Behavior, in which he introduced atheoretical neural structureformed by certain interactions amongnerve cells.[13]Hebb's model ofneuronsinteracting with one another set a groundwork for how AIs and machine learning algorithms work under nodes, orartificial neuronsused by computers to communicate data.[12]Other researchers who have studied humancognitive systemscontributed to the modern machine learning technologies as well, including logicianWalter PittsandWarren McCulloch, who proposed the early mathematical models of neural networks to come up withalgorithmsthat mirror human thought processes.[12] By the early 1960s, an experimental "learning machine" withpunched tapememory, called Cybertron, had been developed byRaytheon Companyto analysesonarsignals,electrocardiograms, and speech patterns using rudimentaryreinforcement learning. It was repetitively "trained" by a human operator/teacher to recognise patterns and equipped with a "goof" button to cause it to reevaluate incorrect decisions.[14]A representative book on research into machine learning during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification.[15]Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973.[16]In 1981 a report was given on using teaching strategies so that anartificial neural networklearns to recognise 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal.[17] Tom M. 
Mitchellprovided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experienceEwith respect to some class of tasksTand performance measurePif its performance at tasks inT, as measured byP, improves with experienceE."[18]This definition of the tasks in which machine learning is concerned offers a fundamentallyoperational definitionrather than defining the field in cognitive terms. This followsAlan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?".[19] Modern-day machine learning has two objectives. One is to classify data based on models which have been developed; the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. A machine learning algorithm for stock trading may inform the trader of future potential predictions.[20] As a scientific endeavour, machine learning grew out of the quest forartificial intelligence(AI). In the early days of AI as anacademic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks"; these were mostlyperceptronsandother modelsthat were later found to be reinventions of thegeneralised linear modelsof statistics.[22]Probabilistic reasoningwas also employed, especially inautomated medical diagnosis.[23]: 488 However, an increasing emphasis on thelogical, knowledge-based approachcaused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation.[23]: 488By 1980,expert systemshad come to dominate AI, and statistics was out of favour.[24]Work on symbolic/knowledge-based learning did continue within AI, leading toinductive logic programming(ILP), but the more statistical line of research was now outside the field of AI proper, inpattern recognitionandinformation retrieval.[23]: 708–710, 755Neural networks research had been abandoned by AI andcomputer sciencearound the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines includingJohn Hopfield,David Rumelhart, andGeoffrey Hinton. Their main success came in the mid-1980s with the reinvention ofbackpropagation.[23]: 25 Machine learning (ML), reorganised and recognised as its own field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from thesymbolic approachesit had inherited from AI, and toward methods and models borrowed from statistics,fuzzy logic, andprobability theory.[24] There is a close connection between machine learning and compression. A system that predicts theposterior probabilitiesof a sequence given its entire history can be used for optimal data compression (by usingarithmetic codingon the output distribution). Conversely, an optimal compressor can be used for prediction (by finding the symbol that compresses best, given the previous history). 
This equivalence has been used as a justification for using data compression as a benchmark for "general intelligence".[25][26][27] An alternative view is that compression algorithms implicitly map strings into implicitfeature space vectors, and that compression-based similarity measures compute similarity within these feature spaces. For each compressor C(.) we define an associated vector space ℵ, such that C(.) maps an input string x to a vector whose norm is ||~x||. An exhaustive examination of the feature spaces underlying all compression algorithms is precluded by space; instead, the analysis examines three representative lossless compression methods, LZW, LZ77, and PPM.[28] According toAIXItheory, a connection more directly explained inHutter Prize, the best possible compression of x is the smallest possible software that generates x. For example, in that model, a zip file's compressed size includes both the zip file and the unzipping software, since you can not unzip it without both, but there may be an even smaller combined form. Examples of AI-powered audio/video compression software includeNVIDIA Maxine, AIVC.[29]Examples of software that can perform AI-powered image compression includeOpenCV,TensorFlow,MATLAB's Image Processing Toolbox (IPT) and High-Fidelity Generative Image Compression.[30] Inunsupervised machine learning,k-means clusteringcan be utilized to compress data by grouping similar data points into clusters. This technique simplifies handling extensive datasets that lack predefined labels and finds widespread use in fields such asimage compression.[31] Data compression aims to reduce the size of data files, enhancing storage efficiency and speeding up data transmission. K-means clustering, an unsupervised machine learning algorithm, is employed to partition a dataset into a specified number of clusters, k, each represented by thecentroidof its points. This process condenses extensive datasets into a more compact set of representative points. Particularly beneficial inimageandsignal processing, k-means clustering aids in data reduction by replacing groups of data points with their centroids, thereby preserving the core information of the original data while significantly decreasing the required storage space.[32] Machine learning anddata miningoften employ the same methods and overlap significantly, but while machine learning focuses on prediction, based onknownproperties learned from the training data, data mining focuses on thediscoveryof (previously)unknownproperties in the data (this is the analysis step ofknowledge discoveryin databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals,ECML PKDDbeing a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability toreproduce knownknowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previouslyunknownknowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data.
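A minimal sketch of the k-means-as-compression idea described above: each data point is replaced by the centroid of its cluster, so only the k centroids (the "palette") plus one small cluster index per point need to be stored. Synthetic random values stand in for real image pixels; the cluster count and data shape are illustrative choices, not from the source.

```python
# A minimal sketch of k-means as lossy compression: each "pixel" is replaced by
# the centroid of its cluster, so only k centroids plus one index per pixel
# need to be stored. Synthetic data stands in for a real image.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pixels = rng.random((10_000, 3))          # 10,000 RGB values in [0, 1]

k = 16                                    # size of the "palette"
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)

compressed_indices = km.labels_           # one integer in [0, k) per pixel
palette = km.cluster_centers_             # k representative colours

reconstructed = palette[compressed_indices]
print("mean reconstruction error:", np.abs(pixels - reconstructed).mean())
```

The storage saving comes from replacing three floating-point values per pixel with a single small integer plus a shared palette, at the cost of the reconstruction error printed above.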
Machine learning also has intimate ties tooptimisation: Many learning problems are formulated as minimisation of someloss functionon a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign alabelto instances, and models are trained to correctly predict the preassigned labels of a set of examples).[35] Characterizing the generalisation of various learning algorithms is an active topic of current research, especially fordeep learningalgorithms. Machine learning andstatisticsare closely related fields in terms of methods, but distinct in their principal goal: statistics draws populationinferencesfrom asample, while machine learning finds generalisable predictive patterns.[36]According toMichael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics.[37]He also suggested the termdata scienceas a placeholder to call the overall field.[37] Conventional statistical analyses require the a priori selection of a model most suitable for the study data set. In addition, only significant or theoretically relevant variables based on previous experience are included for analysis. In contrast, machine learning is not built on a pre-structured model; rather, the data shape the model by detecting underlying patterns. The more variables (input) used to train the model, the more accurate the ultimate model will be.[38] Leo Breimandistinguished two statistical modelling paradigms: data model and algorithmic model,[39]wherein "algorithmic model" means more or less the machine learning algorithms likeRandom Forest. Some statisticians have adopted methods from machine learning, leading to a combined field that they callstatistical learning.[40] Analytical and computational techniques derived from deep-rooted physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyse the weight space ofdeep neural networks.[41]Statistical physics is thus finding applications in the area ofmedical diagnostics.[42] A core objective of a learner is to generalise from its experience.[5][43]Generalisation in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases. The computational analysis of machine learning algorithms and their performance is a branch oftheoretical computer scienceknown ascomputational learning theoryvia theprobably approximately correct learningmodel. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. Thebias–variance decompositionis one way to quantify generalisationerror. For the best performance in the context of generalisation, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has under fitted the data. If the complexity of the model is increased in response, then the training error decreases. 
But if the hypothesis is too complex, then the model is subject tooverfittingand generalisation will be poorer.[44] In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done inpolynomial time. There are two kinds oftime complexityresults: Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time. Machine learning approaches are traditionally divided into three broad categories, which correspond to learning paradigms, depending on the nature of the "signal" or "feedback" available to the learning system: Although each algorithm has advantages and limitations, no single algorithm works for all problems.[45][46][47] Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs.[48]The data, known astraining data, consists of a set of training examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, each training example is represented by anarrayor vector, sometimes called afeature vector, and the training data is represented by amatrix. Throughiterative optimisationof anobjective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs.[49]An optimal function allows the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task.[18] Types of supervised-learning algorithms includeactive learning,classificationandregression.[50]Classification algorithms are used when the outputs are restricted to a limited set of values, while regression algorithms are used when the outputs can take any numerical value within a range. For example, in a classification algorithm that filters emails, the input is an incoming email, and the output is the folder in which to file the email. In contrast, regression is used for tasks such as predicting a person's height based on factors like age and genetics or forecasting future temperatures based on historical data.[51] Similarity learningis an area of supervised machine learning closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. It has applications inranking,recommendation systems, visual identity tracking, face verification, and speaker verification. Unsupervised learning algorithms find structures in data that has not been labelled, classified or categorised. Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. Central applications of unsupervised machine learning include clustering,dimensionality reduction,[7]anddensity estimation.[52] Cluster analysis is the assignment of a set of observations into subsets (calledclusters) so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar. 
Different clustering techniques make different assumptions on the structure of the data, often defined by somesimilarity metricand evaluated, for example, byinternal compactness, or the similarity between members of the same cluster, andseparation, the difference between clusters. Other methods are based onestimated densityandgraph connectivity. A special type of unsupervised learning, calledself-supervised learning, involves training a model by generating the supervisory signal from the data itself.[53][54] Semi-supervised learning falls betweenunsupervised learning(without any labelled training data) andsupervised learning(with completely labelled training data). Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabelled data, when used in conjunction with a small amount of labelled data, can produce a considerable improvement in learning accuracy. Inweakly supervised learning, the training labels are noisy, limited, or imprecise; however, these labels are often cheaper to obtain, resulting in larger effective training sets.[55] Reinforcement learning is an area of machine learning concerned with howsoftware agentsought to takeactionsin an environment so as to maximise some notion of cumulative reward. Due to its generality, the field is studied in many other disciplines, such asgame theory,control theory,operations research,information theory,simulation-based optimisation,multi-agent systems,swarm intelligence,statisticsandgenetic algorithms. In reinforcement learning, the environment is typically represented as aMarkov decision process(MDP). Many reinforcement learning algorithms usedynamic programmingtechniques.[56]Reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP and are used when exact models are infeasible. Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game against a human opponent. Dimensionality reductionis a process of reducing the number of random variables under consideration by obtaining a set of principal variables.[57]In other words, it is a process of reducing the dimension of thefeatureset, also called the "number of features". Most of the dimensionality reduction techniques can be considered as either feature elimination orextraction. One of the popular methods of dimensionality reduction isprincipal component analysis(PCA). PCA involves changing higher-dimensional data (e.g., 3D) to a smaller space (e.g., 2D). Themanifold hypothesisproposes that high-dimensional data sets lie along low-dimensionalmanifolds, and many dimensionality reduction techniques make this assumption, leading to the area ofmanifold learningandmanifold regularisation. Other approaches have been developed which do not fit neatly into this three-fold categorisation, and sometimes more than one is used by the same machine learning system. Examples includetopic modellingandmeta-learning.[58] Self-learning, as a machine learning paradigm, was introduced in 1982 along with a neural network capable of self-learning, namedcrossbar adaptive array(CAA).[59][60]It gives a solution to the problem of learning without any external reward, by introducing emotion as an internal reward. Emotion is used as state evaluation of a self-learning agent. The CAA self-learning algorithm computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence situations.
The system is driven by the interaction between cognition and emotion.[61]The self-learning algorithm updates a memory matrix W =||w(a,s)|| such that in each iteration it executes the following machine learning routine: in situation s perform action a; receive consequence situation s'; compute the emotion of being in the consequence situation v(s'); update the crossbar memory w'(a,s) = w(a,s) + v(s'). It is a system with only one input, the situation s, and only one output, the action (or behaviour) a. There is neither a separate reinforcement input nor an advice input from the environment. The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments, one is the behavioural environment where it behaves, and the other is the genetic environment, wherefrom it initially and only once receives initial emotions about situations to be encountered in the behavioural environment. After receiving the genome (species) vector from the genetic environment, the CAA learns a goal-seeking behaviour, in an environment that contains both desirable and undesirable situations.[62] Several learning algorithms aim at discovering better representations of the inputs provided during training.[63]Classic examples includeprincipal component analysisand cluster analysis. Feature learning algorithms, also called representation learning algorithms, often attempt to preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. This technique allows reconstruction of the inputs coming from the unknown data-generating distribution, while not being necessarily faithful to configurations that are implausible under that distribution. This replaces manualfeature engineering, and allows a machine to both learn the features and use them to perform a specific task. Feature learning can be either supervised or unsupervised. In supervised feature learning, features are learned using labelled input data. Examples includeartificial neural networks,multilayer perceptrons, and superviseddictionary learning. In unsupervised feature learning, features are learned with unlabelled input data. Examples include dictionary learning,independent component analysis,autoencoders,matrix factorisation[64]and various forms ofclustering.[65][66][67] Manifold learningalgorithms attempt to do so under the constraint that the learned representation is low-dimensional.Sparse codingalgorithms attempt to do so under the constraint that the learned representation is sparse, meaning that the mathematical model has many zeros.Multilinear subspace learningalgorithms aim to learn low-dimensional representations directly fromtensorrepresentations for multidimensional data, without reshaping them into higher-dimensional vectors.[68]Deep learningalgorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.[69] Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensory data has not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms.
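A minimal sketch of unsupervised feature learning using principal component analysis, one of the classic examples mentioned above: a low-dimensional representation is learned from the raw inputs and then reused as the feature vector for a downstream classifier. The data are synthetic and the dimensions are illustrative choices, not from the source.

```python
# A minimal sketch of feature (representation) learning with PCA: learn a
# compact representation, then feed it to a supervised model. Synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))               # 500 samples, 50 raw input dimensions
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # labels depend on a small subspace

pca = PCA(n_components=10).fit(X)            # learn a 10-dimensional representation
Z = pca.transform(X)                         # learned features replace manual ones

clf = LogisticRegression(max_iter=1000).fit(Z, y)   # downstream supervised task
print("training accuracy on learned features:", clf.score(Z, y))
```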
Sparse dictionary learning is a feature learning method where a training example is represented as a linear combination ofbasis functionsand assumed to be asparse matrix. The method isstrongly NP-hardand difficult to solve approximately.[70]A popularheuristicmethod for sparse dictionary learning is thek-SVDalgorithm. Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which a previously unseen training example belongs. For a dictionary where each class has already been built, a new training example is associated with the class that is best sparsely represented by the corresponding dictionary. Sparse dictionary learning has also been applied inimage de-noising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.[71] Indata mining, anomaly detection, also known as outlier detection, is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data.[72]Typically, the anomalous items represent an issue such asbank fraud, a structural defect, medical problems or errors in a text. Anomalies are referred to asoutliers, novelties, noise, deviations and exceptions.[73] In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts of inactivity. This pattern does not adhere to the common statistical definition of an outlier as a rare object. Many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns.[74] Three broad categories of anomaly detection techniques exist.[75]Unsupervised anomaly detection techniques detect anomalies in an unlabelled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit the least to the remainder of the data set. Supervised anomaly detection techniques require a data set that has been labelled as "normal" and "abnormal" and involves training a classifier (the key difference from many other statistical classification problems is the inherently unbalanced nature of outlier detection). Semi-supervised anomaly detection techniques construct a model representing normal behaviour from a given normal training data set and then test the likelihood of a test instance to be generated by the model. Robot learningis inspired by a multitude of machine learning methods, starting from supervised learning, reinforcement learning,[76][77]and finallymeta-learning(e.g. MAML). Association rule learning is arule-based machine learningmethod for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of "interestingness".[78] Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilisation of a set of relational rules that collectively represent the knowledge captured by the system. 
This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction.[79]Rule-based machine learning approaches includelearning classifier systems, association rule learning, andartificial immune systems. Based on the concept of strong rules,Rakesh Agrawal,Tomasz Imielińskiand Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded bypoint-of-sale(POS) systems in supermarkets.[80]For example, the rule{onions,potatoes}⇒{burger}{\displaystyle \{\mathrm {onions,potatoes} \}\Rightarrow \{\mathrm {burger} \}}found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotionalpricingorproduct placements. In addition tomarket basket analysis, association rules are employed today in application areas includingWeb usage mining,intrusion detection,continuous production, andbioinformatics. In contrast withsequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions. Learning classifier systems(LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically agenetic algorithm, with a learning component, performing eithersupervised learning,reinforcement learning, orunsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in apiecewisemanner in order to make predictions.[81] Inductive logic programming(ILP) is an approach to rule learning usinglogic programmingas a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program thatentailsall positive and no negative examples.Inductive programmingis a related field that considers any kind of programming language for representing hypotheses (and not only logic programming), such asfunctional programs. Inductive logic programming is particularly useful inbioinformaticsandnatural language processing.Gordon PlotkinandEhud Shapirolaid the initial theoretical foundation for inductive machine learning in a logical setting.[82][83][84]Shapiro built their first implementation (Model Inference System) in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples.[85]The terminductivehere refers tophilosophicalinduction, suggesting a theory to explain observed facts, rather thanmathematical induction, proving a property for all members of a well-ordered set. Amachine learning modelis a type ofmathematical modelthat, once "trained" on a given dataset, can be used to make predictions or classifications on new data. During training, a learning algorithm iteratively adjusts the model's internal parameters to minimise errors in its predictions.[86]By extension, the term "model" can refer to several levels of specificity, from a general class of models and their associated learning algorithms to a fully trained model with all its internal parameters tuned.[87] Various types of models have been used and researched for machine learning systems, picking the best model for a task is calledmodel selection. 
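The "interestingness" measures behind rules such as {onions, potatoes} ⇒ {burger} can be computed directly from transaction data. The sketch below uses made-up transactions to show the two most common measures, support and confidence; the thresholds a real association-rule miner would apply are not shown.

```python
# A minimal sketch of association-rule measures on point-of-sale style data:
# support and confidence for the rule {onions, potatoes} => {burger}.
# The transactions below are made up for illustration.
transactions = [
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes", "burger", "beer"},
    {"onions", "potatoes"},
    {"milk", "bread"},
    {"potatoes", "burger"},
]

antecedent, consequent = {"onions", "potatoes"}, {"burger"}

n = len(transactions)
both = sum(1 for t in transactions if antecedent <= t and consequent <= t)
ante = sum(1 for t in transactions if antecedent <= t)

support = both / n          # how often the whole rule occurs in the data
confidence = both / ante    # how often the consequent follows the antecedent

print(f"support = {support:.2f}, confidence = {confidence:.2f}")
```

Algorithms such as Apriori enumerate candidate itemsets and keep only rules whose support and confidence exceed user-chosen thresholds; note that, as stated above, the order of items is typically ignored.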
Artificial neural networks (ANNs), orconnectionistsystems, are computing systems vaguely inspired by thebiological neural networksthat constitute animalbrains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules. An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model theneuronsin a biological brain. Each connection, like thesynapsesin a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is areal number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have aweightthat adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. The original goal of the ANN approach was to solve problems in the same way that ahuman brainwould. However, over time, attention moved to performing specific tasks, leading to deviations frombiology. Artificial neural networks have been used on a variety of tasks, includingcomputer vision,speech recognition,machine translation,social networkfiltering,playing board and video gamesandmedical diagnosis. Deep learningconsists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.[88] Decision tree learning uses adecision treeas apredictive modelto go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modelling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures,leavesrepresent class labels, and branches representconjunctionsof features that lead to those class labels. Decision trees where the target variable can take continuous values (typicallyreal numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions anddecision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision-making. Random forest regression (RFR) falls under the umbrella of decision tree-based models. RFR is an ensemble learning method that builds multiple decision trees and averages their predictions to improve accuracy and to avoid overfitting. To build decision trees, RFR uses bootstrapped sampling: each decision tree is trained on a random sample drawn, with replacement, from the training set.
This random selection of training data enables the model to reduce bias in its predictions and achieve better accuracy. RFR generates independent decision trees, and it can work on single-output data as well as multi-output regression tasks. This makes RFR suitable for use in various applications.[89][90] Support-vector machines (SVMs), also known as support-vector networks, are a set of relatedsupervised learningmethods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category.[91]An SVM training algorithm is a non-probabilistic,binary,linear classifier, although methods such asPlatt scalingexist to use SVM in a probabilistic classification setting. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called thekernel trick, implicitly mapping their inputs into high-dimensional feature spaces. Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form islinear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such asordinary least squares. The latter is often extended byregularisationmethods to mitigate overfitting and bias, as inridge regression. When dealing with non-linear problems, go-to models includepolynomial regression(for example, used for trendline fitting in Microsoft Excel[92]),logistic regression(often used instatistical classification) or evenkernel regression, which introduces non-linearity by taking advantage of thekernel trickto implicitly map input variables to higher-dimensional space. Multivariate linear regressionextends the concept of linear regression to handle multiple dependent variables simultaneously. This approach estimates the relationships between a set of input variables and several output variables by fitting amultidimensionallinear model. It is particularly useful in scenarios where outputs are interdependent or share underlying patterns, such as predicting multiple economic indicators or reconstructing images,[93]which are inherently multi-dimensional. A Bayesian network, belief network, or directed acyclic graphical model is a probabilisticgraphical modelthat represents a set ofrandom variablesand theirconditional independencewith adirected acyclic graph(DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that performinferenceand learning. Bayesian networks that model sequences of variables, likespeech signalsorprotein sequences, are calleddynamic Bayesian networks. Generalisations of Bayesian networks that can represent and solve decision problems under uncertainty are calledinfluence diagrams. A Gaussian process is astochastic processin which every finite collection of the random variables in the process has amultivariate normal distribution, and it relies on a pre-definedcovariance function, or kernel, that models how pairs of points relate to each other depending on their locations.
Given a set of observed points, or input–output examples, the distribution of the (unobserved) output of a new point as a function of its input data can be directly computed by looking at the observed points and the covariances between those points and the new, unobserved point. Gaussian processes are popular surrogate models inBayesian optimisationused to dohyperparameter optimisation. A genetic algorithm (GA) is asearch algorithmandheuristictechnique that mimics the process ofnatural selection, using methods such asmutationandcrossoverto generate newgenotypesin the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s.[95][96]Conversely, machine learning techniques have been used to improve the performance of genetic andevolutionary algorithms.[97] The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory, is a general framework for reasoning with uncertainty, with understood connections to other frameworks such asprobability,possibilityandimprecise probability theories. These theoretical frameworks can be thought of as a kind of learner and have some analogous properties of how evidence is combined (e.g., Dempster's rule of combination), just as apmf-based Bayesian approach would combine probabilities.[98]However, there are many caveats to these belief functions when compared to Bayesian approaches in terms of incorporating ignorance anduncertainty quantification. Belief function approaches that are implemented within the machine learning domain typically leverage a fusion approach of variousensemble methodsto better handle the learner'sdecision boundary, low samples, and ambiguous class issues that standard machine learning approaches tend to have difficulty resolving.[4][9]However, the computational complexity of these algorithms is dependent on the number of propositions (classes), and can lead to a much higher computation time when compared to other machine learning approaches. Rule-based machine learning (RBML) is a branch of machine learning that automatically discovers and learns 'rules' from data. It provides interpretable models, making it useful for decision-making in fields like healthcare, fraud detection, and cybersecurity. Key RBML techniques includelearning classifier systems,[99]association rule learning,[100]artificial immune systems,[101]and other similar models. These methods extract patterns from data and evolve rules over time. Typically, machine learning models require a high quantity of reliable data to perform accurate predictions. When training a machine learning model, machine learning engineers need to target and collect a large and representativesampleof data. Data from the training set can be as varied as acorpus of text, a collection of images,sensordata, and data collected from individual users of a service.Overfittingis something to watch out for when training a machine learning model. Trained models derived from biased or non-evaluated data can result in skewed or undesired predictions. Biased models may result in detrimental outcomes, thereby furthering the negative impacts on society or objectives.Algorithmic biasis a potential result of data not being fully prepared for training. Machine learning ethics is becoming a field of study and, notably, is becoming integrated within machine learning engineering teams.
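A minimal sketch of the genetic algorithm idea described earlier in this section: a population of candidate genotypes is evolved by selection, crossover, and mutation. The bit-string encoding, the "count the ones" fitness, and all numeric settings below are illustrative choices, not from the source.

```python
# A minimal genetic-algorithm sketch: bit-string genotypes, fitness = number of
# ones ("OneMax"), with tournament selection, single-point crossover, mutation.
import random

random.seed(0)
LENGTH, POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 40, 60, 0.02

def fitness(bits):                 # objective to maximise
    return sum(bits)

def crossover(a, b):               # single-point crossover of two parents
    point = random.randrange(1, LENGTH)
    return a[:point] + b[point:]

def mutate(bits):                  # flip each bit with small probability
    return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # tournament selection: keep the fitter of two randomly drawn individuals
    parents = [max(random.sample(population, 2), key=fitness) for _ in range(POP_SIZE)]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "out of", LENGTH)
```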
Federated learning is an adapted form ofdistributed artificial intelligenceto training machine learning models that decentralises the training process, allowing for users' privacy to be maintained by not needing to send their data to a centralised server. This also increases efficiency by decentralising the training process to many devices. For example,Gboarduses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back toGoogle.[102] There are many applications for machine learning, including: In 2006, the media-services providerNetflixheld the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers fromAT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built anensemble modelto win the Grand Prize in 2009 for $1 million.[105]Shortly after the prize was awarded, Netflix realised that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and they changed their recommendation engine accordingly.[106]In 2010 The Wall Street Journal wrote about the firm Rebellion Research and their use of machine learning to predict the financial crisis.[107]In 2012, co-founder ofSun Microsystems,Vinod Khosla, predicted that 80% of medical doctors jobs would be lost in the next two decades to automated machine learning medical diagnostic software.[108]In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognised influences among artists.[109]In 2019Springer Naturepublished the first research book created using machine learning.[110]In 2020, machine learning technology was used to help make diagnoses and aid researchers in developing a cure for COVID-19.[111]Machine learning was recently applied to predict the pro-environmental behaviour of travellers.[112]Recently, machine learning technology was also applied to optimise smartphone's performance and thermal behaviour based on the user's interaction with the phone.[113][114][115]When applied correctly, machine learning algorithms (MLAs) can utilise a wide range of company characteristics to predict stock returns withoutoverfitting. By employing effective feature engineering and combining forecasts, MLAs can generate results that far surpass those obtained from basic linear techniques likeOLS.[116] Recent advancements in machine learning have extended into the field of quantum chemistry, where novel algorithms now enable the prediction of solvent effects on chemical reactions, thereby offering new tools for chemists to tailor experimental conditions for optimal outcomes.[117] Machine Learning is becoming a useful tool to investigate and predict evacuation decision making in large scale and small scale disasters. Different solutions have been tested to predict if and when householders decide to evacuate during wildfires and hurricanes.[118][119][120]Other applications have been focusing on pre evacuation decisions in building fires.[121][122] Machine learning is also emerging as a promising tool in geotechnical engineering, where it is used to support tasks such as ground classification, hazard prediction, and site characterization. 
Recent research emphasizes a move toward data-centric methods in this field, where machine learning is not a replacement for engineering judgment, but a way to enhance it using site-specific data and patterns.[123] Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.[124][125][126]Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.[127] The "black box theory" poses yet another significant challenge. Black box refers to a situation where the algorithm or the process of producing an output is entirely opaque, meaning that even the coders of the algorithm cannot audit the pattern that the machine extracted out of the data.[128]The House of Lords Select Committee claimed that such an "intelligence system" that could have a "substantial impact on an individual's life" would not be considered acceptable unless it provided "a full and satisfactory explanation for the decisions" it makes.[128] In 2018, a self-driving car fromUberfailed to detect a pedestrian, who was killed after a collision.[129]Attempts to use machine learning in healthcare with theIBM Watsonsystem failed to deliver even after years of time and billions of dollars invested.[130][131]Microsoft'sBing Chatchatbot has been reported to produce hostile and offensive responses against its users.[132] Machine learning has been used as a strategy to update the evidence related to a systematic review and to cope with the increased reviewer burden related to the growth of biomedical literature. While it has improved with training sets, it has not yet developed sufficiently to reduce the workload burden without limiting the necessary sensitivity for the research findings themselves.[133] Explainable AI (XAI), or Interpretable AI, or Explainable Machine Learning (XML), is artificial intelligence (AI) in which humans can understand the decisions or predictions made by the AI.[134]It contrasts with the "black box" concept in machine learning where even its designers cannot explain why an AI arrived at a specific decision.[135]By refining the mental models of users of AI-powered systems and dismantling their misconceptions, XAI promises to help users perform more effectively. XAI may be an implementation of the social right to explanation. Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data but penalising the theory in accordance with how complex the theory is.[136] Learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses.[137]A real-world example is that, unlike humans, current image classifiers often do not primarily make judgements from the spatial relationship between components of the picture, and they learn relationships between pixels that humans are oblivious to, but that still correlate with images of certain types of real objects. Modifying these patterns on a legitimate image can result in "adversarial" images that the system misclassifies.[138][139] Adversarial vulnerabilities can also arise in nonlinear systems, or from non-pattern perturbations.
For some systems, it is possible to change the output by only changing a single adversarially chosen pixel.[140]Machine learning models are often vulnerable to manipulation or evasion viaadversarial machine learning.[141] Researchers have demonstrated howbackdoorscan be placed undetectably into classifying (e.g., for categories "spam" and well-visible "not spam" of posts) machine learning models that are often developed or trained by third parties. Parties can change the classification of any input, including in cases for which a type ofdata/software transparencyis provided, possibly includingwhite-box access.[142][143][144] Classification of machine learning models can be validated by accuracy estimation techniques like theholdoutmethod, which splits the data in a training and test set (conventionally 2/3 training set and 1/3 test set designation) and evaluates the performance of the training model on the test set. In comparison, the K-fold-cross-validationmethod randomly partitions the data into K subsets and then K experiments are performed each respectively considering 1 subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods,bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.[145] In addition to overall accuracy, investigators frequently reportsensitivity and specificitymeaning true positive rate (TPR) and true negative rate (TNR) respectively. Similarly, investigators sometimes report thefalse positive rate(FPR) as well as thefalse negative rate(FNR). However, these rates are ratios that fail to reveal their numerators and denominators.Receiver operating characteristic(ROC) along with the accompanying Area Under the ROC Curve (AUC) offer additional tools for classification model assessment. Higher AUC is associated with a better performing model.[146] Theethicsofartificial intelligencecovers a broad range of topics within AI that are considered to have particular ethical stakes.[147]This includesalgorithmic biases,fairness,[148]automated decision-making,[149]accountability,privacy, andregulation. It also covers various emerging or potential future challenges such asmachine ethics(how to make machines that behave ethically),lethal autonomous weapon systems,arms racedynamics,AI safetyandalignment,technological unemployment, AI-enabledmisinformation, how to treat certain AI systems if they have amoral status(AI welfare and rights),artificial superintelligenceandexistential risks.[147] Different machine learning approaches can suffer from different data biases. A machine learning system trained specifically on current customers may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on human-made data, machine learning is likely to pick up the constitutional and unconscious biases already present in society.[150] Systems that are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitising cultural prejudices.[151]For example, in 1988, the UK'sCommission for Racial Equalityfound thatSt. 
George's Medical Schoolhad been using a computer program trained from data of previous admissions staff and that this program had denied nearly 60 candidates who were found to either be women or have non-European sounding names.[150]Using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants by similarity to previous successful applicants.[152][153]Another example includes predictive policing companyGeolitica's predictive algorithm that resulted in "disproportionately high levels of over-policing in low-income and minority communities" after being trained with historical crime data.[154] While responsiblecollection of dataand documentation of algorithmic rules used by a system is considered a critical part of machine learning, some researchers blame lack of participation and representation of minority population in the field of AI for machine learning's vulnerability to biases.[155]In fact, according to research carried out by the Computing Research Association (CRA) in 2021, "female faculty merely make up 16.1%" of all faculty members who focus on AI among several universities around the world.[156]Furthermore, among the group of "new U.S. resident AI PhD graduates," 45% identified as white, 22.4% as Asian, 3.2% as Hispanic, and 2.4% as African American, which further demonstrates a lack of diversity in the field of AI.[156] Language models learned from data have been shown to contain human-like biases.[157][158]Because human languages contain biases, machines trained on languagecorporawill necessarily also learn these biases.[159][160]In 2016, Microsoft testedTay, achatbotthat learned from Twitter, and it quickly picked up racist and sexist language.[161] In an experiment carried out byProPublica, aninvestigative journalismorganisation, a machine learning algorithm's insight into the recidivism rates among prisoners falsely flagged "black defendants high risk twice as often as white defendants".[154]In 2015, Google Photos once tagged a couple of black people as gorillas, which caused controversy. The gorilla label was subsequently removed, and in 2023, it still cannot recognise gorillas.[162]Similar issues with recognising non-white people have been found in many other systems.[163] Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains.[164]Concern forfairnessin machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, includingFei-Fei Li, who said that "[t]here's nothing artificial about AI. It's inspired by people, it's created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility."[165] There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States where there is a long-standing ethical dilemma of improving health care, but also increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. 
There is potential for machine learning in health care to provide professionals an additional tool to diagnose, medicate, and plan recovery paths for patients, but this requires these biases to be mitigated.[166] Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for trainingdeep neural networks(a particular narrow subdomain of machine learning) that contain many layers of nonlinear hidden units.[167]By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI.[168]OpenAIestimated the hardware compute used in the largest deep learning projects fromAlexNet(2012) toAlphaZero(2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.[169][170] Tensor Processing Units (TPUs)are specialised hardware accelerators developed byGooglespecifically for machine learning workloads. Unlike general-purposeGPUsandFPGAs, TPUs are optimised for tensor computations, making them particularly efficient for deep learning tasks such as training and inference. They are widely used in Google Cloud AI services and large-scale machine learning models like Google's DeepMind AlphaFold and large language models. TPUs leverage matrix multiplication units and high-bandwidth memory to accelerate computations while maintaining energy efficiency.[171]Since their introduction in 2016, TPUs have become a key component of AI infrastructure, especially in cloud-based environments. Neuromorphic computingrefers to a class of computing systems designed to emulate the structure and functionality of biological neural networks. These systems may be implemented through software-based simulations on conventional hardware or through specialised hardware architectures.[172] Aphysical neural networkis a specific type of neuromorphic hardware that relies on electrically adjustable materials, such as memristors, to emulate the function ofneural synapses. The term "physical neural network" highlights the use of physical hardware for computation, as opposed to software-based implementations. It broadly refers to artificial neural networks that use materials with adjustable resistance to replicate neural synapses.[173][174] Embedded machine learning is a sub-field of machine learning where models are deployed onembedded systemswith limited computing resources, such aswearable computers,edge devicesandmicrocontrollers.[175][176][177][178]Running models directly on these devices eliminates the need to transfer and store data on cloud servers for further processing, thereby reducing the risk of data breaches, privacy leaks and theft of intellectual property, personal data and business secrets. Embedded machine learning can be achieved through various techniques, such ashardware acceleration,[179][180]approximate computing,[181]and model optimisation.[182][183]Common optimisation techniques includepruning,quantisation,knowledge distillation, low-rank factorisation, network architecture search, and parameter sharing. Software suitescontaining a variety of machine learning algorithms include the following:
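As one illustration, the sketch below uses scikit-learn, one widely used open-source suite (assumed here to be installed), to demonstrate the evaluation techniques described earlier: a holdout split, k-fold cross-validation, and ROC AUC, on synthetic classification data.

```python
# A minimal sketch of model validation: holdout split, 5-fold cross-validation,
# and ROC AUC, using scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)

# Holdout: conventionally about 2/3 of the data for training, 1/3 for testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# K-fold cross-validation: K experiments, each holding out one of K subsets.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("5-fold accuracies:", scores.round(3))

# ROC AUC computed from predicted probabilities on the holdout test set.
print("holdout ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```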
https://en.wikipedia.org/wiki/Generalization_(machine_learning)
Adecision treeis adecision supportrecursive partitioning structure that uses atree-likemodelof decisions and their possible consequences, includingchanceevent outcomes, resource costs, andutility. It is one way to display analgorithmthat only contains conditional control statements. Decision trees are commonly used inoperations research, specifically indecision analysis,[1]to help identify a strategy most likely to reach a goal, but are also a popular tool inmachine learning. A decision tree is aflowchart-like structure in which each internal node represents a test on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label (decision taken after computing all attributes). The paths from root to leaf representclassificationrules. Indecision analysis, a decision tree and the closely relatedinfluence diagramare used as a visual and analytical decision support tool, where theexpected values(orexpected utility) of competing alternatives are calculated. A decision tree consists of three types of nodes:[2]decision nodes (typically shown as squares), chance nodes (circles), and end nodes (triangles). Decision trees are commonly used inoperations researchandoperations management. If, in practice, decisions have to be taken online with no recall under incomplete knowledge, a decision tree should be paralleled by aprobabilitymodel as a best choice model or online selection modelalgorithm.[citation needed]Another use of decision trees is as a descriptive means for calculatingconditional probabilities. Decision trees,influence diagrams,utility functions, and otherdecision analysistools and methods are taught to undergraduate students in schools of business, health economics, and public health, and are examples of operations research ormanagement sciencemethods. These tools are also used to predict decisions of householders in normal and emergency scenarios.[3][4] Drawn from left to right, a decision tree has only burst nodes (splitting paths) but no sink nodes (converging paths). When used manually, they can therefore grow very big and are then often hard to draw fully by hand. Traditionally, decision trees have been created manually, although increasingly, specialized software is employed. The decision tree can belinearizedintodecision rules,[5]where the outcome is the contents of the leaf node, and the conditions along the path form a conjunction in the if clause. In general, the rules have the form: if condition1 and condition2 and condition3 then outcome. Decision rules can be generated by constructingassociation ruleswith the target variable on the right. They can also denote temporal or causal relations.[6] Commonly a decision tree is drawn usingflowchartsymbols as it is easier for many to read and understand. Analysis can take into account the decision maker's (e.g., the company's) preference orutility function. For example, the basic interpretation in this situation is that the company prefers B's risk and payoffs under realistic risk preference coefficients (greater than $400K—in that range of risk aversion, the company would need to model a third strategy, "Neither A nor B"). Another example, commonly used inoperations researchcourses, is the distribution of lifeguards on beaches (a.k.a. the "Life's a Beach" example).[7]The example describes two beaches with lifeguards to be distributed on each beach.
There is a maximum budgetBthat can be distributed among the two beaches (in total), and using a marginal returns table, analysts can decide how many lifeguards to allocate to each beach. In this example, a decision tree can be drawn to illustrate the principles ofdiminishing returnson beach #1. The decision tree illustrates that when sequentially distributing lifeguards, placing a first lifeguard on beach #1 would be optimal if there is only the budget for 1 lifeguard. But if there is a budget for two guards, then placing both on beach #2 would prevent more overall drownings. Much of the information in a decision tree can be represented more compactly as aninfluence diagram, focusing attention on the issues and relationships between events. Decision trees can also be seen asgenerative modelsof induction rules from empirical data. An optimal decision tree is then defined as a tree that accounts for most of the data, while minimizing the number of levels (or "questions").[8]Several algorithms to generate such optimal trees have been devised, such asID3/4/5,[9]CLS, ASSISTANT, and CART. Among decision support tools, decision trees (andinfluence diagrams) have several advantages as well as some disadvantages. A few things should be considered when improving the accuracy of the decision tree classifier. The following are some possible optimizations to consider when looking to make sure the decision tree model produced makes the correct decision or classification; note that these are not the only things to consider, but only some of them. Theaccuracyof the decision tree can change based on the depth of the decision tree. In many cases, the tree's leaves arepurenodes.[11]When a node is pure, it means that all the data in that node belongs to a single class.[12]For example, if the classes in the data set are Cancer and Non-Cancer, a leaf node would be considered pure when all the sample data in that leaf node is part of only one class, either cancer or non-cancer. It is important to note that a deeper tree is not always better when optimizing the decision tree. A deeper tree can influence the runtime in a negative way. If a certain classification algorithm is being used, then a deeper tree could mean the runtime of this classification algorithm is significantly slower. There is also the possibility that the actual algorithm building the decision tree will get significantly slower as the tree gets deeper. If the tree-building algorithm being used splits pure nodes, then a decrease in the overall accuracy of the tree classifier could be experienced. Occasionally, going deeper in the tree can cause an accuracy decrease in general, so it is very important to test modifying the depth of the decision tree and selecting the depth that produces the best results. To summarize, let D denote the depth of the tree. Increasing D has possible advantages and possible disadvantages (the latter including longer runtimes and a possible drop in accuracy, as discussed above). The ability to test the differences in classification results when changing D is imperative; we must be able to easily change and test the variables that could affect the accuracy and reliability of the decision tree model. The node splitting function used can have an impact on improving the accuracy of the decision tree. For example, using theinformation-gainfunction may yield better results than using the phi function. The phi function is known as a measure of “goodness” of a candidate split at a node in the decision tree. 
The information gain function is known as a measure of the “reduction inentropy”. In the following, we will build two decision trees: one using the phi function to split the nodes and one using the information gain function to split the nodes. Information gain and the phi function each have their own advantages and disadvantages. The information gain of a candidate split is the entropy of the node of the decision tree minus the entropy of the candidate split at node t of the decision tree. The phi function is maximized when the chosen feature splits the samples into homogeneous groups that have around the same number of samples in each group. We will set D, which is the depth of the decision tree we are building, to three (D = 3). We also have the following data set of cancer and non-cancer samples and the mutation features that the samples either have or do not have. If a sample has a feature mutation then the sample is positive for that mutation, and it will be represented by one. If a sample does not have a feature mutation then the sample is negative for that mutation, and it will be represented by zero. To summarize, C stands for cancer and NC stands for non-cancer. The letter M stands formutation, and if a sample has a particular mutation it will show up in the table as a one and otherwise zero. Now, we can use the formulas to calculate the phi function values and information gain values for each M in the dataset. Once all the values are calculated, the tree can be produced. The first thing to be done is to select the root node. In information gain and the phi function we consider the optimal split to be the mutation that produces the highest value for information gain or the phi function. Now assume that M1 has the highest phi function value and M4 has the highest information gain value. The M1 mutation will be the root of our phi function tree and M4 will be the root of our information gain tree. You can observe the root nodes below. Now, once we have chosen the root node we can split the samples into two groups based on whether a sample is positive or negative for the root node mutation. The groups will be called group A and group B. For example, if we use M1 to split the samples in the root node we get NC2 and C2 samples in group A and the rest of the samples NC4, NC3, NC1, C1 in group B. Disregarding the mutation chosen for the root node, proceed to place the next best features that have the highest values for information gain or the phi function in the left or right child nodes of the decision tree. Once we choose the root node and the two child nodes for the tree of depth = 3 we can just add the leaves. The leaves will represent the final classification decision the model has produced based on the mutations a sample either has or does not have. The left tree is the decision tree we obtain from using information gain to split the nodes and the right tree is what we obtain from using the phi function to split the nodes. Now assume theclassificationresults from both trees are given using aconfusion matrixfor each: one for the information gain tree and one for the phi function tree. The tree built using information gain gives the same accuracy as the tree built using the phi function. 
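As a concrete sketch of the two split criteria, assuming the usual definitions (information gain as parent-node entropy minus the weighted entropy of the child nodes, and the phi goodness-of-split measure Φ(s,t) = 2·P_L·P_R·Σ_j |P(j|t_L) − P(j|t_R)|); the sample split below is illustrative rather than the exact table from the example:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def information_gain(parent_labels, left_labels, right_labels):
    """Entropy of the parent node minus the weighted entropy of the candidate split."""
    n = len(parent_labels)
    weighted_child_entropy = (len(left_labels) / n) * entropy(left_labels) \
                           + (len(right_labels) / n) * entropy(right_labels)
    return entropy(parent_labels) - weighted_child_entropy

def phi(parent_labels, left_labels, right_labels):
    """Phi 'goodness of split': 2 * P_L * P_R * sum_j |P(j|left) - P(j|right)|."""
    n = len(parent_labels)
    p_l, p_r = len(left_labels) / n, len(right_labels) / n
    left_counts, right_counts = Counter(left_labels), Counter(right_labels)
    diff = sum(abs(left_counts[j] / len(left_labels) - right_counts[j] / len(right_labels))
               for j in set(parent_labels))
    return 2 * p_l * p_r * diff

# Hypothetical split of six samples (C = cancer, NC = non-cancer) on one mutation.
parent = ["C", "C", "NC", "NC", "NC", "NC"]
left, right = ["C", "NC"], ["C", "NC", "NC", "NC"]   # positive / negative for the mutation
print(information_gain(parent, left, right), phi(parent, left, right))
```

Whichever mutation yields the highest criterion value would be chosen as the split at that node, as described above.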
When we classify the samples based on the model using information gain we get one true positive, one false positive, zero false negatives, and four true negatives. For the model using the phi function we get two true positives, zero false positives, one false negative, and three true negatives. The next step is to evaluate the effectiveness of the decision tree using some key metrics that will be discussed in the evaluating a decision tree section below. The metrics that will be discussed below can help determine the next steps to be taken when optimizing the decision tree. The above information is not where it ends for building and optimizing a decision tree. There are many techniques for improving the decision tree classification models we build. One of the techniques is making our decision tree model from abootstrappeddataset. The bootstrapped dataset helps remove the bias that occurs when building a decision tree model with the same data the model is tested with. The ability to leverage the power ofrandom forestscan also help significantly improve the overall accuracy of the model being built. This method generates many decisions from many decision trees and tallies up the votes from each decision tree to make the final classification. There are many techniques, but the main objective is to test building your decision tree model in different ways to make sure it reaches the highest performance level possible. It is important to know the measurements used to evaluate decision trees. The main metrics used areaccuracy,sensitivity,specificity,precision,miss rate,false discovery rate, andfalse omission rate. All these measurements are derived from the number oftrue positives,false positives,True negatives, andfalse negativesobtained when running a set of samples through the decision tree classification model. Also, a confusion matrix can be made to display these results. All these main metrics tell something different about the strengths and weaknesses of the classification model built based on your decision tree. For example, a low sensitivity with high specificity could indicate the classification model built from the decision tree does not do well identifying cancer samples over non-cancer samples. Let us take the confusion matrix below. We will now calculate the values accuracy, sensitivity, specificity, precision, miss rate, false discovery rate, and false omission rate. 
Accuracy:
Accuracy=(TP+TN)/(TP+TN+FP+FN){\displaystyle {\text{Accuracy}}=(TP+TN)/(TP+TN+FP+FN)} =(11+105)/162=71.60%{\displaystyle =(11+105)/162=71.60\%}
Sensitivity (TPR – true positive rate):[14]
TPR=TP/(TP+FN){\displaystyle {\text{TPR}}=TP/(TP+FN)} =11/(11+45)=19.64%{\displaystyle =11/(11+45)=19.64\%}
Specificity (TNR – true negative rate):
TNR=TN/(TN+FP){\displaystyle {\text{TNR}}=TN/(TN+FP)} =105/(105+1)=99.06%{\displaystyle =105/(105+1)=99.06\%}
Precision (PPV – positive predictive value):
PPV=TP/(TP+FP){\displaystyle {\text{PPV}}=TP/(TP+FP)} =11/(11+1)=91.67%{\displaystyle =11/(11+1)=91.67\%}
Miss rate (FNR – false negative rate):
FNR=FN/(FN+TP){\displaystyle {\text{FNR}}=FN/(FN+TP)} =45/(45+11)=80.36%{\displaystyle =45/(45+11)=80.36\%}
False discovery rate (FDR):
FDR=FP/(FP+TP){\displaystyle {\text{FDR}}=FP/(FP+TP)} =1/(1+11)=8.33%{\displaystyle =1/(1+11)=8.33\%}
False omission rate (FOR):
FOR=FN/(FN+TN){\displaystyle {\text{FOR}}=FN/(FN+TN)} =45/(45+105)=30.00%{\displaystyle =45/(45+105)=30.00\%}
Once we have calculated the key metrics, we can draw some initial conclusions about the performance of the decision tree model. The accuracy we calculated is 71.60%, a reasonable starting point, but we would like our models to be as accurate as possible while maintaining the overall performance. The sensitivity value of 19.64% means that only 19.64% of the samples that were actually positive for cancer tested positive. The specificity value of 99.06% means that 99.06% of the samples that were negative for cancer did indeed test negative. When it comes to sensitivity and specificity it is important to have a balance between the two values, so if we can give up some specificity to increase the sensitivity, that would prove to be beneficial.[15]These are just a few examples of how to use these values and the meanings behind them to evaluate the decision tree model and improve upon the next iteration.
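For readers who want to reproduce the arithmetic, a minimal sketch that derives the same metrics from the four confusion-matrix counts (TP = 11, FP = 1, FN = 45, TN = 105, as in the worked example):

```python
def evaluate(tp, fp, fn, tn):
    """Derive the standard evaluation metrics from confusion-matrix counts."""
    return {
        "accuracy":             (tp + tn) / (tp + tn + fp + fn),
        "sensitivity (TPR)":    tp / (tp + fn),
        "specificity (TNR)":    tn / (tn + fp),
        "precision (PPV)":      tp / (tp + fp),
        "miss rate (FNR)":      fn / (fn + tp),
        "false discovery rate": fp / (fp + tp),
        "false omission rate":  fn / (fn + tn),
    }

for name, value in evaluate(tp=11, fp=1, fn=45, tn=105).items():
    print(f"{name}: {value:.2%}")   # e.g. accuracy: 71.60%, sensitivity (TPR): 19.64%
```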
https://en.wikipedia.org/wiki/Decision_tree
Inmachine learning,support vector machines(SVMs, alsosupport vector networks[1]) aresupervisedmax-marginmodels with associated learningalgorithmsthat analyze data forclassificationandregression analysis. Developed atAT&T Bell Laboratories,[1][2]SVMs are one of the most studied models, being based on statistical learning frameworks ofVC theoryproposed byVapnik(1982, 1995) andChervonenkis(1974). In addition to performinglinear classification, SVMs can efficiently perform non-linear classification using thekernel trick, representing the data only through a set of pairwise similarity comparisons between the original data points using a kernel function, which transforms them into coordinates in a higher-dimensionalfeature space. Thus, SVMs use the kernel trick to implicitly map their inputs into high-dimensional feature spaces, where linear classification can be performed.[3]Being max-margin models, SVMs are resilient to noisy data (e.g., misclassified examples). SVMs can also be used forregressiontasks, where the objective becomesϵ{\displaystyle \epsilon }-sensitive. The support vector clustering[4]algorithm, created byHava SiegelmannandVladimir Vapnik, applies the statistics of support vectors, developed in the support vector machines algorithm, to categorize unlabeled data.[citation needed]These data sets requireunsupervised learningapproaches, which attempt to find naturalclustering of the datainto groups, and then to map new data according to these clusters. The popularity of SVMs is likely due to their amenability to theoretical analysis, and their flexibility in being applied to a wide variety of tasks, includingstructured predictionproblems. It is not clear that SVMs have better predictive performance than other linear models, such aslogistic regressionandlinear regression.[5] Classifying datais a common task inmachine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class anewdata pointwill be in. In the case of support vector machines, a data point is viewed as ap{\displaystyle p}-dimensional vector (a list ofp{\displaystyle p}numbers), and we want to know whether we can separate such points with a(p−1){\displaystyle (p-1)}-dimensionalhyperplane. This is called alinear classifier. There are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, ormargin, between the two classes. So we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as themaximum-margin hyperplaneand the linear classifier it defines is known as amaximum-margin classifier; or equivalently, theperceptron of optimal stability.[6] More formally, a support vector machine constructs ahyperplaneor set of hyperplanes in a high or infinite-dimensional space, which can be used forclassification,regression, or other tasks like outliers detection.[7]Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training-data point of any class (so-called functional margin), since in general the larger the margin, the lower thegeneralization errorof the classifier.[8]A lowergeneralization errormeans that the implementer is less likely to experienceoverfitting. Whereas the original problem may be stated in a finite-dimensional space, it often happens that the sets to discriminate are notlinearly separablein that space. 
For this reason, it was proposed[9]that the original finite-dimensional space be mapped into a much higher-dimensional space, presumably making the separation easier in that space. To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure thatdot productsof pairs of input data vectors may be computed easily in terms of the variables in the original space, by defining them in terms of akernel functionk(x,y){\displaystyle k(x,y)}selected to suit the problem.[10]The hyperplanes in the higher-dimensional space are defined as the set of points whose dot product with a vector in that space is constant, where such a set of vectors is an orthogonal (and thus minimal) set of vectors that defines a hyperplane. The vectors defining the hyperplanes can be chosen to be linear combinations with parametersαi{\displaystyle \alpha _{i}}of images offeature vectorsxi{\displaystyle x_{i}}that occur in the data base. With this choice of a hyperplane, the pointsx{\displaystyle x}in thefeature spacethat are mapped into the hyperplane are defined by the relation∑iαik(xi,x)=constant.{\displaystyle \textstyle \sum _{i}\alpha _{i}k(x_{i},x)={\text{constant}}.}Note that ifk(x,y){\displaystyle k(x,y)}becomes small asy{\displaystyle y}grows further away fromx{\displaystyle x}, each term in the sum measures the degree of closeness of the test pointx{\displaystyle x}to the corresponding data base pointxi{\displaystyle x_{i}}. In this way, the sum of kernels above can be used to measure the relative nearness of each test point to the data points originating in one or the other of the sets to be discriminated. Note the fact that the set of pointsx{\displaystyle x}mapped into any hyperplane can be quite convoluted as a result, allowing much more complex discrimination between sets that are not convex at all in the original space. SVMs can be used to solve various real-world problems: The original SVM algorithm was invented byVladimir N. VapnikandAlexey Ya. Chervonenkisin 1964.[citation needed]In 1992, Bernhard Boser,Isabelle GuyonandVladimir Vapniksuggested a way to create nonlinear classifiers by applying thekernel trickto maximum-margin hyperplanes.[9]The "soft margin" incarnation, as is commonly used in software packages, was proposed byCorinna Cortesand Vapnik in 1993 and published in 1995.[1] We are given a training dataset ofn{\displaystyle n}points of the form(x1,y1),…,(xn,yn),{\displaystyle (\mathbf {x} _{1},y_{1}),\ldots ,(\mathbf {x} _{n},y_{n}),}where theyi{\displaystyle y_{i}}are either 1 or −1, each indicating the class to which the pointxi{\displaystyle \mathbf {x} _{i}}belongs. Eachxi{\displaystyle \mathbf {x} _{i}}is ap{\displaystyle p}-dimensionalrealvector. We want to find the "maximum-margin hyperplane" that divides the group of pointsxi{\displaystyle \mathbf {x} _{i}}for whichyi=1{\displaystyle y_{i}=1}from the group of points for whichyi=−1{\displaystyle y_{i}=-1}, which is defined so that the distance between the hyperplane and the nearest pointxi{\displaystyle \mathbf {x} _{i}}from either group is maximized. Anyhyperplanecan be written as the set of pointsx{\displaystyle \mathbf {x} }satisfyingwTx−b=0,{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} -b=0,}wherew{\displaystyle \mathbf {w} }is the (not necessarily normalized)normal vectorto the hyperplane. This is much likeHesse normal form, except thatw{\displaystyle \mathbf {w} }is not necessarily a unit vector. 
The parameterb‖w‖{\displaystyle {\tfrac {b}{\|\mathbf {w} \|}}}determines the offset of the hyperplane from the origin along the normal vectorw{\displaystyle \mathbf {w} }. Warning: most of the literature on the subject defines the bias so thatwTx+b=0.{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} +b=0.} If the training data islinearly separable, we can select two parallel hyperplanes that separate the two classes of data, so that the distance between them is as large as possible. The region bounded by these two hyperplanes is called the "margin", and the maximum-margin hyperplane is the hyperplane that lies halfway between them. With a normalized or standardized dataset, these hyperplanes can be described by the equations and Geometrically, the distance between these two hyperplanes is2‖w‖{\displaystyle {\tfrac {2}{\|\mathbf {w} \|}}},[21]so to maximize the distance between the planes we want to minimize‖w‖{\displaystyle \|\mathbf {w} \|}. The distance is computed using thedistance from a point to a planeequation. We also have to prevent data points from falling into the margin, we add the following constraint: for eachi{\displaystyle i}eitherwTxi−b≥1,ifyi=1,{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b\geq 1\,,{\text{ if }}y_{i}=1,}orwTxi−b≤−1,ifyi=−1.{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b\leq -1\,,{\text{ if }}y_{i}=-1.}These constraints state that each data point must lie on the correct side of the margin. This can be rewritten as We can put this together to get the optimization problem: minimizew,b12‖w‖2subject toyi(w⊤xi−b)≥1∀i∈{1,…,n}{\displaystyle {\begin{aligned}&{\underset {\mathbf {w} ,\;b}{\operatorname {minimize} }}&&{\frac {1}{2}}\|\mathbf {w} \|^{2}\\&{\text{subject to}}&&y_{i}(\mathbf {w} ^{\top }\mathbf {x} _{i}-b)\geq 1\quad \forall i\in \{1,\dots ,n\}\end{aligned}}} Thew{\displaystyle \mathbf {w} }andb{\displaystyle b}that solve this problem determine the final classifier,x↦sgn⁡(wTx−b){\displaystyle \mathbf {x} \mapsto \operatorname {sgn}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} -b)}, wheresgn⁡(⋅){\displaystyle \operatorname {sgn}(\cdot )}is thesign function. An important consequence of this geometric description is that the max-margin hyperplane is completely determined by thosexi{\displaystyle \mathbf {x} _{i}}that lie nearest to it (explained below). Thesexi{\displaystyle \mathbf {x} _{i}}are calledsupport vectors. To extend SVM to cases in which the data are not linearly separable, thehinge lossfunction is helpfulmax(0,1−yi(wTxi−b)).{\displaystyle \max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right).} Note thatyi{\displaystyle y_{i}}is thei-th target (i.e., in this case, 1 or −1), andwTxi−b{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b}is thei-th output. This function is zero if the constraint in(1)is satisfied, in other words, ifxi{\displaystyle \mathbf {x} _{i}}lies on the correct side of the margin. For data on the wrong side of the margin, the function's value is proportional to the distance from the margin. 
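As a minimal sketch of the hinge loss just described, applied to a single labelled point and a candidate hyperplane (w, b); the numbers are illustrative:

```python
import numpy as np

def hinge_loss(w, b, x, y):
    """max(0, 1 - y * (w.x - b)): zero when x is on the correct side of the margin,
    and growing linearly with the distance past the margin otherwise."""
    return max(0.0, 1.0 - y * (np.dot(w, x) - b))

w, b = np.array([2.0, -1.0]), 0.5
print(hinge_loss(w, b, x=np.array([1.0, 0.0]), y=+1))   # 0.0: correctly classified with margin
print(hinge_loss(w, b, x=np.array([0.0, 1.0]), y=+1))   # 2.5: on the wrong side of the margin
```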
The goal of the optimization then is to minimize: ‖w‖2+C[1n∑i=1nmax(0,1−yi(wTxi−b))],{\displaystyle \lVert \mathbf {w} \rVert ^{2}+C\left[{\frac {1}{n}}\sum _{i=1}^{n}\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right)\right],} where the parameterC>0{\displaystyle C>0}determines the trade-off between increasing the margin size and ensuring that thexi{\displaystyle \mathbf {x} _{i}}lie on the correct side of the margin (Note we can add a weight to either term in the equation above). By deconstructing the hinge loss, this optimization problem can be formulated into the following: minimizew,b,ζ‖w‖22+C∑i=1nζisubject toyi(w⊤xi−b)≥1−ζi,ζi≥0∀i∈{1,…,n}{\displaystyle {\begin{aligned}&{\underset {\mathbf {w} ,\;b,\;\mathbf {\zeta } }{\operatorname {minimize} }}&&\|\mathbf {w} \|_{2}^{2}+C\sum _{i=1}^{n}\zeta _{i}\\&{\text{subject to}}&&y_{i}(\mathbf {w} ^{\top }\mathbf {x} _{i}-b)\geq 1-\zeta _{i},\quad \zeta _{i}\geq 0\quad \forall i\in \{1,\dots ,n\}\end{aligned}}} Thus, for large values ofC{\displaystyle C}, it will behave similar to the hard-margin SVM, if the input data are linearly classifiable, but will still learn if a classification rule is viable or not. The original maximum-margin hyperplane algorithm proposed by Vapnik in 1963 constructed alinear classifier. However, in 1992,Bernhard Boser,Isabelle GuyonandVladimir Vapniksuggested a way to create nonlinear classifiers by applying thekernel trick(originally proposed by Aizerman et al.[22]) to maximum-margin hyperplanes.[9]The kernel trick, wheredot productsare replaced by kernels, is easily derived in the dual representation of the SVM problem. This allows the algorithm to fit the maximum-margin hyperplane in a transformedfeature space. The transformation may be nonlinear and the transformed space high-dimensional; although the classifier is a hyperplane in the transformed feature space, it may be nonlinear in the original input space. It is noteworthy that working in a higher-dimensional feature space increases thegeneralization errorof support vector machines, although given enough samples the algorithm still performs well.[23] Some common kernels include: The kernel is related to the transformφ(xi){\displaystyle \varphi (\mathbf {x} _{i})}by the equationk(xi,xj)=φ(xi)⋅φ(xj){\displaystyle k(\mathbf {x} _{i},\mathbf {x} _{j})=\varphi (\mathbf {x} _{i})\cdot \varphi (\mathbf {x} _{j})}. The valuewis also in the transformed space, withw=∑iαiyiφ(xi){\textstyle \mathbf {w} =\sum _{i}\alpha _{i}y_{i}\varphi (\mathbf {x} _{i})}. Dot products withwfor classification can again be computed by the kernel trick, i.e.w⋅φ(x)=∑iαiyik(xi,x){\textstyle \mathbf {w} \cdot \varphi (\mathbf {x} )=\sum _{i}\alpha _{i}y_{i}k(\mathbf {x} _{i},\mathbf {x} )}. Computing the (soft-margin) SVM classifier amounts to minimizing an expression of the form We focus on the soft-margin classifier since, as noted above, choosing a sufficiently small value forλ{\displaystyle \lambda }yields the hard-margin classifier for linearly classifiable input data. The classical approach, which involves reducing(2)to aquadratic programmingproblem, is detailed below. Then, more recent approaches such as sub-gradient descent and coordinate descent will be discussed. Minimizing(2)can be rewritten as a constrained optimization problem with a differentiable objective function in the following way. 
For eachi∈{1,…,n}{\displaystyle i\in \{1,\,\ldots ,\,n\}}we introduce a variableζi=max(0,1−yi(wTxi−b)){\displaystyle \zeta _{i}=\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right)}. Note thatζi{\displaystyle \zeta _{i}}is the smallest nonnegative number satisfyingyi(wTxi−b)≥1−ζi.{\displaystyle y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\geq 1-\zeta _{i}.} Thus we can rewrite the optimization problem as follows minimize1n∑i=1nζi+λ‖w‖2subject toyi(wTxi−b)≥1−ζiandζi≥0,for alli.{\displaystyle {\begin{aligned}&{\text{minimize }}{\frac {1}{n}}\sum _{i=1}^{n}\zeta _{i}+\lambda \|\mathbf {w} \|^{2}\\[0.5ex]&{\text{subject to }}y_{i}\left(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b\right)\geq 1-\zeta _{i}\,{\text{ and }}\,\zeta _{i}\geq 0,\,{\text{for all }}i.\end{aligned}}} This is called theprimalproblem. By solving for theLagrangian dualof the above problem, one obtains the simplified problem maximizef(c1…cn)=∑i=1nci−12∑i=1n∑j=1nyici(xiTxj)yjcj,subject to∑i=1nciyi=0,and0≤ci≤12nλfor alli.{\displaystyle {\begin{aligned}&{\text{maximize}}\,\,f(c_{1}\ldots c_{n})=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}(\mathbf {x} _{i}^{\mathsf {T}}\mathbf {x} _{j})y_{j}c_{j},\\&{\text{subject to }}\sum _{i=1}^{n}c_{i}y_{i}=0,\,{\text{and }}0\leq c_{i}\leq {\frac {1}{2n\lambda }}\;{\text{for all }}i.\end{aligned}}} This is called thedualproblem. Since the dual maximization problem is a quadratic function of theci{\displaystyle c_{i}}subject to linear constraints, it is efficiently solvable byquadratic programmingalgorithms. Here, the variablesci{\displaystyle c_{i}}are defined such that w=∑i=1nciyixi.{\displaystyle \mathbf {w} =\sum _{i=1}^{n}c_{i}y_{i}\mathbf {x} _{i}.} Moreover,ci=0{\displaystyle c_{i}=0}exactly whenxi{\displaystyle \mathbf {x} _{i}}lies on the correct side of the margin, and0<ci<(2nλ)−1{\displaystyle 0<c_{i}<(2n\lambda )^{-1}}whenxi{\displaystyle \mathbf {x} _{i}}lies on the margin's boundary. It follows thatw{\displaystyle \mathbf {w} }can be written as a linear combination of the support vectors. The offset,b{\displaystyle b}, can be recovered by finding anxi{\displaystyle \mathbf {x} _{i}}on the margin's boundary and solvingyi(wTxi−b)=1⟺b=wTxi−yi.{\displaystyle y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)=1\iff b=\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-y_{i}.} (Note thatyi−1=yi{\displaystyle y_{i}^{-1}=y_{i}}sinceyi=±1{\displaystyle y_{i}=\pm 1}.) Suppose now that we would like to learn a nonlinear classification rule which corresponds to a linear classification rule for the transformed data pointsφ(xi).{\displaystyle \varphi (\mathbf {x} _{i}).}Moreover, we are given a kernel functionk{\displaystyle k}which satisfiesk(xi,xj)=φ(xi)⋅φ(xj){\displaystyle k(\mathbf {x} _{i},\mathbf {x} _{j})=\varphi (\mathbf {x} _{i})\cdot \varphi (\mathbf {x} _{j})}. 
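As concrete examples of a kernel function k(x, y), a minimal sketch of three commonly used choices (linear, polynomial, and Gaussian RBF); the degree, offset and gamma values are illustrative hyperparameters:

```python
import numpy as np

def linear_kernel(x, y):
    return np.dot(x, y)

def polynomial_kernel(x, y, degree=3, c0=1.0):
    # (x.y + c0)^degree; degree and c0 are hyperparameters
    return (np.dot(x, y) + c0) ** degree

def rbf_kernel(x, y, gamma=0.5):
    # exp(-gamma * ||x - y||^2), the Gaussian radial basis function kernel
    return np.exp(-gamma * np.sum((x - y) ** 2))

x, y = np.array([1.0, 2.0]), np.array([2.0, 0.0])
print(linear_kernel(x, y), polynomial_kernel(x, y), rbf_kernel(x, y))
```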
We know the classification vectorw{\displaystyle \mathbf {w} }in the transformed space satisfies w=∑i=1nciyiφ(xi),{\displaystyle \mathbf {w} =\sum _{i=1}^{n}c_{i}y_{i}\varphi (\mathbf {x} _{i}),} where, theci{\displaystyle c_{i}}are obtained by solving the optimization problem maximizef(c1…cn)=∑i=1nci−12∑i=1n∑j=1nyici(φ(xi)⋅φ(xj))yjcj=∑i=1nci−12∑i=1n∑j=1nyicik(xi,xj)yjcjsubject to∑i=1nciyi=0,and0≤ci≤12nλfor alli.{\displaystyle {\begin{aligned}{\text{maximize}}\,\,f(c_{1}\ldots c_{n})&=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}(\varphi (\mathbf {x} _{i})\cdot \varphi (\mathbf {x} _{j}))y_{j}c_{j}\\&=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}k(\mathbf {x} _{i},\mathbf {x} _{j})y_{j}c_{j}\\{\text{subject to }}\sum _{i=1}^{n}c_{i}y_{i}&=0,\,{\text{and }}0\leq c_{i}\leq {\frac {1}{2n\lambda }}\;{\text{for all }}i.\end{aligned}}} The coefficientsci{\displaystyle c_{i}}can be solved for using quadratic programming, as before. Again, we can find some indexi{\displaystyle i}such that0<ci<(2nλ)−1{\displaystyle 0<c_{i}<(2n\lambda )^{-1}}, so thatφ(xi){\displaystyle \varphi (\mathbf {x} _{i})}lies on the boundary of the margin in the transformed space, and then solve b=wTφ(xi)−yi=[∑j=1ncjyjφ(xj)⋅φ(xi)]−yi=[∑j=1ncjyjk(xj,xi)]−yi.{\displaystyle {\begin{aligned}b=\mathbf {w} ^{\mathsf {T}}\varphi (\mathbf {x} _{i})-y_{i}&=\left[\sum _{j=1}^{n}c_{j}y_{j}\varphi (\mathbf {x} _{j})\cdot \varphi (\mathbf {x} _{i})\right]-y_{i}\\&=\left[\sum _{j=1}^{n}c_{j}y_{j}k(\mathbf {x} _{j},\mathbf {x} _{i})\right]-y_{i}.\end{aligned}}} Finally, z↦sgn⁡(wTφ(z)−b)=sgn⁡([∑i=1nciyik(xi,z)]−b).{\displaystyle \mathbf {z} \mapsto \operatorname {sgn}(\mathbf {w} ^{\mathsf {T}}\varphi (\mathbf {z} )-b)=\operatorname {sgn} \left(\left[\sum _{i=1}^{n}c_{i}y_{i}k(\mathbf {x} _{i},\mathbf {z} )\right]-b\right).} Recent algorithms for finding the SVM classifier include sub-gradient descent and coordinate descent. Both techniques have proven to offer significant advantages over the traditional approach when dealing with large, sparse datasets—sub-gradient methods are especially efficient when there are many training examples, and coordinate descent when the dimension of the feature space is high. Sub-gradient descentalgorithms for the SVM work directly with the expression f(w,b)=[1n∑i=1nmax(0,1−yi(wTxi−b))]+λ‖w‖2.{\displaystyle f(\mathbf {w} ,b)=\left[{\frac {1}{n}}\sum _{i=1}^{n}\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right)\right]+\lambda \|\mathbf {w} \|^{2}.} Note thatf{\displaystyle f}is aconvex functionofw{\displaystyle \mathbf {w} }andb{\displaystyle b}. As such, traditionalgradient descent(orSGD) methods can be adapted, where instead of taking a step in the direction of the function's gradient, a step is taken in the direction of a vector selected from the function'ssub-gradient. 
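A minimal sub-gradient descent sketch for the linear soft-margin objective f(w, b) above; the learning rate, regularisation strength and toy data are illustrative, and no attempt is made at the step-size schedules used by practical solvers such as PEGASOS:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimise (1/n) * sum_i max(0, 1 - y_i (w.x_i - b)) + lam * ||w||^2 by sub-gradient descent."""
    n, p = X.shape
    w, b = np.zeros(p), 0.0
    for _ in range(epochs):
        grad_w, grad_b = 2 * lam * w, 0.0        # gradient of the regularisation term
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) - b) < 1:     # point violates the margin:
                grad_w -= yi * xi / n            # sub-gradient of the hinge term w.r.t. w
                grad_b += yi / n                 # sub-gradient of the hinge term w.r.t. b
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy linearly separable data with labels in {-1, +1}.
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
print(np.sign(X @ w - b))   # expected: [ 1.  1. -1. -1.]
```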
This approach has the advantage that, for certain implementations, the number of iterations does not scale withn{\displaystyle n}, the number of data points.[24] Coordinate descentalgorithms for the SVM work from the dual problem maximizef(c1…cn)=∑i=1nci−12∑i=1n∑j=1nyici(xi⋅xj)yjcj,subject to∑i=1nciyi=0,and0≤ci≤12nλfor alli.{\displaystyle {\begin{aligned}&{\text{maximize}}\,\,f(c_{1}\ldots c_{n})=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}(x_{i}\cdot x_{j})y_{j}c_{j},\\&{\text{subject to }}\sum _{i=1}^{n}c_{i}y_{i}=0,\,{\text{and }}0\leq c_{i}\leq {\frac {1}{2n\lambda }}\;{\text{for all }}i.\end{aligned}}} For eachi∈{1,…,n}{\displaystyle i\in \{1,\,\ldots ,\,n\}}, iteratively, the coefficientci{\displaystyle c_{i}}is adjusted in the direction of∂f/∂ci{\displaystyle \partial f/\partial c_{i}}. Then, the resulting vector of coefficients(c1′,…,cn′){\displaystyle (c_{1}',\,\ldots ,\,c_{n}')}is projected onto the nearest vector of coefficients that satisfies the given constraints. (Typically Euclidean distances are used.) The process is then repeated until a near-optimal vector of coefficients is obtained. The resulting algorithm is extremely fast in practice, although few performance guarantees have been proven.[25] The soft-margin support vector machine described above is an example of anempirical risk minimization(ERM) algorithm for thehinge loss. Seen this way, support vector machines belong to a natural class of algorithms for statistical inference, and many of its unique features are due to the behavior of the hinge loss. This perspective can provide further insight into how and why SVMs work, and allow us to better analyze their statistical properties. In supervised learning, one is given a set of training examplesX1…Xn{\displaystyle X_{1}\ldots X_{n}}with labelsy1…yn{\displaystyle y_{1}\ldots y_{n}}, and wishes to predictyn+1{\displaystyle y_{n+1}}givenXn+1{\displaystyle X_{n+1}}. To do so one forms ahypothesis,f{\displaystyle f}, such thatf(Xn+1){\displaystyle f(X_{n+1})}is a "good" approximation ofyn+1{\displaystyle y_{n+1}}. A "good" approximation is usually defined with the help of aloss function,ℓ(y,z){\displaystyle \ell (y,z)}, which characterizes how badz{\displaystyle z}is as a prediction ofy{\displaystyle y}. We would then like to choose a hypothesis that minimizes theexpected risk: ε(f)=E[ℓ(yn+1,f(Xn+1))].{\displaystyle \varepsilon (f)=\mathbb {E} \left[\ell (y_{n+1},f(X_{n+1}))\right].} In most cases, we don't know the joint distribution ofXn+1,yn+1{\displaystyle X_{n+1},\,y_{n+1}}outright. In these cases, a common strategy is to choose the hypothesis that minimizes theempirical risk: ε^(f)=1n∑k=1nℓ(yk,f(Xk)).{\displaystyle {\hat {\varepsilon }}(f)={\frac {1}{n}}\sum _{k=1}^{n}\ell (y_{k},f(X_{k})).} Under certain assumptions about the sequence of random variablesXk,yk{\displaystyle X_{k},\,y_{k}}(for example, that they are generated by a finite Markov process), if the set of hypotheses being considered is small enough, the minimizer of the empirical risk will closely approximate the minimizer of the expected risk asn{\displaystyle n}grows large. This approach is calledempirical risk minimization,or ERM. In order for the minimization problem to have a well-defined solution, we have to place constraints on the setH{\displaystyle {\mathcal {H}}}of hypotheses being considered. 
IfH{\displaystyle {\mathcal {H}}}is anormed space(as is the case for SVM), a particularly effective technique is to consider only those hypothesesf{\displaystyle f}for which‖f‖H<k{\displaystyle \lVert f\rVert _{\mathcal {H}}<k}. This is equivalent to imposing aregularization penaltyR(f)=λk‖f‖H{\displaystyle {\mathcal {R}}(f)=\lambda _{k}\lVert f\rVert _{\mathcal {H}}}, and solving the new optimization problem f^=argminf∈Hε^(f)+R(f).{\displaystyle {\hat {f}}=\mathrm {arg} \min _{f\in {\mathcal {H}}}{\hat {\varepsilon }}(f)+{\mathcal {R}}(f).} This approach is calledTikhonov regularization. More generally,R(f){\displaystyle {\mathcal {R}}(f)}can be some measure of the complexity of the hypothesisf{\displaystyle f}, so that simpler hypotheses are preferred. Recall that the (soft-margin) SVM classifierw^,b:x↦sgn⁡(w^Tx−b){\displaystyle {\hat {\mathbf {w} }},b:\mathbf {x} \mapsto \operatorname {sgn}({\hat {\mathbf {w} }}^{\mathsf {T}}\mathbf {x} -b)}is chosen to minimize the following expression: [1n∑i=1nmax(0,1−yi(wTx−b))]+λ‖w‖2.{\displaystyle \left[{\frac {1}{n}}\sum _{i=1}^{n}\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} -b)\right)\right]+\lambda \|\mathbf {w} \|^{2}.} In light of the above discussion, we see that the SVM technique is equivalent to empirical risk minimization with Tikhonov regularization, where in this case the loss function is thehinge loss ℓ(y,z)=max(0,1−yz).{\displaystyle \ell (y,z)=\max \left(0,1-yz\right).} From this perspective, SVM is closely related to other fundamentalclassification algorithmssuch asregularized least-squaresandlogistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with thesquare-loss,ℓsq(y,z)=(y−z)2{\displaystyle \ell _{sq}(y,z)=(y-z)^{2}}; logistic regression employs thelog-loss, ℓlog(y,z)=ln⁡(1+e−yz).{\displaystyle \ell _{\log }(y,z)=\ln(1+e^{-yz}).} The difference between the hinge loss and these other loss functions is best stated in terms oftarget functions -the function that minimizes expected risk for a given pair of random variablesX,y{\displaystyle X,\,y}. In particular, letyx{\displaystyle y_{x}}denotey{\displaystyle y}conditional on the event thatX=x{\displaystyle X=x}. In the classification setting, we have: yx={1with probabilitypx−1with probability1−px{\displaystyle y_{x}={\begin{cases}1&{\text{with probability }}p_{x}\\-1&{\text{with probability }}1-p_{x}\end{cases}}} The optimal classifier is therefore: f∗(x)={1ifpx≥1/2−1otherwise{\displaystyle f^{*}(x)={\begin{cases}1&{\text{if }}p_{x}\geq 1/2\\-1&{\text{otherwise}}\end{cases}}} For the square-loss, the target function is the conditional expectation function,fsq(x)=E[yx]{\displaystyle f_{sq}(x)=\mathbb {E} \left[y_{x}\right]}; For the logistic loss, it's the logit function,flog(x)=ln⁡(px/(1−px)){\displaystyle f_{\log }(x)=\ln \left(p_{x}/({1-p_{x}})\right)}. While both of these target functions yield the correct classifier, assgn⁡(fsq)=sgn⁡(flog)=f∗{\displaystyle \operatorname {sgn}(f_{sq})=\operatorname {sgn}(f_{\log })=f^{*}}, they give us more information than we need. In fact, they give us enough information to completely describe the distribution ofyx{\displaystyle y_{x}}. On the other hand, one can check that the target function for the hinge loss isexactlyf∗{\displaystyle f^{*}}. 
Thus, in a sufficiently rich hypothesis space—or equivalently, for an appropriately chosen kernel—the SVM classifier will converge to the simplest function (in terms ofR{\displaystyle {\mathcal {R}}}) that correctly classifies the data. This extends the geometric interpretation of SVM—for linear classification, the empirical risk is minimized by any function whose margins lie between the support vectors, and the simplest of these is the max-margin classifier.[26] SVMs belong to a family of generalizedlinear classifiersand can be interpreted as an extension of theperceptron.[27]They can also be considered a special case ofTikhonov regularization. A special property is that they simultaneously minimize the empiricalclassification errorand maximize thegeometric margin; hence they are also known asmaximummargin classifiers. A comparison of the SVM to other classifiers has been made by Meyer, Leisch and Hornik.[28] The effectiveness of SVM depends on the selection of kernel, the kernel's parameters, and soft margin parameterλ{\displaystyle \lambda }. A common choice is a Gaussian kernel, which has a single parameterγ{\displaystyle \gamma }. The best combination ofλ{\displaystyle \lambda }andγ{\displaystyle \gamma }is often selected by agrid searchwith exponentially growing sequences ofλ{\displaystyle \lambda }andγ{\displaystyle \gamma }, for example,λ∈{2−5,2−3,…,213,215}{\displaystyle \lambda \in \{2^{-5},2^{-3},\dots ,2^{13},2^{15}\}};γ∈{2−15,2−13,…,21,23}{\displaystyle \gamma \in \{2^{-15},2^{-13},\dots ,2^{1},2^{3}\}}. Typically, each combination of parameter choices is checked usingcross validation, and the parameters with best cross-validation accuracy are picked. Alternatively, recent work inBayesian optimizationcan be used to selectλ{\displaystyle \lambda }andγ{\displaystyle \gamma }, often requiring the evaluation of far fewer parameter combinations than grid search. The final model, which is used for testing and for classifying new data, is then trained on the whole training set using the selected parameters.[29] Potential drawbacks of the SVM include the following aspects: Multiclass SVM aims to assign labels to instances by using support vector machines, where the labels are drawn from a finite set of several elements. The dominant approach for doing so is to reduce the singlemulticlass probleminto multiplebinary classificationproblems.[30]Common methods for such reduction include:[30][31] Crammer and Singer proposed a multiclass SVM method which casts themulticlass classificationproblem into a single optimization problem, rather than decomposing it into multiple binary classification problems.[34]See also Lee, Lin and Wahba[35][36]and Van den Burg and Groenen.[37] Transductive support vector machines extend SVMs in that they could also treat partially labeled data insemi-supervised learningby following the principles oftransduction. Here, in addition to the training setD{\displaystyle {\mathcal {D}}}, the learner is also given a set D⋆={xi⋆∣xi⋆∈Rp}i=1k{\displaystyle {\mathcal {D}}^{\star }=\{\mathbf {x} _{i}^{\star }\mid \mathbf {x} _{i}^{\star }\in \mathbb {R} ^{p}\}_{i=1}^{k}} of test examples to be classified. 
Formally, a transductive support vector machine is defined by the following primal optimization problem:[38] Minimize (inw,b,y⋆{\displaystyle \mathbf {w} ,b,\mathbf {y} ^{\star }}) 12‖w‖2{\displaystyle {\frac {1}{2}}\|\mathbf {w} \|^{2}} subject to (for anyi=1,…,n{\displaystyle i=1,\dots ,n}and anyj=1,…,k{\displaystyle j=1,\dots ,k}) yi(w⋅xi−b)≥1,yj⋆(w⋅xj⋆−b)≥1,{\displaystyle {\begin{aligned}&y_{i}(\mathbf {w} \cdot \mathbf {x} _{i}-b)\geq 1,\\&y_{j}^{\star }(\mathbf {w} \cdot \mathbf {x} _{j}^{\star }-b)\geq 1,\end{aligned}}} and yj⋆∈{−1,1}.{\displaystyle y_{j}^{\star }\in \{-1,1\}.} Transductive support vector machines were introduced by Vladimir N. Vapnik in 1998. Structured support-vector machine is an extension of the traditional SVM model. While the SVM model is primarily designed for binary classification, multiclass classification, and regression tasks, structured SVM broadens its application to handle general structured output labels, for example parse trees, classification with taxonomies, sequence alignment and many more.[39] A version of SVM forregressionwas proposed in 1996 byVladimir N. Vapnik, Harris Drucker, Christopher J. C. Burges, Linda Kaufman and Alexander J. Smola.[40]This method is called support vector regression (SVR). The model produced by support vector classification (as described above) depends only on a subset of the training data, because the cost function for building the model does not care about training points that lie beyond the margin. Analogously, the model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data close to the model prediction. Another SVM version known asleast-squares support vector machine(LS-SVM) has been proposed by Suykens and Vandewalle.[41] Training the original SVR means solving[42] wherexi{\displaystyle x_{i}}is a training sample with target valueyi{\displaystyle y_{i}}. The inner product plus intercept⟨w,xi⟩+b{\displaystyle \langle w,x_{i}\rangle +b}is the prediction for that sample, andε{\displaystyle \varepsilon }is a free parameter that serves as a threshold: all predictions have to be within anε{\displaystyle \varepsilon }range of the true predictions. Slack variables are usually added into the above to allow for errors and to allow approximation in the case the above problem is infeasible. In 2011 it was shown by Polson and Scott that the SVM admits aBayesianinterpretation through the technique ofdata augmentation.[43]In this approach the SVM is viewed as agraphical model(where the parameters are connected via probability distributions). This extended view allows the application ofBayesiantechniques to SVMs, such as flexible feature modeling, automatichyperparametertuning, andpredictive uncertainty quantification. Recently, a scalable version of the Bayesian SVM was developed byFlorian Wenzel, enabling the application of Bayesian SVMs tobig data.[44]Florian Wenzel developed two different versions, a variational inference (VI) scheme for the Bayesian kernel support vector machine (SVM) and a stochastic version (SVI) for the linear Bayesian SVM.[45] The parameters of the maximum-margin hyperplane are derived by solving the optimization. There exist several specialized algorithms for quickly solving thequadratic programming(QP) problem that arises from SVMs, mostly relying on heuristics for breaking the problem down into smaller, more manageable chunks. 
Another approach is to use aninterior-point methodthat usesNewton-like iterations to find a solution of theKarush–Kuhn–Tucker conditionsof the primal and dual problems.[46]Instead of solving a sequence of broken-down problems, this approach directly solves the problem altogether. To avoid solving a linear system involving the large kernel matrix, a low-rank approximation to the matrix is often used in the kernel trick. Another common method is Platt'ssequential minimal optimization(SMO) algorithm, which breaks the problem down into 2-dimensional sub-problems that are solved analytically, eliminating the need for a numerical optimization algorithm and matrix storage. This algorithm is conceptually simple, easy to implement, generally faster, and has better scaling properties for difficult SVM problems.[47] The special case of linear support vector machines can be solved more efficiently by the same kind of algorithms used to optimize its close cousin,logistic regression; this class of algorithms includessub-gradient descent(e.g., PEGASOS[48]) andcoordinate descent(e.g., LIBLINEAR[49]). LIBLINEAR has some attractive training-time properties. Each convergence iteration takes time linear in the time taken to read the train data, and the iterations also have aQ-linear convergenceproperty, making the algorithm extremely fast. The general kernel SVMs can also be solved more efficiently usingsub-gradient descent(e.g. P-packSVM[50]), especially whenparallelizationis allowed. Kernel SVMs are available in many machine-learning toolkits, includingLIBSVM,MATLAB,SAS, SVMlight,kernlab,scikit-learn,Shogun,Weka,Shark,JKernelMachines,OpenCVand others. Preprocessing of data (standardization) is highly recommended to enhance accuracy of classification.[51]There are a few methods of standardization, such as min-max, normalization by decimal scaling, Z-score.[52]Subtraction of mean and division by variance of each feature is usually used for SVM.[53]
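Combining the standardisation advice with the grid-search procedure described earlier, a minimal scikit-learn sketch (the toy data and parameter grid are illustrative; note that scikit-learn's SVC exposes the soft-margin trade-off as C rather than λ):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy binary classification data standing in for a real problem.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Z-score standardisation followed by an RBF-kernel SVM.
pipeline = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Exponentially spaced grid over the soft-margin parameter C and kernel width gamma,
# evaluated by cross-validation as described above.
param_grid = {
    "svc__C": [2.0 ** k for k in range(-5, 16, 2)],
    "svc__gamma": [2.0 ** k for k in range(-15, 4, 2)],
}
search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```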
https://en.wikipedia.org/wiki/Support_vector_machine#Hard_margin
k-means clusteringis a method ofvector quantization, originally fromsignal processing, that aims topartitionnobservations intokclusters in which each observation belongs to theclusterwith the nearestmean(cluster centers or clustercentroid), serving as a prototype of the cluster. This results in a partitioning of the data space intoVoronoi cells.k-means clustering minimizes within-cluster variances (squared Euclidean distances), but not regular Euclidean distances, which would be the more difficultWeber problem: the mean optimizes squared errors, whereas only thegeometric medianminimizes Euclidean distances. For instance, better Euclidean solutions can be found usingk-mediansandk-medoids. The problem is computationally difficult (NP-hard); however, efficientheuristic algorithmsconverge quickly to alocal optimum. These are usually similar to theexpectation–maximization algorithmformixturesofGaussian distributionsvia an iterative refinement approach employed by bothk-meansandGaussian mixture modeling. They both use cluster centers to model the data; however,k-means clustering tends to find clusters of comparable spatial extent, while the Gaussian mixture model allows clusters to have different shapes. The unsupervisedk-means algorithm has a loose relationship to thek-nearest neighbor classifier, a popular supervisedmachine learningtechnique for classification that is often confused withk-means due to the name. Applying the 1-nearest neighbor classifier to the cluster centers obtained byk-means classifies new data into the existing clusters. This is known asnearest centroid classifierorRocchio algorithm. Given a set of observations(x1,x2, ...,xn), where each observation is ad{\displaystyle d}-dimensional real vector,k-means clustering aims to partition thenobservations intok(≤n) setsS= {S1,S2, ...,Sk}so as to minimize the within-cluster sum of squares (WCSS) (i.e.variance). Formally, the objective is to find:argminS⁡∑i=1k∑x∈Si‖x−μi‖2=argminS⁡∑i=1k|Si|Var⁡Si{\displaystyle \mathop {\operatorname {arg\,min} } _{\mathbf {S} }\sum _{i=1}^{k}\sum _{\mathbf {x} \in S_{i}}\left\|\mathbf {x} -{\boldsymbol {\mu }}_{i}\right\|^{2}=\mathop {\operatorname {arg\,min} } _{\mathbf {S} }\sum _{i=1}^{k}|S_{i}|\operatorname {Var} S_{i}}whereμiis the mean (also called centroid) of points inSi{\displaystyle S_{i}}, i.e.μi=1|Si|∑x∈Six,{\displaystyle {\boldsymbol {\mu _{i}}}={\frac {1}{|S_{i}|}}\sum _{\mathbf {x} \in S_{i}}\mathbf {x} ,}|Si|{\displaystyle |S_{i}|}is the size ofSi{\displaystyle S_{i}}, and‖⋅‖{\displaystyle \|\cdot \|}is the usualL2norm. This is equivalent to minimizing the pairwise squared deviations of points in the same cluster:argminS⁡∑i=1k1|Si|∑x,y∈Si‖x−y‖2{\displaystyle \mathop {\operatorname {arg\,min} } _{\mathbf {S} }\sum _{i=1}^{k}\,{\frac {1}{|S_{i}|}}\,\sum _{\mathbf {x} ,\mathbf {y} \in S_{i}}\left\|\mathbf {x} -\mathbf {y} \right\|^{2}}The equivalence can be deduced from identity|Si|∑x∈Si‖x−μi‖2=12∑x,y∈Si‖x−y‖2{\textstyle |S_{i}|\sum _{\mathbf {x} \in S_{i}}\left\|\mathbf {x} -{\boldsymbol {\mu }}_{i}\right\|^{2}={\frac {1}{2}}\sum _{\mathbf {x} ,\mathbf {y} \in S_{i}}\left\|\mathbf {x} -\mathbf {y} \right\|^{2}}. Since the total variance is constant, this is equivalent to maximizing the sum of squared deviations between points indifferentclusters (between-cluster sum of squares, BCSS).[1]This deterministic relationship is also related to thelaw of total variancein probability theory. 
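A minimal sketch of the within-cluster sum of squares (WCSS) objective defined above, evaluated for a given assignment of points to clusters; the points and labels are illustrative:

```python
import numpy as np

def wcss(X, labels):
    """Within-cluster sum of squares: for each cluster, the sum of squared distances
    from its points to the cluster mean, summed over all clusters."""
    total = 0.0
    for k in np.unique(labels):
        cluster = X[labels == k]
        centroid = cluster.mean(axis=0)
        total += np.sum((cluster - centroid) ** 2)
    return total

X = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [9.0, 7.5]])
labels = np.array([0, 0, 1, 1])
print(wcss(X, labels))   # small, because each point is close to its cluster mean
```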
The term "k-means" was first used by James MacQueen in 1967,[2]though the idea goes back toHugo Steinhausin 1956.[3]The standard algorithm was first proposed by Stuart Lloyd ofBell Labsin 1957 as a technique forpulse-code modulation, although it was not published as a journal article until 1982.[4]In 1965, Edward W. Forgy published essentially the same method, which is why it is sometimes referred to as the Lloyd–Forgy algorithm.[5] The most common algorithm uses an iterative refinement technique. Due to its ubiquity, it is often called "thek-means algorithm"; it is also referred to asLloyd's algorithm, particularly in the computer science community. It is sometimes also referred to as "naïvek-means", because there exist much faster alternatives.[6] Given an initial set ofkmeansm1(1), ...,mk(1)(see below), the algorithm proceeds by alternating between two steps:[7] The objective function ink-means is the WCSS (within cluster sum of squares). After each iteration, the WCSS decreases and so we have a nonnegative monotonically decreasing sequence. This guarantees that thek-means always converges, but not necessarily to the global optimum. The algorithm has converged when the assignments no longer change or equivalently, when the WCSS has become stable. The algorithm is not guaranteed to find the optimum.[9] The algorithm is often presented as assigning objects to the nearest cluster by distance. Using a different distance function other than (squared) Euclidean distance may prevent the algorithm from converging. Various modifications ofk-means such as sphericalk-means andk-medoidshave been proposed to allow using other distance measures. The below pseudocode outlines the implementation of the standardk-means clustering algorithm. Initialization of centroids, distance metric between points and centroids, and the calculation of new centroids are design choices and will vary with different implementations. In this example pseudocode,argminis used to find the index of the minimum value. Commonly used initialization methods are Forgy and Random Partition.[10]The Forgy method randomly chooseskobservations from the dataset and uses these as the initial means. The Random Partition method first randomly assigns a cluster to each observation and then proceeds to the update step, thus computing the initial mean to be the centroid of the cluster's randomly assigned points. The Forgy method tends to spread the initial means out, while Random Partition places all of them close to the center of the data set. According to Hamerly et al.,[10]the Random Partition method is generally preferable for algorithms such as thek-harmonic means and fuzzyk-means. For expectation maximization and standardk-means algorithms, the Forgy method of initialization is preferable. A comprehensive study by Celebi et al.,[11]however, found that popular initialization methods such as Forgy, Random Partition, and Maximin often perform poorly, whereas Bradley and Fayyad's approach[12]performs "consistently" in "the best group" andk-means++performs "generally well". The algorithm does not guarantee convergence to the global optimum. The result may depend on the initial clusters. As the algorithm is usually fast, it is common to run it multiple times with different starting conditions. 
However, worst-case performance can be slow: in particular, certain point sets, even in two dimensions, converge in exponential time, that is 2Ω(n){\displaystyle 2^{\Omega (n)}}.[13]These point sets do not seem to arise in practice: this is corroborated by the fact that thesmoothedrunning time ofk-means is polynomial.[14] The "assignment" step is referred to as the "expectation step", while the "update step" is a maximization step, making this algorithm a variant of thegeneralizedexpectation–maximization algorithm. Finding the optimal solution to thek-means clustering problem for observations inddimensions is NP-hard in general. Thus, a variety ofheuristic algorithmssuch as Lloyd's algorithm given above are generally used. The running time of Lloyd's algorithm (and most variants) isO(nkdi){\displaystyle O(nkdi)},[9][19]where n is the number of points, k the number of clusters, d the dimensionality, and i the number of iterations needed until convergence. On data that does have a clustering structure, the number of iterations until convergence is often small, and results only improve slightly after the first dozen iterations. Lloyd's algorithm is therefore often considered to be of "linear" complexity in practice, although it is in theworst casesuperpolynomial when performed until convergence.[20] Lloyd's algorithm is the standard approach for this problem. However, it spends a lot of processing time computing the distances between each of thekcluster centers and thendata points. Since points usually stay in the same clusters after a few iterations, much of this work is unnecessary, making the naïve implementation very inefficient. Some implementations use caching and the triangle inequality in order to create bounds and accelerate Lloyd's algorithm.[22][9][23][24][25][26] Finding the optimal number of clusters (k) fork-means clustering is a crucial step to ensure that the clustering results are meaningful and useful.[27]Several techniques are available to determine a suitable number of clusters, such as the elbow method and silhouette analysis. Hartiganand Wong's method[9]provides a variation of thek-means algorithm which progresses towards a local minimum of the minimum sum-of-squares problem with different solution updates. The method is alocal searchthat iteratively attempts to relocate a sample into a different cluster as long as this process improves the objective function. When no sample can be relocated into a different cluster with an improvement of the objective, the method stops (in a local minimum). As with the classicalk-means, the approach remains a heuristic since it does not necessarily guarantee that the final solution is globally optimal. Letφ(Sj){\displaystyle \varphi (S_{j})}be the individual cost ofSj{\displaystyle S_{j}}defined by∑x∈Sj(x−μj)2{\textstyle \sum _{x\in S_{j}}(x-\mu _{j})^{2}}, withμj{\displaystyle \mu _{j}}the center of the cluster. Different move acceptance strategies can be used. In afirst-improvementstrategy, any improving relocation can be applied, whereas in abest-improvementstrategy, all possible relocations are iteratively tested and only the best is applied at each iteration. The former approach favors speed, whereas the latter approach generally favors solution quality at the expense of additional computational time. 
The functionΔ{\displaystyle \Delta }used to calculate the result of a relocation can also be efficiently evaluated by using equality[45] Δ(x,n,m)=∣Sn∣∣Sn∣−1⋅‖μn−x‖2−∣Sm∣∣Sm∣+1⋅‖μm−x‖2.{\displaystyle \Delta (x,n,m)={\frac {\mid S_{n}\mid }{\mid S_{n}\mid -1}}\cdot \lVert \mu _{n}-x\rVert ^{2}-{\frac {\mid S_{m}\mid }{\mid S_{m}\mid +1}}\cdot \lVert \mu _{m}-x\rVert ^{2}.} The classicalk-means algorithm and its variations are known to only converge to local minima of the minimum-sum-of-squares clustering problem defined asargminS⁡∑i=1k∑x∈Si‖x−μi‖2.{\displaystyle \mathop {\operatorname {arg\,min} } _{\mathbf {S} }\sum _{i=1}^{k}\sum _{\mathbf {x} \in S_{i}}\left\|\mathbf {x} -{\boldsymbol {\mu }}_{i}\right\|^{2}.}Many studies have attempted to improve the convergence behavior of the algorithm and maximize the chances of attaining the global optimum (or at least, local minima of better quality). Initialization and restart techniques discussed in the previous sections are one alternative to find better solutions. More recently, global optimization algorithms based onbranch-and-boundandsemidefinite programminghave produced ‘’provenly optimal’’ solutions for datasets with up to 4,177 entities and 20,531 features.[46]As expected, due to theNP-hardnessof the subjacent optimization problem, the computational time of optimal algorithms fork-means quickly increases beyond this size. Optimal solutions for small- and medium-scale still remain valuable as a benchmark tool, to evaluate the quality of other heuristics. To find high-quality local minima within a controlled computational time but without optimality guarantees, other works have exploredmetaheuristicsand otherglobal optimizationtechniques, e.g., based on incremental approaches and convex optimization,[47]random swaps[48](i.e.,iterated local search),variable neighborhood search[49]andgenetic algorithms.[50][51]It is indeed known that finding better local minima of the minimum sum-of-squares clustering problem can make the difference between failure and success to recover cluster structures in feature spaces of high dimension.[51] Three key features ofk-means that make it efficient are often regarded as its biggest drawbacks: A key limitation ofk-means is its cluster model. The concept is based on spherical clusters that are separable so that the mean converges towards the cluster center. The clusters are expected to be of similar size, so that the assignment to the nearest cluster center is the correct assignment. When for example applyingk-means with a value ofk=3{\displaystyle k=3}onto the well-knownIris flower data set, the result often fails to separate the threeIrisspecies contained in the data set. Withk=2{\displaystyle k=2}, the two visible clusters (one containing two species) will be discovered, whereas withk=3{\displaystyle k=3}one of the two clusters will be split into two even parts. In fact,k=2{\displaystyle k=2}is more appropriate for this data set, despite the data set's containing 3classes. As with any other clustering algorithm, thek-means result makes assumptions that the data satisfy certain criteria. It works well on some data sets, and fails on others. The result ofk-means can be seen as theVoronoi cellsof the cluster means. Since data is split halfway between cluster means, this can lead to suboptimal splits as can be seen in the "mouse" example. The Gaussian models used by theexpectation–maximization algorithm(arguably a generalization ofk-means) are more flexible by having both variances and covariances. 
The EM result is thus able to accommodate clusters of variable size much better thank-means as well as correlated clusters (not in this example). In counterpart, EM requires the optimization of a larger number of free parameters and poses some methodological issues due to vanishing clusters or badly-conditioned covariance matrices.k-means is closely related to nonparametricBayesian modeling.[53] k-means clustering is rather easy to apply to even large data sets, particularly when using heuristics such asLloyd's algorithm. It has been successfully used inmarket segmentation,computer vision, andastronomyamong many other domains. It often is used as a preprocessing step for other algorithms, for example to find a starting configuration. Vector quantization, a technique commonly used in signal processing and computer graphics, involves reducing thecolor paletteof an image to a fixed number of colors, known ask. One popular method for achievingvector quantizationis throughk-means clustering. In this process,k-means is applied to the color space of an image to partition it into k clusters, with each cluster representing a distinct color in the image. This technique is particularly useful in image segmentation tasks, where it helps identify and group similar colors together. Example: In the field ofcomputer graphics,k-means clustering is often employed forcolor quantizationin image compression. By reducing the number of colors used to represent an image, file sizes can be significantly reduced without significant loss of visual quality. For instance, consider an image with millions of colors. By applyingk-means clustering withkset to a smaller number, the image can be represented using a more limitedcolor palette, resulting in a compressed version that consumes less storage space and bandwidth. Other uses of vector quantization includenon-random sampling, ask-means can easily be used to choosekdifferent but prototypical objects from a large data set for further analysis. Cluster analysis, a fundamental task in data mining andmachine learning, involves grouping a set of data points into clusters based on their similarity.k-means clustering is a popular algorithm used for partitioning data into k clusters, where each cluster is represented by its centroid. However, the purek-means algorithm is not very flexible, and as such is of limited use (except for when vector quantization as above is actually the desired use case). In particular, the parameterkis known to be hard to choose (as discussed above) when not given by external constraints. Another limitation is that it cannot be used with arbitrary distance functions or on non-numerical data. For these use cases, many other algorithms are superior. Example: In marketing,k-means clustering is frequently employed formarket segmentation, where customers with similar characteristics or behaviors are grouped together. For instance, a retail company may usek-means clustering to segment its customer base into distinct groups based on factors such as purchasing behavior, demographics, and geographic location. These customer segments can then be targeted with tailored marketing strategies and product offerings to maximize sales and customer satisfaction. k-means clustering has been used as afeature learning(ordictionary learning) step, in either (semi-)supervised learningorunsupervised learning.[54]The basic approach is first to train ak-means clustering representation, using the input training data (which need not be labelled). 
Then, to project any input datum into the new feature space, an "encoding" function, such as the thresholded matrix-product of the datum with the centroid locations, computes the distance from the datum to each centroid, or simply an indicator function for the nearest centroid,[54][55]or some smooth transformation of the distance.[56]Alternatively, transforming the sample-cluster distance through aGaussian RBF, obtains the hidden layer of aradial basis function network.[57] This use ofk-means has been successfully combined with simple,linear classifiersfor semi-supervised learning inNLP(specifically fornamed-entity recognition)[58]and incomputer vision. On an object recognition task, it was found to exhibit comparable performance with more sophisticated feature learning approaches such asautoencodersandrestricted Boltzmann machines.[56]However, it generally requires more data, for equivalent performance, because each data point only contributes to one "feature".[54] Example: Innatural language processing(NLP),k-means clustering has been integrated with simple linear classifiers for semi-supervised learning tasks such asnamed-entity recognition(NER). By first clustering unlabeled text data usingk-means, meaningful features can be extracted to improve the performance of NER models. For instance,k-means clustering can be applied to identify clusters of words or phrases that frequently co-occur in the input text, which can then be used as features for training the NER model. This approach has been shown to achieve comparable performance with more complexfeature learningtechniques such asautoencodersand restrictedBoltzmann machines, albeit with a greater requirement for labeled data. Recent advancements in the application ofk-means clustering include improvements in initialization techniques, such as the use ofk-means++initialization to select initial cluster centroids in a more effective manner. Additionally, researchers have explored the integration ofk-means clustering with deep learning methods, such asconvolutional neural networks(CNNs) andrecurrent neural networks(RNNs), to enhance the performance of various tasks incomputer vision,natural language processing, and other domains. The slow "standard algorithm" fork-means clustering, and its associatedexpectation–maximization algorithm, is a special case of a Gaussian mixture model, specifically, the limiting case when fixing all covariances to be diagonal, equal and have infinitesimal small variance.[59]: 850Instead of small variances, a hard cluster assignment can also be used to show another equivalence ofk-means clustering to a special case of "hard" Gaussian mixture modelling.[60]: 354, 11.4.2.5This does not mean that it is efficient to use Gaussian mixture modelling to computek-means, but just that there is a theoretical relationship, and that Gaussian mixture modelling can be interpreted as a generalization ofk-means; on the contrary, it has been suggested to usek-means clustering to find starting points for Gaussian mixture modelling on difficult data.[59]: 849 Another generalization of thek-means algorithm is thek-SVD algorithm, which estimates data points as a sparse linear combination of "codebook vectors".k-means corresponds to the special case of using a single codebook vector, with a weight of 1.[61] The relaxed solution ofk-means clustering, specified by the cluster indicators, is given by principal component analysis (PCA).[62][63]The intuition is thatk-means describe spherically shaped (ball-like) clusters. 
If the data have two clusters, the line connecting the two centroids is the best 1-dimensional projection direction, which is also the first PCA direction. Cutting the line at the center of mass separates the clusters (this is the continuous relaxation of the discrete cluster indicator). If the data have three clusters, the 2-dimensional plane spanned by the three cluster centroids is the best 2-D projection. This plane is also defined by the first two PCA dimensions. Well-separated clusters are effectively modelled by ball-shaped clusters and thus discovered by k-means. Non-ball-shaped clusters are hard to separate when they are close. For example, two half-moon shaped clusters intertwined in space do not separate well when projected onto the PCA subspace. k-means should not be expected to do well on this data.[64] It is straightforward to produce counterexamples to the statement that the cluster centroid subspace is spanned by the principal directions.[65] Basic mean shift clustering algorithms maintain a set of data points the same size as the input data set. Initially, this set is copied from the input set. All points are then iteratively moved towards the mean of the points surrounding them. By contrast, k-means restricts the set of clusters to k clusters, usually far fewer than the number of points in the input data set, using as each centroid the mean of all points that are closer to that centroid than to any other (i.e., the points within the Voronoi partition of each updated centroid). A mean shift algorithm that is similar to k-means, called likelihood mean shift, replaces the set of points undergoing replacement by the mean of all points in the input set that are within a given distance of the changing set.[66] An advantage of mean shift clustering over k-means is the detection of an arbitrary number of clusters in the data set, as there is no parameter determining the number of clusters. Mean shift can be much slower than k-means, and still requires selection of a bandwidth parameter. Under sparsity assumptions and when input data is pre-processed with the whitening transformation, k-means produces the solution to the linear independent component analysis (ICA) task. This aids in explaining the successful application of k-means to feature learning.[67] k-means implicitly assumes that the ordering of the input data set does not matter. The bilateral filter is similar to k-means and mean shift in that it maintains a set of data points that are iteratively replaced by means. However, the bilateral filter restricts the calculation of the (kernel weighted) mean to include only points that are close in the ordering of the input data.[66] This makes it applicable to problems such as image denoising, where the spatial arrangement of pixels in an image is of critical importance. The set of squared error minimizing cluster functions also includes the k-medoids algorithm, an approach which forces the center point of each cluster to be one of the actual points, i.e., it uses medoids in place of centroids. Different implementations of the algorithm exhibit performance differences, with the fastest on a test data set finishing in 10 seconds and the slowest taking 25,988 seconds (~7 hours).[1] The differences can be attributed to implementation quality, language and compiler differences, different termination criteria and precision levels, and the use of indexes for acceleration. The following implementations are available under Free/Open Source Software licenses, with publicly available source code.
The following implementations are available underproprietarylicense terms, and may not have publicly available source code.
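As a closing illustration of the color-quantization application described above, here is a short sketch using scikit-learn's KMeans; the image array, palette size, and parameter values are illustrative assumptions rather than a prescribed recipe.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_colors(image, k=16, seed=0):
    """Reduce an H x W x 3 image to a palette of k colors by clustering its
    pixels in color space (illustrative sketch, not an optimized implementation)."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pixels)
    palette = km.cluster_centers_                       # k representative colors
    quantized = palette[km.labels_].reshape(h, w, c)    # each pixel -> its cluster's color
    return quantized, palette

# Toy usage with a random "image"; a real image array would be used in practice.
img = np.random.default_rng(0).random((64, 64, 3))
compressed, palette = quantize_colors(img, k=8)
```

Because every pixel is replaced by its cluster centroid, the output contains at most k distinct colors, which is what allows the compressed representation described above.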
https://en.wikipedia.org/wiki/K-means_clustering
Gradient descent is a method for unconstrained mathematical optimization. It is a first-order iterative algorithm for minimizing a differentiable multivariate function. The idea is to take repeated steps in the opposite direction of the gradient (or approximate gradient) of the function at the current point, because this is the direction of steepest descent. Conversely, stepping in the direction of the gradient will lead to a trajectory that maximizes that function; the procedure is then known as gradient ascent. It is particularly useful in machine learning for minimizing the cost or loss function.[1] Gradient descent should not be confused with local search algorithms, although both are iterative methods for optimization. Gradient descent is generally attributed to Augustin-Louis Cauchy, who first suggested it in 1847.[2] Jacques Hadamard independently proposed a similar method in 1907.[3][4] Its convergence properties for non-linear optimization problems were first studied by Haskell Curry in 1944,[5] with the method becoming increasingly well-studied and used in the following decades.[6][7] A simple extension of gradient descent, stochastic gradient descent, serves as the most basic algorithm used for training most deep networks today. Gradient descent is based on the observation that if the multi-variable function F(x){\displaystyle F(\mathbf {x} )} is defined and differentiable in a neighborhood of a point a{\displaystyle \mathbf {a} }, then F(x){\displaystyle F(\mathbf {x} )} decreases fastest if one goes from a{\displaystyle \mathbf {a} } in the direction of the negative gradient of F{\displaystyle F} at a,−∇F(a){\displaystyle \mathbf {a} ,-\nabla F(\mathbf {a} )}. It follows that, if an+1=an−γ∇F(an){\displaystyle \mathbf {a} _{n+1}=\mathbf {a} _{n}-\gamma \nabla F(\mathbf {a} _{n})} for a small enough step size or learning rate γ∈R+{\displaystyle \gamma \in \mathbb {R} _{+}}, then F(an)≥F(an+1){\displaystyle F(\mathbf {a_{n}} )\geq F(\mathbf {a_{n+1}} )}. In other words, the term γ∇F(a){\displaystyle \gamma \nabla F(\mathbf {a} )} is subtracted from a{\displaystyle \mathbf {a} } because we want to move against the gradient, toward the local minimum. With this observation in mind, one starts with a guess x0{\displaystyle \mathbf {x} _{0}} for a local minimum of F{\displaystyle F}, and considers the sequence x0,x1,x2,…{\displaystyle \mathbf {x} _{0},\mathbf {x} _{1},\mathbf {x} _{2},\ldots } such that xn+1=xn−γn∇F(xn){\displaystyle \mathbf {x} _{n+1}=\mathbf {x} _{n}-\gamma _{n}\nabla F(\mathbf {x} _{n})}. The values F(x0)≥F(x1)≥F(x2)≥⋯{\displaystyle F(\mathbf {x} _{0})\geq F(\mathbf {x} _{1})\geq F(\mathbf {x} _{2})\geq \cdots } then form a monotonically decreasing sequence, so one may hope that the sequence (xn){\displaystyle (\mathbf {x} _{n})} converges to the desired local minimum. Note that the value of the step size γ{\displaystyle \gamma } is allowed to change at every iteration. It is possible to guarantee the convergence to a local minimum under certain assumptions on the function F{\displaystyle F} (for example, F{\displaystyle F} convex and ∇F{\displaystyle \nabla F} Lipschitz) and particular choices of γ{\displaystyle \gamma }. Those include the sequence γn=|(xn−xn−1)T[∇F(xn)−∇F(xn−1)]|‖∇F(xn)−∇F(xn−1)‖2{\displaystyle \gamma _{n}={\frac {\left|\left(\mathbf {x} _{n}-\mathbf {x} _{n-1}\right)^{T}\left[\nabla F(\mathbf {x} _{n})-\nabla F(\mathbf {x} _{n-1})\right]\right|}{\left\|\nabla F(\mathbf {x} _{n})-\nabla F(\mathbf {x} _{n-1})\right\|^{2}}}} as in the Barzilai-Borwein method,[8][9] or a sequence γn{\displaystyle \gamma _{n}} satisfying the Wolfe conditions (which can be found by using line search). When the function F{\displaystyle F} is convex, all local minima are also global minima, so in this case gradient descent can converge to the global solution. This process is illustrated in the adjacent picture. Here, F{\displaystyle F} is assumed to be defined on the plane, and its graph is assumed to have a bowl shape.
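A minimal sketch of this basic iteration in code, assuming a fixed step size and a simple gradient-norm stopping rule (both illustrative choices rather than part of the method's definition):

```python
import numpy as np

def gradient_descent(grad_f, x0, step=0.1, n_iter=1000, tol=1e-8):
    """Iterate x_{n+1} = x_n - step * grad F(x_n) from a starting guess x0.
    grad_f is a callable returning the gradient of F at a point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:      # stop once the gradient is (nearly) zero
            break
        x = x - step * g
    return x

# Example: minimize F(x, y) = (x - 3)^2 + 2*(y + 1)^2, whose minimum is at (3, -1).
grad = lambda v: np.array([2 * (v[0] - 3), 4 * (v[1] + 1)])
print(gradient_descent(grad, x0=[0.0, 0.0]))     # approximately [3., -1.]
```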
The blue curves are the contour lines, that is, the regions on which the value of F{\displaystyle F} is constant. A red arrow originating at a point shows the direction of the negative gradient at that point. Note that the (negative) gradient at a point is orthogonal to the contour line going through that point. We see that gradient descent leads us to the bottom of the bowl, that is, to the point where the value of the function F{\displaystyle F} is minimal. The basic intuition behind gradient descent can be illustrated by a hypothetical scenario. People are stuck in the mountains and are trying to get down (i.e., trying to find the global minimum). There is heavy fog such that visibility is extremely low. Therefore, the path down the mountain is not visible, so they must use local information to find the minimum. They can use the method of gradient descent, which involves looking at the steepness of the hill at their current position, then proceeding in the direction with the steepest descent (i.e., downhill). If they were trying to find the top of the mountain (i.e., the maximum), then they would proceed in the direction of steepest ascent (i.e., uphill). Using this method, they would eventually find their way down the mountain or possibly get stuck in some hole (i.e., local minimum or saddle point), like a mountain lake. However, assume also that the steepness of the hill is not immediately obvious with simple observation, but rather requires a sophisticated instrument to measure, which the persons happen to have at the moment. It takes quite some time to measure the steepness of the hill with the instrument, so they should minimize their use of the instrument if they want to get down the mountain before sunset. The difficulty then is choosing the frequency at which they should measure the steepness of the hill so as not to go off track. In this analogy, the persons represent the algorithm, and the path taken down the mountain represents the sequence of parameter settings that the algorithm will explore. The steepness of the hill represents the slope of the function at that point. The instrument used to measure steepness is differentiation. The direction they choose to travel in aligns with the gradient of the function at that point. The amount of time they travel before taking another measurement is the step size. Since using a step size γ{\displaystyle \gamma } that is too small would slow convergence, and a γ{\displaystyle \gamma } too large would lead to overshoot and divergence, finding a good setting of γ{\displaystyle \gamma } is an important practical problem. Philip Wolfe also advocated using "clever choices of the [descent] direction" in practice.[10] While using a direction that deviates from the steepest descent direction may seem counter-intuitive, the idea is that the smaller slope may be compensated for by being sustained over a much longer distance. To reason about this mathematically, consider a direction pn{\displaystyle \mathbf {p} _{n}} and step size γn{\displaystyle \gamma _{n}} and consider the more general update an+1=an−γnpn{\displaystyle \mathbf {a} _{n+1}=\mathbf {a} _{n}-\gamma _{n}\mathbf {p} _{n}}. Finding good settings of pn{\displaystyle \mathbf {p} _{n}} and γn{\displaystyle \gamma _{n}} requires some thought. First of all, we would like the update direction to point downhill.
Mathematically, lettingθn{\displaystyle \theta _{n}}denote the angle between−∇F(an){\displaystyle -\nabla F(\mathbf {a_{n}} )}andpn{\displaystyle \mathbf {p} _{n}}, this requires thatcos⁡θn>0.{\displaystyle \cos \theta _{n}>0.}To say more, we need more information about the objective function that we are optimising. Under the fairly weak assumption thatF{\displaystyle F}is continuously differentiable, we may prove that:[11] This inequality implies that the amount by which we can be sure the functionF{\displaystyle F}is decreased depends on a trade off between the two terms in square brackets. The first term in square brackets measures the angle between the descent direction and the negative gradient. The second term measures how quickly the gradient changes along the descent direction. In principle inequality (1) could be optimized overpn{\displaystyle \mathbf {p} _{n}}andγn{\displaystyle \gamma _{n}}to choose an optimal step size and direction. The problem is that evaluating the second term in square brackets requires evaluating∇F(an−tγnpn){\displaystyle \nabla F(\mathbf {a} _{n}-t\gamma _{n}\mathbf {p} _{n})}, and extra gradient evaluations are generally expensive and undesirable. Some ways around this problem are: Usually by following one of the recipes above,convergenceto a local minimum can be guaranteed. When the functionF{\displaystyle F}isconvex, all local minima are also global minima, so in this case gradient descent can converge to the global solution. Gradient descent can be used to solve asystem of linear equations reformulated as a quadratic minimization problem. If the system matrixA{\displaystyle A}is realsymmetricandpositive-definite, an objective function is defined as the quadratic function, with minimization of so that For a general real matrixA{\displaystyle A},linear least squaresdefine In traditional linear least squares for realA{\displaystyle A}andb{\displaystyle \mathbf {b} }theEuclidean normis used, in which case Theline searchminimization, finding the locally optimal step sizeγ{\displaystyle \gamma }on every iteration, can be performed analytically for quadratic functions, and explicit formulas for the locally optimalγ{\displaystyle \gamma }are known.[6][13] For example, for realsymmetricandpositive-definitematrixA{\displaystyle A}, a simple algorithm can be as follows,[6] To avoid multiplying byA{\displaystyle A}twice per iteration, we note thatx:=x+γr{\displaystyle \mathbf {x} :=\mathbf {x} +\gamma \mathbf {r} }impliesr:=r−γAr{\displaystyle \mathbf {r} :=\mathbf {r} -\gamma \mathbf {Ar} }, which gives the traditional algorithm,[14] The method is rarely used for solving linear equations, with theconjugate gradient methodbeing one of the most popular alternatives. The number of gradient descent iterations is commonly proportional to the spectralcondition numberκ(A){\displaystyle \kappa (A)}of the system matrixA{\displaystyle A}(the ratio of the maximum to minimumeigenvaluesofATA{\displaystyle A^{T}A}), while the convergence ofconjugate gradient methodis typically determined by a square root of the condition number, i.e., is much faster. 
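The explicit step size mentioned above can be made concrete in a short sketch. The closed-form step γ = rᵀr / (rᵀAr), with residual r = b − Ax, is the standard exact line-search choice for the quadratic objective associated with a symmetric positive-definite A (stated here as the usual formula, since the article's own display is not reproduced above); as noted in the text, the update x := x + γr lets the residual be updated as r := r − γAr, so only one multiplication by A is needed per iteration.

```python
import numpy as np

def steepest_descent_spd(A, b, x0=None, n_iter=1000, tol=1e-10):
    """Sketch of steepest descent for A x = b with A symmetric positive-definite,
    using the residual as the descent direction and an exact line-search step."""
    x = np.zeros_like(b, dtype=float) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x                       # residual = negative gradient of the quadratic
    for _ in range(n_iter):
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r                      # the single matrix-vector product per iteration
        gamma = (r @ r) / (r @ Ar)      # locally optimal step for the quadratic
        x = x + gamma * r
        r = r - gamma * Ar              # equivalent to recomputing b - A @ x
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(steepest_descent_spd(A, b))       # close to np.linalg.solve(A, b)
```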
Both methods can benefit from preconditioning, where gradient descent may require fewer assumptions on the preconditioner.[14] In steepest descent applied to solving Ax→=b→{\displaystyle A{\vec {x}}={\vec {b}}}, where A{\displaystyle A} is symmetric positive-definite, the residual vectors r→k=b→−Ax→k{\displaystyle {\vec {r}}_{k}={\vec {b}}-A{\vec {x}}_{k}} are orthogonal across iterations: Because each step is taken in the steepest direction, steepest-descent steps alternate between directions aligned with the extreme axes of the elongated level sets. When κ(A){\displaystyle \kappa (A)} is large, this produces a characteristic zig-zag path. The poor conditioning of A{\displaystyle A} is the primary cause of the slow convergence, and the orthogonality of successive residuals reinforces this alternation. As shown in the image on the right, steepest descent converges slowly due to the high condition number of A{\displaystyle A}, and the orthogonality of residuals forces each new direction to undo the overshoot from the previous step. The result is a path that zigzags toward the solution. This inefficiency is one reason conjugate gradient or preconditioning methods are preferred.[15] Gradient descent can also be used to solve a system of nonlinear equations. Below is an example that shows how to use gradient descent to solve for three unknown variables, x1, x2, and x3. This example shows one iteration of gradient descent. Consider the nonlinear system of equations Let us introduce the associated function where One might now define the objective function which we will attempt to minimize. As an initial guess, let us use We know that where the Jacobian matrix JG{\displaystyle J_{G}} is given by We calculate: Thus and Now, a suitable γ0{\displaystyle \gamma _{0}} must be found such that This can be done with any of a variety of line search algorithms. One might also simply guess γ0=0.001,{\displaystyle \gamma _{0}=0.001,} which gives Evaluating the objective function at this value yields The decrease from F(0)=58.456{\displaystyle F(\mathbf {0} )=58.456} to the value at the next step is a sizable decrease in the objective function. Further steps would reduce its value further until an approximate solution to the system was found. Gradient descent works in spaces of any number of dimensions, even in infinite-dimensional ones. In the latter case, the search space is typically a function space, and one calculates the Fréchet derivative of the functional to be minimized to determine the descent direction.[7] That gradient descent works in any number of dimensions (at least any finite number) can be seen as a consequence of the Cauchy-Schwarz inequality, i.e. the magnitude of the inner (dot) product of two vectors of any dimension is maximized when they are collinear. In the case of gradient descent, that would be when the vector of independent variable adjustments is proportional to the gradient vector of partial derivatives. Gradient descent can take many iterations to compute a local minimum with a required accuracy if the curvature in different directions is very different for the given function. For such functions, preconditioning, which changes the geometry of the space to shape the function level sets like concentric circles, cures the slow convergence. Constructing and applying preconditioning can be computationally expensive, however. Gradient descent can be modified via momentum methods[16] (Nesterov, Polyak,[17] and Frank-Wolfe[18]) and heavy-ball parameters (exponential moving averages[19] and positive-negative momentum[20]).
The main examples of such optimizers are Adam, DiffGrad, Yogi, AdaBelief, etc. Methods based onNewton's methodand inversion of theHessianusingconjugate gradienttechniques can be better alternatives.[21][22]Generally, such methods converge in fewer iterations, but the cost of each iteration is higher. An example is theBFGS methodwhich consists in calculating on every step a matrix by which the gradient vector is multiplied to go into a "better" direction, combined with a more sophisticatedline searchalgorithm, to find the "best" value ofγ.{\displaystyle \gamma .}For extremely large problems, where the computer-memory issues dominate, a limited-memory method such asL-BFGSshould be used instead of BFGS or the steepest descent. While it is sometimes possible to substitute gradient descent for alocal searchalgorithm, gradient descent is not in the same family: although it is aniterative methodforlocal optimization, it relies on anobjective function’s gradientrather than an explicit exploration of asolution space. Gradient descent can be viewed as applyingEuler's methodfor solvingordinary differential equationsx′(t)=−∇f(x(t)){\displaystyle x'(t)=-\nabla f(x(t))}to agradient flow. In turn, this equation may be derived as an optimal controller[23]for the control systemx′(t)=u(t){\displaystyle x'(t)=u(t)}withu(t){\displaystyle u(t)}given in feedback formu(t)=−∇f(x(t)){\displaystyle u(t)=-\nabla f(x(t))}. Gradient descent can converge to a local minimum and slow down in a neighborhood of asaddle point. Even for unconstrained quadratic minimization, gradient descent develops a zig-zag pattern of subsequent iterates as iterations progress, resulting in slow convergence. Multiple modifications of gradient descent have been proposed to address these deficiencies. Yurii Nesterovhas proposed[24]a simple modification that enables faster convergence for convex problems and has been since further generalized. For unconstrained smooth problems, the method is called thefast gradient method(FGM) or theaccelerated gradient method(AGM). Specifically, if the differentiable functionF{\displaystyle F}is convex and∇F{\displaystyle \nabla F}isLipschitz, and it is not assumed thatF{\displaystyle F}isstrongly convex, then the error in the objective value generated at each stepk{\displaystyle k}by the gradient descent method will bebounded byO(k−1){\textstyle {\mathcal {O}}\left({k^{-1}}\right)}. Using the Nesterov acceleration technique, the error decreases atO(k−2){\textstyle {\mathcal {O}}\left({k^{-2}}\right)}.[25][26]It is known that the rateO(k−2){\displaystyle {\mathcal {O}}\left({k^{-2}}\right)}for the decrease of thecost functionis optimal for first-order optimization methods. Nevertheless, there is the opportunity to improve the algorithm by reducing the constant factor. Theoptimized gradient method(OGM)[27]reduces that constant by a factor of two and is an optimal first-order method for large-scale problems.[28] For constrained or non-smooth problems, Nesterov's FGM is called thefast proximal gradient method(FPGM), an acceleration of theproximal gradient method. 
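A hedged sketch of one common formulation of Nesterov's accelerated gradient method for a convex objective with L-Lipschitz gradient is shown below; the momentum coefficient (k − 1)/(k + 2) is one standard choice among several equivalent variants, and the example quadratic is purely illustrative.

```python
import numpy as np

def nesterov_agm(grad_f, x0, lipschitz_L, n_iter=500):
    """One common form of Nesterov's accelerated gradient method:
    take the gradient step at a 'look-ahead' point that extrapolates
    the two most recent iterates."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    step = 1.0 / lipschitz_L
    for k in range(1, n_iter + 1):
        y = x + (k - 1) / (k + 2) * (x - x_prev)   # momentum / look-ahead point
        x_prev = x
        x = y - step * grad_f(y)                   # gradient step at the look-ahead point
    return x

# Example: minimize F(x) = 0.5 * x^T A x - b^T x for a positive-definite A.
A = np.array([[10.0, 2.0], [2.0, 1.0]])
b = np.array([1.0, 1.0])
grad = lambda v: A @ v - b
L = np.linalg.eigvalsh(A).max()                    # Lipschitz constant of the gradient
print(nesterov_agm(grad, np.zeros(2), L))          # approaches np.linalg.solve(A, b)
```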
Trying to break the zig-zag pattern of gradient descent, the momentum or heavy ball method uses a momentum term in analogy to a heavy ball sliding on the surface of values of the function being minimized,[6] or to mass movement in Newtonian dynamics through a viscous medium in a conservative force field.[29] Gradient descent with momentum remembers the solution update at each iteration, and determines the next update as a linear combination of the gradient and the previous update. For unconstrained quadratic minimization, a theoretical convergence rate bound of the heavy ball method is asymptotically the same as that for the optimal conjugate gradient method.[6] This technique is used in stochastic gradient descent and as an extension to the backpropagation algorithms used to train artificial neural networks.[30][31] Stochastic gradient descent adds a stochastic property to the update direction by estimating the gradient from a randomly chosen subset of the training data; in neural-network training, the derivatives of the loss with respect to the weights are computed by backpropagation. Gradient descent can be extended to handle constraints by including a projection onto the set of constraints. This method is only feasible when the projection is efficiently computable on a computer. Under suitable assumptions, this method converges. This method is a specific case of the forward-backward algorithm for monotone inclusions (which includes convex programming and variational inequalities).[32] Gradient descent is a special case of mirror descent using the squared Euclidean distance as the given Bregman divergence.[33] The properties of gradient descent depend on the properties of the objective function and the variant of gradient descent used (for example, if a line search step is used). The assumptions made affect the convergence rate and the other properties that can be proven for gradient descent.[34] For example, if the objective is assumed to be strongly convex and Lipschitz smooth, then gradient descent converges linearly with a fixed step size.[1] Looser assumptions lead to either weaker convergence guarantees or require a more sophisticated step size selection.[34]
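As a closing illustration of the momentum (heavy-ball) method described at the start of this section, here is a minimal sketch in which each update is a linear combination of the current gradient step and the previous update; the step size and momentum coefficient are illustrative choices.

```python
import numpy as np

def heavy_ball(grad_f, x0, step=0.05, momentum=0.9, n_iter=500):
    """Gradient descent with momentum (heavy-ball):
    x_{n+1} = x_n - step * grad F(x_n) + momentum * (x_n - x_{n-1})."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(n_iter):
        x_next = x - step * grad_f(x) + momentum * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Example: an ill-conditioned quadratic on which plain gradient descent zig-zags.
A = np.diag([1.0, 30.0])
grad = lambda v: A @ v
print(heavy_ball(grad, np.array([5.0, 5.0])))      # approaches the minimizer [0, 0]
```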
https://en.wikipedia.org/wiki/Gradient_descent
In statistics and machine learning, the bias–variance tradeoff describes the relationship between a model's complexity, the accuracy of its predictions, and how well it can make predictions on previously unseen data that were not used to train the model. In general, as we increase the number of tunable parameters in a model, it becomes more flexible, and can better fit a training data set. It is said to have lower error, or bias. However, for more flexible models, there will tend to be greater variance to the model fit each time we take a set of samples to create a new training data set. It is said that there is greater variance in the model's estimated parameters. The bias–variance dilemma or bias–variance problem is the conflict in trying to simultaneously minimize these two sources of error that prevent supervised learning algorithms from generalizing beyond their training set:[1][2] The bias–variance decomposition is a way of analyzing a learning algorithm's expected generalization error with respect to a particular problem as a sum of three terms, the bias, variance, and a quantity called the irreducible error, resulting from noise in the problem itself. The bias–variance tradeoff is a central problem in supervised learning. Ideally, one wants to choose a model that both accurately captures the regularities in its training data and generalizes well to unseen data. Unfortunately, it is typically impossible to do both simultaneously. High-variance learning methods may be able to represent their training set well but are at risk of overfitting to noisy or unrepresentative training data. In contrast, algorithms with high bias typically produce simpler models that may fail to capture important regularities (i.e. underfit) in the data. It is a common fallacy[3][4] to assume that complex models must have high variance. High variance models are "complex" in some sense, but the reverse need not be true.[5] In addition, one has to be careful about how to define complexity. In particular, the number of parameters used to describe the model is a poor measure of complexity. This is illustrated by an example adapted from [6]: The model fa,b(x)=asin⁡(bx){\displaystyle f_{a,b}(x)=a\sin(bx)} has only two parameters (a,b{\displaystyle a,b}) but it can interpolate any number of points by oscillating with a high enough frequency, resulting in both a high bias and high variance. An analogy can be made to the relationship between accuracy and precision. Accuracy is one way of quantifying bias and can intuitively be improved by selecting from only local information. Consequently, a sample will appear accurate (i.e. have low bias) under the aforementioned selection conditions, but may result in underfitting. In other words, test data may not agree as closely with training data, which would indicate imprecision and therefore inflated variance. A graphical example would be a straight line fit to data exhibiting quadratic behavior overall. Precision is a description of variance and generally can only be improved by selecting information from a comparatively larger space. The option to select many data points over a broad sample space is the ideal condition for any analysis. However, intrinsic constraints (whether physical, theoretical, computational, etc.) will always play a limiting role. The limiting case where only a finite number of data points are selected over a broad sample space may result in improved precision and lower variance overall, but may also result in an overreliance on the training data (overfitting).
This means that test data would also not agree as closely with the training data, but in this case the reason is inaccuracy or high bias. To borrow from the previous example, the graphical representation would appear as a high-order polynomial fit to the same data exhibiting quadratic behavior. Note that error in each case is measured the same way, but the reason ascribed to the error is different depending on the balance between bias and variance. To mitigate how much information is used from neighboring observations, a model can besmoothedvia explicitregularization, such asshrinkage. Suppose that we have a training set consisting of a set of pointsx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}and real-valued labelsyi{\displaystyle y_{i}}associated with the pointsxi{\displaystyle x_{i}}. We assume that the data is generated by a functionf(x){\displaystyle f(x)}such asy=f(x)+ε{\displaystyle y=f(x)+\varepsilon }, where the noise,ε{\displaystyle \varepsilon }, has zero mean and varianceσ2{\displaystyle \sigma ^{2}}. That is,yi=f(xi)+εi{\displaystyle y_{i}=f(x_{i})+\varepsilon _{i}}, whereεi{\displaystyle \varepsilon _{i}}is a noise sample. We want to find a functionf^(x;D){\displaystyle {\hat {f}}(x;D)}, that approximates the true functionf(x){\displaystyle f(x)}as well as possible, by means of some learning algorithm based on a training dataset (sample)D={(x1,y1)…,(xn,yn)}{\displaystyle D=\{(x_{1},y_{1})\dots ,(x_{n},y_{n})\}}. We make "as well as possible" precise by measuring themean squared errorbetweeny{\displaystyle y}andf^(x;D){\displaystyle {\hat {f}}(x;D)}: we want(y−f^(x;D))2{\displaystyle (y-{\hat {f}}(x;D))^{2}}to be minimal, both forx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}and for points outside of our sample. Of course, we cannot hope to do so perfectly, since theyi{\displaystyle y_{i}}contain noiseε{\displaystyle \varepsilon }; this means we must be prepared to accept anirreducible errorin any function we come up with. Finding anf^{\displaystyle {\hat {f}}}that generalizes to points outside of the training set can be done with any of the countless algorithms used for supervised learning. It turns out that whichever functionf^{\displaystyle {\hat {f}}}we select, we can decompose itsexpectederror on an unseen samplex{\displaystyle x}(i.e. conditional to x) as follows:[7]: 34[8]: 223 where and and The expectation ranges over different choices of the training setD={(x1,y1)…,(xn,yn)}{\displaystyle D=\{(x_{1},y_{1})\dots ,(x_{n},y_{n})\}}, all sampled from the same joint distributionP(x,y){\displaystyle P(x,y)}which can for example be done viabootstrapping. The three terms represent: Since all three terms are non-negative, the irreducible error forms a lower bound on the expected error on unseen samples.[7]: 34 The more complex the modelf^(x){\displaystyle {\hat {f}}(x)}is, the more data points it will capture, and the lower the bias will be. However, complexity will make the model "move" more to capture the data points, and hence its variance will be larger. The derivation of the bias–variance decomposition for squared error proceeds as follows.[9][10]For convenience, we drop theD{\displaystyle D}subscript in the following lines, such thatf^(x;D)=f^(x){\displaystyle {\hat {f}}(x;D)={\hat {f}}(x)}. 
Let us write the mean-squared error of our model: We can show that the second term of this equation is null: E[(f(x)−f^(x))ε]=E[f(x)−f^(x)]E[ε]sinceεis independent fromx=0sinceE[ε]=0{\displaystyle {\begin{aligned}\mathbb {E} {\Big [}{\big (}f(x)-{\hat {f}}(x){\big )}\varepsilon {\Big ]}&=\mathbb {E} {\big [}f(x)-{\hat {f}}(x){\big ]}\ \mathbb {E} {\big [}\varepsilon {\big ]}&&{\text{since }}\varepsilon {\text{ is independent from }}x\\&=0&&{\text{since }}\mathbb {E} {\big [}\varepsilon {\big ]}=0\end{aligned}}} Moreover, the third term of this equation is nothing butσ2{\displaystyle \sigma ^{2}}, the variance ofε{\displaystyle \varepsilon }. Let us now expand the remaining term: E[(f(x)−f^(x))2]=E[(f(x)−E[f^(x)]+E[f^(x)]−f^(x))2]=E[(f(x)−E[f^(x)])2]+2E[(f(x)−E[f^(x)])(E[f^(x)]−f^(x))]+E[(E[f^(x)]−f^(x))2]{\displaystyle {\begin{aligned}\mathbb {E} {\Big [}{\big (}f(x)-{\hat {f}}(x){\big )}^{2}{\Big ]}&=\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}+\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}^{2}{\Big ]}\\&={\color {Blue}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}^{2}{\Big ]}}\,+\,2\ {\color {PineGreen}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}{\Big ]}}\,+\,\mathbb {E} {\Big [}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}^{2}{\Big ]}\end{aligned}}} We show that: E[(f(x)−E[f^(x)])2]=E[f(x)2]−2E[f(x)E[f^(x)]]+E[E[f^(x)]2]=f(x)2−2f(x)E[f^(x)]+E[f^(x)]2=(f(x)−E[f^(x)])2{\displaystyle {\begin{aligned}{\color {Blue}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}^{2}{\Big ]}}&=\mathbb {E} {\big [}f(x)^{2}{\big ]}\,-\,2\ \mathbb {E} {\Big [}f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big ]}\,+\,\mathbb {E} {\Big [}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}{\Big ]}\\&=f(x)^{2}\,-\,2\ f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,+\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}\\&={\Big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big )}^{2}\end{aligned}}} This last series of equalities comes from the fact thatf(x){\displaystyle f(x)}is not a random variable, but a fixed, deterministic function ofx{\displaystyle x}. Therefore,E[f(x)]=f(x){\displaystyle \mathbb {E} {\big [}f(x){\big ]}=f(x)}. SimilarlyE[f(x)2]=f(x)2{\displaystyle \mathbb {E} {\big [}f(x)^{2}{\big ]}=f(x)^{2}}, andE[f(x)E[f^(x)]]=f(x)E[E[f^(x)]]=f(x)E[f^(x)]{\displaystyle \mathbb {E} {\Big [}f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big ]}=f(x)\ \mathbb {E} {\Big [}\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big ]}=f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}}. 
Using the same reasoning, we can expand the second term and show that it is null: E[(f(x)−E[f^(x)])(E[f^(x)]−f^(x))]=E[f(x)E[f^(x)]−f(x)f^(x)−E[f^(x)]2+E[f^(x)]f^(x)]=f(x)E[f^(x)]−f(x)E[f^(x)]−E[f^(x)]2+E[f^(x)]2=0{\displaystyle {\begin{aligned}{\color {PineGreen}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}{\Big ]}}&=\mathbb {E} {\Big [}f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,-\,f(x){\hat {f}}(x)\,-\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}+\mathbb {E} {\big [}{\hat {f}}(x){\big ]}\ {\hat {f}}(x){\Big ]}\\&=f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,-\,f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,-\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}\,+\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}\\&=0\end{aligned}}} Eventually, we plug our derivations back into the original equation, and identify each term: MSE=(f(x)−E[f^(x)])2+E[(E[f^(x)]−f^(x))2]+σ2=Bias⁡(f^(x))2+Var⁡[f^(x)]+σ2{\displaystyle {\begin{aligned}{\text{MSE}}&={\Big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big )}^{2}+\mathbb {E} {\Big [}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}^{2}{\Big ]}+\sigma ^{2}\\&=\operatorname {Bias} {\big (}{\hat {f}}(x){\big )}^{2}\,+\,\operatorname {Var} {\big [}{\hat {f}}(x){\big ]}\,+\,\sigma ^{2}\end{aligned}}} Finally, the MSE loss function (or negative log-likelihood) is obtained by taking the expectation value overx∼P{\displaystyle x\sim P}: Dimensionality reductionandfeature selectioncan decrease variance by simplifying models. Similarly, a larger training set tends to decrease variance. Adding features (predictors) tends to decrease bias, at the expense of introducing additional variance. Learning algorithms typically have some tunable parameters that control bias and variance; for example, One way of resolving the trade-off is to usemixture modelsandensemble learning.[14][15]For example,boostingcombines many "weak" (high bias) models in an ensemble that has lower bias than the individual models, whilebaggingcombines "strong" learners in a way that reduces their variance. Model validationmethods such ascross-validation (statistics)can be used to tune models so as to optimize the trade-off. In the case ofk-nearest neighbors regression, when the expectation is taken over the possible labeling of a fixed training set, aclosed-form expressionexists that relates the bias–variance decomposition to the parameterk:[8]: 37, 223 whereN1(x),…,Nk(x){\displaystyle N_{1}(x),\dots ,N_{k}(x)}are theknearest neighbors ofxin the training set. The bias (first term) is a monotone rising function ofk, while the variance (second term) drops off askis increased. In fact, under "reasonable assumptions" the bias of the first-nearest neighbor (1-NN) estimator vanishes entirely as the size of the training set approaches infinity.[12] The bias–variance decomposition forms the conceptual basis for regressionregularizationmethods such asLASSOandridge regression. Regularization methods introduce bias into the regression solution that can reduce variance considerably relative to theordinary least squares (OLS)solution. Although the OLS solution provides non-biased regression estimates, the lower variance solutions produced by regularization techniques provide superior MSE performance. The bias–variance decomposition was originally formulated for least-squares regression. 
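The decomposition just derived can also be checked numerically. The following sketch repeatedly draws training sets for a toy problem (the true function, noise level, test point, and polynomial learner are hypothetical choices, not taken from the text), estimates the squared bias and the variance of the fitted value at a fixed point, and compares their sum plus σ² with the empirical expected squared error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: f(x) = sin(2*pi*x), noise variance sigma^2, and a
# degree-3 polynomial least-squares fit as the learner f_hat(.; D).
f = lambda x: np.sin(2 * np.pi * x)
sigma = 0.3
x0 = 0.25                                  # fixed test point at which the error is decomposed
n_train, n_datasets, degree = 30, 2000, 3

preds, sq_errors = [], []
for _ in range(n_datasets):                # expectation over training sets D
    x = rng.uniform(0, 1, n_train)
    y = f(x) + rng.normal(0, sigma, n_train)       # y = f(x) + eps
    coefs = np.polyfit(x, y, degree)               # fit f_hat on this D
    pred = np.polyval(coefs, x0)
    preds.append(pred)
    y0 = f(x0) + rng.normal(0, sigma)              # an unseen noisy label at x0
    sq_errors.append((y0 - pred) ** 2)

preds = np.array(preds)
bias_sq = (preds.mean() - f(x0)) ** 2
variance = preds.var()
print("bias^2 + variance + sigma^2 :", bias_sq + variance + sigma ** 2)
print("empirical expected sq. error:", np.mean(sq_errors))    # approximately equal
```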
For the case ofclassificationunder the0-1 loss(misclassification rate), it is possible to find a similar decomposition, with the caveat that the variance term becomes dependent on the target label.[16][17]Alternatively, if the classification problem can be phrased asprobabilistic classification, then the expected cross-entropy can instead be decomposed to give bias and variance terms with the same semantics but taking a different form. It has been argued that as training data increases, the variance of learned models will tend to decrease, and hence that as training data quantity increases, error is minimised by methods that learn models with lesser bias, and that conversely, for smaller training data quantities it is ever more important to minimise variance.[18] Even though the bias–variance decomposition does not directly apply inreinforcement learning, a similar tradeoff can also characterize generalization. When an agent has limited information on its environment, the suboptimality of an RL algorithm can be decomposed into the sum of two terms: a term related to an asymptotic bias and a term due to overfitting. The asymptotic bias is directly related to the learning algorithm (independently of the quantity of data) while the overfitting term comes from the fact that the amount of data is limited.[19] While in traditional Monte Carlo methods the bias is typically zero, modern approaches, such asMarkov chain Monte Carloare only asymptotically unbiased, at best.[20]Convergence diagnostics can be used to control bias viaburn-inremoval, but due to a limited computational budget, a bias–variance trade-off arises,[21]leading to a wide-range of approaches, in which a controlled bias is accepted, if this allows to dramatically reduce the variance, and hence the overall estimation error.[22][23][24] While widely discussed in the context of machine learning, the bias–variance dilemma has been examined in the context ofhuman cognition, most notably byGerd Gigerenzerand co-workers in the context of learned heuristics. They have argued (see references below) that the human brain resolves the dilemma in the case of the typically sparse, poorly-characterized training-sets provided by experience by adopting high-bias/low variance heuristics. This reflects the fact that a zero-bias approach has poor generalizability to new situations, and also unreasonably presumes precise knowledge of the true state of the world. The resulting heuristics are relatively simple, but produce better inferences in a wider variety of situations.[25] Gemanet al.[12]argue that the bias–variance dilemma implies that abilities such as genericobject recognitioncannot be learned from scratch, but require a certain degree of "hard wiring" that is later tuned by experience. This is because model-free approaches to inference require impractically large training sets if they are to avoid high variance.
https://en.wikipedia.org/wiki/Bias–variance_tradeoff
Instatistics, thek-nearest neighbors algorithm(k-NN) is anon-parametricsupervised learningmethod. It was first developed byEvelyn FixandJoseph Hodgesin 1951,[1]and later expanded byThomas Cover.[2]Most often, it is used forclassification, as ak-NN classifier, the output of which is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among itsknearest neighbors (kis a positiveinteger, typically small). Ifk= 1, then the object is simply assigned to the class of that single nearest neighbor. Thek-NN algorithm can also be generalized forregression. Ink-NN regression, also known asnearest neighbor smoothing, the output is the property value for the object. This value is the average of the values ofknearest neighbors. Ifk= 1, then the output is simply assigned to the value of that single nearest neighbor, also known asnearest neighbor interpolation. For both classification and regression, a useful technique can be to assign weights to the contributions of the neighbors, so that nearer neighbors contribute more to the average than distant ones. For example, a common weighting scheme consists of giving each neighbor a weight of 1/d, wheredis the distance to the neighbor.[3] The input consists of thekclosest training examples in adata set. The neighbors are taken from a set of objects for which the class (fork-NN classification) or the object property value (fork-NN regression) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required. A peculiarity (sometimes even a disadvantage) of thek-NN algorithm is its sensitivity to the local structure of the data. Ink-NN classification the function is only approximated locally and all computation is deferred until function evaluation. Since this algorithm relies on distance, if the features represent different physical units or come in vastly different scales, then feature-wisenormalizingof the training data can greatly improve its accuracy.[4] Suppose we have pairs(X1,Y1),(X2,Y2),…,(Xn,Yn){\displaystyle (X_{1},Y_{1}),(X_{2},Y_{2}),\dots ,(X_{n},Y_{n})}taking values inRd×{1,2}{\displaystyle \mathbb {R} ^{d}\times \{1,2\}}, whereYis the class label ofX, so thatX|Y=r∼Pr{\displaystyle X|Y=r\sim P_{r}}forr=1,2{\displaystyle r=1,2}(and probability distributionsPr{\displaystyle P_{r}}). Given some norm‖⋅‖{\displaystyle \|\cdot \|}onRd{\displaystyle \mathbb {R} ^{d}}and a pointx∈Rd{\displaystyle x\in \mathbb {R} ^{d}}, let(X(1),Y(1)),…,(X(n),Y(n)){\displaystyle (X_{(1)},Y_{(1)}),\dots ,(X_{(n)},Y_{(n)})}be a reordering of the training data such that‖X(1)−x‖≤⋯≤‖X(n)−x‖{\displaystyle \|X_{(1)}-x\|\leq \dots \leq \|X_{(n)}-x\|}. The training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing thefeature vectorsand class labels of the training samples. In the classification phase,kis a user-defined constant, and an unlabeled vector (a query or test point) is classified by assigning the label which is most frequent among thektraining samples nearest to that query point. A commonly used distance metric forcontinuous variablesisEuclidean distance. For discrete variables, such as for text classification, another metric can be used, such as theoverlap metric(orHamming distance). 
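A minimal sketch of the classification rule described above: find the k training examples closest to the query under Euclidean distance and return the plurality vote of their labels. The function name and the tiny data set are illustrative.

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x_query, k=5):
    """Classify x_query by a plurality vote among its k nearest training examples."""
    dists = np.linalg.norm(X_train - x_query, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]                     # indices of the k nearest neighbors
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Tiny illustrative data set with two classes.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9], [0.9, 1.1]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_classify(X, y, np.array([0.15, 0.1]), k=3))   # -> 0
```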
In the context of gene expression microarray data, for example,k-NN has been employed with correlation coefficients, such as Pearson and Spearman, as a metric.[5]Often, the classification accuracy ofk-NN can be improved significantly if the distance metric is learned with specialized algorithms such asLarge Margin Nearest NeighbororNeighbourhood components analysis. A drawback of the basic "majority voting" classification occurs when the class distribution is skewed. That is, examples of a more frequent class tend to dominate the prediction of the new example, because they tend to be common among theknearest neighbors due to their large number.[7]One way to overcome this problem is to weight the classification, taking into account the distance from the test point to each of itsknearest neighbors. The class (or value, in regression problems) of each of theknearest points is multiplied by a weight proportional to the inverse of the distance from that point to the test point. Another way to overcome skew is by abstraction in data representation. For example, in aself-organizing map(SOM), each node is a representative (a center) of a cluster of similar points, regardless of their density in the original training data.K-NN can then be applied to the SOM. The best choice ofkdepends upon the data; generally, larger values ofkreduces effect of the noise on the classification,[8]but make boundaries between classes less distinct. A goodkcan be selected by variousheuristictechniques (seehyperparameter optimization). The special case where the class is predicted to be the class of the closest training sample (i.e. whenk= 1) is called the nearest neighbor algorithm. The accuracy of thek-NN algorithm can be severely degraded by the presence of noisy or irrelevant features, or if the feature scales are not consistent with their importance. Much research effort has been put intoselectingorscalingfeatures to improve classification. A particularly popular[citation needed]approach is the use ofevolutionary algorithmsto optimize feature scaling.[9]Another popular approach is to scale features by themutual informationof the training data with the training classes.[citation needed] In binary (two class) classification problems, it is helpful to choosekto be an odd number as this avoids tied votes. One popular way of choosing the empirically optimalkin this setting is via bootstrap method.[10] The most intuitive nearest neighbour type classifier is the one nearest neighbour classifier that assigns a pointxto the class of its closest neighbour in the feature space, that isCn1nn(x)=Y(1){\displaystyle C_{n}^{1nn}(x)=Y_{(1)}}. As the size of training data set approaches infinity, the one nearest neighbour classifier guarantees an error rate of no worse than twice theBayes error rate(the minimum achievable error rate given the distribution of the data). Thek-nearest neighbour classifier can be viewed as assigning theknearest neighbours a weight1/k{\displaystyle 1/k}and all others0weight. This can be generalised to weighted nearest neighbour classifiers. That is, where theith nearest neighbour is assigned a weightwni{\displaystyle w_{ni}}, with∑i=1nwni=1{\textstyle \sum _{i=1}^{n}w_{ni}=1}. An analogous result on the strong consistency of weighted nearest neighbour classifiers also holds.[11] LetCnwnn{\displaystyle C_{n}^{wnn}}denote the weighted nearest classifier with weights{wni}i=1n{\displaystyle \{w_{ni}\}_{i=1}^{n}}. 
Subject to regularity conditions on the class distributions, the excess risk has the following asymptotic expansion[12] RR(Cnwnn)−RR(CBayes)=(B1sn2+B2tn2){1+o(1)},{\displaystyle {\mathcal {R}}_{\mathcal {R}}(C_{n}^{wnn})-{\mathcal {R}}_{\mathcal {R}}(C^{\text{Bayes}})=\left(B_{1}s_{n}^{2}+B_{2}t_{n}^{2}\right)\{1+o(1)\},} for constants B1{\displaystyle B_{1}} and B2{\displaystyle B_{2}}, where sn2=∑i=1nwni2{\displaystyle s_{n}^{2}=\sum _{i=1}^{n}w_{ni}^{2}} and tn=n−2/d∑i=1nwni{i1+2/d−(i−1)1+2/d}{\displaystyle t_{n}=n^{-2/d}\sum _{i=1}^{n}w_{ni}\left\{i^{1+2/d}-(i-1)^{1+2/d}\right\}}. The optimal weighting scheme {wni∗}i=1n{\displaystyle \{w_{ni}^{*}\}_{i=1}^{n}}, which balances the two terms in the display above, is given as follows: set k∗=⌊Bn4d+4⌋{\displaystyle k^{*}=\lfloor Bn^{\frac {4}{d+4}}\rfloor }, wni∗=1k∗[1+d2−d2k∗2/d{i1+2/d−(i−1)1+2/d}]{\displaystyle w_{ni}^{*}={\frac {1}{k^{*}}}\left[1+{\frac {d}{2}}-{\frac {d}{2{k^{*}}^{2/d}}}\{i^{1+2/d}-(i-1)^{1+2/d}\}\right]} for i=1,2,…,k∗{\displaystyle i=1,2,\dots ,k^{*}} and wni∗=0{\displaystyle w_{ni}^{*}=0} for i=k∗+1,…,n{\displaystyle i=k^{*}+1,\dots ,n}. With optimal weights the dominant term in the asymptotic expansion of the excess risk is O(n−4d+4){\displaystyle {\mathcal {O}}(n^{-{\frac {4}{d+4}}})}. Similar results are true when using a bagged nearest neighbour classifier. k-NN is a special case of a variable-bandwidth, kernel density "balloon" estimator with a uniform kernel.[13][14] The naive version of the algorithm is easy to implement by computing the distances from the test example to all stored examples, but it is computationally intensive for large training sets. Using an approximate nearest neighbor search algorithm makes k-NN computationally tractable even for large data sets. Many nearest neighbor search algorithms have been proposed over the years; these generally seek to reduce the number of distance evaluations actually performed. k-NN has some strong consistency results. As the amount of data approaches infinity, the two-class k-NN algorithm is guaranteed to yield an error rate no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data).[2] Various improvements to the k-NN speed are possible by using proximity graphs.[15] For multi-class k-NN classification, Cover and Hart (1967) prove an upper bound error rate of R∗≤RkNN≤R∗(2−MR∗M−1){\displaystyle R^{*}\ \leq \ R_{k\mathrm {NN} }\ \leq \ R^{*}\left(2-{\frac {MR^{*}}{M-1}}\right)} where R∗{\displaystyle R^{*}} is the Bayes error rate (which is the minimal error rate possible), RkNN{\displaystyle R_{kNN}} is the asymptotic k-NN error rate, and M is the number of classes in the problem. This bound is tight in the sense that both the lower and upper bounds are achievable by some distribution.[16] For M=2{\displaystyle M=2} and as the Bayesian error rate R∗{\displaystyle R^{*}} approaches zero, this limit reduces to "not more than twice the Bayesian error rate". There are many results on the error rate of the k nearest neighbour classifiers.[17] The k-nearest neighbour classifier is strongly (that is for any joint distribution on (X,Y){\displaystyle (X,Y)}) consistent provided k:=kn{\displaystyle k:=k_{n}} diverges and kn/n{\displaystyle k_{n}/n} converges to zero as n→∞{\displaystyle n\to \infty }. Let Cnknn{\displaystyle C_{n}^{knn}} denote the k nearest neighbour classifier based on a training set of size n.
Under certain regularity conditions, theexcess riskyields the following asymptotic expansion[12]RR(Cnknn)−RR(CBayes)={B11k+B2(kn)4/d}{1+o(1)},{\displaystyle {\mathcal {R}}_{\mathcal {R}}(C_{n}^{knn})-{\mathcal {R}}_{\mathcal {R}}(C^{\text{Bayes}})=\left\{B_{1}{\frac {1}{k}}+B_{2}\left({\frac {k}{n}}\right)^{4/d}\right\}\{1+o(1)\},}for some constantsB1{\displaystyle B_{1}}andB2{\displaystyle B_{2}}. The choicek∗=⌊Bn4d+4⌋{\displaystyle k^{*}=\left\lfloor Bn^{\frac {4}{d+4}}\right\rfloor }offers a trade off between the two terms in the above display, for which thek∗{\displaystyle k^{*}}-nearest neighbour error converges to the Bayes error at the optimal (minimax) rateO(n−4d+4){\displaystyle {\mathcal {O}}\left(n^{-{\frac {4}{d+4}}}\right)}. The K-nearest neighbor classification performance can often be significantly improved through (supervised) metric learning. Popular algorithms areneighbourhood components analysisandlarge margin nearest neighbor. Supervised metric learning algorithms use the label information to learn a newmetricorpseudo-metric. When the input data to an algorithm is too large to be processed and it is suspected to be redundant (e.g. the same measurement in both feet and meters) then the input data will be transformed into a reduced representation set of features (also named features vector). Transforming the input data into the set of features is calledfeature extraction. If the features extracted are carefully chosen it is expected that the features set will extract the relevant information from the input data in order to perform the desired task using this reduced representation instead of the full size input. Feature extraction is performed on raw data prior to applyingk-NN algorithm on the transformed data infeature space. An example of a typicalcomputer visioncomputation pipeline forface recognitionusingk-NN including feature extraction and dimension reduction pre-processing steps (usually implemented withOpenCV): For high-dimensional data (e.g., with number of dimensions more than 10)dimension reductionis usually performed prior to applying thek-NN algorithm in order to avoid the effects of thecurse of dimensionality.[18] Thecurse of dimensionalityin thek-NN context basically means thatEuclidean distanceis unhelpful in high dimensions because all vectors are almost equidistant to the search query vector (imagine multiple points lying more or less on a circle with the query point at the center; the distance from the query to all data points in the search space is almost the same). Feature extractionand dimension reduction can be combined in one step usingprincipal component analysis(PCA),linear discriminant analysis(LDA), orcanonical correlation analysis(CCA) techniques as a pre-processing step, followed by clustering byk-NN onfeature vectorsin reduced-dimension space. This process is also called low-dimensionalembedding.[19] For very-high-dimensional datasets (e.g. when performing a similarity search on live video streams, DNA data or high-dimensionaltime series) running a fastapproximatek-NN search usinglocality sensitive hashing, "random projections",[20]"sketches"[21]or other high-dimensional similarity search techniques from theVLDBtoolbox might be the only feasible option. Nearest neighbor rules in effect implicitly compute thedecision boundary. 
It is also possible to compute the decision boundary explicitly, and to do so efficiently, so that the computational complexity is a function of the boundary complexity.[22] Data reductionis one of the most important problems for work with huge data sets. Usually, only some of the data points are needed for accurate classification. Those data are called theprototypesand can be found as follows: A training example surrounded by examples of other classes is called a class outlier. Causes of class outliers include: Class outliers withk-NN produce noise. They can be detected and separated for future analysis. Given two natural numbers,k>r>0, a training example is called a (k,r)NN class-outlier if itsknearest neighbors include more thanrexamples of other classes. Condensed nearest neighbor (CNN, theHartalgorithm) is an algorithm designed to reduce the data set fork-NN classification.[23]It selects the set of prototypesUfrom the training data, such that 1NN withUcan classify the examples almost as accurately as 1NN does with the whole data set. Given a training setX, CNN works iteratively: UseUinstead ofXfor classification. The examples that are not prototypes are called "absorbed" points. It is efficient to scan the training examples in order of decreasing border ratio.[24]The border ratio of a training examplexis defined as where‖x-y‖is the distance to the closest exampleyhaving a different color thanx, and‖x'-y‖is the distance fromyto its closest examplex'with the same label asx. The border ratio is in the interval [0,1] because‖x'-y‖never exceeds‖x-y‖. This ordering gives preference to the borders of the classes for inclusion in the set of prototypesU. A point of a different label thanxis called external tox. The calculation of the border ratio is illustrated by the figure on the right. The data points are labeled by colors: the initial point isxand its label is red. External points are blue and green. The closest toxexternal point isy. The closest toyred point isx'. The border ratioa(x) = ‖x'-y‖ / ‖x-y‖is the attribute of the initial pointx. Below is an illustration of CNN in a series of figures. There are three classes (red, green and blue). Fig. 1: initially there are 60 points in each class. Fig. 2 shows the 1NN classification map: each pixel is classified by 1NN using all the data. Fig. 3 shows the 5NN classification map. White areas correspond to the unclassified regions, where 5NN voting is tied (for example, if there are two green, two red and one blue points among 5 nearest neighbors). Fig. 4 shows the reduced data set. The crosses are the class-outliers selected by the (3,2)NN rule (all the three nearest neighbors of these instances belong to other classes); the squares are the prototypes, and the empty circles are the absorbed points. The left bottom corner shows the numbers of the class-outliers, prototypes and absorbed points for all three classes. The number of prototypes varies from 15% to 20% for different classes in this example. Fig. 5 shows that the 1NN classification map with the prototypes is very similar to that with the initial data set. The figures were produced using the Mirkes applet.[24] Ink-NN regression, also known ask-NN smoothing, thek-NN algorithm is used for estimatingcontinuous variables.[citation needed]One such algorithm uses a weighted average of theknearest neighbors, weighted by the inverse of their distance. 
This algorithm works as follows: compute the distance from the query example to each labelled example, order the labelled examples by increasing distance, choose a suitable number k of neighbours (for example by cross-validation on the prediction error), and return the inverse-distance-weighted average of the values of the k nearest neighbours. The distance to the kth nearest neighbor can also be seen as a local density estimate and thus is also a popular outlier score in anomaly detection: the larger the distance to the k-NN, the lower the local density and the more likely the query point is an outlier.[25] Although quite simple, this outlier model, together with another classic data mining method, the local outlier factor, works well in comparison to more recent and more complex approaches, according to a large-scale experimental analysis.[26] A confusion matrix or "matching matrix" is often used as a tool to validate the accuracy of k-NN classification. More robust statistical methods, such as the likelihood-ratio test, can also be applied.
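A minimal sketch of the two ideas just described, assuming plain NumPy: the inverse-distance-weighted k-NN regression estimate and the distance to the k-th neighbour as an outlier score. The synthetic data and the choice k = 5 are arbitrary illustrations.

```python
import numpy as np

def knn_regress_and_outlier_score(X_train, y_train, x_query, k=5):
    """Inverse-distance-weighted k-NN regression estimate for x_query,
    together with the distance to the k-th nearest neighbour, which can
    serve as an outlier score (larger distance = lower local density)."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(d)[:k]              # indices of the k closest examples
    dists = d[nearest]                       # sorted in increasing order
    w = 1.0 / np.maximum(dists, 1e-12)       # inverse-distance weights
    y_hat = np.dot(w, y_train[nearest]) / w.sum()
    return y_hat, dists[-1]

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)
print(knn_regress_and_outlier_score(X, y, np.array([0.5])))
```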
https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm
The nearest neighbour algorithm was one of the first algorithms used to solve the travelling salesman problem approximately. In that problem, the salesman starts at a random city and repeatedly visits the nearest city until all have been visited. The algorithm quickly yields a short tour, but usually not the optimal one. These are the steps of the algorithm: start at an arbitrary vertex and make it the current vertex; find the shortest edge connecting the current vertex to an unvisited vertex; move along that edge, mark the new vertex as visited and make it the current vertex; repeat until all vertices have been visited. The sequence of the visited vertices is the output of the algorithm. The nearest neighbour algorithm is easy to implement and executes quickly, but it can sometimes miss shorter routes which are easily noticed with human insight, due to its "greedy" nature. As a general guide, if the last few stages of the tour are comparable in length to the first stages, then the tour is reasonable; if they are much greater, then it is likely that much better tours exist. Another check is to use an algorithm such as the lower bound algorithm to estimate whether this tour is good enough. In the worst case, the algorithm results in a tour that is much longer than the optimal tour. To be precise, for every constant r there is an instance of the travelling salesman problem such that the length of the tour computed by the nearest neighbour algorithm is greater than r times the length of the optimal tour. Moreover, for each number of cities there is an assignment of distances between the cities for which the nearest neighbour heuristic produces the unique worst possible tour. (If the algorithm is applied on every vertex as the starting vertex, the best path found will be better than at least N/2-1 other tours, where N is the number of vertices.)[1] The nearest neighbour algorithm may not find a feasible tour at all, even when one exists.
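A brief sketch of the greedy tour construction described above, on randomly placed cities; the instance and the starting vertex are arbitrary.

```python
import numpy as np

def nearest_neighbour_tour(coords, start=0):
    """Greedy TSP heuristic: from the current city, always move to the
    nearest unvisited city; return the visiting order and the tour length."""
    n = len(coords)
    unvisited = set(range(n)) - {start}
    tour, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda j: np.linalg.norm(coords[current] - coords[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    length = sum(np.linalg.norm(coords[tour[i]] - coords[tour[i + 1]]) for i in range(n - 1))
    length += np.linalg.norm(coords[tour[-1]] - coords[tour[0]])   # close the cycle
    return tour, length

rng = np.random.default_rng(0)
cities = rng.uniform(0, 100, size=(30, 2))
tour, length = nearest_neighbour_tour(cities)
print(tour, round(length, 1))
```

Running the construction from every possible starting city and keeping the shortest result is a common, cheap improvement.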
https://en.wikipedia.org/wiki/Nearest_neighbour_algorithm
Whenclassificationis performed by a computer, statistical methods are normally used to develop the algorithm. Often, the individual observations are analyzed into a set of quantifiable properties, known variously asexplanatory variablesorfeatures. These properties may variously becategorical(e.g. "A", "B", "AB" or "O", forblood type),ordinal(e.g. "large", "medium" or "small"),integer-valued(e.g. the number of occurrences of a particular word in anemail) orreal-valued(e.g. a measurement ofblood pressure). Other classifiers work by comparing observations to previous observations by means of asimilarityordistancefunction. Analgorithmthat implements classification, especially in a concrete implementation, is known as aclassifier. The term "classifier" sometimes also refers to the mathematicalfunction, implemented by a classification algorithm, that maps input data to a category. Terminology across fields is quite varied. Instatistics, where classification is often done withlogistic regressionor a similar procedure, the properties of observations are termedexplanatory variables(orindependent variables, regressors, etc.), and the categories to be predicted are known as outcomes, which are considered to be possible values of thedependent variable. Inmachine learning, the observations are often known asinstances, the explanatory variables are termedfeatures(grouped into afeature vector), and the possible categories to be predicted areclasses. Other fields may use different terminology: e.g. incommunity ecology, the term "classification" normally refers tocluster analysis. Classificationand clustering are examples of the more general problem ofpattern recognition, which is the assignment of some sort of output value to a given input value. Other examples areregression, which assigns a real-valued output to each input;sequence labeling, which assigns a class to each member of a sequence of values (for example,part of speech tagging, which assigns apart of speechto each word in an input sentence);parsing, which assigns aparse treeto an input sentence, describing thesyntactic structureof the sentence; etc. A common subclass of classification isprobabilistic classification. Algorithms of this nature usestatistical inferenceto find the best class for a given instance. Unlike other algorithms, which simply output a "best" class, probabilistic algorithms output aprobabilityof the instance being a member of each of the possible classes. The best class is normally then selected as the one with the highest probability. However, such an algorithm has numerous advantages over non-probabilistic classifiers: Early work on statistical classification was undertaken byFisher,[1][2]in the context of two-group problems, leading toFisher's linear discriminantfunction as the rule for assigning a group to a new observation.[3]This early work assumed that data-values within each of the two groups had amultivariate normal distribution. The extension of this same context to more than two groups has also been considered with a restriction imposed that the classification rule should belinear.[3][4]Later work for the multivariate normal distribution allowed the classifier to benonlinear:[5]several classification rules can be derived based on different adjustments of theMahalanobis distance, with a new observation being assigned to the group whose centre has the lowest adjusted distance from the observation. 
Unlike frequentist procedures, Bayesian classification procedures provide a natural way of taking into account any available information about the relative sizes of the different groups within the overall population.[6]Bayesian procedures tend to be computationally expensive and, in the days beforeMarkov chain Monte Carlocomputations were developed, approximations for Bayesian clustering rules were devised.[7] Some Bayesian procedures involve the calculation ofgroup-membership probabilities: these provide a more informative outcome than a simple attribution of a single group-label to each new observation. Classification can be thought of as two separate problems –binary classificationandmulticlass classification. In binary classification, a better understood task, only two classes are involved, whereas multiclass classification involves assigning an object to one of several classes.[8]Since many classification methods have been developed specifically for binary classification, multiclass classification often requires the combined use of multiple binary classifiers. Most algorithms describe an individual instance whose category is to be predicted using afeature vectorof individual, measurable properties of the instance. Each property is termed afeature, also known in statistics as anexplanatory variable(orindependent variable, although features may or may not bestatistically independent). Features may variously bebinary(e.g. "on" or "off");categorical(e.g. "A", "B", "AB" or "O", forblood type);ordinal(e.g. "large", "medium" or "small");integer-valued(e.g. the number of occurrences of a particular word in an email); orreal-valued(e.g. a measurement of blood pressure). If the instance is an image, the feature values might correspond to the pixels of an image; if the instance is a piece of text, the feature values might be occurrence frequencies of different words. Some algorithms work only in terms of discrete data and require that real-valued or integer-valued data bediscretizedinto groups (e.g. less than 5, between 5 and 10, or greater than 10). A large number ofalgorithmsfor classification can be phrased in terms of alinear functionthat assigns a score to each possible categorykbycombiningthe feature vector of an instance with a vector of weights, using adot product. The predicted category is the one with the highest score. This type of score function is known as alinear predictor functionand has the following general form:score⁡(Xi,k)=βk⋅Xi,{\displaystyle \operatorname {score} (\mathbf {X} _{i},k)={\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i},}whereXiis the feature vector for instancei,βkis the vector of weights corresponding to categoryk, and score(Xi,k) is the score associated with assigning instanceito categoryk. Indiscrete choicetheory, where instances represent people and categories represent choices, the score is considered theutilityassociated with personichoosing categoryk. Algorithms with this basic setup are known aslinear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted. Examples of such algorithms include Since no single form of classification is appropriate for all data sets, a large toolkit of classification algorithms has been developed. The most commonly used include:[9] Choices between different possible algorithms are frequently made on the basis of quantitativeevaluation of accuracy. Classification has many applications. 
In some of these, it is employed as a data mining procedure, while in others more detailed statistical modeling is undertaken.
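As an illustration of the linear score function score(X_i, k) = β_k · X_i described above, the following sketch uses made-up weight vectors for three hypothetical categories; the predicted category is the one with the highest score.

```python
import numpy as np

# One weight vector beta_k per category (values are made up for illustration).
betas = np.array([
    [0.5, -1.0, 2.0],    # category 0
    [1.5,  0.3, -0.7],   # category 1
    [-0.2, 0.8, 0.1],    # category 2
])

def classify(x):
    """Return the category with the highest linear score beta_k . x."""
    scores = betas @ x          # dot product of each weight vector with x
    return int(np.argmax(scores)), scores

x_i = np.array([1.0, 2.0, 0.5])   # feature vector of one instance
print(classify(x_i))
```

Linear classifiers differ mainly in how the weight vectors are trained and how the scores are interpreted, not in this scoring step itself.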
https://en.wikipedia.org/wiki/Classification_(machine_learning)
Instatistics, alogistic model(orlogit model) is astatistical modelthat models thelog-oddsof an event as alinear combinationof one or moreindependent variables. Inregression analysis,logistic regression[1](orlogit regression)estimatesthe parameters of a logistic model (the coefficients in the linear or non linear combinations). In binary logistic regression there is a singlebinarydependent variable, coded by anindicator variable, where the two values are labeled "0" and "1", while theindependent variablescan each be a binary variable (two classes, coded by an indicator variable) or acontinuous variable(any real value). The corresponding probability of the value labeled "1" can vary between 0 (certainly the value "0") and 1 (certainly the value "1"), hence the labeling;[2]the function that converts log-odds to probability is thelogistic function, hence the name. Theunit of measurementfor the log-odds scale is called alogit, fromlogistic unit, hence the alternative names. See§ Backgroundand§ Definitionfor formal mathematics, and§ Examplefor a worked example. Binary variables are widely used in statistics to model the probability of a certain class or event taking place, such as the probability of a team winning, of a patient being healthy, etc. (see§ Applications), and the logistic model has been the most commonly used model forbinary regressionsince about 1970.[3]Binary variables can be generalized tocategorical variableswhen there are more than two possible values (e.g. whether an image is of a cat, dog, lion, etc.), and the binary logistic regression generalized tomultinomial logistic regression. If the multiple categories areordered, one can use theordinal logistic regression(for example the proportional odds ordinal logistic model[4]). See§ Extensionsfor further extensions. The logistic regression model itself simply models probability of output in terms of input and does not performstatistical classification(it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make abinary classifier. Analogous linear models for binary variables with a differentsigmoid functioninstead of the logistic function (to convert the linear combination to a probability) can also be used, most notably theprobit model; see§ Alternatives. The defining characteristic of the logistic model is that increasing one of the independent variables multiplicatively scales the odds of the given outcome at aconstantrate, with each independent variable having its own parameter; for a binary dependent variable this generalizes theodds ratio. More abstractly, the logistic function is thenatural parameterfor theBernoulli distribution, and in this sense is the "simplest" way to convert a real number to a probability. In particular, it maximizes entropy (minimizes added information), and in this sense makes the fewest assumptions of the data being modeled; see§ Maximum entropy. The parameters of a logistic regression are most commonly estimated bymaximum-likelihood estimation(MLE). This does not have a closed-form expression, unlikelinear least squares; see§ Model fitting. Logistic regression by MLE plays a similarly basic role for binary or categorical responses as linear regression byordinary least squares(OLS) plays forscalarresponses: it is a simple, well-analyzed baseline model; see§ Comparison with linear regressionfor discussion. 
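A small numeric sketch of two facts above: the logistic function converts log-odds to a probability, and a one-unit increase in an independent variable multiplies the odds by the constant factor e^{β1}. The coefficients here are made up for illustration.

```python
import numpy as np

def logistic(t):
    return 1.0 / (1.0 + np.exp(-t))    # log-odds -> probability

beta0, beta1 = -1.5, 0.8               # hypothetical coefficients
for x in (0.0, 1.0, 2.0):
    p = logistic(beta0 + beta1 * x)
    odds = p / (1 - p)
    print(f"x={x:.0f}  p={p:.3f}  odds={odds:.3f}")

# Each unit increase in x multiplies the odds by exp(beta1), about 2.23 here.
print(np.exp(beta1))
```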
The logistic regression as a general statistical model was originally developed and popularized primarily byJoseph Berkson,[5]beginning inBerkson (1944), where he coined "logit"; see§ History. Logistic regression is used in various fields, including machine learning, most medical fields, and social sciences. For example, the Trauma and Injury Severity Score (TRISS), which is widely used to predict mortality in injured patients, was originally developed by Boydet al.using logistic regression.[6]Many other medical scales used to assess severity of a patient have been developed using logistic regression.[7][8][9][10]Logistic regression may be used to predict the risk of developing a given disease (e.g.diabetes;coronary heart disease), based on observed characteristics of the patient (age, sex,body mass index, results of variousblood tests, etc.).[11][12]Another example might be to predict whether a Nepalese voter will vote Nepali Congress or Communist Party of Nepal or Any Other Party, based on age, income, sex, race, state of residence, votes in previous elections, etc.[13]The technique can also be used inengineering, especially for predicting the probability of failure of a given process, system or product.[14][15]It is also used inmarketingapplications such as prediction of a customer's propensity to purchase a product or halt a subscription, etc.[16]Ineconomics, it can be used to predict the likelihood of a person ending up in the labor force, and a business application would be to predict the likelihood of a homeowner defaulting on amortgage.Conditional random fields, an extension of logistic regression to sequential data, are used innatural language processing. Disaster planners and engineers rely on these models to predict decisions taken by householders or building occupants in small-scale and large-scales evacuations, such as building fires, wildfires, hurricanes among others.[17][18][19]These models help in the development of reliabledisaster managing plansand safer design for thebuilt environment. Logistic regression is asupervised machine learningalgorithm widely used forbinary classificationtasks, such as identifying whether an email is spam or not and diagnosing diseases by assessing the presence or absence of specific conditions based on patient test results. This approach utilizes the logistic (or sigmoid) function to transform a linear combination of input features into a probability value ranging between 0 and 1. This probability indicates the likelihood that a given input corresponds to one of two predefined categories. The essential mechanism of logistic regression is grounded in the logistic function's ability to model the probability of binary outcomes accurately. With its distinctive S-shaped curve, the logistic function effectively maps any real-valued number to a value within the 0 to 1 interval. This feature renders it particularly suitable for binary classification tasks, such as sorting emails into "spam" or "not spam". By calculating the probability that the dependent variable will be categorized into a specific group, logistic regression provides a probabilistic framework that supports informed decision-making.[20] As a simple example, we can use a logistic regression with one explanatory variable and two categories to answer the following question: A group of 20 students spends between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability of the student passing the exam? 
The reason for using logistic regression for this problem is that the values of the dependent variable, pass and fail, while represented by "1" and "0", are notcardinal numbers. If the problem was changed so that pass/fail was replaced with the grade 0–100 (cardinal numbers), then simpleregression analysiscould be used. The table shows the number of hours each student spent studying, and whether they passed (1) or failed (0). We wish to fit a logistic function to the data consisting of the hours studied (xk) and the outcome of the test (yk=1 for pass, 0 for fail). The data points are indexed by the subscriptkwhich runs fromk=1{\displaystyle k=1}tok=K=20{\displaystyle k=K=20}. Thexvariable is called the "explanatory variable", and theyvariable is called the "categorical variable" consisting of two categories: "pass" or "fail" corresponding to the categorical values 1 and 0 respectively. Thelogistic functionis of the form: whereμis alocation parameter(the midpoint of the curve, wherep(μ)=1/2{\displaystyle p(\mu )=1/2}) andsis ascale parameter. This expression may be rewritten as: whereβ0=−μ/s{\displaystyle \beta _{0}=-\mu /s}and is known as theintercept(it is theverticalintercept ory-intercept of the liney=β0+β1x{\displaystyle y=\beta _{0}+\beta _{1}x}), andβ1=1/s{\displaystyle \beta _{1}=1/s}(inverse scale parameter orrate parameter): these are they-intercept and slope of the log-odds as a function ofx. Conversely,μ=−β0/β1{\displaystyle \mu =-\beta _{0}/\beta _{1}}ands=1/β1{\displaystyle s=1/\beta _{1}}. Remark: This model is actually an oversimplification, since it assumes everybody will pass if they learn long enough (limit = 1). The limit value should be a variable parameter too, if you want to make it more realistic. The usual measure ofgoodness of fitfor a logistic regression useslogistic loss(orlog loss), the negativelog-likelihood. For a givenxkandyk, writepk=p(xk){\displaystyle p_{k}=p(x_{k})}. The⁠pk{\displaystyle p_{k}}⁠are the probabilities that the corresponding⁠yk{\displaystyle y_{k}}⁠will equal one and⁠1−pk{\displaystyle 1-p_{k}}⁠are the probabilities that they will be zero (seeBernoulli distribution). We wish to find the values of⁠β0{\displaystyle \beta _{0}}⁠and⁠β1{\displaystyle \beta _{1}}⁠which give the "best fit" to the data. In the case of linear regression, the sum of the squared deviations of the fit from the data points (yk), thesquared error loss, is taken as a measure of the goodness of fit, and the best fit is obtained when that function isminimized. The log loss for thek-th point⁠ℓk{\displaystyle \ell _{k}}⁠is: The log loss can be interpreted as the "surprisal" of the actual outcome⁠yk{\displaystyle y_{k}}⁠relative to the prediction⁠pk{\displaystyle p_{k}}⁠, and is a measure ofinformation content. Log loss is always greater than or equal to 0, equals 0 only in case of a perfect prediction (i.e., whenpk=1{\displaystyle p_{k}=1}andyk=1{\displaystyle y_{k}=1}, orpk=0{\displaystyle p_{k}=0}andyk=0{\displaystyle y_{k}=0}), and approaches infinity as the prediction gets worse (i.e., whenyk=1{\displaystyle y_{k}=1}andpk→0{\displaystyle p_{k}\to 0}oryk=0{\displaystyle y_{k}=0}andpk→1{\displaystyle p_{k}\to 1}), meaning the actual outcome is "more surprising". Since the value of the logistic function is always strictly between zero and one, the log loss is always greater than zero and less than infinity. 
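A short sketch of the logistic fit and the per-point log loss defined above; the location and scale values below are placeholders, not the coefficients actually fitted to the exam data.

```python
import numpy as np

mu, s = 2.5, 0.7                       # placeholder location and scale parameters
beta0, beta1 = -mu / s, 1.0 / s        # intercept and rate parameter, as defined above

def p(x):
    """Modelled probability of passing after x hours of study."""
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))

def log_loss(y, prob):
    """Surprisal of the observed outcome y under the predicted probability prob."""
    return -(y * np.log(prob) + (1 - y) * np.log(1 - prob))

x_k, y_k = 3.0, 1                      # e.g. a student who studied 3 hours and passed
print(p(x_k), log_loss(y_k, p(x_k)))   # the loss is small when p(x_k) is close to y_k
```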
Unlike in a linear regression, where the model can have zero loss at a point by passing through a data point (and zero loss overall if all points are on a line), in a logistic regression it is not possible to have zero loss at any points, since⁠yk{\displaystyle y_{k}}⁠is either 0 or 1, but⁠0<pk<1{\displaystyle 0<p_{k}<1}⁠. These can be combined into a single expression: This expression is more formally known as thecross-entropyof the predicted distribution(pk,(1−pk)){\displaystyle {\big (}p_{k},(1-p_{k}){\big )}}from the actual distribution(yk,(1−yk)){\displaystyle {\big (}y_{k},(1-y_{k}){\big )}}, as probability distributions on the two-element space of (pass, fail). The sum of these, the total loss, is the overall negative log-likelihood⁠−ℓ{\displaystyle -\ell }⁠, and the best fit is obtained for those choices of⁠β0{\displaystyle \beta _{0}}⁠and⁠β1{\displaystyle \beta _{1}}⁠for which⁠−ℓ{\displaystyle -\ell }⁠isminimized. Alternatively, instead ofminimizingthe loss, one canmaximizeits inverse, the (positive) log-likelihood: or equivalently maximize thelikelihood functionitself, which is the probability that the given data set is produced by a particular logistic function: This method is known asmaximum likelihood estimation. Sinceℓis nonlinear in⁠β0{\displaystyle \beta _{0}}⁠and⁠β1{\displaystyle \beta _{1}}⁠, determining their optimum values will require numerical methods. One method of maximizingℓis to require the derivatives ofℓwith respect to⁠β0{\displaystyle \beta _{0}}⁠and⁠β1{\displaystyle \beta _{1}}⁠to be zero: and the maximization procedure can be accomplished by solving the above two equations for⁠β0{\displaystyle \beta _{0}}⁠and⁠β1{\displaystyle \beta _{1}}⁠, which, again, will generally require the use of numerical methods. The values of⁠β0{\displaystyle \beta _{0}}⁠and⁠β1{\displaystyle \beta _{1}}⁠which maximizeℓandLusing the above data are found to be: which yields a value forμandsof: The⁠β0{\displaystyle \beta _{0}}⁠and⁠β1{\displaystyle \beta _{1}}⁠coefficients may be entered into the logistic regression equation to estimate the probability of passing the exam. For example, for a student who studies 2 hours, entering the valuex=2{\displaystyle x=2}into the equation gives the estimated probability of passing the exam of 0.25: Similarly, for a student who studies 4 hours, the estimated probability of passing the exam is 0.87: This table shows the estimated probability of passing the exam for several values of hours studying. The logistic regression analysis gives the following output. By theWald test, the output indicates that hours studying is significantly associated with the probability of passing the exam (p=0.017{\displaystyle p=0.017}). Rather than the Wald method, the recommended method[21]to calculate thep-value for logistic regression is thelikelihood-ratio test(LRT), which for these data givep≈0.00064{\displaystyle p\approx 0.00064}(see§ Deviance and likelihood ratio testsbelow). This simple model is an example of binary logistic regression, and has one explanatory variable and a binary categorical variable which can assume one of two categorical values.Multinomial logistic regressionis the generalization of binary logistic regression to include any number of explanatory variables and any number of categories. An explanation of logistic regression can begin with an explanation of the standardlogistic function. 
The logistic function is asigmoid function, which takes anyrealinputt{\displaystyle t}, and outputs a value between zero and one.[2]For the logit, this is interpreted as taking inputlog-oddsand having outputprobability. Thestandardlogistic functionσ:R→(0,1){\displaystyle \sigma :\mathbb {R} \rightarrow (0,1)}is defined as follows: A graph of the logistic function on thet-interval (−6,6) is shown in Figure 1. Let us assume thatt{\displaystyle t}is a linear function of a singleexplanatory variablex{\displaystyle x}(the case wheret{\displaystyle t}is alinear combinationof multiple explanatory variables is treated similarly). We can then expresst{\displaystyle t}as follows: And the general logistic functionp:R→(0,1){\displaystyle p:\mathbb {R} \rightarrow (0,1)}can now be written as: In the logistic model,p(x){\displaystyle p(x)}is interpreted as the probability of the dependent variableY{\displaystyle Y}equaling a success/case rather than a failure/non-case. It is clear that theresponse variablesYi{\displaystyle Y_{i}}are not identically distributed:P(Yi=1∣X){\displaystyle P(Y_{i}=1\mid X)}differs from one data pointXi{\displaystyle X_{i}}to another, though they are independent givendesign matrixX{\displaystyle X}and shared parametersβ{\displaystyle \beta }.[11] We can now define thelogit(log odds) function as the inverseg=σ−1{\displaystyle g=\sigma ^{-1}}of the standard logistic function. It is easy to see that it satisfies: and equivalently, after exponentiating both sides we have the odds: In the above equations, the terms are as follows: The odds of the dependent variable equaling a case (given some linear combinationx{\displaystyle x}of the predictors) is equivalent to the exponential function of the linear regression expression. This illustrates how thelogitserves as a link function between the probability and the linear regression expression. Given that the logit ranges between negative and positive infinity, it provides an adequate criterion upon which to conduct linear regression and the logit is easily converted back into the odds.[2] So we define odds of the dependent variable equaling a case (given some linear combinationx{\displaystyle x}of the predictors) as follows: For a continuous independent variable the odds ratio can be defined as: This exponential relationship provides an interpretation forβ1{\displaystyle \beta _{1}}: The odds multiply byeβ1{\displaystyle e^{\beta _{1}}}for every 1-unit increase in x.[22] For a binary independent variable the odds ratio is defined asadbc{\displaystyle {\frac {ad}{bc}}}wherea,b,canddare cells in a 2×2contingency table.[23] If there are multiple explanatory variables, the above expressionβ0+β1x{\displaystyle \beta _{0}+\beta _{1}x}can be revised toβ0+β1x1+β2x2+⋯+βmxm=β0+∑i=1mβixi{\displaystyle \beta _{0}+\beta _{1}x_{1}+\beta _{2}x_{2}+\cdots +\beta _{m}x_{m}=\beta _{0}+\sum _{i=1}^{m}\beta _{i}x_{i}}. Then when this is used in the equation relating the log odds of a success to the values of the predictors, the linear regression will be amultiple regressionwithmexplanators; the parametersβi{\displaystyle \beta _{i}}for alli=0,1,2,…,m{\displaystyle i=0,1,2,\dots ,m}are all estimated. Again, the more traditional equations are: and where usuallyb=e{\displaystyle b=e}. A dataset containsNpoints. 
Each pointiconsists of a set ofminput variablesx1,i...xm,i(also calledindependent variables, explanatory variables, predictor variables, features, or attributes), and abinaryoutcome variableYi(also known as adependent variable, response variable, output variable, or class), i.e. it can assume only the two possible values 0 (often meaning "no" or "failure") or 1 (often meaning "yes" or "success"). The goal of logistic regression is to use the dataset to create a predictive model of the outcome variable. As in linear regression, the outcome variablesYiare assumed to depend on the explanatory variablesx1,i...xm,i. The explanatory variables may be of anytype:real-valued,binary,categorical, etc. The main distinction is betweencontinuous variablesanddiscrete variables. (Discrete variables referring to more than two possible choices are typically coded usingdummy variables(orindicator variables), that is, separate explanatory variables taking the value 0 or 1 are created for each possible value of the discrete variable, with a 1 meaning "variable does have the given value" and a 0 meaning "variable does not have that value".) Formally, the outcomesYiare described as beingBernoulli-distributeddata, where each outcome is determined by an unobserved probabilitypithat is specific to the outcome at hand, but related to the explanatory variables. This can be expressed in any of the following equivalent forms: The meanings of these four lines are: The basic idea of logistic regression is to use the mechanism already developed forlinear regressionby modeling the probabilitypiusing alinear predictor function, i.e. alinear combinationof the explanatory variables and a set ofregression coefficientsthat are specific to the model at hand but the same for all trials. The linear predictor functionf(i){\displaystyle f(i)}for a particular data pointiis written as: whereβ0,…,βm{\displaystyle \beta _{0},\ldots ,\beta _{m}}areregression coefficientsindicating the relative effect of a particular explanatory variable on the outcome. The model is usually put into a more compact form as follows: This makes it possible to write the linear predictor function as follows: using the notation for adot productbetween two vectors. The above example of binary logistic regression on one explanatory variable can be generalized to binary logistic regression on any number of explanatory variablesx1, x2,...and any number of categorical valuesy=0,1,2,…{\displaystyle y=0,1,2,\dots }. To begin with, we may consider a logistic model withMexplanatory variables,x1,x2...xMand, as in the example above, two categorical values (y= 0 and 1). For the simple binary logistic regression model, we assumed alinear relationshipbetween the predictor variable and the log-odds (also calledlogit) of the event thaty=1{\displaystyle y=1}. This linear relationship may be extended to the case ofMexplanatory variables: wheretis the log-odds andβi{\displaystyle \beta _{i}}are parameters of the model. An additional generalization has been introduced in which the base of the model (b) is not restricted toEuler's numbere. In most applications, the baseb{\displaystyle b}of the logarithm is usually taken to bee. However, in some cases it can be easier to communicate results by working in base 2 or base 10. For a more compact notation, we will specify the explanatory variables and theβcoefficients as⁠(M+1){\displaystyle (M+1)}⁠-dimensional vectors: with an added explanatory variablex0=1. 
The logit may now be written as: Solving for the probabilitypthaty=1{\displaystyle y=1}yields: whereSb{\displaystyle S_{b}}is thesigmoid functionwith baseb{\displaystyle b}. The above formula shows that once theβm{\displaystyle \beta _{m}}are fixed, we can easily compute either the log-odds thaty=1{\displaystyle y=1}for a given observation, or the probability thaty=1{\displaystyle y=1}for a given observation. The main use-case of a logistic model is to be given an observationx{\displaystyle {\boldsymbol {x}}}, and estimate the probabilityp(x){\displaystyle p({\boldsymbol {x}})}thaty=1{\displaystyle y=1}. The optimum beta coefficients may again be found by maximizing the log-likelihood. ForKmeasurements, definingxk{\displaystyle {\boldsymbol {x}}_{k}}as the explanatory vector of thek-th measurement, andyk{\displaystyle y_{k}}as the categorical outcome of that measurement, the log likelihood may be written in a form very similar to the simpleM=1{\displaystyle M=1}case above: As in the simple example above, finding the optimumβparameters will require numerical methods. One useful technique is to equate the derivatives of the log likelihood with respect to each of theβparameters to zero yielding a set of equations which will hold at the maximum of the log likelihood: wherexmkis the value of thexmexplanatory variable from thek-thmeasurement. Consider an example withM=2{\displaystyle M=2}explanatory variables,b=10{\displaystyle b=10}, and coefficientsβ0=−3{\displaystyle \beta _{0}=-3},β1=1{\displaystyle \beta _{1}=1}, andβ2=2{\displaystyle \beta _{2}=2}which have been determined by the above method. To be concrete, the model is: wherepis the probability of the event thaty=1{\displaystyle y=1}. This can be interpreted as follows: In the above cases of two categories (binomial logistic regression), the categories were indexed by "0" and "1", and we had two probabilities: The probability that the outcome was in category 1 was given byp(x){\displaystyle p({\boldsymbol {x}})}and the probability that the outcome was in category 0 was given by1−p(x){\displaystyle 1-p({\boldsymbol {x}})}. The sum of these probabilities equals 1, which must be true, since "0" and "1" are the only possible categories in this setup. In general, if we have⁠M+1{\displaystyle M+1}⁠explanatory variables (includingx0) and⁠N+1{\displaystyle N+1}⁠categories, we will need⁠N+1{\displaystyle N+1}⁠separate probabilities, one for each category, indexed byn, which describe the probability that the categorical outcomeywill be in categoryy=n, conditional on the vector of covariatesx. The sum of these probabilities over all categories must equal 1. Using the mathematically convenient basee, these probabilities are: Each of the probabilities exceptp0(x){\displaystyle p_{0}({\boldsymbol {x}})}will have their own set of regression coefficientsβn{\displaystyle {\boldsymbol {\beta }}_{n}}. It can be seen that, as required, the sum of thepn(x){\displaystyle p_{n}({\boldsymbol {x}})}over all categoriesnis 1. The selection ofp0(x){\displaystyle p_{0}({\boldsymbol {x}})}to be defined in terms of the other probabilities is artificial. Any of the probabilities could have been selected to be so defined. 
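A sketch of these category probabilities with category 0 as the reference (the "pivot index" named just below), using base e and made-up coefficient vectors.

```python
import numpy as np

# One coefficient vector per non-pivot category (made-up values);
# category 0 is the pivot and implicitly has a zero coefficient vector.
betas = {
    1: np.array([0.2, 1.0, -0.5]),
    2: np.array([-1.0, 0.4, 0.3]),
}

def category_probabilities(x):
    """p_n(x) for n = 0..N, with p_0 defined so that the probabilities sum to 1."""
    x = np.concatenate(([1.0], x))                 # prepend x0 = 1 for the intercept
    exps = {n: np.exp(b @ x) for n, b in betas.items()}
    Z = 1.0 + sum(exps.values())                   # the 1 accounts for the pivot category
    probs = {0: 1.0 / Z}
    probs.update({n: e / Z for n, e in exps.items()})
    return probs

print(category_probabilities(np.array([0.5, -1.2])))   # the values sum to 1
```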
This special value ofnis termed the "pivot index", and the log-odds (tn) are expressed in terms of the pivot probability and are again expressed as a linear combination of the explanatory variables: Note also that for the simple case ofN=1{\displaystyle N=1}, the two-category case is recovered, withp(x)=p1(x){\displaystyle p({\boldsymbol {x}})=p_{1}({\boldsymbol {x}})}andp0(x)=1−p1(x){\displaystyle p_{0}({\boldsymbol {x}})=1-p_{1}({\boldsymbol {x}})}. The log-likelihood that a particular set ofKmeasurements or data points will be generated by the above probabilities can now be calculated. Indexing each measurement byk, let thek-th set of measured explanatory variables be denoted byxk{\displaystyle {\boldsymbol {x}}_{k}}and their categorical outcomes be denoted byyk{\displaystyle y_{k}}which can be equal to any integer in [0,N]. The log-likelihood is then: whereΔ(n,yk){\displaystyle \Delta (n,y_{k})}is anindicator functionwhich equals 1 ifyk= nand zero otherwise. In the case of two explanatory variables, this indicator function was defined asykwhenn= 1 and1-ykwhenn= 0. This was convenient, but not necessary.[24]Again, the optimum beta coefficients may be found by maximizing the log-likelihood function generally using numerical methods. A possible method of solution is to set the derivatives of the log-likelihood with respect to each beta coefficient equal to zero and solve for the beta coefficients: whereβnm{\displaystyle \beta _{nm}}is them-th coefficient of theβn{\displaystyle {\boldsymbol {\beta }}_{n}}vector andxmk{\displaystyle x_{mk}}is them-th explanatory variable of thek-th measurement. Once the beta coefficients have been estimated from the data, we will be able to estimate the probability that any subsequent set of explanatory variables will result in any of the possible outcome categories. There are various equivalent specifications and interpretations of logistic regression, which fit into different types of more general models, and allow different generalizations. The particular model used by logistic regression, which distinguishes it from standardlinear regressionand from other types ofregression analysisused forbinary-valuedoutcomes, is the way the probability of a particular outcome is linked to the linear predictor function: Written using the more compact notation described above, this is: This formulation expresses logistic regression as a type ofgeneralized linear model, which predicts variables with various types ofprobability distributionsby fitting a linear predictor function of the above form to some sort of arbitrary transformation of the expected value of the variable. The intuition for transforming using the logit function (the natural log of the odds) was explained above[clarification needed]. It also has the practical effect of converting the probability (which is bounded to be between 0 and 1) to a variable that ranges over(−∞,+∞){\displaystyle (-\infty ,+\infty )}— thereby matching the potential range of the linear prediction function on the right side of the equation. Both the probabilitiespiand the regression coefficients are unobserved, and the means of determining them is not part of the model itself. They are typically determined by some sort of optimization procedure, e.g.maximum likelihood estimation, that finds values that best fit the observed data (i.e. that give the most accurate predictions for the data already observed), usually subject toregularizationconditions that seek to exclude unlikely values, e.g. 
extremely large values for any of the regression coefficients. The use of a regularization condition is equivalent to doingmaximum a posteriori(MAP) estimation, an extension of maximum likelihood. (Regularization is most commonly done usinga squared regularizing function, which is equivalent to placing a zero-meanGaussianprior distributionon the coefficients, but other regularizers are also possible.) Whether or not regularization is used, it is usually not possible to find a closed-form solution; instead, an iterative numerical method must be used, such asiteratively reweighted least squares(IRLS) or, more commonly these days, aquasi-Newton methodsuch as theL-BFGS method.[25] The interpretation of theβjparameter estimates is as the additive effect on the log of theoddsfor a unit change in thejthe explanatory variable. In the case of a dichotomous explanatory variable, for instance, gendereβ{\displaystyle e^{\beta }}is the estimate of the odds of having the outcome for, say, males compared with females. An equivalent formula uses the inverse of the logit function, which is thelogistic function, i.e.: The formula can also be written as aprobability distribution(specifically, using aprobability mass function): The logistic model has an equivalent formulation as alatent-variable model. This formulation is common in the theory ofdiscrete choicemodels and makes it easier to extend to certain more complicated models with multiple, correlated choices, as well as to compare logistic regression to the closely relatedprobit model. Imagine that, for each triali, there is a continuouslatent variableYi*(i.e. an unobservedrandom variable) that is distributed as follows: where i.e. the latent variable can be written directly in terms of the linear predictor function and an additive randomerror variablethat is distributed according to a standardlogistic distribution. ThenYican be viewed as an indicator for whether this latent variable is positive: The choice of modeling the error variable specifically with a standard logistic distribution, rather than a general logistic distribution with the location and scale set to arbitrary values, seems restrictive, but in fact, it is not. It must be kept in mind that we can choose the regression coefficients ourselves, and very often can use them to offset changes in the parameters of the error variable's distribution. For example, a logistic error-variable distribution with a non-zero location parameterμ(which sets the mean) is equivalent to a distribution with a zero location parameter, whereμhas been added to the intercept coefficient. Both situations produce the same value forYi*regardless of settings of explanatory variables. Similarly, an arbitrary scale parametersis equivalent to setting the scale parameter to 1 and then dividing all regression coefficients bys. In the latter case, the resulting value ofYi*will be smaller by a factor ofsthan in the former case, for all sets of explanatory variables — but critically, it will always remain on the same side of 0, and hence lead to the sameYichoice. (This predicts that the irrelevancy of the scale parameter may not carry over into more complex models where more than two choices are available.) It turns out that this formulation is exactly equivalent to the preceding one, phrased in terms of thegeneralized linear modeland without anylatent variables. 
This can be shown as follows, using the fact that thecumulative distribution function(CDF) of the standardlogistic distributionis thelogistic function, which is the inverse of thelogit function, i.e. Then: This formulation—which is standard indiscrete choicemodels—makes clear the relationship between logistic regression (the "logit model") and theprobit model, which uses an error variable distributed according to a standardnormal distributioninstead of a standard logistic distribution. Both the logistic and normal distributions are symmetric with a basic unimodal, "bell curve" shape. The only difference is that the logistic distribution has somewhatheavier tails, which means that it is less sensitive to outlying data (and hence somewhat morerobustto model mis-specifications or erroneous data). Yet another formulation uses two separate latent variables: where whereEV1(0,1) is a standard type-1extreme value distribution: i.e. Then This model has a separate latent variable and a separate set of regression coefficients for each possible outcome of the dependent variable. The reason for this separation is that it makes it easy to extend logistic regression to multi-outcome categorical variables, as in themultinomial logitmodel. In such a model, it is natural to model each possible outcome using a different set of regression coefficients. It is also possible to motivate each of the separate latent variables as the theoreticalutilityassociated with making the associated choice, and thus motivate logistic regression in terms ofutility theory. (In terms of utility theory, a rational actor always chooses the choice with the greatest associated utility.) This is the approach taken by economists when formulatingdiscrete choicemodels, because it both provides a theoretically strong foundation and facilitates intuitions about the model, which in turn makes it easy to consider various sorts of extensions. (See the example below.) The choice of the type-1extreme value distributionseems fairly arbitrary, but it makes the mathematics work out, and it may be possible to justify its use throughrational choice theory. It turns out that this model is equivalent to the previous model, although this seems non-obvious, since there are now two sets of regression coefficients and error variables, and the error variables have a different distribution. In fact, this model reduces directly to the previous one with the following substitutions: An intuition for this comes from the fact that, since we choose based on the maximum of two values, only their difference matters, not the exact values — and this effectively removes onedegree of freedom. Another critical fact is that the difference of two type-1 extreme-value-distributed variables is a logistic distribution, i.e.ε=ε1−ε0∼Logistic⁡(0,1).{\displaystyle \varepsilon =\varepsilon _{1}-\varepsilon _{0}\sim \operatorname {Logistic} (0,1).}We can demonstrate the equivalent as follows: As an example, consider a province-level election where the choice is between a right-of-center party, a left-of-center party, and a secessionist party (e.g. theParti Québécois, which wantsQuebecto secede fromCanada). We would then use three latent variables, one for each choice. Then, in accordance withutility theory, we can then interpret the latent variables as expressing theutilitythat results from making each of the choices. We can also interpret the regression coefficients as indicating the strength that the associated factor (i.e. 
explanatory variable) has in contributing to the utility — or more correctly, the amount by which a unit change in an explanatory variable changes the utility of a given choice. A voter might expect that the right-of-center party would lower taxes, especially on rich people. This would give low-income people no benefit, i.e. no change in utility (since they usually don't pay taxes); would cause moderate benefit (i.e. somewhat more money, or moderate utility increase) for middle-incoming people; would cause significant benefits for high-income people. On the other hand, the left-of-center party might be expected to raise taxes and offset it with increased welfare and other assistance for the lower and middle classes. This would cause significant positive benefit to low-income people, perhaps a weak benefit to middle-income people, and significant negative benefit to high-income people. Finally, the secessionist party would take no direct actions on the economy, but simply secede. A low-income or middle-income voter might expect basically no clear utility gain or loss from this, but a high-income voter might expect negative utility since he/she is likely to own companies, which will have a harder time doing business in such an environment and probably lose money. These intuitions can be expressed as follows: This clearly shows that Yet another formulation combines the two-way latent variable formulation above with the original formulation higher up without latent variables, and in the process provides a link to one of the standard formulations of themultinomial logit. Here, instead of writing thelogitof the probabilitiespias a linear predictor, we separate the linear predictor into two, one for each of the two outcomes: Two separate sets of regression coefficients have been introduced, just as in the two-way latent variable model, and the two equations appear a form that writes thelogarithmof the associated probability as a linear predictor, with an extra term−ln⁡Z{\displaystyle -\ln Z}at the end. This term, as it turns out, serves as thenormalizing factorensuring that the result is a distribution. This can be seen by exponentiating both sides: In this form it is clear that the purpose ofZis to ensure that the resulting distribution overYiis in fact aprobability distribution, i.e. it sums to 1. This means thatZis simply the sum of all un-normalized probabilities, and by dividing each probability byZ, the probabilities become "normalized". That is: and the resulting equations are Or generally: This shows clearly how to generalize this formulation to more than two outcomes, as inmultinomial logit. This general formulation is exactly thesoftmax functionas in In order to prove that this is equivalent to the previous model, the above model is overspecified, in thatPr(Yi=0){\displaystyle \Pr(Y_{i}=0)}andPr(Yi=1){\displaystyle \Pr(Y_{i}=1)}cannot be independently specified: ratherPr(Yi=0)+Pr(Yi=1)=1{\displaystyle \Pr(Y_{i}=0)+\Pr(Y_{i}=1)=1}so knowing one automatically determines the other. As a result, the model isnonidentifiable, in that multiple combinations ofβ0andβ1will produce the same probabilities for all possible explanatory variables. In fact, it can be seen that adding any constant vector to both of them will produce the same probabilities: As a result, we can simplify matters, and restore identifiability, by picking an arbitrary value for one of the two vectors. 
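A quick numeric check of this non-identifiability, with made-up coefficient vectors: adding the same constant vector to both sets of coefficients leaves the probabilities unchanged (the text just below resolves this by fixing β0 = 0).

```python
import numpy as np

def two_class_probs(beta_0, beta_1, x):
    """Normalized probabilities Pr(Y=0), Pr(Y=1) for one observation x."""
    scores = np.array([beta_0 @ x, beta_1 @ x])
    e = np.exp(scores)
    return e / e.sum()                       # divide by Z to normalize

x = np.array([1.0, 0.5, -2.0])               # includes the constant term x0 = 1
beta_0 = np.array([0.3, -1.0, 0.2])          # made-up coefficient vectors
beta_1 = np.array([-0.5, 2.0, 1.1])
C = np.array([10.0, -3.0, 0.7])              # arbitrary shift vector

print(two_class_probs(beta_0, beta_1, x))
print(two_class_probs(beta_0 + C, beta_1 + C, x))   # identical probabilities
```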
We choose to setβ0=0.{\displaystyle {\boldsymbol {\beta }}_{0}=\mathbf {0} .}Then, and so which shows that this formulation is indeed equivalent to the previous formulation. (As in the two-way latent variable formulation, any settings whereβ=β1−β0{\displaystyle {\boldsymbol {\beta }}={\boldsymbol {\beta }}_{1}-{\boldsymbol {\beta }}_{0}}will produce equivalent results.) Most treatments of themultinomial logitmodel start out either by extending the "log-linear" formulation presented here or the two-way latent variable formulation presented above, since both clearly show the way that the model could be extended to multi-way outcomes. In general, the presentation with latent variables is more common ineconometricsandpolitical science, wherediscrete choicemodels andutility theoryreign, while the "log-linear" formulation here is more common incomputer science, e.g.machine learningandnatural language processing. The model has an equivalent formulation This functional form is commonly called a single-layerperceptronor single-layerartificial neural network. A single-layer neural network computes a continuous output instead of astep function. The derivative ofpiwith respect toX= (x1, ...,xk) is computed from the general form: wheref(X) is ananalytic functioninX. With this choice, the single-layer neural network is identical to the logistic regression model. This function has a continuous derivative, which allows it to be used inbackpropagation. This function is also preferred because its derivative is easily calculated: A closely related model assumes that eachiis associated not with a single Bernoulli trial but withniindependent identically distributedtrials, where the observationYiis the number of successes observed (the sum of the individual Bernoulli-distributed random variables), and hence follows abinomial distribution: An example of this distribution is the fraction of seeds (pi) that germinate afterniare planted. In terms ofexpected values, this model is expressed as follows: so that Or equivalently: This model can be fit using the same sorts of methods as the above more basic model. The regression coefficients are usually estimated usingmaximum likelihood estimation.[26][27]Unlike linear regression with normally distributed residuals, it is not possible to find a closed-form expression for the coefficient values that maximize the likelihood function so an iterative process must be used instead; for exampleNewton's method. This process begins with a tentative solution, revises it slightly to see if it can be improved, and repeats this revision until no more improvement is made, at which point the process is said to have converged.[26] In some instances, the model may not reach convergence. Non-convergence of a model indicates that the coefficients are not meaningful because the iterative process was unable to find appropriate solutions. A failure to converge may occur for a number of reasons: having a large ratio of predictors to cases,multicollinearity,sparseness, or completeseparation. Binary logistic regression (y=0{\displaystyle y=0}ory=1{\displaystyle y=1}) can, for example, be calculated usingiteratively reweighted least squares(IRLS), which is equivalent to maximizing thelog-likelihoodof aBernoulli distributedprocess usingNewton's method. 
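A minimal sketch of the IRLS/Newton iteration, consistent with the vector matrix form spelled out just below; the data are synthetic and the number of iterations is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # regressor matrix with x0 = 1
w_true = np.array([-0.5, 1.2, -2.0])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

w = np.zeros(X.shape[1])
for _ in range(25):                          # IRLS / Newton iterations
    mu = 1 / (1 + np.exp(-X @ w))            # expected values of the Bernoulli outcomes
    S = np.diag(mu * (1 - mu))               # diagonal weighting matrix
    # Newton step rearranged as a weighted least-squares solve:
    # w_new = (X^T S X)^{-1} X^T (S X w + y - mu)
    w = np.linalg.solve(X.T @ S @ X, X.T @ (S @ (X @ w) + y - mu))

print(w)   # should be close to w_true for a large enough sample
```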
If the problem is written in vector matrix form, with parameterswT=[β0,β1,β2,…]{\displaystyle \mathbf {w} ^{T}=[\beta _{0},\beta _{1},\beta _{2},\ldots ]}, explanatory variablesx(i)=[1,x1(i),x2(i),…]T{\displaystyle \mathbf {x} (i)=[1,x_{1}(i),x_{2}(i),\ldots ]^{T}}and expected value of the Bernoulli distributionμ(i)=11+e−wTx(i){\displaystyle \mu (i)={\frac {1}{1+e^{-\mathbf {w} ^{T}\mathbf {x} (i)}}}}, the parametersw{\displaystyle \mathbf {w} }can be found using the following iterative algorithm: whereS=diag⁡(μ(i)(1−μ(i))){\displaystyle \mathbf {S} =\operatorname {diag} (\mu (i)(1-\mu (i)))}is a diagonal weighting matrix,μ=[μ(1),μ(2),…]{\displaystyle {\boldsymbol {\mu }}=[\mu (1),\mu (2),\ldots ]}the vector of expected values, The regressor matrix andy(i)=[y(1),y(2),…]T{\displaystyle \mathbf {y} (i)=[y(1),y(2),\ldots ]^{T}}the vector of response variables. More details can be found in the literature.[29] In aBayesian statisticscontext,prior distributionsare normally placed on the regression coefficients, for example in the form ofGaussian distributions. There is noconjugate priorof thelikelihood functionin logistic regression. When Bayesian inference was performed analytically, this made theposterior distributiondifficult to calculate except in very low dimensions. Now, though, automatic software such asOpenBUGS,JAGS,PyMC,StanorTuring.jlallows these posteriors to be computed using simulation, so lack of conjugacy is not a concern. However, when the sample size or the number of parameters is large, full Bayesian simulation can be slow, and people often use approximate methods such asvariational Bayesian methodsandexpectation propagation. Widely used, the "one in ten rule", states that logistic regression models give stable values for the explanatory variables if based on a minimum of about 10 events per explanatory variable (EPV); whereeventdenotes the cases belonging to the less frequent category in the dependent variable. Thus a study designed to usek{\displaystyle k}explanatory variables for an event (e.g.myocardial infarction) expected to occur in a proportionp{\displaystyle p}of participants in the study will require a total of10k/p{\displaystyle 10k/p}participants. However, there is considerable debate about the reliability of this rule, which is based on simulation studies and lacks a secure theoretical underpinning.[30]According to some authors[31]the rule is overly conservative in some circumstances, with the authors stating, "If we (somewhat subjectively) regard confidence interval coverage less than 93 percent, type I error greater than 7 percent, or relative bias greater than 15 percent as problematic, our results indicate that problems are fairly frequent with 2–4 EPV, uncommon with 5–9 EPV, and still observed with 10–16 EPV. The worst instances of each problem were not severe with 5–9 EPV and usually comparable to those with 10–16 EPV".[32] Others have found results that are not consistent with the above, using different criteria. A useful criterion is whether the fitted model will be expected to achieve the same predictive discrimination in a new sample as it appeared to achieve in the model development sample. For that criterion, 20 events per candidate variable may be required.[33]Also, one can argue that 96 observations are needed only to estimate the model's intercept precisely enough that the margin of error in predicted probabilities is ±0.1 with a 0.95 confidence level.[13] In any fitting procedure, the addition of another fitting parameter to a model (e.g. 
the beta parameters in a logistic regression model) will almost always improve the ability of the model to predict the measured outcomes. This will be true even if the additional term has no predictive value, since the model will simply be "overfitting" to the noise in the data. The question arises as to whether the improvement gained by the addition of another fitting parameter is significant enough to recommend the inclusion of the additional term, or whether the improvement is simply that which may be expected from overfitting. In short, for logistic regression, a statistic known as thedevianceis defined which is a measure of the error between the logistic model fit and the outcome data. In the limit of a large number of data points, the deviance ischi-squareddistributed, which allows achi-squared testto be implemented in order to determine the significance of the explanatory variables. Linear regression and logistic regression have many similarities. For example, in simple linear regression, a set ofKdata points (xk,yk) are fitted to a proposed model function of the formy=b0+b1x{\displaystyle y=b_{0}+b_{1}x}. The fit is obtained by choosing thebparameters which minimize the sum of the squares of the residuals (the squared error term) for each data point: The minimum value which constitutes the fit will be denoted byε^2{\displaystyle {\hat {\varepsilon }}^{2}} The idea of anull modelmay be introduced, in which it is assumed that thexvariable is of no use in predicting the ykoutcomes: The data points are fitted to a null model function of the formy=b0with a squared error term: The fitting process consists of choosing a value ofb0which minimizesε2{\displaystyle \varepsilon ^{2}}of the fit to the null model, denoted byεφ2{\displaystyle \varepsilon _{\varphi }^{2}}where theφ{\displaystyle \varphi }subscript denotes the null model. It is seen that the null model is optimized byb0=y¯{\displaystyle b_{0}={\overline {y}}}wherey¯{\displaystyle {\overline {y}}}is the mean of theykvalues, and the optimizedεφ2{\displaystyle \varepsilon _{\varphi }^{2}}is: which is proportional to the square of the (uncorrected) sample standard deviation of theykdata points. We can imagine a case where theykdata points are randomly assigned to the variousxk, and then fitted using the proposed model. Specifically, we can consider the fits of the proposed model to every permutation of theykoutcomes. It can be shown that the optimized error of any of these fits will never be less than the optimum error of the null model, and that the difference between these minimum error will follow achi-squared distribution, with degrees of freedom equal those of the proposed model minus those of the null model which, in this case, will be2−1=1{\displaystyle 2-1=1}. Using thechi-squared test, we may then estimate how many of these permuted sets ofykwill yield a minimum error less than or equal to the minimum error using the originalyk, and so we can estimate how significant an improvement is given by the inclusion of thexvariable in the proposed model. For logistic regression, the measure of goodness-of-fit is the likelihood functionL, or its logarithm, the log-likelihoodℓ. The likelihood functionLis analogous to theε2{\displaystyle \varepsilon ^{2}}in the linear regression case, except that the likelihood is maximized rather than minimized. Denote the maximized log-likelihood of the proposed model byℓ^{\displaystyle {\hat {\ell }}}. 
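As a quick numerical check of the claims about the null model (using a small made-up sample), the least-squares-optimal constant is the sample mean, and the minimized error equals K times the uncorrected sample variance:

```python
import numpy as np

y = np.array([1.2, 0.7, 2.5, 1.9, 0.4])           # arbitrary outcomes y_k
b0_grid = np.linspace(0, 3, 3001)                 # candidate intercepts for the null model
sse = ((y[:, None] - b0_grid[None, :]) ** 2).sum(axis=0)

best_b0 = b0_grid[np.argmin(sse)]
print(best_b0, y.mean())                          # the optimum b0 is the mean of y
print(sse.min(), len(y) * y.var())                # minimized error = K * uncorrected variance
```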
In the case of simple binary logistic regression, the K data points are fitted in a probabilistic sense to a function of the form: where p(x){\displaystyle p(x)} is the probability that y=1{\displaystyle y=1}. The log-odds are given by: and the log-likelihood is: For the null model, the probability that y=1{\displaystyle y=1} is given by: The log-odds for the null model are given by: and the log-likelihood is: The optimum β0{\displaystyle \beta _{0}} is: where y¯{\displaystyle {\overline {y}}} is again the mean of the yk values. Since we have pφ=y¯{\displaystyle p_{\varphi }={\overline {y}}} at the maximum of L, the maximum log-likelihood for the null model is: Again, we can conceptually consider the fit of the proposed model to every permutation of the yk, and it can be shown that the maximum log-likelihood of these permutation fits will never be smaller than that of the null model: Also, as an analog to the error of the linear regression case, we may define the deviance of a logistic regression fit as: which will always be positive or zero. The reason for this choice is that not only is the deviance a good measure of the goodness of fit, it is also approximately chi-squared distributed, with the approximation improving as the number of data points (K) increases, becoming exactly chi-squared distributed in the limit of an infinite number of data points. As in the case of linear regression, we may use this fact to estimate the probability that a random set of data points will give a better fit than the fit obtained by the proposed model, and so obtain an estimate of how significantly the model is improved by including the xk data points in the proposed model. For the simple model of student test scores described above, the maximum value of the log-likelihood of the null model is ℓ^φ=−13.8629…{\displaystyle {\hat {\ell }}_{\varphi }=-13.8629\ldots } The maximum value of the log-likelihood for the simple model is ℓ^=−8.02988…{\displaystyle {\hat {\ell }}=-8.02988\ldots } so that the deviance is D=2(ℓ^−ℓ^φ)=11.6661…{\displaystyle D=2({\hat {\ell }}-{\hat {\ell }}_{\varphi })=11.6661\ldots } Using the chi-squared test of significance, the integral of the chi-squared distribution with one degree of freedom from 11.6661... to infinity is equal to 0.00063649... This means that about 6 out of 10,000 fits to random yk can be expected to have a better fit (smaller deviance) than the given yk, so we can conclude that the inclusion of the x variable and data in the proposed model is a very significant improvement over the null model. In other words, we reject the null hypothesis at approximately 99.94% confidence, i.e. one minus the p-value of 0.00063649... Goodness of fit in linear regression models is generally measured using R2. Since this has no direct analog in logistic regression, various methods,[34]: ch.21 including the following, can be used instead. In linear regression analysis, one is concerned with partitioning variance via the sum of squares calculations – variance in the criterion is essentially divided into variance accounted for by the predictors and residual variance.
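The deviance and tail probability quoted in the student-scores example can be verified with a few lines of code; SciPy is assumed here purely for the chi-squared tail computation and is not part of the original text.

```python
from scipy.stats import chi2

ll_null = -13.8629      # maximized log-likelihood of the null model (value quoted above)
ll_fit = -8.02988       # maximized log-likelihood of the fitted model (value quoted above)

D = 2 * (ll_fit - ll_null)       # deviance as defined above
p_value = chi2.sf(D, df=1)       # upper tail of the chi-squared distribution with 1 df

print(D, p_value)                # ~11.666 and ~0.00064 (about 6 in 10,000)
print(1 - p_value)               # ~0.9994, the confidence level quoted above
```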
In logistic regression analysis,devianceis used in lieu of a sum of squares calculations.[35]Deviance is analogous to the sum of squares calculations in linear regression[2]and is a measure of the lack of fit to the data in a logistic regression model.[35]When a "saturated" model is available (a model with a theoretically perfect fit), deviance is calculated by comparing a given model with the saturated model.[2]This computation gives thelikelihood-ratio test:[2] In the above equation,Drepresents the deviance and ln represents the natural logarithm. The log of this likelihood ratio (the ratio of the fitted model to the saturated model) will produce a negative value, hence the need for a negative sign.Dcan be shown to follow an approximatechi-squared distribution.[2]Smaller values indicate better fit as the fitted model deviates less from the saturated model. When assessed upon a chi-square distribution, nonsignificant chi-square values indicate very little unexplained variance and thus, good model fit. Conversely, a significant chi-square value indicates that a significant amount of the variance is unexplained. When the saturated model is not available (a common case), deviance is calculated simply as −2·(log likelihood of the fitted model), and the reference to the saturated model's log likelihood can be removed from all that follows without harm. Two measures of deviance are particularly important in logistic regression: null deviance and model deviance. The null deviance represents the difference between a model with only the intercept (which means "no predictors") and the saturated model. The model deviance represents the difference between a model with at least one predictor and the saturated model.[35]In this respect, the null model provides a baseline upon which to compare predictor models. Given that deviance is a measure of the difference between a given model and the saturated model, smaller values indicate better fit. Thus, to assess the contribution of a predictor or set of predictors, one can subtract the model deviance from the null deviance and assess the difference on aχs−p2,{\displaystyle \chi _{s-p}^{2},}chi-square distribution withdegrees of freedom[2]equal to the difference in the number of parameters estimated. Let Then the difference of both is: If the model deviance is significantly smaller than the null deviance then one can conclude that the predictor or set of predictors significantly improve the model's fit. This is analogous to theF-test used in linear regression analysis to assess the significance of prediction.[35] In linear regression the squared multiple correlation,R2is used to assess goodness of fit as it represents the proportion of variance in the criterion that is explained by the predictors.[35]In logistic regression analysis, there is no agreed upon analogous measure, but there are several competing measures each with limitations.[35][36] Four of the most commonly used indices and one less commonly used one are examined on this page: TheHosmer–Lemeshow testuses a test statistic that asymptotically follows aχ2{\displaystyle \chi ^{2}}distributionto assess whether or not the observed event rates match expected event rates in subgroups of the model population. This test is considered to be obsolete by some statisticians because of its dependence on arbitrary binning of predicted probabilities and relative low power.[37] After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. 
To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor.[35]In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient – the odds ratio (seedefinition). In linear regression, the significance of a regression coefficient is assessed by computing attest. In logistic regression, there are several different tests designed to assess the significance of an individual predictor, most notably the likelihood ratio test and the Wald statistic. Thelikelihood-ratio testdiscussed above to assess model fit is also the recommended procedure to assess the contribution of individual "predictors" to a given model.[2][26][35]In the case of a single predictor model, one simply compares the deviance of the predictor model with that of the null model on a chi-square distribution with a single degree of freedom. If the predictor model has significantly smaller deviance (c.f. chi-square using the difference in degrees of freedom of the two models), then one can conclude that there is a significant association between the "predictor" and the outcome. Although some common statistical packages (e.g. SPSS) do provide likelihood ratio test statistics, without this computationally intensive test it would be more difficult to assess the contribution of individual predictors in the multiple logistic regression case.[citation needed]To assess the contribution of individual predictors one can enter the predictors hierarchically, comparing each new model with the previous to determine the contribution of each predictor.[35]There is some debate among statisticians about the appropriateness of so-called "stepwise" procedures.[weasel words]The fear is that they may not preserve nominal statistical properties and may become misleading.[38] Alternatively, when assessing the contribution of individual predictors in a given model, one may examine the significance of theWald statistic. The Wald statistic, analogous to thet-test in linear regression, is used to assess the significance of coefficients. The Wald statistic is the ratio of the square of the regression coefficient to the square of the standard error of the coefficient and is asymptotically distributed as a chi-square distribution.[26] Although several statistical packages (e.g., SPSS, SAS) report the Wald statistic to assess the contribution of individual predictors, the Wald statistic has limitations. When the regression coefficient is large, the standard error of the regression coefficient also tends to be larger increasing the probability ofType-II error. The Wald statistic also tends to be biased when data are sparse.[35] Suppose cases are rare. Then we might wish to sample them more frequently than their prevalence in the population. For example, suppose there is a disease that affects 1 person in 10,000 and to collect our data we need to do a complete physical. It may be too expensive to do thousands of physicals of healthy people in order to obtain data for only a few diseased individuals. Thus, we may evaluate more diseased individuals, perhaps all of the rare outcomes. This is also retrospective sampling, or equivalently it is called unbalanced data. 
As a rule of thumb, sampling controls at a rate of five times the number of cases will produce sufficient control data.[39] Logistic regression is unique in that it may be estimated on unbalanced data, rather than randomly sampled data, and still yield correct coefficient estimates of the effects of each independent variable on the outcome. That is to say, if we form a logistic model from such data, if the model is correct in the general population, theβj{\displaystyle \beta _{j}}parameters are all correct except forβ0{\displaystyle \beta _{0}}. We can correctβ0{\displaystyle \beta _{0}}if we know the true prevalence as follows:[39] whereπ{\displaystyle \pi }is the true prevalence andπ~{\displaystyle {\tilde {\pi }}}is the prevalence in the sample. Like other forms ofregression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. Unlike ordinary linear regression, however, logistic regression is used for predicting dependent variables that takemembership in one of a limited number of categories(treating the dependent variable in the binomial case as the outcome of aBernoulli trial) rather than a continuous outcome. Given this difference, the assumptions of linear regression are violated. In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive). To do that, binomial logistic regression first calculates theoddsof the event happening for different levels of each independent variable, and then takes itslogarithmto create a continuous criterion as a transformed version of the dependent variable. The logarithm of the odds is thelogitof the probability, thelogitis defined as follows:logit⁡p=ln⁡p1−pfor0<p<1.{\displaystyle \operatorname {logit} p=\ln {\frac {p}{1-p}}\quad {\text{for }}0<p<1\,.} Although the dependent variable in logistic regression is Bernoulli, the logit is on an unrestricted scale.[2]The logit function is thelink functionin this kind of generalized linear model, i.e.logit⁡E⁡(Y)=β0+β1x{\displaystyle \operatorname {logit} \operatorname {\mathcal {E}} (Y)=\beta _{0}+\beta _{1}x} Yis the Bernoulli-distributed response variable andxis the predictor variable; theβvalues are the linear parameters. Thelogitof the probability of success is then fitted to the predictors. The predicted value of thelogitis converted back into predicted odds, via the inverse of the natural logarithm – theexponential function. Thus, although the observed dependent variable in binary logistic regression is a 0-or-1 variable, the logistic regression estimates the odds, as a continuous variable, that the dependent variable is a 'success'. In some applications, the odds are all that is needed. In others, a specific yes-or-no prediction is needed for whether the dependent variable is or is not a 'success'; this categorical prediction can be based on the computed odds of success, with predicted odds above some chosen cutoff value being translated into a prediction of success. 
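A minimal sketch of the logit link, its inverse, and an odds cutoff for turning predicted odds into a yes/no prediction, as just described; all coefficient and predictor values below are illustrative, not from the text.

```python
import numpy as np

def logit(p):
    """Log-odds: ln(p / (1 - p)) for 0 < p < 1."""
    return np.log(p / (1 - p))

def inv_logit(z):
    """Inverse of the logit (the logistic function), mapping log-odds back to a probability."""
    return 1.0 / (1.0 + np.exp(-z))

beta0, beta1 = -2.0, 0.8          # illustrative fitted parameters
x = np.array([0.0, 2.0, 5.0])     # illustrative predictor values

log_odds = beta0 + beta1 * x      # linear predictor on the unrestricted logit scale
odds = np.exp(log_odds)           # predicted odds of "success"
prob = inv_logit(log_odds)        # predicted probability of "success"
prediction = odds > 1.0           # categorical prediction using a cutoff of even odds

print(np.round(odds, 3), np.round(prob, 3), prediction)
print(np.allclose(logit(prob), log_odds))   # the logit undoes the logistic transform
```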
Of all the functional forms used for estimating the probabilities of a particular categorical outcome which optimize the fit by maximizing the likelihood function (e.g.probit regression,Poisson regression, etc.), the logistic regression solution is unique in that it is amaximum entropysolution.[40]This is a case of a general property: anexponential familyof distributions maximizes entropy, given an expected value. In the case of the logistic model, the logistic function is thenatural parameterof the Bernoulli distribution (it is in "canonical form", and the logistic function is the canonical link function), while other sigmoid functions are non-canonical link functions; this underlies its mathematical elegance and ease of optimization. SeeExponential family § Maximum entropy derivationfor details. In order to show this, we use the method ofLagrange multipliers. The Lagrangian is equal to the entropy plus the sum of the products of Lagrange multipliers times various constraint expressions. The general multinomial case will be considered, since the proof is not made that much simpler by considering simpler cases. Equating the derivative of the Lagrangian with respect to the various probabilities to zero yields a functional form for those probabilities which corresponds to those used in logistic regression.[40] As in the above section onmultinomial logistic regression, we will consider⁠M+1{\displaystyle M+1}⁠explanatory variables denoted⁠xm{\displaystyle x_{m}}⁠and which includex0=1{\displaystyle x_{0}=1}. There will be a total ofKdata points, indexed byk={1,2,…,K}{\displaystyle k=\{1,2,\dots ,K\}}, and the data points are given byxmk{\displaystyle x_{mk}}and⁠yk{\displaystyle y_{k}}⁠. Thexmkwill also be represented as an⁠(M+1){\displaystyle (M+1)}⁠-dimensional vectorxk={x0k,x1k,…,xMk}{\displaystyle {\boldsymbol {x}}_{k}=\{x_{0k},x_{1k},\dots ,x_{Mk}\}}. There will be⁠N+1{\displaystyle N+1}⁠possible values of the categorical variableyranging from 0 to N. Letpn(x)be the probability, given explanatory variable vectorx, that the outcome will bey=n{\displaystyle y=n}. Definepnk=pn(xk){\displaystyle p_{nk}=p_{n}({\boldsymbol {x}}_{k})}which is the probability that for thek-th measurement, the categorical outcome isn. The Lagrangian will be expressed as a function of the probabilitiespnkand will minimized by equating the derivatives of the Lagrangian with respect to these probabilities to zero. An important point is that the probabilities are treated equally and the fact that they sum to 1 is part of the Lagrangian formulation, rather than being assumed from the beginning. The first contribution to the Lagrangian is theentropy: The log-likelihood is: Assuming the multinomial logistic function, the derivative of the log-likelihood with respect the beta coefficients was found to be: A very important point here is that this expression is (remarkably) not an explicit function of the beta coefficients. It is only a function of the probabilitiespnkand the data. Rather than being specific to the assumed multinomial logistic case, it is taken to be a general statement of the condition at which the log-likelihood is maximized and makes no reference to the functional form ofpnk. There are then (M+1)(N+1) fitting constraints and the fitting constraint term in the Lagrangian is then: where theλnmare the appropriate Lagrange multipliers. There areKnormalization constraints which may be written: so that the normalization term in the Lagrangian is: where theαkare the appropriate Lagrange multipliers. 
The Lagrangian is then the sum of the above three terms: Setting the derivative of the Lagrangian with respect to one of the probabilities to zero yields: Using the more condensed vector notation: and dropping the primes on thenandkindices, and then solving forpnk{\displaystyle p_{nk}}yields: where: Imposing the normalization constraint, we can solve for theZkand write the probabilities as: Theλn{\displaystyle {\boldsymbol {\lambda }}_{n}}are not all independent. We can add any constant⁠(M+1){\displaystyle (M+1)}⁠-dimensional vector to each of theλn{\displaystyle {\boldsymbol {\lambda }}_{n}}without changing the value of thepnk{\displaystyle p_{nk}}probabilities so that there are onlyNrather than⁠N+1{\displaystyle N+1}⁠independentλn{\displaystyle {\boldsymbol {\lambda }}_{n}}. In themultinomial logistic regressionsection above, theλ0{\displaystyle {\boldsymbol {\lambda }}_{0}}was subtracted from eachλn{\displaystyle {\boldsymbol {\lambda }}_{n}}which set the exponential term involvingλ0{\displaystyle {\boldsymbol {\lambda }}_{0}}to 1, and the beta coefficients were given byβn=λn−λ0{\displaystyle {\boldsymbol {\beta }}_{n}={\boldsymbol {\lambda }}_{n}-{\boldsymbol {\lambda }}_{0}}. In machine learning applications where logistic regression is used for binary classification, the MLE minimises thecross-entropyloss function. Logistic regression is an importantmachine learningalgorithm. The goal is to model the probability of a random variableY{\displaystyle Y}being 0 or 1 given experimental data.[41] Consider ageneralized linear modelfunction parameterized byθ{\displaystyle \theta }, Therefore, and sinceY∈{0,1}{\displaystyle Y\in \{0,1\}}, we see thatPr(y∣X;θ){\displaystyle \Pr(y\mid X;\theta )}is given byPr(y∣X;θ)=hθ(X)y(1−hθ(X))(1−y).{\displaystyle \Pr(y\mid X;\theta )=h_{\theta }(X)^{y}(1-h_{\theta }(X))^{(1-y)}.}We now calculate thelikelihood functionassuming that all the observations in the sample are independently Bernoulli distributed, Typically, the log likelihood is maximized, which is maximized using optimization techniques such asgradient descent. Assuming the(x,y){\displaystyle (x,y)}pairs are drawn uniformly from the underlying distribution, then in the limit of largeN, whereH(Y∣X){\displaystyle H(Y\mid X)}is theconditional entropyandDKL{\displaystyle D_{\text{KL}}}is theKullback–Leibler divergence. This leads to the intuition that by maximizing the log-likelihood of a model, you are minimizing the KL divergence of your model from the maximal entropy distribution. Intuitively searching for the model that makes the fewest assumptions in its parameters. Logistic regression can be seen as a special case of thegeneralized linear modeland thus analogous tolinear regression. The model of logistic regression, however, is based on quite different assumptions (about the relationship between the dependent and independent variables) from those of linear regression. In particular, the key differences between these two models can be seen in the following two features of logistic regression. First, the conditional distributiony∣x{\displaystyle y\mid x}is aBernoulli distributionrather than aGaussian distribution, because the dependent variable is binary. Second, the predicted values are probabilities and are therefore restricted to (0,1) through thelogistic distribution functionbecause logistic regression predicts theprobabilityof particular outcomes rather than the outcomes themselves. A common alternative to the logistic model (logit model) is theprobit model, as the related names suggest. 
From the perspective ofgeneralized linear models, these differ in the choice oflink function: the logistic model uses thelogit function(inverse logistic function), while the probit model uses theprobit function(inverseerror function). Equivalently, in the latent variable interpretations of these two methods, the first assumes a standardlogistic distributionof errors and the second a standardnormal distributionof errors.[42]Othersigmoid functionsor error distributions can be used instead. Logistic regression is an alternative to Fisher's 1936 method,linear discriminant analysis.[43]If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis.[44] The assumption of linear predictor effects can easily be relaxed using techniques such asspline functions.[13] A detailed history of the logistic regression is given inCramer (2002). The logistic function was developed as a model ofpopulation growthand named "logistic" byPierre François Verhulstin the 1830s and 1840s, under the guidance ofAdolphe Quetelet; seeLogistic function § Historyfor details.[45]In his earliest paper (1838), Verhulst did not specify how he fit the curves to the data.[46][47]In his more detailed paper (1845), Verhulst determined the three parameters of the model by making the curve pass through three observed points, which yielded poor predictions.[48][49] The logistic function was independently developed in chemistry as a model ofautocatalysis(Wilhelm Ostwald, 1883).[50]An autocatalytic reaction is one in which one of the products is itself acatalystfor the same reaction, while the supply of one of the reactants is fixed. This naturally gives rise to the logistic equation for the same reason as population growth: the reaction is self-reinforcing but constrained. The logistic function was independently rediscovered as a model of population growth in 1920 byRaymond PearlandLowell Reed, published asPearl & Reed (1920), which led to its use in modern statistics. They were initially unaware of Verhulst's work and presumably learned about it fromL. Gustave du Pasquier, but they gave him little credit and did not adopt his terminology.[51]Verhulst's priority was acknowledged and the term "logistic" revived byUdny Yulein 1925 and has been followed since.[52]Pearl and Reed first applied the model to the population of the United States, and also initially fitted the curve by making it pass through three points; as with Verhulst, this again yielded poor results.[53] In the 1930s, theprobit modelwas developed and systematized byChester Ittner Bliss, who coined the term "probit" inBliss (1934), and byJohn GadduminGaddum (1933), and the model fit bymaximum likelihood estimationbyRonald A. FisherinFisher (1935), as an addendum to Bliss's work. The probit model was principally used inbioassay, and had been preceded by earlier work dating to 1860; seeProbit model § History. 
The probit model influenced the subsequent development of the logit model and these models competed with each other.[54] The logistic model was likely first used as an alternative to the probit model in bioassay byEdwin Bidwell Wilsonand his studentJane WorcesterinWilson & Worcester (1943).[55]However, the development of the logistic model as a general alternative to the probit model was principally due to the work ofJoseph Berksonover many decades, beginning inBerkson (1944), where he coined "logit", by analogy with "probit", and continuing throughBerkson (1951)and following years.[56]The logit model was initially dismissed as inferior to the probit model, but "gradually achieved an equal footing with the probit",[57]particularly between 1960 and 1970. By 1970, the logit model achieved parity with the probit model in use in statistics journals and thereafter surpassed it. This relative popularity was due to the adoption of the logit outside of bioassay, rather than displacing the probit within bioassay, and its informal use in practice; the logit's popularity is credited to the logit model's computational simplicity, mathematical properties, and generality, allowing its use in varied fields.[3] Various refinements occurred during that time, notably byDavid Cox, as inCox (1958).[4] The multinomial logit model was introduced independently inCox (1966)andTheil (1969), which greatly increased the scope of application and the popularity of the logit model.[58]In 1973Daniel McFaddenlinked the multinomial logit to the theory ofdiscrete choice, specificallyLuce's choice axiom, showing that the multinomial logit followed from the assumption ofindependence of irrelevant alternativesand interpreting odds of alternatives as relative preferences;[59]this gave a theoretical foundation for the logistic regression.[58] There are large numbers of extensions:
https://en.wikipedia.org/wiki/Logistic_regression
Inmathematical optimizationanddecision theory, aloss functionorcost function(sometimes also called an error function)[1]is a function that maps aneventor values of one or more variables onto areal numberintuitively representing some "cost" associated with the event. Anoptimization problemseeks to minimize a loss function. Anobjective functionis either a loss function or its opposite (in specific domains, variously called areward function, aprofit function, autility function, afitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy. In statistics, typically a loss function is used forparameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old asLaplace, was reintroduced in statistics byAbraham Waldin the middle of the 20th century.[2]In the context ofeconomics, for example, this is usuallyeconomic costorregret. Inclassification, it is the penalty for an incorrect classification of an example. Inactuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works ofHarald Cramérin the 1920s.[3]Inoptimal control, the loss is the penalty for failing to achieve a desired value. Infinancial risk management, the function is mapped to a monetary loss. Leonard J. Savageargued that using non-Bayesian methods such asminimax, the loss function should be based on the idea ofregret, i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made under circumstances will be known and the decision that was in fact taken before they were known. The use of aquadraticloss function is common, for example when usingleast squarestechniques. It is often more mathematically tractable than other loss functions because of the properties ofvariances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target ist, then a quadratic loss function is for some constantC; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1. This is also known as thesquared error loss(SEL).[1] Many commonstatistics, includingt-tests,regressionmodels,design of experiments, and much else, useleast squaresmethods applied usinglinear regressiontheory, which is based on the quadratic loss function. The quadratic loss function is also used inlinear-quadratic optimal control problems. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as aquadratic formin the deviations of the variables of interest from their desired values; this approach istractablebecause it results in linearfirst-order conditions. In the context ofstochastic control, the expected value of the quadratic form is used. The quadratic loss assigns more importance to outliers than to the true data due to its square nature, so alternatives like theHuber, Log-Cosh and SMAE losses are used when the data has many large outliers. Instatisticsanddecision theory, a frequently used loss function is the0-1 loss function usingIverson bracketnotation, i.e. it evaluates to 1 wheny^≠y{\displaystyle {\hat {y}}\neq y}, and 0 otherwise. 
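Both losses just described are straightforward to write down. The following sketch (with arbitrary values) shows the squared-error loss and the Iverson-bracket form of the 0-1 loss:

```python
import numpy as np

def squared_error_loss(y_hat, y, C=1.0):
    """Quadratic loss C * (y_hat - y)**2; the constant C only rescales and can be set to 1."""
    return C * (y_hat - y) ** 2

def zero_one_loss(y_hat, y):
    """0-1 loss via the Iverson bracket: 1 if y_hat != y, else 0."""
    return np.where(y_hat != y, 1, 0)

print(squared_error_loss(2.5, 2.0))                              # 0.25
print(zero_one_loss(np.array([0, 1, 1]), np.array([0, 0, 1])))   # [0 1 0]
```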
In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation. In other situations, the decision maker’s preference must be elicited and represented by a scalar-valued function (called alsoutilityfunction) in a form suitable for optimization — the problem thatRagnar Frischhas highlighted in hisNobel Prizelecture.[4]The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences.[5][6]In particular,Andranik Tangianshowed that the most usable objective functions — quadratic and additive — are determined by a fewindifferencepoints. He used this property in the models for constructing these objective functions from eitherordinalorcardinaldata that were elicited through computer-assisted interviews with decision makers.[7][8]Among other things, he constructed objective functions to optimally distribute budgets for 16 Westfalian universities[9]and the European subsidies for equalizing unemployment rates among 271 German regions.[10] In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variableX. BothfrequentistandBayesianstatistical theory involve making a decision based on theexpected valueof the loss function; however, this quantity is defined differently under the two paradigms. We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to theprobability distribution,Pθ, of the observed data,X. This is also referred to as therisk function[11][12][13][14]of the decision ruleδand the parameterθ. Here the decision rule depends on the outcome ofX. The risk function is given by: Here,θis a fixed but possibly unknown state of nature,Xis a vector of observations stochastically drawn from apopulation,Eθ{\displaystyle \operatorname {E} _{\theta }}is the expectation over all population values ofX,dPθis aprobability measureover the event space ofX(parametrized byθ) and the integral is evaluated over the entiresupportofX. In a Bayesian approach, the expectation is calculated using theprior distributionπ*of the parameterθ: where m(x) is known as thepredictive likelihoodwherein θ has been "integrated out,"π*(θ | x) is the posterior distribution, and the order of integration has been changed. One then should choose the actiona*which minimises this expected loss, which is referred to asBayes Risk. In the latter equation, the integrand inside dx is known as thePosterior Risk, and minimising it with respect to decisionaalso minimizes the overall Bayes Risk. This optimal decision,a*is known as theBayes (decision) Rule- it minimises the average loss over all possible states of nature θ, over all possible (probability-weighted) data outcomes. One advantage of the Bayesian approach is to that one need only choose the optimal action under the actual observed data to obtain a uniformly optimal one, whereas choosing the actual frequentist optimal decision rule as a function of all possible observations, is a much more difficult problem. Of equal importance though, the Bayes Rule reflects consideration of loss outcomes under different states of nature, θ. In economics, decision-making under uncertainty is often modelled using thevon Neumann–Morgenstern utility functionof the uncertain variable of interest, such as end-of-period wealth. 
Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized. Adecision rulemakes a choice using an optimality criterion. Some commonly used criteria are: Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances.[15] A common example involves estimating "location". Under typical statistical assumptions, themeanor average is the statistic for estimating location that minimizes the expected loss experienced under thesquared-errorloss function, while themedianis the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances. In economics, when an agent isrisk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. Forrisk-averseorrisk-lovingagents, loss is measured as the negative of autility function, and the objective function to be optimized is the expected value of utility. Other measures of cost are possible, for examplemortalityormorbidityin the field ofpublic healthorsafety engineering. For mostoptimization algorithms, it is desirable to have a loss function that is globallycontinuousanddifferentiable. Two very commonly used loss functions are thesquared loss,L(a)=a2{\displaystyle L(a)=a^{2}}, and theabsolute loss,L(a)=|a|{\displaystyle L(a)=|a|}. However the absolute loss has the disadvantage that it is not differentiable ata=0{\displaystyle a=0}. The squared loss has the disadvantage that it has the tendency to be dominated byoutliers—when summing over a set ofa{\displaystyle a}'s (as in∑i=1nL(ai){\textstyle \sum _{i=1}^{n}L(a_{i})}), the final sum tends to be the result of a few particularly largea-values, rather than an expression of the averagea-value. The choice of a loss function is not arbitrary. It is very restrictive and sometimes the loss function may be characterized by its desirable properties.[16]Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case ofi.i.d.observations, the principle of complete information, and some others. W. Edwards DemingandNassim Nicholas Talebargue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice and are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after can not, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early. In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. 
These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than the classical smooth, continuous, symmetric, differentiable cases.[17]
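As a numerical illustration of the earlier points that the mean minimizes expected squared-error loss, the median minimizes expected absolute-difference loss, and the squared loss is dominated by outliers, consider the following sketch with a small made-up sample:

```python
import numpy as np

data = np.array([1.0, 2.0, 2.5, 3.0, 40.0])            # arbitrary sample with one large outlier
candidates = np.linspace(0, 45, 9001)                  # candidate location estimates

sq_risk = ((data[:, None] - candidates) ** 2).mean(axis=0)   # average squared loss
abs_risk = np.abs(data[:, None] - candidates).mean(axis=0)   # average absolute loss

print(candidates[np.argmin(sq_risk)], data.mean())      # squared loss -> mean (9.7, pulled by the outlier)
print(candidates[np.argmin(abs_risk)], np.median(data)) # absolute loss -> median (2.5)
```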
https://en.wikipedia.org/wiki/Loss_function
Transfer learning(TL) is a technique inmachine learning(ML) in which knowledge learned from a task is re-used in order to boost performance on a related task.[1]For example, forimage classification, knowledge gained while learning torecognizecars could be applied when trying to recognize trucks. This topic is related to the psychological literature ontransfer of learning, although practical ties between the two fields are limited. Reusing/transferring information from previously learned tasks to new tasks has the potential to significantly improve learning efficiency.[2] Since transfer learning makes use of training with multiple objective functions it is related tocost-sensitive machine learningandmulti-objective optimization.[3] In 1976, Bozinovski and Fulgosi published a paper addressing transfer learning inneural networktraining.[4][5]The paper gives a mathematical and geometrical model of the topic. In 1981, a report considered the application of transfer learning to a dataset of images representing letters of computer terminals, experimentally demonstrating positive and negative transfer learning.[6] In 1992,Lorien Prattformulated the discriminability-based transfer (DBT) algorithm.[7] By 1998, the field had advanced to includemulti-task learning,[8]along with more formal theoretical foundations.[9]Influential publications on transfer learning include the bookLearning to Learnin 1998,[10]a 2009 survey[11]and a 2019 survey.[12] Ngsaid in his NIPS 2016 tutorial[13][14]that TL would become the next driver ofmachine learningcommercial success aftersupervised learning. In the 2020 paper, "Rethinking Pre-Training and self-training",[15]Zoph et al. reported that pre-training can hurt accuracy, and advocate self-training instead. The definition of transfer learning is given in terms of domains and tasks. A domainD{\displaystyle {\mathcal {D}}}consists of: afeature spaceX{\displaystyle {\mathcal {X}}}and amarginal probability distributionP(X){\displaystyle P(X)}, whereX={x1,...,xn}∈X{\displaystyle X=\{x_{1},...,x_{n}\}\in {\mathcal {X}}}. Given a specific domain,D={X,P(X)}{\displaystyle {\mathcal {D}}=\{{\mathcal {X}},P(X)\}}, a task consists of two components: a label spaceY{\displaystyle {\mathcal {Y}}}and an objective predictive functionf:X→Y{\displaystyle f:{\mathcal {X}}\rightarrow {\mathcal {Y}}}. The functionf{\displaystyle f}is used to predict the corresponding labelf(x){\displaystyle f(x)}of a new instancex{\displaystyle x}. 
This task, denoted by T={Y,f(x)}{\displaystyle {\mathcal {T}}=\{{\mathcal {Y}},f(x)\}}, is learned from the training data consisting of pairs {xi,yi}{\displaystyle \{x_{i},y_{i}\}}, where xi∈X{\displaystyle x_{i}\in {\mathcal {X}}} and yi∈Y{\displaystyle y_{i}\in {\mathcal {Y}}}.[16] Given a source domain DS{\displaystyle {\mathcal {D}}_{S}} and learning task TS{\displaystyle {\mathcal {T}}_{S}}, and a target domain DT{\displaystyle {\mathcal {D}}_{T}} and learning task TT{\displaystyle {\mathcal {T}}_{T}}, where DS≠DT{\displaystyle {\mathcal {D}}_{S}\neq {\mathcal {D}}_{T}} or TS≠TT{\displaystyle {\mathcal {T}}_{S}\neq {\mathcal {T}}_{T}}, transfer learning aims to improve the learning of the target predictive function fT(⋅){\displaystyle f_{T}(\cdot )} in DT{\displaystyle {\mathcal {D}}_{T}} using the knowledge in DS{\displaystyle {\mathcal {D}}_{S}} and TS{\displaystyle {\mathcal {T}}_{S}}.[16] Algorithms are available for transfer learning in Markov logic networks[17] and Bayesian networks.[18] Transfer learning has been applied to cancer subtype discovery,[19] building utilization,[20][21] general game playing,[22] text classification,[23][24] digit recognition,[25] medical imaging and spam filtering.[26] In 2020, it was discovered that, due to their similar physical natures, transfer learning is possible between electromyographic (EMG) signals from the muscles and the classification of electroencephalographic (EEG) brainwaves, i.e. from the gesture recognition domain to the mental state recognition domain. It was noted that this relationship works in both directions, showing that EEG can likewise be used to classify EMG.[27] The experiments noted that the accuracy of neural networks and of convolutional neural networks was improved[28] through transfer learning both prior to any learning (compared to a standard random weight initialization) and at the end of the learning process (asymptote); that is, results are improved by exposure to another domain. Moreover, the end-user of a pre-trained model can change the structure of its fully connected layers to improve performance.[29]
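A common practical recipe for the fully-connected-layer modification just mentioned is to freeze a pre-trained backbone and retrain only a replaced output layer on the target task. The sketch below uses PyTorch and torchvision as an assumed toolchain (neither is mentioned in the text), and `num_target_classes` is a placeholder for the target label space:

```python
import torch.nn as nn
from torchvision import models

# Load a backbone pre-trained on an assumed source task (ImageNet classification)
# and freeze its feature-extraction weights.
backbone = models.resnet18(pretrained=True)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final fully connected layer so the network predicts the target labels;
# only this new layer (whose parameters require gradients) will be trained.
num_target_classes = 5    # placeholder for the target label space
backbone.fc = nn.Linear(backbone.fc.in_features, num_target_classes)

# backbone can now be trained as usual on target-domain data, e.g. with
# torch.optim.Adam(backbone.fc.parameters(), lr=1e-3) and a cross-entropy loss.
```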
https://en.wikipedia.org/wiki/Transfer_learning
Aconvolutional neural network(CNN) is a type offeedforward neural networkthat learnsfeaturesviafilter(or kernel) optimization. This type ofdeep learningnetwork has been applied to process and makepredictionsfrom many different types of data including text, images and audio.[1]Convolution-based networks are the de-facto standard indeep learning-based approaches tocomputer vision[2]and image processing, and have only recently been replaced—in some cases—by newer deep learning architectures such as thetransformer. Vanishing gradientsand exploding gradients, seen duringbackpropagationin earlier neural networks, are prevented by theregularizationthat comes from using shared weights over fewer connections.[3][4]For example, foreachneuron in the fully-connected layer, 10,000 weights would be required for processing an image sized 100 × 100 pixels. However, applying cascadedconvolution(or cross-correlation) kernels,[5][6]only 25 weights for each convolutional layer are required to process 5x5-sized tiles.[7][8]Higher-layer features are extracted from wider context windows, compared to lower-layer features. Some applications of CNNs include: CNNs are also known asshift invariantorspace invariant artificial neural networks, based on the shared-weight architecture of theconvolutionkernels or filters that slide along input features and provide translation-equivariantresponses known as feature maps.[14][15]Counter-intuitively, most convolutional neural networks are notinvariant to translation, due to the downsampling operation they apply to the input.[16] Feedforward neural networksare usually fully connected networks, that is, each neuron in onelayeris connected to all neurons in the nextlayer. The "full connectivity" of these networks makes them prone tooverfittingdata. Typical ways of regularization, or preventing overfitting, include: penalizing parameters during training (such as weight decay) or trimming connectivity (skipped connections, dropout, etc.) Robust datasets also increase the probability that CNNs will learn the generalized principles that characterize a given dataset rather than the biases of a poorly-populated set.[17] Convolutional networks wereinspiredbybiologicalprocesses[18][19][20][21]in that the connectivity pattern betweenneuronsresembles the organization of the animalvisual cortex. Individualcortical neuronsrespond to stimuli only in a restricted region of thevisual fieldknown as thereceptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field. CNNs use relatively little pre-processing compared to otherimage classification algorithms. This means that the network learns to optimize thefilters(or kernels) through automated learning, whereas in traditional algorithms these filters arehand-engineered. This simplifies and automates the process, enhancing efficiency and scalability overcoming human-intervention bottlenecks. A convolutional neural network consists of an input layer,hidden layersand an output layer. In a convolutional neural network, the hidden layers include one or more layers that perform convolutions. Typically this includes a layer that performs adot productof the convolution kernel with the layer's input matrix. This product is usually theFrobenius inner product, and its activation function is commonlyReLU. As the convolution kernel slides along the input matrix for the layer, the convolution operation generates a feature map, which in turn contributes to the input of the next layer. 
This is followed by other layers such aspooling layers, fully connected layers, and normalization layers. Here it should be noted how close a convolutional neural network is to amatched filter.[22] In a CNN, the input is atensorwith shape: (number of inputs) × (input height) × (input width) × (inputchannels) After passing through a convolutional layer, the image becomes abstracted to a feature map, also called an activation map, with shape: (number of inputs) × (feature map height) × (feature map width) × (feature mapchannels). Convolutional layers convolve the input and pass its result to the next layer. This is similar to the response of a neuron in the visual cortex to a specific stimulus.[23]Each convolutional neuron processes data only for itsreceptive field. Althoughfully connected feedforward neural networkscan be used to learn features and classify data, this architecture is generally impractical for larger inputs (e.g., high-resolution images), which would require massive numbers of neurons because each pixel is a relevant input feature. A fully connected layer for an image of size 100 × 100 has 10,000 weights foreachneuron in the second layer. Convolution reduces the number of free parameters, allowing the network to be deeper.[7]For example, using a 5 × 5 tiling region, each with the same shared weights, requires only 25 neurons. Using shared weights means there are many fewer parameters, which helps avoid the vanishing gradients and exploding gradients problems seen duringbackpropagationin earlier neural networks.[3][4] To speed processing, standard convolutional layers can be replaced by depthwise separable convolutional layers,[24]which are based on a depthwise convolution followed by a pointwise convolution. Thedepthwise convolutionis a spatial convolution applied independently over each channel of the input tensor, while thepointwise convolutionis a standard convolution restricted to the use of1×1{\displaystyle 1\times 1}kernels. Convolutional networks may include local and/or global pooling layers along with traditional convolutional layers. Pooling layers reduce the dimensions of data by combining the outputs of neuron clusters at one layer into a single neuron in the next layer. Local pooling combines small clusters, tiling sizes such as 2 × 2 are commonly used. Global pooling acts on all the neurons of the feature map.[25][26]There are two common types of pooling in popular use: max and average.Max poolinguses the maximum value of each local cluster of neurons in the feature map,[27][28]whileaverage poolingtakes the average value. Fully connected layers connect every neuron in one layer to every neuron in another layer. It is the same as a traditionalmultilayer perceptronneural network (MLP). The flattened matrix goes through a fully connected layer to classify the images. In neural networks, each neuron receives input from some number of locations in the previous layer. In a convolutional layer, each neuron receives input from only a restricted area of the previous layer called the neuron'sreceptive field. Typically the area is a square (e.g. 5 by 5 neurons). Whereas, in a fully connected layer, the receptive field is theentire previous layer. Thus, in each convolutional layer, each neuron takes input from a larger area in the input than previous layers. This is due to applying the convolution over and over, which takes the value of a pixel into account, as well as its surrounding pixels. 
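The growth of the receptive field with depth can be tracked with a standard recursion (a sketch under the usual assumptions; the layer stack below is made up). Each layer with kernel size k, stride s and dilation d enlarges the receptive field by d·(k−1)·j, where j is the cumulative stride of the preceding layers; dilated layers, discussed next, enlarge it without adding parameters.

```python
def receptive_field(layers):
    """Track the receptive field size r and cumulative stride j through a stack of layers.

    layers: list of (kernel_size, stride, dilation) tuples, in order from the input.
    """
    r, j = 1, 1
    for k, s, d in layers:
        k_eff = d * (k - 1) + 1      # effective kernel size of a dilated convolution
        r += (k_eff - 1) * j         # each layer sees a wider window of the input
        j *= s                       # downsampling increases the step between outputs
    return r

print(receptive_field([(3, 1, 1), (3, 1, 1), (3, 1, 1)]))   # three 3x3 convs -> 7
print(receptive_field([(3, 1, 1), (2, 2, 1), (3, 1, 1)]))   # conv, 2x2 pool, conv -> 8
print(receptive_field([(3, 1, 2), (3, 1, 4)]))              # two dilated 3x3 convs -> 13
```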
When using dilated layers, the number of pixels in the receptive field remains constant, but the field is more sparsely populated as its dimensions grow when combining the effect of several layers. To manipulate the receptive field size as desired, there are some alternatives to the standard convolutional layer. For example, atrous or dilated convolution[29][30]expands the receptive field size without increasing the number of parameters by interleaving visible and blind regions. Moreover, a single dilated convolutional layer can comprise filters with multiple dilation ratios,[31]thus having a variable receptive field size. Each neuron in a neural network computes an output value by applying a specific function to the input values received from the receptive field in the previous layer. The function that is applied to the input values is determined by a vector of weights and a bias (typically real numbers). Learning consists of iteratively adjusting these biases and weights. The vectors of weights and biases are calledfiltersand represent particularfeaturesof the input (e.g., a particular shape). A distinguishing feature of CNNs is that many neurons can share the same filter. This reduces thememory footprintbecause a single bias and a single vector of weights are used across all receptive fields that share that filter, as opposed to each receptive field having its own bias and vector weighting.[32] A deconvolutional neural network is essentially the reverse of a CNN. It consists of deconvolutional layers and unpooling layers.[33] A deconvolutional layer is the transpose of a convolutional layer. Specifically, a convolutional layer can be written as a multiplication with a matrix, and a deconvolutional layer is multiplication with the transpose of that matrix.[34] An unpooling layer expands the layer. The max-unpooling layer is the simplest, as it simply copies each entry multiple times. For example, a 2-by-2 max-unpooling layer is[x]↦[xxxx]{\displaystyle [x]\mapsto {\begin{bmatrix}x&x\\x&x\end{bmatrix}}}. Deconvolution layers are used in image generators. By default, it creates periodic checkerboard artifact, which can be fixed by upscale-then-convolve.[35] CNN are often compared to the way the brain achieves vision processing in livingorganisms.[36] Work byHubelandWieselin the 1950s and 1960s showed that catvisual corticescontain neurons that individually respond to small regions of thevisual field. Provided the eyes are not moving, the region of visual space within which visual stimuli affect the firing of a single neuron is known as itsreceptive field.[37]Neighboring cells have similar and overlapping receptive fields. Receptive field size and location varies systematically across the cortex to form a complete map of visual space.[citation needed]The cortex in each hemisphere represents the contralateralvisual field.[citation needed] Their 1968 paper identified two basic visual cell types in the brain:[19] Hubel and Wiesel also proposed a cascading model of these two types of cells for use in pattern recognition tasks.[38][37] In 1969,Kunihiko Fukushimaintroduced a multilayer visual feature detection network, inspired by the above-mentioned work of Hubel and Wiesel, in which "All the elements in one layer have the same set of interconnecting coefficients; the arrangement of the elements and their interconnections are all homogeneous over a given layer." This is the essential core of a convolutional network, but the weights were not trained. 
In the same paper, Fukushima also introduced theReLU(rectified linear unit)activation function.[39][40] The "neocognitron"[18]was introduced by Fukushima in 1980.[20][28][41]The neocognitron introduced the two basic types of layers: Severalsupervisedandunsupervised learningalgorithms have been proposed over the decades to train the weights of a neocognitron.[18]Today, however, the CNN architecture is usually trained throughbackpropagation. Fukushima's ReLU activation function was not used in his neocognitron since all the weights were nonnegative; lateral inhibition was used instead. The rectifier has become a very popular activation function for CNNs anddeep neural networksin general.[42] The term "convolution" first appears in neural networks in a paper by Toshiteru Homma, Les Atlas, and Robert Marks II at the firstConference on Neural Information Processing Systemsin 1987. Their paper replaced multiplication with convolution in time, inherently providing shift invariance, motivated by and connecting more directly to thesignal-processing concept of a filter, and demonstrated it on a speech recognition task.[8]They also pointed out that as a data-trainable system, convolution is essentially equivalent to correlation since reversal of the weights does not affect the final learned function ("For convenience, we denote * as correlation instead of convolution. Note that convolving a(t) with b(t) is equivalent to correlating a(-t) with b(t).").[8]Modern CNN implementations typically do correlation and call it convolution, for convenience, as they did here. Thetime delay neural network(TDNN) was introduced in 1987 byAlex Waibelet al. for phoneme recognition and was an early convolutional network exhibiting shift-invariance.[43]A TDNN is a 1-D convolutional neural net where the convolution is performed along the time axis of the data. It is the first CNN utilizing weight sharing in combination with a training by gradient descent, usingbackpropagation.[44]Thus, while also using a pyramidal structure as in the neocognitron, it performed a global optimization of the weights instead of a local one.[43] TDNNs are convolutional networks that share weights along the temporal dimension.[45]They allow speech signals to be processed time-invariantly. In 1990 Hampshire and Waibel introduced a variant that performs a two-dimensional convolution.[46]Since these TDNNs operated on spectrograms, the resulting phoneme recognition system was invariant to both time and frequency shifts, as with images processed by a neocognitron. TDNNs improved the performance of far-distance speech recognition.[47] Denker et al. (1989) designed a 2-D CNN system to recognize hand-writtenZIP Codenumbers.[48]However, the lack of an efficient training method to determine the kernel coefficients of the involved convolutions meant that all the coefficients had to be laboriously hand-designed.[49] Following the advances in the training of 1-D CNNs by Waibel et al. (1987),Yann LeCunet al. (1989)[49]used back-propagation to learn the convolution kernel coefficients directly from images of hand-written numbers. Learning was thus fully automatic, performed better than manual coefficient design, and was suited to a broader range of image recognition problems and image types. Wei Zhang et al. (1988)[14][15]used back-propagation to train the convolution kernels of a CNN for alphabets recognition. The model was called shift-invariant pattern recognition neural network before the name CNN was coined later in the early 1990s. Wei Zhang et al. 
also applied the same CNN without the last fully connected layer for medical image object segmentation (1991)[50]and breast cancer detection in mammograms (1994).[51] This approach became a foundation of moderncomputer vision. In 1990 Yamaguchi et al. introduced the concept of max pooling, a fixed filtering operation that calculates and propagates the maximum value of a given region. They did so by combining TDNNs with max pooling to realize a speaker-independent isolated word recognition system.[27]In their system they used several TDNNs per word, one for eachsyllable. The results of each TDNN over the input signal were combined using max pooling and the outputs of the pooling layers were then passed on to networks performing the actual word classification. In a variant of the neocognitron called thecresceptron, instead of using Fukushima's spatial averaging with inhibition and saturation, J. Weng et al. in 1993 used max pooling, where a downsampling unit computes the maximum of the activations of the units in its patch,[52]introducing this method into the vision field. Max pooling is often used in modern CNNs.[53] LeNet-5, a pioneering 7-level convolutional network byLeCunet al. in 1995,[54]classifies hand-written numbers on checks (British English:cheques) digitized in 32x32 pixel images. The ability to process higher-resolution images requires larger and more layers of convolutional neural networks, so this technique is constrained by the availability of computing resources. It was superior than other commercial courtesy amount reading systems (as of 1995). The system was integrated inNCR's check reading systems, and fielded in several American banks since June 1996, reading millions of checks per day.[55] A shift-invariant neural network was proposed by Wei Zhang et al. for image character recognition in 1988.[14][15]It is a modified Neocognitron by keeping only the convolutional interconnections between the image feature layers and the last fully connected layer. The model was trained with back-propagation. The training algorithm was further improved in 1991[56]to improve its generalization ability. The model architecture was modified by removing the last fully connected layer and applied for medical image segmentation (1991)[50]and automatic detection of breast cancer inmammograms (1994).[51] A different convolution-based design was proposed in 1988[57]for application to decomposition of one-dimensionalelectromyographyconvolved signals via de-convolution. This design was modified in 1989 to other de-convolution-based designs.[58][59] Although CNNs were invented in the 1980s, their breakthrough in the 2000s required fast implementations ongraphics processing units(GPUs). In 2004, it was shown by K. S. Oh and K. Jung that standard neural networks can be greatly accelerated on GPUs. Their implementation was 20 times faster than an equivalent implementation onCPU.[60]In 2005, another paper also emphasised the value ofGPGPUformachine learning.[61] The first GPU-implementation of a CNN was described in 2006 by K. Chellapilla et al. Their implementation was 4 times faster than an equivalent implementation on CPU.[62]In the same period, GPUs were also used for unsupervised training ofdeep belief networks.[63][64][65][66] In 2010, Dan Ciresan et al. 
atIDSIAtrained deep feedforward networks on GPUs.[67]In 2011, they extended this to CNNs, accelerating training by a factor of 60 compared to CPU training.[25]In 2011, the network won an image recognition contest, achieving superhuman performance for the first time.[68]Then they won more competitions and achieved state of the art on several benchmarks.[69][53][28] Subsequently,AlexNet, a similar GPU-based CNN by Alex Krizhevsky et al., won theImageNet Large Scale Visual Recognition Challenge2012.[70]It was an early catalytic event for theAI boom. Compared to the training of CNNs usingGPUs, not much attention has been given to CPUs. Viebke et al. (2019) parallelized CNN training using the thread- andSIMD-level parallelism available on theIntel Xeon Phi.[71][72] In the past, traditionalmultilayer perceptron(MLP) models were used for image recognition.[example needed]However, the full connectivity between nodes caused thecurse of dimensionality, and was computationally intractable with higher-resolution images. A 1000×1000-pixel image withRGB colorchannels has 3 million weights per fully-connected neuron, which is too many to process efficiently at scale. For example, inCIFAR-10, images are only of size 32×32×3 (32 wide, 32 high, 3 color channels), so a single fully connected neuron in the first hidden layer of a regular neural network would have 32*32*3 = 3,072 weights. A 200×200 image, however, would lead to neurons that have 200*200*3 = 120,000 weights. Also, such a network architecture does not take into account the spatial structure of data, treating input pixels which are far apart in the same way as pixels that are close together. This ignoreslocality of referencein data with a grid-topology (such as images), both computationally and semantically. Thus, full connectivity of neurons is wasteful for purposes such as image recognition that are dominated byspatially localinput patterns. Convolutional neural networks are variants of multilayer perceptrons, designed to emulate the behavior of avisual cortex. These models mitigate the challenges posed by the MLP architecture by exploiting the strong spatially local correlation present in natural images. As opposed to MLPs, CNNs have the following distinguishing features: Together, these properties allow CNNs to achieve better generalization onvision problems. Weight sharing dramatically reduces the number offree parameterslearned, thus lowering the memory requirements for running the network and allowing the training of larger, more powerful networks. A CNN architecture is formed by a stack of distinct layers that transform the input volume into an output volume (e.g. holding the class scores) through a differentiable function. A few distinct types of layers are commonly used. These are further discussed below. The convolutional layer is the core building block of a CNN. The layer's parameters consist of a set of learnablefilters(orkernels), which have a small receptive field, but extend through the full depth of the input volume. During the forward pass, each filter isconvolvedacross the width and height of the input volume, computing thedot productbetween the filter entries and the input, producing a 2-dimensionalactivation mapof that filter. As a result, the network learns filters that activate when it detects some specific type offeatureat some spatial position in the input.[75][nb 1] Stacking the activation maps for all filters along the depth dimension forms the full output volume of the convolution layer. 
Every entry in the output volume can thus also be interpreted as an output of a neuron that looks at a small region in the input. Each entry in an activation map uses the same set of parameters that define the filter. Self-supervised learninghas been adapted for use in convolutional layers by using sparse patches with a high-mask ratio and a global response normalization layer.[citation needed] When dealing with high-dimensional inputs such as images, it is impractical to connect neurons to all neurons in the previous volume because such a network architecture does not take the spatial structure of the data into account. Convolutional networks exploit spatially local correlation by enforcing asparse local connectivitypattern between neurons of adjacent layers: each neuron is connected to only a small region of the input volume. The extent of this connectivity is ahyperparametercalled thereceptive fieldof the neuron. The connections arelocal in space(along width and height), but always extend along the entire depth of the input volume. Such an architecture ensures that the learned filters produce the strongest response to a spatially local input pattern.[76] Threehyperparameterscontrol the size of the output volume of the convolutional layer: the depth,stride, and padding size. The spatial size of the output volume is a function of the input volume sizeW{\displaystyle W}, the kernel field sizeK{\displaystyle K}of the convolutional layer neurons, the strideS{\displaystyle S}, and the amount of zero paddingP{\displaystyle P}on the border. The number of neurons that "fit" in a given volume is thenW−K+2PS+1{\displaystyle {\frac {W-K+2P}{S}}+1}. If this number is not aninteger, then the strides are incorrect and the neurons cannot be tiled to fit across the input volume in asymmetricway. In general, setting zero padding to beP=(K−1)/2{\textstyle P=(K-1)/2}when the stride isS=1{\displaystyle S=1}ensures that the input volume and output volume will have the same size spatially. However, it is not always completely necessary to use all of the neurons of the previous layer. For example, a neural network designer may decide to use just a portion of padding. A parameter sharing scheme is used in convolutional layers to control the number of free parameters. It relies on the assumption that if a patch feature is useful to compute at some spatial position, then it should also be useful to compute at other positions. Denoting a single 2-dimensional slice of depth as adepth slice, the neurons in each depth slice are constrained to use the same weights and bias. Since all neurons in a single depth slice share the same parameters, the forward pass in each depth slice of the convolutional layer can be computed as aconvolutionof the neuron's weights with the input volume.[nb 2]Therefore, it is common to refer to the sets of weights as a filter (or akernel), which is convolved with the input. The result of this convolution is anactivation map, and the set of activation maps for each different filter are stacked together along the depth dimension to produce the output volume. Parameter sharing contributes to thetranslation invarianceof the CNN architecture.[16] Sometimes, the parameter sharing assumption may not make sense. This is especially the case when the input images to a CNN have some specific centered structure, for which we expect completely different features to be learned at different spatial locations. 
One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image. In that case it is common to relax the parameter sharing scheme, and instead simply call the layer a "locally connected layer". Another important concept of CNNs is pooling, which is used as a form of non-lineardown-sampling. Pooling provides downsampling because it reduces the spatial dimensions (height and width) of the input feature maps while retaining the most important information. There are several non-linear functions to implement pooling, wheremax poolingandaverage poolingare the most common. Pooling aggregates information from small regions of the input creatingpartitionsof the input feature map, typically using a fixed-size window (like 2x2) and applying a stride (often 2) to move the window across the input.[78]Note that without using a stride greater than 1, pooling would not perform downsampling, as it would simply move the pooling window across the input one step at a time, without reducing the size of the feature map. In other words, the stride is what actually causes the downsampling by determining how much the pooling window moves over the input. Intuitively, the exact location of a feature is less important than its rough location relative to other features. This is the idea behind the use of pooling in convolutional neural networks. The pooling layer serves to progressively reduce the spatial size of the representation, to reduce the number of parameters,memory footprintand amount of computation in the network, and hence to also controloverfitting. This is known as down-sampling. It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by an activation function, such as aReLU layer) in a CNN architecture.[75]: 460–461While pooling layers contribute to local translation invariance, they do not provide global translation invariance in a CNN, unless a form of global pooling is used.[16][74]The pooling layer commonly operates independently on every depth, or slice, of the input and resizes it spatially. A very common form of max pooling is a layer with filters of size 2×2, applied with a stride of 2, which subsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:fX,Y(S)=maxa,b=01S2X+a,2Y+b.{\displaystyle f_{X,Y}(S)=\max _{a,b=0}^{1}S_{2X+a,2Y+b}.}In this case, everymax operationis over 4 numbers. The depth dimension remains unchanged (this is true for other forms of pooling as well). In addition to max pooling, pooling units can use other functions, such asaveragepooling orℓ2-normpooling. Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which generally performs better in practice.[79] Due to the effects of fast spatial reduction of the size of the representation,[which?]there is a recent trend towards using smaller filters[80]or discarding pooling layers altogether.[81] A channel max pooling (CMP) operation layer conducts the MP operation along the channel side among the corresponding positions of the consecutive feature maps for the purpose of redundant information elimination. The CMP makes the significant features gather together within fewer channels, which is important for fine-grained image classification that needs more discriminating features. 
Meanwhile, another advantage of the CMP operation is to make the channel number of feature maps smaller before it connects to the first fully connected (FC) layer. Similar to the MP operation, we denote the input feature maps and output feature maps of a CMP layer as F ∈ R(C×M×N) and C ∈ R(c×M×N), respectively, where C and c are the channel numbers of the input and output feature maps, M and N are the widths and the height of the feature maps, respectively. Note that the CMP operation only changes the channel number of the feature maps. The width and the height of the feature maps are not changed, which is different from the MP operation.[82] See[83][84]for reviews for pooling methods. ReLU is the abbreviation ofrectified linear unit. It was proposed byAlston Householderin 1941,[85]and used in CNN byKunihiko Fukushimain 1969.[39]ReLU applies the non-saturatingactivation functionf(x)=max(0,x){\textstyle f(x)=\max(0,x)}.[70]It effectively removes negative values from an activation map by setting them to zero.[86]It introducesnonlinearityto thedecision functionand in the overall network without affecting the receptive fields of the convolution layers. In 2011, Xavier Glorot, Antoine Bordes andYoshua Bengiofound that ReLU enables better training of deeper networks,[87]compared to widely used activation functions prior to 2011. Other functions can also be used to increase nonlinearity, for example the saturatinghyperbolic tangentf(x)=tanh⁡(x){\displaystyle f(x)=\tanh(x)},f(x)=|tanh⁡(x)|{\displaystyle f(x)=|\tanh(x)|}, and thesigmoid functionσ(x)=(1+e−x)−1{\textstyle \sigma (x)=(1+e^{-x})^{-1}}. ReLU is often preferred to other functions because it trains the neural network several times faster without a significant penalty togeneralizationaccuracy.[88] After several convolutional and max pooling layers, the final classification is done via fully connected layers. Neurons in a fully connected layer have connections to all activations in the previous layer, as seen in regular (non-convolutional)artificial neural networks. Their activations can thus be computed as anaffine transformation, withmatrix multiplicationfollowed by a bias offset (vector additionof a learned or fixed bias term). The "loss layer", or "loss function", exemplifies howtrainingpenalizes the deviation between the predicted output of the network, and thetruedata labels (during supervised learning). Variousloss functionscan be used, depending on the specific task. TheSoftmaxloss function is used for predicting a single class ofKmutually exclusive classes.[nb 3]Sigmoidcross-entropyloss is used for predictingKindependent probability values in[0,1]{\displaystyle [0,1]}.Euclideanloss is used forregressingtoreal-valuedlabels(−∞,∞){\displaystyle (-\infty ,\infty )}. Hyperparameters are various settings that are used to control the learning process. CNNs use morehyperparametersthan a standard multilayer perceptron (MLP). Padding is the addition of (typically) 0-valued pixels on the borders of an image. This is done so that the border pixels are not undervalued (lost) from the output because they would ordinarily participate in only a single receptive field instance. The padding applied is typically one less than the corresponding kernel dimension. For example, a convolutional layer using 3x3 kernels would receive a 2-pixel pad, that is 1 pixel on each side of the image.[citation needed] The stride is the number of pixels that the analysis window moves on each iteration. 
A stride of 2 means that each kernel is offset by 2 pixels from its predecessor. Since feature map size decreases with depth, layers near the input layer tend to have fewer filters while higher layers can have more. To equalize computation at each layer, the product of the number of feature maps and the number of pixel positions is kept roughly constant across layers. Preserving more information about the input would require keeping the total number of activations (number of feature maps times number of pixel positions) non-decreasing from one layer to the next. The number of feature maps directly controls the capacity and depends on the number of available examples and task complexity. Common filter sizes found in the literature vary greatly, and are usually chosen based on the data set. Typical filter sizes range from 1x1 to 7x7. As two famous examples,AlexNetused 3x3, 5x5, and 11x11.Inceptionv3used 1x1, 3x3, and 5x5. The challenge is to find the right level of granularity so as to create abstractions at the proper scale, given a particular data set, and withoutoverfitting. Max poolingis typically used, often with a 2x2 dimension. This implies that the input is drasticallydownsampled, reducing processing cost. Greater poolingreduces the dimensionof the signal, and may result in unacceptableinformation loss. Often, non-overlapping pooling windows perform best.[79] Dilation involves ignoring pixels within a kernel. This reduces processing memory potentially without significant signal loss. A dilation of 2 on a 3x3 kernel expands the kernel to 5x5, while still processing 9 (evenly spaced) pixels. Specifically, the processed pixels after the dilation are the cells (1,1), (1,3), (1,5), (3,1), (3,3), (3,5), (5,1), (5,3), (5,5), where (i,j) denotes the cell of the i-th row and j-th column in the expanded 5x5 kernel. Accordingly, a dilation of 3 expands the kernel to 7x7, and a dilation of 4 expands it to 9x9.[citation needed] It is commonly assumed that CNNs are invariant to shifts of the input. Convolution or pooling layers within a CNN that do not have a stride greater than one are indeedequivariantto translations of the input.[74]However, layers with a stride greater than one ignore theNyquist–Shannon sampling theoremand might lead toaliasingof the input signal.[74]While, in principle, CNNs are capable of implementing anti-aliasing filters, it has been observed that this does not happen in practice,[89]and such layers therefore yield models that are not equivariant to translations. Furthermore, if a CNN makes use of fully connected layers, translation equivariance does not imply translation invariance, as the fully connected layers are not invariant to shifts of the input.[90][16]One solution for complete translation invariance is avoiding any down-sampling throughout the network and applying global average pooling at the last layer.[74]Additionally, several other partial solutions have been proposed, such asanti-aliasingbefore downsampling operations,[91]spatial transformer networks,[92]data augmentation, subsampling combined with pooling,[16]andcapsule neural networks.[93] The accuracy of the final model is typically estimated on a sub-part of the dataset set apart at the start, often called a test set. Alternatively, methods such ask-fold cross-validationare applied. Other strategies include usingconformal prediction.[94][95] Regularizationis a process of introducing additional information to solve anill-posed problemor to preventoverfitting. CNNs use various types of regularization. Because networks have so many parameters, they are prone to overfitting. 
One method to reduce overfitting isdropout, introduced in 2014.[96]At each training stage, individual nodes are either "dropped out" of the net (ignored) with probability1−p{\displaystyle 1-p}or kept with probabilityp{\displaystyle p}, so that a reduced network is left; incoming and outgoing edges to a dropped-out node are also removed. Only the reduced network is trained on the data in that stage. The removed nodes are then reinserted into the network with their original weights. In the training stages,p{\displaystyle p}is usually 0.5; for input nodes, it is typically much higher because information is directly lost when input nodes are ignored. At testing time after training has finished, we would ideally like to find a sample average of all possible2n{\displaystyle 2^{n}}dropped-out networks; unfortunately this is infeasible for large values ofn{\displaystyle n}. However, we can find an approximation by using the full network with each node's output weighted by a factor ofp{\displaystyle p}, so theexpected valueof the output of any node is the same as in the training stages. This is the biggest contribution of the dropout method: although it effectively generates2n{\displaystyle 2^{n}}neural nets, and as such allows for model combination, at test time only a single network needs to be tested. By avoiding training all nodes on all training data, dropout decreases overfitting. The method also significantly improves training speed. This makes the model combination practical, even fordeep neural networks. The technique seems to reduce node interactions, leading them to learn more robust features[clarification needed]that better generalize to new data. DropConnect is the generalization of dropout in which each connection, rather than each output unit, can be dropped with probability1−p{\displaystyle 1-p}. Each unit thus receives input from a random subset of units in the previous layer.[97] DropConnect is similar to dropout as it introduces dynamic sparsity within the model, but differs in that the sparsity is on the weights, rather than the output vectors of a layer. In other words, the fully connected layer with DropConnect becomes a sparsely connected layer in which the connections are chosen at random during the training stage. A major drawback to dropout is that it does not have the same benefits for convolutional layers, where the neurons are not fully connected. Even before dropout, a technique called stochastic pooling was introduced in 2013:[98]the conventionaldeterministicpooling operations were replaced with a stochastic procedure, in which the activation within each pooling region is picked randomly according to amultinomial distributiongiven by the activities within the pooling region. This approach is free of hyperparameters and can be combined with other regularization approaches, such as dropout anddata augmentation. An alternate view of stochastic pooling is that it is equivalent to standard max pooling but with many copies of an input image, each having small localdeformations. This is similar to explicitelastic deformationsof the input images,[99]which delivers excellent performance on theMNIST data set.[99]Using stochastic pooling in a multilayer model gives an exponential number of deformations since the selections in higher layers are independent of those below. Because the degree of model overfitting is determined by both its power and the amount of training it receives, providing a convolutional network with more training examples can reduce overfitting. 
Because there is often not enough available data to train, especially considering that some part should be spared for later testing, two approaches are to either generate new data from scratch (if possible) or perturb existing data to create new ones. The latter one is used since mid-1990s.[54]For example, input images can be cropped, rotated, or rescaled to create new examples with the same labels as the original training set.[100] One of the simplest methods to prevent overfitting of a network is to simply stop the training before overfitting has had a chance to occur. It comes with the disadvantage that the learning process is halted. Another simple way to prevent overfitting is to limit the number of parameters, typically by limiting the number of hidden units in each layer or limiting network depth. For convolutional networks, the filter size also affects the number of parameters. Limiting the number of parameters restricts the predictive power of the network directly, reducing the complexity of the function that it can perform on the data, and thus limits the amount of overfitting. This is equivalent to a "zero norm". A simple form of added regularizer is weight decay, which simply adds an additional error, proportional to the sum of weights (L1 norm) or squared magnitude (L2 norm) of the weight vector, to the error at each node. The level of acceptable model complexity can be reduced by increasing the proportionality constant('alpha' hyperparameter), thus increasing the penalty for large weight vectors. L2 regularization is the most common form of regularization. It can be implemented by penalizing the squared magnitude of all parameters directly in the objective. The L2 regularization has the intuitive interpretation of heavily penalizing peaky weight vectors and preferring diffuse weight vectors. Due to multiplicative interactions between weights and inputs this has the useful property of encouraging the network to use all of its inputs a little rather than some of its inputs a lot. L1 regularization is also common. It makes the weight vectors sparse during optimization. In other words, neurons with L1 regularization end up using only a sparse subset of their most important inputs and become nearly invariant to the noisy inputs. L1 with L2 regularization can be combined; this is calledelastic net regularization. Another form of regularization is to enforce an absolute upper bound on the magnitude of the weight vector for every neuron and useprojected gradient descentto enforce the constraint. In practice, this corresponds to performing the parameter update as normal, and then enforcing the constraint by clamping the weight vectorw→{\displaystyle {\vec {w}}}of every neuron to satisfy‖w→‖2<c{\displaystyle \|{\vec {w}}\|_{2}<c}. Typical values ofc{\displaystyle c}are order of 3–4. Some papers report improvements[101]when using this form of regularization. Pooling loses the precise spatial relationships between high-level parts (such as nose and mouth in a face image). These relationships are needed for identity recognition. Overlapping the pools so that each feature occurs in multiple pools, helps retain the information. Translation alone cannot extrapolate the understanding of geometric relationships to a radically new viewpoint, such as a different orientation or scale. 
On the other hand, people are very good at extrapolating; after seeing a new shape once they can recognize it from a different viewpoint.[102] An earlier common way to deal with this problem is to train the network on transformed data in different orientations, scales, lighting, etc. so that the network can cope with these variations. This is computationally intensive for large data-sets. The alternative is to use a hierarchy of coordinate frames and use a group of neurons to represent a conjunction of the shape of the feature and its pose relative to theretina. The pose relative to the retina is the relationship between the coordinate frame of the retina and the intrinsic features' coordinate frame.[103] Thus, one way to represent something is to embed the coordinate frame within it. This allows large features to be recognized by using the consistency of the poses of their parts (e.g. nose and mouth poses make a consistent prediction of the pose of the whole face). This approach ensures that the higher-level entity (e.g. face) is present when the lower-level (e.g. nose and mouth) agree on its prediction of the pose. The vectors of neuronal activity that represent pose ("pose vectors") allow spatial transformations modeled as linear operations that make it easier for the network to learn the hierarchy of visual entities and generalize across viewpoints. This is similar to the way the humanvisual systemimposes coordinate frames in order to represent shapes.[104] CNNs are often used inimage recognitionsystems. In 2012, anerror rateof 0.23% on theMNIST databasewas reported.[28]Another paper on using CNN for image classification reported that the learning process was "surprisingly fast"; in the same paper, the best published results as of 2011 were achieved in the MNIST database and the NORB database.[25]Subsequently, a similar CNN calledAlexNet[105]won theImageNet Large Scale Visual Recognition Challenge2012. When applied tofacial recognition, CNNs achieved a large decrease in error rate.[106]Another paper reported a 97.6% recognition rate on "5,600 still images of more than 10 subjects".[21]CNNs were used to assessvideo qualityin an objective way after manual training; the resulting system had a very lowroot mean square error.[107] TheImageNet Large Scale Visual Recognition Challengeis a benchmark in object classification and detection, with millions of images and hundreds of object classes. In the ILSVRC 2014,[108]a large-scale visual recognition challenge, almost every highly ranked team used CNN as their basic framework. The winnerGoogLeNet[109](the foundation ofDeepDream) increased the mean averageprecisionof object detection to 0.439329, and reduced classification error to 0.06656, the best result to date. Its network applied more than 30 layers. That performance of convolutional neural networks on the ImageNet tests was close to that of humans.[110]The best algorithms still struggle with objects that are small or thin, such as a small ant on a stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters, an increasingly common phenomenon with modern digital cameras. By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues. 
For example, they are not good at classifying objects into fine-grained categories such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this.[citation needed] In 2015, a many-layered CNN demonstrated the ability to spot faces from a wide range of angles, including upside down, even when partially occluded, with competitive performance. The network was trained on a database of 200,000 images that included faces at various angles and orientations and a further 20 million images without faces. They used batches of 128 images over 50,000 iterations.[111] Compared to image data domains, there is relatively little work on applying CNNs to video classification. Video is more complex than images since it has another (temporal) dimension. However, some extensions of CNNs into the video domain have been explored. One approach is to treat space and time as equivalent dimensions of the input and perform convolutions in both time and space.[112][113]Another way is to fuse the features of two convolutional neural networks, one for the spatial and one for the temporal stream.[114][115][116]Long short-term memory(LSTM)recurrentunits are typically incorporated after the CNN to account for inter-frame or inter-clip dependencies.[117][118]Unsupervised learningschemes for training spatio-temporal features have been introduced, based on Convolutional Gated RestrictedBoltzmann Machines[119]and Independent Subspace Analysis.[120]Its application can be seen intext-to-video model.[citation needed] CNNs have also been explored fornatural language processing. CNN models are effective for various NLP problems and achieved excellent results insemantic parsing,[121]search query retrieval,[122]sentence modeling,[123]classification,[124]prediction[125]and other traditional NLP tasks.[126]Compared to traditional language processing methods such asrecurrent neural networks, CNNs can represent different contextual realities of language that do not rely on a series-sequence assumption, while RNNs are better suitable when classical time series modeling is required.[127][128][129][130] A CNN with 1-D convolutions was used on time series in the frequency domain (spectral residual) by an unsupervised model to detect anomalies in the time domain.[131] CNNs have been used indrug discovery. Predicting the interaction between molecules and biologicalproteinscan identify potential treatments. In 2015, Atomwise introduced AtomNet, the first deep learning neural network forstructure-based drug design.[132]The system trains directly on 3-dimensional representations of chemical interactions. Similar to how image recognition networks learn to compose smaller, spatially proximate features into larger, complex structures,[133]AtomNet discovers chemical features, such asaromaticity,sp3carbons, andhydrogen bonding. Subsequently, AtomNet was used to predict novel candidatebiomoleculesfor multiple disease targets, most notably treatments for theEbola virus[134]andmultiple sclerosis.[135] CNNs have been used in the game ofcheckers. From 1999 to 2001,Fogeland Chellapilla published papers showing how a convolutional neural network could learn to play checkers using co-evolution. The learning process did not use prior human professional games, but rather focused on a minimal set of information contained in the checkerboard: the location and type of pieces, and the difference in number of pieces between the two sides. 
Ultimately, the program (Blondie24) was tested on 165 games against players and ranked in the highest 0.4%.[136][137]It also earned a win against the programChinookat its "expert" level of play.[138] CNNs have been used incomputer Go. In December 2014, Clark andStorkeypublished a paper showing that a CNN trained by supervised learning from a database of human professional games could outperformGNU Goand win some games againstMonte Carlo tree searchFuego 1.1 in a fraction of the time it took Fuego to play.[139]Later it was announced that a large 12-layer convolutional neural network had correctly predicted the professional move in 55% of positions, equalling the accuracy of a6 danhuman player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GNU Go in 97% of games, and matched the performance of theMonte Carlo tree searchprogram Fuego simulating ten thousand playouts (about a million positions) per move.[140] A couple of CNNs for choosing moves to try ("policy network") and evaluating positions ("value network") driving MCTS were used byAlphaGo, the first to beat the best human player at the time.[141] Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better.[142][13]Dilated convolutions[143]might enable one-dimensional convolutional neural networks to effectively learn time series dependences.[144]Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients.[145]Convolutional networks can provide an improved forecasting performance when there are multiple similar time series to learn from.[146]CNNs can also be applied to further tasks in time series analysis (e.g., time series classification[147]or quantile forecasting[148]). As archaeological findings such asclay tabletswithcuneiform writingare increasingly acquired using3D scanners, benchmark datasets are becoming available, includingHeiCuBeDa[149]providing almost 2000 normalized 2-D and 3-D datasets prepared with theGigaMesh Software Framework.[150]Socurvature-based measures are used in conjunction with geometric neural networks (GNNs), e.g. for period classification of those clay tablets being among the oldest documents of human history.[151][152] For many applications, training data is not very available. Convolutional neural networks usually require a large amount of training data in order to avoidoverfitting. A common technique is to train the network on a larger data set from a related domain. Once the network parameters have converged an additional training step is performed using the in-domain data to fine-tune the network weights, this is known astransfer learning. Furthermore, this technique allows convolutional network architectures to successfully be applied to problems with tiny training sets.[153] End-to-end training and prediction are common practice incomputer vision. 
However, human-interpretable explanations are required forcritical systemssuch asself-driving cars.[154]With recent advances invisual salience,spatial attention, andtemporal attention, the most critical spatial regions/temporal instants could be visualized to justify the CNN predictions.[155][156] A deep Q-network (DQN) is a type of deep learning model that combines a deep neural network withQ-learning, a form ofreinforcement learning. Unlike earlier reinforcement learning agents, DQNs that utilize CNNs can learn directly from high-dimensional sensory inputs via reinforcement learning.[157] Preliminary results were presented in 2014, with an accompanying paper in February 2015.[158]The research described an application toAtari 2600gaming. Other deep reinforcement learning models preceded it.[159] Convolutional deep belief networks(CDBN) have a structure very similar to convolutional neural networks and are trained similarly to deep belief networks. Therefore, they exploit the 2D structure of images, like CNNs do, and make use of pre-training likedeep belief networks. They provide a generic structure that can be used in many image and signal processing tasks. Benchmark results on standard image datasets like CIFAR[160]have been obtained using CDBNs.[161] The feed-forward architecture of convolutional neural networks was extended in the neural abstraction pyramid[162]by lateral and feedback connections. The resulting recurrent convolutional network allows for the flexible incorporation of contextual information to iteratively resolve local ambiguities. In contrast to previous models, image-like outputs at the highest resolution were generated, e.g., for semantic segmentation, image reconstruction, and object localization tasks.
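To tie the building blocks described in this article together (a convolutional layer with shared weights, a ReLU nonlinearity, 2x2 max pooling, and a fully connected layer), the following is a minimal NumPy sketch of a single forward pass. It is a toy illustration under simplifying assumptions (one input channel, "valid" cross-correlation with no padding and stride 1, arbitrary example sizes, and made-up function names), not the implementation used by any particular library.

```python
import numpy as np

def conv2d(image, kernels, bias):
    """Valid cross-correlation of a single-channel image with a bank of kernels.

    image:   (H, W) input
    kernels: (F, K, K) filter bank, one shared kernel per output channel
    bias:    (F,) one shared bias per filter
    Output spatial size follows (W - K + 2P)/S + 1; here P = 0 and S = 1.
    """
    F, K, _ = kernels.shape
    H, W = image.shape
    out = np.zeros((F, H - K + 1, W - K + 1))
    for f in range(F):
        for i in range(H - K + 1):
            for j in range(W - K + 1):
                out[f, i, j] = np.sum(image[i:i+K, j:j+K] * kernels[f]) + bias[f]
    return out

def relu(x):
    # Non-saturating activation: negative values are set to zero.
    return np.maximum(0, x)

def max_pool_2x2(volume):
    """2x2 max pooling with stride 2, applied independently to each depth slice."""
    F, H, W = volume.shape
    return volume[:, :H//2*2, :W//2*2].reshape(F, H//2, 2, W//2, 2).max(axis=(2, 4))

def dense(x, weights, bias):
    """Fully connected layer: affine transformation of the flattened input."""
    return weights @ x.ravel() + bias

# Tiny forward pass on a random 8x8 "image" with 4 learned 3x3 filters.
rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernels = rng.standard_normal((4, 3, 3)) * 0.1
feature_maps = relu(conv2d(image, kernels, np.zeros(4)))  # (4, 6, 6)
pooled = max_pool_2x2(feature_maps)                       # (4, 3, 3)
W_fc = rng.standard_normal((10, pooled.size)) * 0.1       # 10 class scores
scores = dense(pooled, W_fc, np.zeros(10))
print(scores.shape)  # (10,)
```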
https://en.wikipedia.org/wiki/Convolutional_neural_network
Instatistics, thestandard scoreorz-scoreis the number ofstandard deviationsby which the value of araw score(i.e., an observed value or data point) is above or below themeanvalue of what is being observed or measured. Raw scores above the mean have positive standard scores, while those below the mean have negative standard scores. It is calculated by subtracting thepopulation meanfrom an individual raw score and then dividing the difference by thepopulationstandard deviation. This process of converting a raw score into a standard score is calledstandardizingornormalizing(however, "normalizing" can refer to many types of ratios; seeNormalizationfor more). Standard scores are most commonly calledz-scores; the two terms may be used interchangeably, as they are in this article. Other equivalent terms in use includez-value,z-statistic,normal score,standardized variableandpullinhigh energy physics.[1][2] Computing a z-score requires knowledge of the mean and standard deviation of the complete population to which a data point belongs; if one only has asampleof observations from the population, then the analogous computation using the sample mean and sample standard deviation yields thet-statistic. If the population mean and population standard deviation are known, a raw scorexis converted into a standard score by[3] where: The absolute value ofzrepresents the distance between that raw scorexand the population mean in units of the standard deviation.zis negative when the raw score is below the mean, positive when above. Calculatingzusing this formula requires use of the population mean and the population standard deviation, not the sample mean or sample deviation. However, knowing the true mean and standard deviation of a population is often an unrealistic expectation, except in cases such asstandardized testing, where the entire population is measured. When the population mean and the population standard deviation are unknown, the standard score may be estimated by using the sample mean and sample standard deviation as estimates of the population values.[4][5][6][7] In these cases, thez-score is given by where: Though it should always be stated, the distinction between use of the population and sample statistics often is not made. In either case, the numerator and denominator of the equations have the same units of measure so that the units cancel out through division andzis left as adimensionless quantity. The z-score is often used in the z-test in standardized testing – the analog of theStudent's t-testfor a population whose parameters are known, rather than estimated. As it is very unusual to know the entire population, the t-test is much more widely used. The standard score can be used in the calculation ofprediction intervals. A prediction interval [L,U], consisting of a lower endpoint designatedLand an upper endpoint designatedU, is an interval such that a future observationXwill lie in the interval with high probabilityγ{\displaystyle \gamma }, i.e. For the standard scoreZofXit gives:[8] By determining the quantile z such that it follows: In process control applications, the Z value provides an assessment of the degree to which a process is operating off-target. When scores are measured on different scales, they may be converted to z-scores to aid comparison. Dietz et al.[9]give the following example, comparing student scores on the (old)SATandACThigh school tests. The table shows the mean and standard deviation for total scores on the SAT and ACT. 
Suppose that student A scored 1800 on the SAT, and student B scored 24 on the ACT. Which student performed better relative to other test-takers? The z-score for student A isz=x−μσ=1800−1500300=1{\displaystyle z={x-\mu \over \sigma }={1800-1500 \over 300}=1} The z-score for student B isz=x−μσ=24−215=0.6{\displaystyle z={x-\mu \over \sigma }={24-21 \over 5}=0.6} Because student A has a higher z-score than student B, student A performed better compared to other test-takers than did student B. Continuing the example of ACT and SAT scores, if it can be further assumed that both ACT and SAT scores arenormally distributed(which is approximately correct), then the z-scores may be used to calculate the percentage of test-takers who received lower scores than students A and B. "For some multivariate techniques such as multidimensional scaling and cluster analysis, the concept of distance between the units in the data is often of considerable interest and importance… When the variables in a multivariate data set are on different scales, it makes more sense to calculate the distances after some form of standardization."[10] In principal components analysis, "Variables measured on different scales or on a common scale with widely differing ranges are often standardized."[11] Standardization of variables prior tomultiple regression analysisis sometimes used as an aid to interpretation.[12](page 95) state the following. "The standardized regression slope is the slope in the regression equation if X and Y are standardized … Standardization of X and Y is done by subtracting the respective means from each set of observations and dividing by the respective standard deviations … In multiple regression, where several X variables are used, the standardized regression coefficients quantify the relative contribution of each X variable." However, Kutner et al.[13](p 278) give the following caveat: "… one must be cautious about interpreting any regression coefficients, whether standardized or not. The reason is that when the predictor variables are correlated among themselves, … the regression coefficients are affected by the other predictor variables in the model … The magnitudes of the standardized regression coefficients are affected not only by the presence of correlations among the predictor variables but also by the spacings of the observations on each of these variables. Sometimes these spacings may be quite arbitrary. Hence, it is ordinarily not wise to interpret the magnitudes of standardized regression coefficients as reflecting the comparative importance of the predictor variables." Inmathematical statistics, arandom variableXisstandardizedby subtracting itsexpected valueE⁡[X]{\displaystyle \operatorname {E} [X]}and dividing the difference by itsstandard deviationσ(X)=Var⁡(X):{\displaystyle \sigma (X)={\sqrt {\operatorname {Var} (X)}}:} If the random variable under consideration is thesample meanof a random sampleX1,…,Xn{\displaystyle \ X_{1},\dots ,X_{n}}ofX: then the standardized version is In educational assessment,T-scoreis a standard score Z shifted and scaled to have a mean of 50 and a standard deviation of 10.[14][15][16]It is also known ashensachiin Japanese, where the concept is much more widely known and used in the context of high school and university admissions.[17] In bone density measurements, the T-score is the standard score of the measurement compared to the population of healthy 30-year-old adults, and has the usual mean of 0 and standard deviation of 1.[18]
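As a small illustration of the standard-score formula and the T-score rescaling described above, here is a Python sketch (plain functions, no statistics library assumed) that reuses the SAT and ACT figures from the worked example:

```python
def z_score(x, mean, std):
    """Standard score: how many standard deviations x lies from the mean."""
    return (x - mean) / std

def t_score(z):
    """Educational-assessment T-score: z shifted and scaled to mean 50, SD 10."""
    return 50 + 10 * z

# SAT and ACT figures from the example above.
z_a = z_score(1800, mean=1500, std=300)   # student A: 1.0
z_b = z_score(24, mean=21, std=5)         # student B: 0.6
print(z_a, z_b, t_score(z_a), t_score(z_b))   # 1.0 0.6 60.0 56.0
```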
https://en.wikipedia.org/wiki/Standard_score
Instatisticsand applications of statistics,normalizationcan have a range of meanings.[1]In the simplest cases,normalization of ratingsmeans adjusting values measured on different scales to a notionally common scale, often prior to averaging. In more complicated cases, normalization may refer to more sophisticated adjustments where the intention is to bring the entireprobability distributionsof adjusted values into alignment. In the case ofnormalization of scoresin educational assessment, there may be an intention to align distributions to anormal distribution. A different approach to normalization of probability distributions isquantile normalization, where thequantilesof the different measures are brought into alignment. In another usage in statistics, normalization refers to the creation of shifted and scaled versions of statistics, where the intention is that thesenormalized valuesallow the comparison of corresponding normalized values for different datasets in a way that eliminates the effects of certain gross influences, as in ananomaly time series. Some types of normalization involve only a rescaling, to arrive at values relative to some size variable. In terms oflevels of measurement, such ratios only make sense forratiomeasurements (where ratios of measurements are meaningful), notintervalmeasurements (where only distances are meaningful, but not ratios). In theoretical statistics, parametric normalization can often lead topivotal quantities– functions whosesampling distributiondoes not depend on the parameters – and toancillary statistics– pivotal quantities that can be computed from observations, without knowing parameters. The concept of normalization emerged alongside the study of thenormal distributionbyAbraham De Moivre,Pierre-Simon Laplace, andCarl Friedrich Gaussfrom the 18th to the 19th century. As the name “standard” refers to the particular normal distribution with expectation zero and standard deviation one, that is, thestandard normal distribution, normalization, in this case, “standardization”, was then used to refer to the rescaling of anydistributionordata setto have mean zero and standard deviation one.[2] While the study of normal distribution structured the process of standardization, the result of this process, also known as theZ-score, given by the difference between sample value andpopulation meandivided bypopulation standard deviationand measuring the number of standard deviations of a value from its population mean,[3]was not formalized and popularized untilRonald FisherandKarl Pearsonelaborated the concept as part of the broader framework ofstatistical inferenceandhypothesis testing[4][5]in the early 20th century. William Sealy Gossetinitiated the adjustment of normal distribution and standard score on small sample size. Educated in Chemistry and Mathematics at Winchester and Oxford, Gosset was employed byGuinness Brewery, the biggest brewer inIrelandback then, and was tasked with precisequality control. 
It was through small-sample experiments that Gosset discovered that the distribution of the means of small samples slightly deviated from the distribution of the means of large samples – the normal distribution – and appeared “taller and narrower” in comparison.[6]This finding was later published in a Guinness internal report titledThe application of the “Law of Error” to the work of the breweryand was sent toKarl Pearsonfor further discussion, which later yielded a formal publication titledThe probable error of a meanin 1908.[7]Under Guinness Brewery’s privacy restrictions, Gosset published the paper under the pseudonym “Student”. Gosset’s work was later enhanced and transformed byRonald Fisherto the form that is used today,[8]and was, alongside the names “Student’s t distribution” – referring to the adjusted normal distribution Gosset proposed, and “Student’s t-statistic” – referring to thetest statisticused in measuring the departure of the estimated value of aparameterfrom its hypothesized value divided by itsstandard error, popularized through Fisher’s publication titledApplications of “Student’s” distribution.[6] The rise ofcomputersandmultivariate statisticsin the mid-20th century necessitated normalization to process data with different units, giving rise tofeature scaling– a method used to rescale data to a fixed range – likemin-max scalingandrobust scaling. This modern normalization process, especially targeting large-scale data, became more formalized in fields includingmachine learning,pattern recognition, andneural networksin the late 20th century.[9][10] Batch normalization was proposed by Sergey Ioffe and Christian Szegedy in 2015 to enhance the efficiency of training inneural networks.[11] There are different types of normalizations in statistics – nondimensional ratios of errors, residuals, means andstandard deviations, which are hencescale invariant– some of which may be summarized as follows. Note that in terms oflevels of measurement, these ratios only make sense forratiomeasurements (where ratios of measurements are meaningful), notintervalmeasurements (where only distances are meaningful, but not ratios). See alsoCategory:Statistical ratios. Note that some other ratios, such as thevariance-to-mean ratio(σ2μ){\textstyle \left({\frac {\sigma ^{2}}{\mu }}\right)}, are also used for normalization, but are not nondimensional: the units do not cancel, and thus the ratio has units, and is not scale-invariant. Other non-dimensional normalizations that can be used with no assumptions on the distribution include:
https://en.wikipedia.org/wiki/Normalization_(statistics)
Hyperparametermay refer to:
https://en.wikipedia.org/wiki/Hyperparameter
Inmachine learningandstatistics, thelearning rateis atuning parameterin anoptimization algorithmthat determines the step size at each iteration while moving toward a minimum of aloss function.[1]Since it influences to what extent newly acquired information overrides old information, it metaphorically represents the speed at which a machine learning model "learns". In theadaptive controlliterature, the learning rate is commonly referred to asgain.[2] In setting a learning rate, there is a trade-off between the rate of convergence andovershooting. While thedescent directionis usually determined from thegradientof the loss function, the learning rate determines how big a step is taken in that direction. Too high a learning rate will make the learning jump over minima, but too low a learning rate will either take too long to converge or get stuck in an undesirable local minimum.[3] In order to achieve faster convergence, prevent oscillations, and avoid getting stuck in undesirable local minima, the learning rate is often varied during training, either in accordance with a learning rate schedule or by using an adaptive learning rate.[4]The learning rate and its adjustments may also differ per parameter, in which case it is adiagonal matrixthat can be interpreted as an approximation to theinverseof theHessian matrixinNewton's method.[5]The learning rate is related to the step length determined by inexactline searchinquasi-Newton methodsand related optimization algorithms.[6][7] The initial rate can be left as a system default or selected using a range of techniques.[8]A learning rate schedule changes the learning rate during learning and is most often changed between epochs/iterations. This is mainly done with two parameters:decayandmomentum. There are many different learning rate schedules but the most common aretime-based, step-basedandexponential.[4] Decayserves to settle the learning in a nice place and avoid oscillations, a situation that may arise when too high a constant learning rate makes the learning jump back and forth over a minimum, and is controlled by a hyperparameter. Momentumis analogous to a ball rolling down a hill; we want the ball to settle at the lowest point of the hill (corresponding to the lowest error). Momentum both speeds up the learning (increasing the learning rate) when the error cost gradient is heading in the same direction for a long time and also avoids local minima by 'rolling over' small bumps. Momentum is controlled by a hyperparameter analogous to a ball's mass which must be chosen manually—too high and the ball will roll over minima which we wish to find, too low and it will not fulfil its purpose. The formula for factoring in the momentumis more complex than for decay but is most often built in with deep learning libraries such asKeras. Time-basedlearning schedules alter the learning rate depending on the learning rate of the previous time iteration. Factoring in the decay, the mathematical formula for the learning rate is: ηn+1=ηn1+dn{\displaystyle \eta _{n+1}={\frac {\eta _{n}}{1+dn}}} whereη{\displaystyle \eta }is the learning rate,d{\displaystyle d}is a decay parameter andn{\displaystyle n}is the iteration step. Step-basedlearning schedules change the learning rate according to some predefined steps. 
The decay application formula is here defined as: ηn=η0d⌊1+nr⌋{\displaystyle \eta _{n}=\eta _{0}d^{\left\lfloor {\frac {1+n}{r}}\right\rfloor }} whereηn{\displaystyle \eta _{n}}is the learning rate at iterationn{\displaystyle n},η0{\displaystyle \eta _{0}}is the initial learning rate,d{\displaystyle d}is how much the learning rate should change at each drop (0.5 corresponds to a halving) andr{\displaystyle r}corresponds to thedrop rate, or how often the rate should be dropped (10 corresponds to a drop every 10 iterations). Thefloorfunction (⌊…⌋{\displaystyle \lfloor \dots \rfloor }) here rounds its input down to the nearest integer, so that the learning rate is only reduced once everyr{\displaystyle r}iterations. Exponentiallearning schedules are similar to step-based, but instead of steps, a decreasing exponential function is used. The mathematical formula for factoring in the decay is: ηn=η0e−dn{\displaystyle \eta _{n}=\eta _{0}e^{-dn}} whered{\displaystyle d}is a decay parameter. The issue with learning rate schedules is that they all depend on hyperparameters that must be manually chosen for each given learning session and may vary greatly depending on the problem at hand or the model used. To combat this, there are many different types ofadaptivegradient descent algorithms such asAdagrad, Adadelta,RMSprop, andAdam,[9]which are generally built into deep learning libraries such asKeras.[10]
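The time-based, step-based and exponential schedules above translate directly into code. The following Python sketch implements the formulas exactly as given, with parameter names mirroring the symbols in the text; it is an illustration rather than the API of Keras or any other library.

```python
import math

def time_based(eta_n, d, n):
    """Time-based schedule: eta_{n+1} = eta_n / (1 + d * n)."""
    return eta_n / (1 + d * n)

def step_based(eta0, d, r, n):
    """Step-based schedule: eta_n = eta0 * d ** floor((1 + n) / r)."""
    return eta0 * d ** math.floor((1 + n) / r)

def exponential(eta0, d, n):
    """Exponential schedule: eta_n = eta0 * exp(-d * n)."""
    return eta0 * math.exp(-d * n)

# Example: start at 0.1 and halve roughly every 10 iterations (step-based).
print([round(step_based(0.1, d=0.5, r=10, n=n), 4) for n in (0, 9, 10, 19, 20)])
# [0.1, 0.05, 0.05, 0.025, 0.025]
```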
https://en.wikipedia.org/wiki/Learning_rate
Indeep learning,weight initializationorparameter initializationdescribes the initial step in creating aneural network. A neural network contains trainable parameters that are modified during training: weight initialization is the pre-training step of assigning initial values to these parameters. The choice of weight initialization method affects the speed of convergence, the scale of neuralactivationwithin the network, the scale of gradient signals duringbackpropagation, and the quality of the final model. Proper initialization is necessary for avoiding issues such asvanishing and exploding gradientsand activation functionsaturation. Note that even though this article is titled "weight initialization", both weights and biases are used in a neural network as trainable parameters, so this article describes how both of these are initialized. Similarly, trainable parameters inconvolutional neural networks(CNNs) are calledkernelsand biases, and this article also describes these. We discuss the main methods of initialization in the context of amultilayer perceptron(MLP). Specific strategies for initializing other network architectures are discussed in later sections. For an MLP, there are only two kinds of trainable parameters, called weights and biases. Each layerl{\displaystyle l}contains a weight matrixW(l)∈Rnl−1×nl{\displaystyle W^{(l)}\in \mathbb {R} ^{n_{l-1}\times n_{l}}}and a bias vectorb(l)∈Rnl{\displaystyle b^{(l)}\in \mathbb {R} ^{n_{l}}}, wherenl{\displaystyle n_{l}}is the number of neurons in that layer. A weight initialization method is an algorithm for setting the initial values forW(l),b(l){\displaystyle W^{(l)},b^{(l)}}for each layerl{\displaystyle l}. The simplest form iszero initialization:W(l)=0,b(l)=0{\displaystyle W^{(l)}=0,b^{(l)}=0}Zero initialization is usually used for initializing biases, but it is not used for initializing weights, as it leads tosymmetryin the network, causing all neurons to learn the same features. In this page, we assumeb=0{\displaystyle b=0}unless otherwise stated. Recurrent neural networks typically use activation functions with bounded range, such as sigmoid and tanh, since unbounded activation may cause exploding values. (Le, Jaitly, Hinton, 2015)[1]suggested initializing weights in the recurrent parts of the network to identity and zero bias, similar to the idea ofresidual connectionsandLSTMwith no forget gate. In most cases, the biases are initialized to zero, though some situations can use a nonzero initialization. For example, in multiplicative units, such as the forget gate ofLSTM, the bias can be initialized to 1 to allow good gradient signal through the gate.[2]For neurons withReLUactivation, one can initialize the bias to a small positive value like 0.1, so that the gradient is likely nonzero at initialization, avoiding the dying ReLU problem.[3]: 305[4] Random initializationmeans sampling the weights from anormal distributionor auniform distribution, usuallyindependently. LeCun initialization, popularized in (LeCun et al., 1998),[5]is designed to preserve the variance of neural activations during the forward pass. It samples each entry inW(l){\displaystyle W^{(l)}}independently from a distribution with mean 0 and variance1/nl−1{\displaystyle 1/n_{l-1}}. For example, if the distribution is acontinuous uniform distribution, then the distribution isU(±3/nl−1){\displaystyle {\mathcal {U}}(\pm {\sqrt {3/n_{l-1}}})}. 
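As a brief sketch of the conventions described so far (zero-initialized biases and LeCun initialization with variance 1/n_{l-1}, in either its normal or uniform form), the following NumPy snippet samples the parameters of one layer. The function and parameter names are illustrative assumptions, not taken from any framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def lecun_init(fan_in, fan_out, uniform=False):
    """LeCun initialization: zero-mean weights with variance 1 / fan_in.

    Biases are returned as zeros, the usual default described above.
    """
    if uniform:
        # A continuous uniform U(-limit, +limit) has variance limit**2 / 3,
        # so limit = sqrt(3 / fan_in) gives variance 1 / fan_in.
        limit = np.sqrt(3.0 / fan_in)
        W = rng.uniform(-limit, limit, size=(fan_in, fan_out))
    else:
        W = rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_in, fan_out))
    b = np.zeros(fan_out)
    return W, b

# Example: a layer mapping 256 inputs to 128 outputs.
W, b = lecun_init(256, 128)
print(W.var())  # close to 1/256 ≈ 0.0039
```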
Glorot initialization(orXavier initialization) was proposed by Xavier Glorot andYoshua Bengio.[6]It was designed as a compromise between two goals: to preserve activation variance during the forward pass and to preserve gradient variance during the backward pass. For uniform initialization, it samples each entry inW(l){\displaystyle W^{(l)}}independently and identicallyfromU(±6/(nl+1+nl−1)){\displaystyle {\mathcal {U}}(\pm {\sqrt {6/(n_{l+1}+n_{l-1})}})}. In the context,nl−1{\displaystyle n_{l-1}}is also called the "fan-in", andnl+1{\displaystyle n_{l+1}}the "fan-out". When the fan-in and fan-out are equal, then Glorot initialization is the same as LeCun initialization. As Glorot initialization performs poorly for ReLU activation,[7]He initialization(orKaiming initialization) was proposed byKaiming Heet al.[8]for networks withReLUactivation. It samples each entry inW(l){\displaystyle W^{(l)}}fromN(0,2/nl−1){\displaystyle {\mathcal {N}}(0,{\sqrt {2/n_{l-1}}})}. (Saxe et al. 2013)[9]proposedorthogonal initialization: initializing weight matrices as uniformly random (according to theHaar measure)semi-orthogonal matrices, multiplied by a factor that depends on the activation function of the layer. It was designed so that if one initializes adeep linear networkthis way, then its training time until convergence is independent of depth.[10] Sampling a uniformly random semi-orthogonal matrix can be done by initializingX{\displaystyle X}by IID sampling its entries from a standard normal distribution, then calculate(XX⊤)−1/2X{\displaystyle \left(XX^{\top }\right)^{-1/2}X}or its transpose, depending on whetherX{\displaystyle X}is tall or wide.[11] For CNN kernels with odd widths and heights, orthogonal initialization is done this way: initialize the central point by a semi-orthogonal matrix, and fill the other entries with zero. As an illustration, a kernelK{\displaystyle K}of shape3×3×c×c′{\displaystyle 3\times 3\times c\times c'}is initialized by fillingK[2,2,:,:]{\displaystyle K[2,2,:,:]}with the entries of a random semi-orthogonal matrix of shapec×c′{\displaystyle c\times c'}, and the other entries with zero. (Balduzzi et al., 2017)[12]used it with stride 1 and zero-padding. This is sometimes called theOrthogonal Delta initialization.[11][13] Related to this approach,unitary initializationproposes to parameterize the weight matrices to beunitary matrices, with the result that at initialization they are random unitary matrices (and throughout training, they remain unitary). This is found to improve long-sequence modelling in LSTM.[14][15] Orthogonal initialization has been generalized tolayer-sequential unit-variance (LSUV) initialization. It is a data-dependent initialization method, and can be used inconvolutional neural networks. It first initializes weights of each convolution or fully connected layer with orthonormal matrices. Then, proceeding from the first to the last layer, it runs a forward pass on a random minibatch, and divides the layer's weights by the standard deviation of its output, so that its output has variance approximately 1.[16][17] In 2015, the introduction ofresidual connectionsallowed very deep neural networks to be trained, much deeper than the ~20 layers of the previous state of the art (such as theVGG-19). Residual connections gave rise to their own weight initialization problems and strategies. 
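The Glorot, He, and orthogonal samplers described above can be sketched in a few lines of NumPy. This is a minimal illustration rather than the reference implementation of the cited papers: the He variance is written as 2/fan-in and the Glorot bound as sqrt(6/(fan-in + fan-out)), the common conventions, and the semi-orthogonal sampler uses the SVD identity (XXᵀ)^(−1/2)X = UVᵀ:

```python
import numpy as np

rng = np.random.default_rng(0)

def glorot_uniform(fan_in, fan_out):
    """Glorot/Xavier: U(-a, a) with a = sqrt(6 / (fan_in + fan_out))."""
    a = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-a, a, size=(fan_in, fan_out))

def he_normal(fan_in, fan_out):
    """He/Kaiming: zero-mean normal with variance 2 / fan_in (intended for ReLU layers)."""
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def semi_orthogonal(rows, cols):
    """Random semi-orthogonal matrix: (X X^T)^{-1/2} X for Gaussian X,
    computed here via the equivalent SVD form U V^T."""
    X = rng.normal(size=(rows, cols))
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt

W = semi_orthogonal(64, 128)
print(np.allclose(W @ W.T, np.eye(64)))  # True: the rows are orthonormal
```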
These residual-network initialization strategies are sometimes called "normalization-free" methods, since residual connections can stabilize the training of a deep neural network so much that normalizations become unnecessary. Fixup initialization is designed specifically for networks with residual connections and without batch normalization.[18] Similarly, T-Fixup initialization is designed for Transformers without layer normalization.[19]: 9 Instead of initializing all weights with random values on the order of {\displaystyle O(1/{\sqrt {n}})}, sparse initialization initializes only a small subset of the weights with larger random values, and sets the other weights to zero, so that the total variance is still on the order of {\displaystyle O(1)}.[20] Random walk initialization was designed for MLPs so that, during backpropagation, the L2 norm of the gradient at each layer performs an unbiased random walk as one moves from the last layer to the first.[21] Looks linear initialization was designed to allow the neural network to behave like a deep linear network at initialization, since {\displaystyle W\;\mathrm {ReLU} (x)-W\;\mathrm {ReLU} (-x)=Wx}. It initializes a matrix {\displaystyle W} of shape {\displaystyle \mathbb {R} ^{{\frac {n}{2}}\times m}} by any method, such as orthogonal initialization, then takes the {\displaystyle \mathbb {R} ^{n\times m}} weight matrix to be the concatenation of {\displaystyle W,-W}.[22] For the hyperbolic tangent activation function, a particular scaling is sometimes used: {\displaystyle 1.7159\tanh(2x/3)}, sometimes called "LeCun's tanh". It was designed so that it maps the interval {\displaystyle [-1,+1]} to itself, thus ensuring that the overall gain is around 1 in "normal operating conditions", and so that {\displaystyle |f''(x)|} is at a maximum when {\displaystyle x=-1,+1}, which improves convergence at the end of training.[23][5] In self-normalizing neural networks, the SELU activation function {\displaystyle \mathrm {SELU} (x)=\lambda {\begin{cases}x&{\text{if }}x>0\\\alpha e^{x}-\alpha &{\text{if }}x\leq 0\end{cases}}} with parameters {\displaystyle \lambda \approx 1.0507,\alpha \approx 1.6733} makes the mean and variance of the output of each layer have {\displaystyle (0,1)} as an attracting fixed point. This makes initialization less important, though the authors still recommend initializing weights randomly with variance {\displaystyle 1/n_{l-1}}.[24] Random weight initialization has been used since Frank Rosenblatt's perceptrons. An early work that described weight initialization specifically was (LeCun et al., 1998).[5] Before the 2010s era of deep learning, it was common to initialize models by "generative pre-training" using an unsupervised learning algorithm that is not backpropagation, as it was difficult to directly train deep neural networks by backpropagation.[25][26] For example, a deep belief network was trained by using contrastive divergence layer by layer, starting from the bottom.[27] (Martens, 2010)[20] proposed Hessian-free Optimization, a quasi-Newton method to directly train deep networks.
The work generated considerable excitement that initializing networks without a pre-training phase was possible.[28] However, a 2013 paper demonstrated that, with well-chosen hyperparameters, momentum gradient descent with weight initialization was sufficient for training neural networks, without needing either quasi-Newton methods or generative pre-training, a combination that is still in use as of 2024.[29] Since then, the impact of initialization on tuning the variance has become less important, as methods have been developed to tune variance automatically, like batch normalization tuning the variance of the forward pass,[30] and momentum-based optimizers tuning the variance of the backward pass.[31] There is a tension between using careful weight initialization to decrease the need for normalization, and using normalization to decrease the need for careful weight initialization, with each approach having its tradeoffs. For example, batch normalization causes training examples in the minibatch to become dependent, an undesirable trait, while weight initialization is architecture-dependent.[32]
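As a toy illustration of why batch normalization reduces sensitivity to the initialization scale, the following NumPy sketch normalizes a layer's outputs over a batch; the learned scale and shift parameters of real batch normalization are omitted here for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_norm(h, eps=1e-5):
    """Normalize each feature over the batch to mean 0 and variance 1 (no learned gamma/beta)."""
    return (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps)

x = rng.normal(size=(512, 100))           # a batch of 512 inputs
for scale in (0.01, 1.0, 100.0):          # wildly different weight scales at initialization
    W = rng.normal(0.0, scale, size=(100, 100))
    h = batch_norm(x @ W)
    print(scale, round(h.mean(), 3), round(h.var(), 3))  # ≈ 0 and ≈ 1 in every case
```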
https://en.wikipedia.org/wiki/Weight_initialization
Deep learningis a subset ofmachine learningthat focuses on utilizing multilayeredneural networksto perform tasks such asclassification,regression, andrepresentation learning. The field takes inspiration frombiological neuroscienceand is centered around stackingartificial neuronsinto layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network. Methods used can be eithersupervised,semi-supervisedorunsupervised.[2] Some common deep learning network architectures includefully connected networks,deep belief networks,recurrent neural networks,convolutional neural networks,generative adversarial networks,transformers, andneural radiance fields. These architectures have been applied to fields includingcomputer vision,speech recognition,natural language processing,machine translation,bioinformatics,drug design,medical image analysis,climate science, material inspection andboard gameprograms, where they have produced results comparable to and in some cases surpassing human expert performance.[3][4][5] Early forms of neural networks were inspired by information processing and distributed communication nodes inbiological systems, particularly thehuman brain. However, current neural networks do not intend to model the brain function of organisms, and are generally seen as low-quality models for that purpose.[6] Most modern deep learning models are based on multi-layeredneural networkssuch asconvolutional neural networksandtransformers, although they can also includepropositional formulasor latent variables organized layer-wise in deepgenerative modelssuch as the nodes indeep belief networksand deepBoltzmann machines.[7] Fundamentally, deep learning refers to a class ofmachine learningalgorithmsin which a hierarchy of layers is used to transform input data into a progressively more abstract and composite representation. For example, in animage recognitionmodel, the raw input may be animage(represented as atensorofpixels). The first representational layer may attempt to identify basic shapes such as lines and circles, the second layer may compose and encode arrangements of edges, the third layer may encode a nose and eyes, and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place at which levelon its own. Prior to deep learning, machine learning techniques often involved hand-craftedfeature engineeringto transform the data into a more suitable representation for a classification algorithm to operate on. In the deep learning approach, features are not hand-crafted and the modeldiscoversuseful feature representations from the data automatically. This does not eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.[8][2] The word "deep" in "deep learning" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantialcredit assignment path(CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For afeedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized). 
Forrecurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited.[9]No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth higher than two. CAP of depth two has been shown to be a universal approximator in the sense that it can emulate any function.[10]Beyond that, more layers do not add to the function approximator ability of the network. Deep models (CAP > two) are able to extract better features than shallow models and hence, extra layers help in learning the features effectively. Deep learning architectures can be constructed with agreedylayer-by-layer method.[11]Deep learning helps to disentangle these abstractions and pick out which features improve performance.[8] Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data is more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner aredeep belief networks.[8][12] The termDeep Learningwas introduced to the machine learning community byRina Dechterin 1986,[13]and to artificial neural networks by Igor Aizenberg and colleagues in 2000, in the context ofBooleanthreshold neurons.[14][15]Although the history of its appearance is apparently more complicated.[16] Deep neural networks are generally interpreted in terms of theuniversal approximation theorem[17][18][19][20][21]orprobabilistic inference.[22][23][8][9][24] The classic universal approximation theorem concerns the capacity offeedforward neural networkswith a single hidden layer of finite size to approximatecontinuous functions.[17][18][19][20]In 1989, the first proof was published byGeorge Cybenkoforsigmoidactivation functions[17]and was generalised to feed-forward multi-layer architectures in 1991 by Kurt Hornik.[18]Recent work also showed that universal approximation also holds for non-bounded activation functions such asKunihiko Fukushima'srectified linear unit.[25][26] The universal approximation theorem fordeep neural networksconcerns the capacity of networks with bounded width but the depth is allowed to grow. Lu et al.[21]proved that if the width of a deep neural network withReLUactivation is strictly larger than the input dimension, then the network can approximate anyLebesgue integrable function; if the width is smaller or equal to the input dimension, then a deep neural network is not a universal approximator. Theprobabilisticinterpretation[24]derives from the field ofmachine learning. It features inference,[23][7][8][9][12][24]as well as theoptimizationconcepts oftrainingandtesting, related to fitting andgeneralization, respectively. More specifically, the probabilistic interpretation considers the activation nonlinearity as acumulative distribution function.[24]The probabilistic interpretation led to the introduction ofdropoutasregularizerin neural networks. The probabilistic interpretation was introduced by researchers includingHopfield,WidrowandNarendraand popularized in surveys such as the one byBishop.[27] There are twotypesof artificial neural network (ANN):feedforward neural network(FNN) ormultilayer perceptron(MLP) andrecurrent neural networks(RNN). RNNs have cycles in their connectivity structure, FNNs don't. In the 1920s,Wilhelm LenzandErnst Isingcreated theIsing model[28][29]which is essentially a non-learning RNN architecture consisting of neuron-like threshold elements. 
In 1972,Shun'ichi Amarimade this architecture adaptive.[30][31]His learning RNN was republished byJohn Hopfieldin 1982.[32]Other earlyrecurrent neural networkswere published by Kaoru Nakano in 1971.[33][34]Already in 1948,Alan Turingproduced work on "Intelligent Machinery" that was not published in his lifetime,[35]containing "ideas related to artificial evolution and learning RNNs".[31] Frank Rosenblatt(1958)[36]proposed the perceptron, an MLP with 3 layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer. He later published a 1962 book that also introduced variants and computer experiments, including a version with four-layer perceptrons "with adaptive preterminal networks" where the last two layers have learned weights (here he credits H. D. Block and B. W. Knight).[37]: section 16The book cites an earlier network by R. D. Joseph (1960)[38]"functionally equivalent to a variation of" this four-layer system (the book mentions Joseph over 30 times). Should Joseph therefore be considered the originator of proper adaptivemultilayer perceptronswith learning hidden units? Unfortunately, the learning algorithm was not a functional one, and fell into oblivion. The first working deep learning algorithm was theGroup method of data handling, a method to train arbitrarily deep neural networks, published byAlexey Ivakhnenkoand Lapa in 1965. They regarded it as a form of polynomial regression,[39]or a generalization of Rosenblatt's perceptron.[40]A 1971 paper described a deep network with eight layers trained by this method,[41]which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates".[31] The first deep learningmultilayer perceptrontrained bystochastic gradient descent[42]was published in 1967 byShun'ichi Amari.[43]In computer experiments conducted by Amari's student Saito, a five layer MLP with two modifiable layers learnedinternal representationsto classify non-linearily separable pattern classes.[31]Subsequent developments in hardware and hyperparameter tunings have made end-to-endstochastic gradient descentthe currently dominant training technique. In 1969,Kunihiko Fukushimaintroduced theReLU(rectified linear unit)activation function.[25][31]The rectifier has become the most popular activation function for deep learning.[44] Deep learning architectures forconvolutional neural networks(CNNs) with convolutional layers and downsampling layers began with theNeocognitronintroduced byKunihiko Fukushimain 1979, though not trained by backpropagation.[45][46] Backpropagationis an efficient application of thechain rulederived byGottfried Wilhelm Leibnizin 1673[47]to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt,[37]but he did not know how to implement this, althoughHenry J. Kelleyhad a continuous precursor of backpropagation in 1960 in the context ofcontrol theory.[48]The modern form of backpropagation was first published inSeppo Linnainmaa's master thesis (1970).[49][50][31]G.M. Ostrovski et al. republished it in 1971.[51][52]Paul Werbosapplied backpropagation to neural networks in 1982[53](his 1974 PhD thesis, reprinted in a 1994 book,[54]did not yet describe the algorithm[52]). In 1986,David E. Rumelhartet al. 
popularised backpropagation but did not cite the original work.[55][56] Thetime delay neural network(TDNN) was introduced in 1987 byAlex Waibelto apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation.[57][58]In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.[59]In 1989,Yann LeCunet al. created a CNN calledLeNetforrecognizing handwritten ZIP codeson mail. Training required 3 days.[60]In 1990, Wei Zhang implemented a CNN onoptical computinghardware.[61]In 1991, a CNN was applied to medical image object segmentation[62]and breast cancer detection in mammograms.[63]LeNet-5 (1998), a 7-level CNN byYann LeCunet al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images.[64] Recurrent neural networks(RNN)[28][30]were further developed in the 1980s. Recurrence is used for sequence processing, and when a recurrent network is unrolled, it mathematically resembles a deep feedforward layer. Consequently, they have similar properties and issues, and their developments had mutual influences. In RNN, two early influential works were theJordan network(1986)[65]and theElman network(1990),[66]which applied RNN to study problems incognitive psychology. In the 1980s, backpropagation did not work well for deep learning with long credit assignment paths. To overcome this problem, in 1991,Jürgen Schmidhuberproposed a hierarchy of RNNs pre-trained one level at a time byself-supervised learningwhere each RNN tries to predict its own next input, which is the next unexpected input of the RNN below.[67][68]This "neural history compressor" usespredictive codingto learninternal representationsat multiple self-organizing time scales. This can substantially facilitate downstream deep learning. The RNN hierarchy can becollapsedinto a single RNN, bydistillinga higher levelchunkernetwork into a lower levelautomatizernetwork.[67][68][31]In 1993, a neural history compressor solved a "Very Deep Learning" task that required more than 1000 subsequentlayersin an RNN unfolded in time.[69]The "P" inChatGPTrefers to such pre-training. Sepp Hochreiter's diploma thesis (1991)[70]implemented the neural history compressor,[67]and identified and analyzed thevanishing gradient problem.[70][71]Hochreiter proposed recurrentresidualconnections to solve the vanishing gradient problem. This led to thelong short-term memory(LSTM), published in 1995.[72]LSTM can learn "very deep learning" tasks[9]with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. That LSTM was not yet the modern architecture, which required a "forget gate", introduced in 1999,[73]which became the standard RNN architecture. In 1991,Jürgen Schmidhuberalso published adversarial neural networks that contest with each other in the form of azero-sum game, where one network's gain is the other network's loss.[74][75]The first network is agenerative modelthat models aprobability distributionover output patterns. The second network learns bygradient descentto predict the reactions of the environment to these patterns. This was called "artificial curiosity". 
In 2014, this principle was used ingenerative adversarial networks(GANs).[76] During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed byTerry Sejnowski,Peter Dayan,Geoffrey Hinton, etc., including theBoltzmann machine,[77]restricted Boltzmann machine,[78]Helmholtz machine,[79]and thewake-sleep algorithm.[80]These were designed for unsupervised learning of deep generative models. However, those were more computationally expensive compared to backpropagation. Boltzmann machine learning algorithm, published in 1985, was briefly popular before being eclipsed by the backpropagation algorithm in 1986. (p. 112[81]). A 1988 network became state of the art inprotein structure prediction, an early application of deep learning to bioinformatics.[82] Both shallow and deep learning (e.g., recurrent nets) of ANNs forspeech recognitionhave been explored for many years.[83][84][85]These methods never outperformed non-uniform internal-handcrafting Gaussianmixture model/Hidden Markov model(GMM-HMM) technology based on generative models of speech trained discriminatively.[86]Key difficulties have been analyzed, including gradient diminishing[70]and weak temporal correlation structure in neural predictive models.[87][88]Additional difficulties were the lack of training data and limited computing power. Mostspeech recognitionresearchers moved away from neural nets to pursue generative modeling. An exception was atSRI Internationalin the late 1990s. Funded by the US government'sNSAandDARPA, SRI researched in speech andspeaker recognition. The speaker recognition team led byLarry Heckreported significant success with deep neural networks in speech processing in the 1998NISTSpeaker Recognition benchmark.[89][90]It was deployed in the Nuance Verifier, representing the first major industrial application of deep learning.[91] The principle of elevating "raw" features over hand-crafted optimization was first explored successfully in the architecture of deep autoencoder on the "raw" spectrogram or linearfilter-bankfeatures in the late 1990s,[90]showing its superiority over theMel-Cepstralfeatures that contain stages of fixed transformation from spectrograms. The raw features of speech,waveforms, later produced excellent larger-scale results.[92] Neural networks entered a lull, and simpler models that use task-specific handcrafted features such asGabor filtersandsupport vector machines(SVMs) became the preferred choices in the 1990s and 2000s, because of artificial neural networks' computational cost and a lack of understanding of how the brain wires its biological networks.[citation needed] In 2003, LSTM became competitive with traditional speech recognizers on certain tasks.[93]In 2006,Alex Graves, Santiago Fernández, Faustino Gomez, and Schmidhuber combined it withconnectionist temporal classification(CTC)[94]in stacks of LSTMs.[95]In 2009, it became the first RNN to win apattern recognitioncontest, in connectedhandwriting recognition.[96][9] In 2006, publications byGeoff Hinton,Ruslan Salakhutdinov, Osindero andTeh[97][98]deep belief networkswere developed for generative modeling. 
They are trained by training one restricted Boltzmann machine, then freezing it and training another one on top of the first one, and so on, then optionallyfine-tunedusing supervised backpropagation.[99]They could model high-dimensional probability distributions, such as the distribution ofMNIST images, but convergence was slow.[100][101][102] The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun.[103]Industrial applications of deep learning to large-scale speech recognition started around 2010. The 2009 NIPS Workshop on Deep Learning for Speech Recognition was motivated by the limitations of deep generative models of speech, and the possibility that given more capable hardware and large-scale data sets that deep neural nets might become practical. It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation when using DNNs with large, context-dependent output layers produced error rates dramatically lower than then-state-of-the-art Gaussian mixture model (GMM)/Hidden Markov Model (HMM) and also than more-advanced generative model-based systems.[104]The nature of the recognition errors produced by the two types of systems was characteristically different,[105]offering technical insights into how to integrate deep learning into the existing highly efficient, run-time speech decoding system deployed by all major speech recognition systems.[23][106][107]Analysis around 2009–2010, contrasting the GMM (and other generative speech models) vs. DNN models, stimulated early industrial investment in deep learning for speech recognition.[105]That analysis was done with comparable performance (less than 1.5% in error rate) between discriminative DNNs and generative models.[104][105][108]In 2010, researchers extended deep learning fromTIMITto large vocabulary speech recognition, by adopting large output layers of the DNN based on context-dependent HMM states constructed bydecision trees.[109][110][111][106] The deep learning revolution started around CNN- and GPU-based computer vision. Although CNNs trained by backpropagation had been around for decades and GPU implementations of NNs for years,[112]including CNNs,[113]faster implementations of CNNs on GPUs were needed to progress on computer vision. Later, as deep learning becomes widespread, specialized hardware and algorithm optimizations were developed specifically for deep learning.[114] A key advance for the deep learning revolution was hardware advances, especially GPU. Some early work dated back to 2004.[112][113]In 2009, Raina, Madhavan, andAndrew Ngreported a 100M deep belief network trained on 30 NvidiaGeForce GTX 280GPUs, an early demonstration of GPU-based deep learning. 
They reported up to 70 times faster training.[115] In 2011, a CNN namedDanNet[116][117]by Dan Ciresan, Ueli Meier, Jonathan Masci,Luca Maria Gambardella, andJürgen Schmidhuberachieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3.[9]It then won more contests.[118][119]They also showed howmax-poolingCNNs on GPU improved performance significantly.[3] In 2012,Andrew NgandJeff Deancreated an FNN that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images taken fromYouTubevideos.[120] In October 2012,AlexNetbyAlex Krizhevsky,Ilya Sutskever, andGeoffrey Hinton[4]won the large-scaleImageNet competitionby a significant margin over shallow machine learning methods. Further incremental improvements included theVGG-16network byKaren SimonyanandAndrew Zisserman[121]and Google'sInceptionv3.[122] The success in image classification was then extended to the more challenging task ofgenerating descriptions(captions) for images, often as a combination of CNNs and LSTMs.[123][124][125] In 2014, the state of the art was training “very deep neural network” with 20 to 30 layers.[126]Stacking too many layers led to a steep reduction intrainingaccuracy,[127]known as the "degradation" problem.[128]In 2015, two techniques were developed to train very deep networks: the Highway Network was published in May 2015, and theresidual neural network(ResNet)[129]in Dec 2015. ResNet behaves like an open-gated Highway Net. Around the same time, deep learning started impacting the field of art. Early examples includedGoogle DeepDream(2015), andneural style transfer(2015),[130]both of which were based on pretrained image classification neural networks, such asVGG-19. Generative adversarial network(GAN) by (Ian Goodfellowet al., 2014)[131](based onJürgen Schmidhuber's principle of artificial curiosity[74][76]) became state of the art in generative modeling during 2014-2018 period. Excellent image quality is achieved byNvidia'sStyleGAN(2018)[132]based on the Progressive GAN by Tero Karras et al.[133]Here the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GAN reached popular success, and provoked discussions concerningdeepfakes.[134]Diffusion models(2015)[135]eclipsed GANs in generative modeling since then, with systems such asDALL·E 2(2022) andStable Diffusion(2022). In 2015, Google's speech recognition improved by 49% by an LSTM-based model, which they made available throughGoogle Voice Searchonsmartphone.[136][137] Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision andautomatic speech recognition(ASR). Results on commonly used evaluation sets such asTIMIT(ASR) andMNIST(image classification), as well as a range of large-vocabulary speech recognition tasks have steadily improved.[104][138]Convolutional neural networks were superseded for ASR byLSTM.[137][139][140][141]but are more successful in computer vision. Yoshua Bengio,Geoffrey HintonandYann LeCunwere awarded the 2018Turing Awardfor "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing".[142] Artificial neural networks(ANNs) orconnectionistsystemsare computing systems inspired by thebiological neural networksthat constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. 
For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manuallylabeledas "cat" or "no cat" and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm usingrule-based programming. An ANN is based on a collection of connected units calledartificial neurons, (analogous to biologicalneuronsin abiological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented byreal numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal that it sends downstream. Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input), to the last (output) layer, possibly after traversing the layers multiple times. The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such asbackpropagation, or passing information in the reverse direction and adjusting the network to reflect that information. Neural networks have been used on a variety of tasks, including computer vision,speech recognition,machine translation,social networkfiltering,playing board and video gamesand medical diagnosis. As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several order of magnitude less than the number of neurons on a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, or playing "Go"[144]). A deep neural network (DNN) is an artificial neural network with multiple layers between the input and output layers.[7][9]There are different types of neural networks but they always consist of the same components: neurons, synapses, weights, biases, and functions.[145]These components as a whole function in a way that mimics functions of the human brain, and can be trained like any other ML algorithm.[citation needed] For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is a certain breed. The user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer,[146]and complex DNN have many layers, hence the name "deep" networks. DNNs can model complex non-linear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition ofprimitives.[147]The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network.[7]For instance, it was proved that sparsemultivariate polynomialsare exponentially easier to approximate with DNNs than with shallow networks.[148] Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains. 
It is not always possible to compare the performance of multiple architectures, unless they have been evaluated on the same data sets.[146] DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back. At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to connections between them. The weights and inputs are multiplied and return an output between 0 and 1. If the network did not accurately recognize a particular pattern, an algorithm would adjust the weights.[149]That way the algorithm can make certain parameters more influential, until it determines the correct mathematical manipulation to fully process the data. Recurrent neural networks, in which data can flow in any direction, are used for applications such aslanguage modeling.[150][151][152][153][154]Long short-term memory is particularly effective for this use.[155][156] Convolutional neural networks(CNNs) are used in computer vision.[157]CNNs also have been applied toacoustic modelingfor automatic speech recognition (ASR).[158] As with ANNs, many issues can arise with naively trained DNNs. Two common issues areoverfittingand computation time. DNNs are prone to overfitting because of the added layers of abstraction, which allow them to model rare dependencies in the training data.Regularizationmethods such as Ivakhnenko's unit pruning[41]orweight decay(ℓ2{\displaystyle \ell _{2}}-regularization) orsparsity(ℓ1{\displaystyle \ell _{1}}-regularization) can be applied during training to combat overfitting.[159]Alternativelydropoutregularization randomly omits units from the hidden layers during training. This helps to exclude rare dependencies.[160]Another interesting recent development is research into models of just enough complexity through an estimation of the intrinsic complexity of the task being modelled. This approach has been successfully applied for multivariate time series prediction tasks such as traffic prediction.[161]Finally, data can be augmented via methods such as cropping and rotating such that smaller training sets can be increased in size to reduce the chances of overfitting.[162] DNNs must consider many training parameters, such as the size (number of layers and number of units per layer), thelearning rate, and initial weights.Sweeping through the parameter spacefor optimal parameters may not be feasible due to the cost in time and computational resources. Various tricks, such asbatching(computing the gradient on several training examples at once rather than individual examples)[163]speed up computation. Large processing capabilities of many-core architectures (such as GPUs or the Intel Xeon Phi) have produced significant speedups in training, because of the suitability of such processing architectures for the matrix and vector computations.[164][165] Alternatively, engineers may look for other types of neural networks with more straightforward and convergent training algorithms. CMAC (cerebellar model articulation controller) is one such kind of neural network. It doesn't require learning rates or randomized initial weights. 
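Returning to the overfitting countermeasures mentioned above, the following toy NumPy sketch applies an ℓ2 (weight-decay) penalty and dropout to a single hidden layer; the "inverted" dropout rescaling is an implementation choice of this sketch rather than something prescribed in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W, b, x, drop_p=0.5, train=True):
    """One hidden layer with ReLU followed by (inverted) dropout."""
    h = np.maximum(W @ x + b, 0.0)
    if train and drop_p > 0.0:
        mask = rng.random(h.shape) >= drop_p   # randomly omit hidden units
        h = h * mask / (1.0 - drop_p)          # rescale so the expected activation is unchanged
    return h

def l2_penalty(W, lam=1e-4):
    """Weight-decay term lam * ||W||^2, added to the training loss."""
    return lam * np.sum(W ** 2)

W = rng.normal(0.0, 0.1, size=(16, 8))
b = np.zeros(16)
x = rng.normal(size=8)
print(forward(W, b, x).shape, l2_penalty(W))
```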
The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved.[166][167] Since the 2010s, advances in both machine learning algorithms andcomputer hardwarehave led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[168]By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method for training large-scale commercial cloud AI .[169]OpenAIestimated the hardware computation used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017) and found a 300,000-fold increase in the amount of computation required, with a doubling-time trendline of 3.4 months.[170][171] Specialelectronic circuitscalleddeep learning processorswere designed to speed up deep learning algorithms. Deep learning processors include neural processing units (NPUs) inHuaweicellphones[172]andcloud computingservers such astensor processing units(TPU) in theGoogle Cloud Platform.[173]Cerebras Systemshas also built a dedicated system to handle large deep learning models, the CS-2, based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2).[174][175] Atomically thinsemiconductorsare considered promising for energy-efficient deep learning hardware where the same basic device structure is used for both logic operations and data storage. In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based onfloating-gatefield-effect transistors(FGFETs).[176] In 2021, J. Feldmann et al. proposed an integratedphotonichardware acceleratorfor parallel convolutional processing.[177]The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer throughwavelengthdivisionmultiplexingin conjunction withfrequency combs, and (2) extremely high data modulation speeds.[177]Their system can execute trillions of multiply-accumulate operations per second, indicating the potential ofintegratedphotonicsin data-heavy AI applications.[177] Large-scale automatic speech recognition is the first and most convincing successful case of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks[9]that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. LSTM with forget gates[156]is competitive with traditional speech recognizers on certain tasks.[93] The initial success in speech recognition was based on small-scale recognition tasks based on TIMIT. The data set contains 630 speakers from eight majordialectsofAmerican English, where each speaker reads 10 sentences.[178]Its small size lets many configurations be tried. More importantly, the TIMIT task concernsphone-sequence recognition, which, unlike word-sequence recognition, allows weak phonebigramlanguage models. This lets the strength of the acoustic modeling aspects of speech recognition be more easily analyzed. The error rates listed below, including these early results and measured as percent phone error rates (PER), have been summarized since 1991. 
The debut of DNNs for speaker recognition in the late 1990s and speech recognition around 2009-2011 and of LSTM around 2003–2007, accelerated progress in eight major areas:[23][108][106] All major commercial speech recognition systems (e.g., MicrosoftCortana,Xbox,Skype Translator,Amazon Alexa,Google Now,Apple Siri,BaiduandiFlyTekvoice search, and a range ofNuancespeech products, etc.) are based on deep learning.[23][183][184] A common evaluation set for image classification is theMNIST databasedata set. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size lets users test multiple configurations. A comprehensive list of results on this set is available.[185] Deep learning-based image recognition has become "superhuman", producing more accurate results than human contestants. This first occurred in 2011 in recognition of traffic signs, and in 2014, with recognition of human faces.[186][187] Deep learning-trained vehicles now interpret 360° camera views.[188]Another example is Facial Dysmorphology Novel Analysis (FDNA) used to analyze cases of human malformation connected to a large database of genetic syndromes. Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of Neural networks have been used for implementing language models since the early 2000s.[150]LSTM helped to improve machine translation and language modeling.[151][152][153] Other key techniques in this field are negative sampling[191]andword embedding. Word embedding, such asword2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in avector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of asprobabilistic context free grammar(PCFG) implemented by an RNN.[192]Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing.[192]Deep neural architectures provide the best results for constituency parsing,[193]sentiment analysis,[194]information retrieval,[195][196]spoken language understanding,[197]machine translation,[151][198]contextual entity linking,[198]writing style recognition,[199]named-entity recognition(token classification),[200]text classification, and others.[201] Recent developments generalizeword embeddingtosentence embedding. Google Translate(GT) uses a large end-to-endlong short-term memory(LSTM) network.[202][203][204][205]Google Neural Machine Translation (GNMT)uses anexample-based machine translationmethod in which the system "learns from millions of examples".[203]It translates "whole sentences at a time, rather than pieces". Google Translate supports over one hundred languages.[203]The network encodes the "semantics of the sentence rather than simply memorizing phrase-to-phrase translations".[203][206]GT uses English as an intermediate between most language pairs.[206] A large percentage of candidate drugs fail to win regulatory approval. 
These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipatedtoxic effects.[207][208]Research has explored use of deep learning to predict thebiomolecular targets,[209][210]off-targets, andtoxic effectsof environmental chemicals in nutrients, household products and drugs.[211][212][213] AtomNet is a deep learning system for structure-basedrational drug design.[214]AtomNet was used to predict novel candidate biomolecules for disease targets such as theEbola virus[215]andmultiple sclerosis.[216][215] In 2017graph neural networkswere used for the first time to predict various properties of molecules in a large toxicology data set.[217]In 2019, generative neural networks were used to produce molecules that were validated experimentally all the way into mice.[218][219] Deep reinforcement learninghas been used to approximate the value of possibledirect marketingactions, defined in terms ofRFMvariables. The estimated value function was shown to have a natural interpretation ascustomer lifetime value.[220] Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations.[221][222]Multi-view deep learning has been applied for learning user preferences from multiple domains.[223]The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks. AnautoencoderANN was used inbioinformatics, to predictgene ontologyannotations and gene-function relationships.[224] In medical informatics, deep learning was used to predict sleep quality based on data from wearables[225]and predictions of health complications fromelectronic health recorddata.[226] Deep neural networks have shown unparalleled performance inpredicting protein structure, according to the sequence of the amino acids that make it up. In 2020,AlphaFold, a deep-learning based system, achieved a level of accuracy significantly higher than all previous computational methods.[227][228] Deep neural networks can be used to estimate the entropy of astochastic processand called Neural Joint Entropy Estimator (NJEE).[229]Such an estimation provides insights on the effects of inputrandom variableson an independentrandom variable. Practically, the DNN is trained as aclassifierthat maps an inputvectorormatrixX to an outputprobability distributionover the possible classes of random variable Y, given input X. For example, inimage classificationtasks, the NJEE maps a vector ofpixels' color values to probabilities over possible image classes. In practice, the probability distribution of Y is obtained by aSoftmaxlayer with number of nodes that is equal to thealphabetsize of Y. NJEE uses continuously differentiableactivation functions, such that the conditions for theuniversal approximation theoremholds. 
It is shown that this method provides a stronglyconsistent estimatorand outperforms other methods in case of large alphabet sizes.[229] Deep learning has been shown to produce competitive results in medical application such as cancer cell classification, lesion detection, organ segmentation and image enhancement.[230][231]Modern deep learning tools demonstrate the high accuracy of detecting various diseases and the helpfulness of their use by specialists to improve the diagnosis efficiency.[232][233] Finding the appropriate mobile audience formobile advertisingis always challenging, since many data points must be considered and analyzed before a target segment can be created and used in ad serving by any ad server.[234]Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection. Deep learning has been successfully applied toinverse problemssuch asdenoising,super-resolution,inpainting, andfilm colorization.[235]These applications include learning methods such as "Shrinkage Fields for Effective Image Restoration"[236]which trains on an image dataset, andDeep Image Prior, which trains on the image that needs restoration. Deep learning is being successfully applied to financialfraud detection, tax evasion detection,[237]and anti-money laundering.[238] In November 2023, researchers atGoogle DeepMindandLawrence Berkeley National Laboratoryannounced that they had developed an AI system known as GNoME. This system has contributed tomaterials scienceby discovering over 2 million new materials within a relatively short timeframe. GNoME employs deep learning techniques to efficiently explore potential material structures, achieving a significant increase in the identification of stable inorganiccrystal structures. The system's predictions were validated through autonomous robotic experiments, demonstrating a noteworthy success rate of 71%. The data of newly discovered materials is publicly available through theMaterials Projectdatabase, offering researchers the opportunity to identify materials with desired properties for various applications. This development has implications for the future of scientific discovery and the integration of AI in material science research, potentially expediting material innovation and reducing costs in product development. The use of AI and deep learning suggests the possibility of minimizing or eliminating manual lab experiments and allowing scientists to focus more on the design and analysis of unique compounds.[239][240][241] The United States Department of Defense applied deep learning to train robots in new tasks through observation.[242] Physics informed neural networks have been used to solvepartial differential equationsin both forward and inverse problems in a data driven manner.[243]One example is the reconstructing fluid flow governed by theNavier-Stokes equations. Using physics informed neural networks does not require the often expensive mesh generation that conventionalCFDmethods rely on.[244][245] Deep backward stochastic differential equation methodis a numerical method that combines deep learning withBackward stochastic differential equation(BSDE). This method is particularly useful for solving high-dimensional problems in financial mathematics. 
By leveraging the powerful function approximation capabilities ofdeep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods in high-dimensional settings. Specifically, traditional methods like finite difference methods or Monte Carlo simulations often struggle with the curse of dimensionality, where computational cost increases exponentially with the number of dimensions. Deep BSDE methods, however, employ deep neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden.[246] In addition, the integration ofPhysics-informed neural networks(PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws directly into the neural network architecture. This ensures that the solutions not only fit the data but also adhere to the governing stochastic differential equations. PINNs leverage the power of deep learning while respecting the constraints imposed by the physical models, resulting in more accurate and reliable solutions for financial mathematics problems. Image reconstruction is the reconstruction of the underlying images from the image-related measurements. Several works showed the better and superior performance of the deep learning methods compared to analytical methods for various applications, e.g., spectral imaging[247]and ultrasound imaging.[248] Traditional weather prediction systems solve a very complex system of partial differential equations. GraphCast is a deep learning based model, trained on a long history of weather data to predict how weather patterns change over time. It is able to predict weather conditions for up to 10 days globally, at a very detailed level, and in under a minute, with precision similar to state of the art systems.[249][250] An epigenetic clock is abiochemical testthat can be used to measure age. Galkin et al. used deep neural networks to train an epigenetic aging clock of unprecedented accuracy using >6,000 blood samples.[251]The clock uses information from 1000CpG sitesand predicts people with certain conditions older than healthy controls:IBD,frontotemporal dementia,ovarian cancer,obesity. The aging clock was planned to be released for public use in 2021 by anInsilico Medicinespinoff company Deep Longevity. Deep learning is closely related to a class of theories ofbrain development(specifically, neocortical development) proposed bycognitive neuroscientistsin the early 1990s.[252][253][254][255]These developmental theories were instantiated in computational models, making them predecessors of deep learning systems. These developmental models share the property that various proposed learning dynamics in the brain (e.g., a wave ofnerve growth factor) support theself-organizationsomewhat analogous to the neural networks utilized in deep learning models. Like theneocortex, neural networks employ a hierarchy of layered filters in which each layer considers information from a prior layer (or the operating environment), and then passes its output (and possibly the original input), to other layers. This process yields a self-organizing stack oftransducers, well-tuned to their operating environment. A 1995 description stated, "...the infant's brain seems to organize itself under the influence of waves of so-called trophic-factors ... 
different regions of the brain become connected sequentially, with one layer of tissue maturing before another and so on until the whole brain is mature".[256] A variety of approaches have been used to investigate the plausibility of deep learning models from a neurobiological perspective. On the one hand, several variants of thebackpropagationalgorithm have been proposed in order to increase its processing realism.[257][258]Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchicalgenerative modelsanddeep belief networks, may be closer to biological reality.[259][260]In this respect, generative neural network models have been related to neurobiological evidence about sampling-based processing in the cerebral cortex.[261] Although a systematic comparison between the human brain organization and the neuronal encoding in deep networks has not yet been established, several analogies have been reported. For example, the computations performed by deep learning units could be similar to those of actual neurons[262]and neural populations.[263]Similarly, the representations developed by deep learning models are similar to those measured in the primate visual system[264]both at the single-unit[265]and at the population[266]levels. Facebook's AI lab performs tasks such asautomatically tagging uploaded pictureswith the names of the people in them.[267] Google'sDeepMind Technologiesdeveloped a system capable of learning how to playAtarivideo games using only pixels as data input. In 2015 they demonstrated theirAlphaGosystem, which learned the game ofGowell enough to beat a professional Go player.[268][269][270]Google Translateuses a neural network to translate between more than 100 languages. In 2017, Covariant.ai was launched, which focuses on integrating deep learning into factories.[271] As of 2008,[272]researchers atThe University of Texas at Austin(UT) developed a machine learning framework called Training an Agent Manually via Evaluative Reinforcement, or TAMER, which proposed new methods for robots or computer programs to learn how to perform tasks by interacting with a human instructor.[242]First developed as TAMER, a new algorithm called Deep TAMER was later introduced in 2018 during a collaboration betweenU.S. Army Research Laboratory(ARL) and UT researchers. Deep TAMER used deep learning to provide a robot with the ability to learn new tasks through observation.[242]Using Deep TAMER, a robot learned a task with a human trainer, watching video streams or observing a human perform a task in-person. The robot later practiced the task with the help of some coaching from the trainer, who provided feedback such as "good job" and "bad job".[273] Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science. A main criticism concerns the lack of theory surrounding some methods.[274]Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence is less clear.[citation needed](e.g., Does it converge? If so, how fast? What is it approximating?) 
Deep learning methods are often looked at as ablack box, with most confirmations done empirically, rather than theoretically.[275] In further reference to the idea that artistic sensitivity might be inherent in relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20-30 layers) neural networks attempting to discern within essentially random data the images on which they were trained[276]demonstrate a visual appeal: the original research notice received well over 1,000 comments, and was the subject of what was for a time the most frequently accessed article onThe Guardian's[277]website. Some deep learning architectures display problematic behaviors,[278]such as confidently classifying unrecognizable images as belonging to a familiar category of ordinary images (2014)[279]and misclassifying minuscule perturbations of correctly classified images (2013).[280]Goertzelhypothesized that these behaviors are due to limitations in their internal representations and that these limitations would inhibit integration into heterogeneous multi-componentartificial general intelligence(AGI) architectures.[278]These issues may possibly be addressed by deep learning architectures that internally form states homologous to image-grammar[281]decompositions of observed entities and events.[278]Learning a grammar(visual or linguistic) from training data would be equivalent to restricting the system tocommonsense reasoningthat operates on concepts in terms of grammaticalproduction rulesand is a basic goal of both human language acquisition[282]andartificial intelligence(AI).[283] As deep learning moves from the lab into the world, research and experience show that artificial neural networks are vulnerable to hacks and deception.[284]By identifying patterns that these systems use to function, attackers can modify inputs to ANNs in such a way that the ANN finds a match that human observers would not recognize. For example, an attacker can make subtle changes to an image such that the ANN finds a match even though the image looks to a human nothing like the search target. Such manipulation is termed an "adversarial attack".[285] In 2016 researchers used one ANN to doctor images in trial and error fashion, identify another's focal points, and thereby generate images that deceived it. The modified images looked no different to human eyes. Another group showed that printouts of doctored images then photographed successfully tricked an image classification system.[286]One defense is reverse image search, in which a possible fake image is submitted to a site such asTinEyethat can then find other instances of it. A refinement is to search using only parts of the image, to identify images from which that piece may have been taken.[287] Another group showed that certainpsychedelicspectacles could fool afacial recognition systeminto thinking ordinary people were celebrities, potentially allowing one person to impersonate another. In 2017 researchers added stickers tostop signsand caused an ANN to misclassify them.[286] ANNs can however be further trained to detect attempts atdeception, potentially leading attackers and defenders into an arms race similar to the kind that already defines themalwaredefense industry. 
ANNs have been trained to defeat ANN-based anti-malwaresoftware by repeatedly attacking a defense with malware that was continually altered by agenetic algorithmuntil it tricked the anti-malware while retaining its ability to damage the target.[286] In 2016, another group demonstrated that certain sounds could make theGoogle Nowvoice command system open a particular web address, and hypothesized that this could "serve as a stepping stone for further attacks (e.g., opening a web page hosting drive-by malware)".[286] In "data poisoning", false data is continually smuggled into a machine learning system's training set to prevent it from achieving mastery.[286] The deep learning systems that are trained using supervised learning often rely on data that is created or annotated by humans, or both.[288]It has been argued that not only low-paidclickwork(such as onAmazon Mechanical Turk) is regularly deployed for this purpose, but also implicit forms of humanmicroworkthat are often not recognized as such.[289]The philosopherRainer Mühlhoffdistinguishes five types of "machinic capture" of human microwork to generate training data: (1)gamification(the embedding of annotation or computation tasks in the flow of a game), (2) "trapping and tracking" (e.g.CAPTCHAsfor image recognition or click-tracking on Googlesearch results pages), (3) exploitation of social motivations (e.g.tagging facesonFacebookto obtain labeled facial images), (4)information mining(e.g. by leveragingquantified-selfdevices such asactivity trackers) and (5)clickwork.[289]
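Returning to the adversarial attacks described above, the core idea — perturbing an input along the direction that most changes the model's decision — can be sketched with a toy linear classifier. Everything here (the random weights, the way the step is sized) is an illustrative assumption, not the method used in the cited studies, which target deep image and audio models.

```python
# Minimal sketch of the adversarial-perturbation idea described above,
# using an assumed toy linear classifier rather than a real deep network.
# The step is sized just past the decision boundary for illustration.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=20)          # weights of a toy linear "network"
b = 0.1

def predict(x):
    return 1 if x @ w + b > 0 else -1

x = rng.normal(size=20)          # a clean input
y = predict(x)                   # the label the model currently assigns

# The gradient of a margin-style loss -y * (w.x + b) with respect to x
# is -y * w; moving along its sign pushes the score toward the boundary.
step = 1.01 * abs(x @ w + b) / np.sum(np.abs(w))   # just enough to cross
x_adv = x + step * np.sign(-y * w)

print("clean prediction:      ", predict(x))
print("adversarial prediction:", predict(x_adv))
print("largest per-feature change:", float(np.max(np.abs(x_adv - x))))
```

Against a deep network the same principle applies, except that the gradient with respect to the input is obtained by backpropagation through the trained model.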
https://en.wikipedia.org/wiki/Deep_learning
Instatistics, themean squared error(MSE)[1]ormean squared deviation(MSD) of anestimator(of a procedure for estimating an unobserved quantity) measures theaverageof the squares of theerrors—that is, the average squared difference between the estimated values and thetrue value. MSE is arisk function, corresponding to theexpected valueof thesquared error loss.[2]The fact that MSE is almost always strictly positive (and not zero) is because ofrandomnessor because the estimatordoes not account for informationthat could produce a more accurate estimate.[3]Inmachine learning, specificallyempirical risk minimization, MSE may refer to theempiricalrisk (the average loss on an observed data set), as an estimate of the true MSE (the true risk: the average loss on the actual population distribution). The MSE is a measure of the quality of an estimator. As it is derived from the square ofEuclidean distance, it is always a positive value that decreases as the error approaches zero. The MSE is the secondmoment(about the origin) of the error, and thus incorporates both thevarianceof the estimator (how widely spread the estimates are from onedata sampleto another) and itsbias(how far off the average estimated value is from the true value).[citation needed]For anunbiased estimator, the MSE is the variance of the estimator. Like the variance, MSE has the same units of measurement as the square of the quantity being estimated. In an analogy tostandard deviation, taking the square root of MSE yields theroot-mean-square errororroot-mean-square deviation(RMSE or RMSD), which has the same units as the quantity being estimated; for an unbiased estimator, the RMSE is the square root of thevariance, known as thestandard error. The MSE either assesses the quality of apredictor(i.e., a function mapping arbitrary inputs to a sample of values of somerandom variable), or of anestimator(i.e., amathematical functionmapping asampleof data to an estimate of aparameterof thepopulationfrom which the data is sampled). In the context of prediction, understanding theprediction intervalcan also be useful as it provides a range within which a future observation will fall, with a certain probability. The definition of an MSE differs according to whether one is describing a predictor or an estimator. If a vector ofn{\displaystyle n}predictions is generated from a sample ofn{\displaystyle n}data points on all variables, andY{\displaystyle Y}is the vector of observed values of the variable being predicted, withY^{\displaystyle {\hat {Y}}}being the predicted values (e.g. as from aleast-squares fit), then the within-sample MSE of the predictor is computed as In other words, the MSE is themean(1n∑i=1n){\textstyle \left({\frac {1}{n}}\sum _{i=1}^{n}\right)}of thesquares of the errors(Yi−Yi^)2{\textstyle \left(Y_{i}-{\hat {Y_{i}}}\right)^{2}}. This is an easily computable quantity for a particular sample (and hence is sample-dependent). Inmatrixnotation, whereei{\displaystyle e_{i}}is(Yi−Yi^){\displaystyle (Y_{i}-{\hat {Y_{i}}})}ande{\displaystyle \mathbf {e} }is an×1{\displaystyle n\times 1}column vector. The MSE can also be computed onqdata points that were not used in estimating the model, either because they were held back for this purpose, or because these data have been newly obtained. 
Within this process, known ascross-validation, the MSE is often called thetest MSE,[4]and is computed as The MSE of an estimatorθ^{\displaystyle {\hat {\theta }}}with respect to an unknown parameterθ{\displaystyle \theta }is defined as[1] This definition depends on the unknown parameter, therefore the MSE is apriori propertyof an estimator. The MSE could be a function of unknown parameters, in which case anyestimatorof the MSE based on estimates of these parameters would be a function of the data (and thus a random variable). If the estimatorθ^{\displaystyle {\hat {\theta }}}is derived as a sample statistic and is used to estimate some population parameter, then the expectation is with respect to thesampling distributionof the sample statistic. The MSE can be written as the sum of thevarianceof the estimator and the squaredbiasof the estimator, providing a useful way to calculate the MSE and implying that in the case of unbiased estimators, the MSE and variance are equivalent.[5] MSE⁡(θ^)=Eθ⁡[(θ^−θ)2]=Eθ⁡[(θ^−Eθ⁡[θ^]+Eθ⁡[θ^]−θ)2]=Eθ⁡[(θ^−Eθ⁡[θ^])2+2(θ^−Eθ⁡[θ^])(Eθ⁡[θ^]−θ)+(Eθ⁡[θ^]−θ)2]=Eθ⁡[(θ^−Eθ⁡[θ^])2]+Eθ⁡[2(θ^−Eθ⁡[θ^])(Eθ⁡[θ^]−θ)]+Eθ⁡[(Eθ⁡[θ^]−θ)2]=Eθ⁡[(θ^−Eθ⁡[θ^])2]+2(Eθ⁡[θ^]−θ)Eθ⁡[θ^−Eθ⁡[θ^]]+(Eθ⁡[θ^]−θ)2Eθ⁡[θ^]−θ=constant=Eθ⁡[(θ^−Eθ⁡[θ^])2]+2(Eθ⁡[θ^]−θ)(Eθ⁡[θ^]−Eθ⁡[θ^])+(Eθ⁡[θ^]−θ)2Eθ⁡[θ^]=constant=Eθ⁡[(θ^−Eθ⁡[θ^])2]+(Eθ⁡[θ^]−θ)2=Varθ⁡(θ^)+Biasθ⁡(θ^,θ)2{\displaystyle {\begin{aligned}\operatorname {MSE} ({\hat {\theta }})&=\operatorname {E} _{\theta }\left[({\hat {\theta }}-\theta )^{2}\right]\\&=\operatorname {E} _{\theta }\left[\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]+\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)^{2}\right]\\&=\operatorname {E} _{\theta }\left[\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)^{2}+2\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)+\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)^{2}\right]\\&=\operatorname {E} _{\theta }\left[\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)^{2}\right]+\operatorname {E} _{\theta }\left[2\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)\right]+\operatorname {E} _{\theta }\left[\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)^{2}\right]\\&=\operatorname {E} _{\theta }\left[\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)^{2}\right]+2\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)\operatorname {E} _{\theta }\left[{\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right]+\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)^{2}&&\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta ={\text{constant}}\\&=\operatorname {E} _{\theta }\left[\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)^{2}\right]+2\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)+\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)^{2}&&\operatorname {E} _{\theta }[{\hat {\theta }}]={\text{constant}}\\&=\operatorname {E} _{\theta }\left[\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)^{2}\right]+\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta 
\right)^{2}\\&=\operatorname {Var} _{\theta }({\hat {\theta }})+\operatorname {Bias} _{\theta }({\hat {\theta }},\theta )^{2}\end{aligned}}} An even shorter proof can be achieved using the well-known formula that for a random variableX{\textstyle X},E(X2)=Var⁡(X)+(E(X))2{\textstyle \mathbb {E} (X^{2})=\operatorname {Var} (X)+(\mathbb {E} (X))^{2}}. By substitutingX{\textstyle X}with,θ^−θ{\textstyle {\hat {\theta }}-\theta }, we have But in real modeling case, MSE could be described as the addition of model variance, model bias, and irreducible uncertainty (seeBias–variance tradeoff). According to the relationship, the MSE of the estimators could be simply used for theefficiencycomparison, which includes the information of estimator variance and bias. This is called MSE criterion. Inregression analysis, plotting is a more natural way to view the overall trend of the whole data. The mean of the distance from each point to the predicted regression model can be calculated, and shown as the mean squared error. The squaring is critical to reduce the complexity with negative signs. To minimize MSE, the model could be more accurate, which would mean the model is closer to actual data. One example of a linear regression using this method is theleast squares method—which evaluates appropriateness of linear regression model to modelbivariate dataset,[6]but whose limitation is related to known distribution of the data. The termmean squared erroris sometimes used to refer to the unbiased estimate of error variance: theresidual sum of squaresdivided by the number ofdegrees of freedom. This definition for a known, computed quantity differs from the above definition for the computed MSE of a predictor, in that a different denominator is used. The denominator is the sample size reduced by the number of model parameters estimated from the same data, (n−p) forpregressorsor (n−p−1) if an intercept is used (seeerrors and residuals in statisticsfor more details).[7]Although the MSE (as defined in this article) is not an unbiased estimator of the error variance, it isconsistent, given the consistency of the predictor. In regression analysis, "mean squared error", often referred to asmean squared prediction erroror "out-of-sample mean squared error", can also refer to the mean value of thesquared deviationsof the predictions from the true values, over an out-of-sampletest space, generated by a model estimated over aparticular sample space. This also is a known, computed quantity, and it varies by sample and by out-of-sample test space. In the context ofgradient descentalgorithms, it is common to introduce a factor of1/2{\displaystyle 1/2}to the MSE for ease of computation after taking the derivative. So a value which is technically half the mean of squared errors may be called the MSE. Suppose we have a random sample of sizen{\displaystyle n}from a population,X1,…,Xn{\displaystyle X_{1},\dots ,X_{n}}. Suppose the sample units were chosenwith replacement. That is, then{\displaystyle n}units are selected one at a time, and previously selected units are still eligible for selection for alln{\displaystyle n}draws. The usual estimator for the population meanμ{\displaystyle \mu }is the sample average which has an expected value equal to the true meanμ{\displaystyle \mu }(so it is unbiased) and a mean squared error of whereσ2{\displaystyle \sigma ^{2}}is thepopulation variance. 
For aGaussian distributionthis is thebest unbiased estimatorof the population mean, that is the one with the lowest MSE (and hence variance) among all unbiased estimators. One can check that the MSE above equals the inverse of theFisher information(seeCramér–Rao bound). But the same sample mean is not the best estimator of the population mean, say, for auniform distribution. The usual estimator for the variance is thecorrectedsample variance: This is unbiased (its expected value isσ2{\displaystyle \sigma ^{2}}), hence also called theunbiased sample variance,and its MSE is[8] whereμ4{\displaystyle \mu _{4}}is the fourthcentral momentof the distribution or population, andγ2=μ4/σ4−3{\displaystyle \gamma _{2}=\mu _{4}/\sigma ^{4}-3}is theexcess kurtosis. However, one can use other estimators forσ2{\displaystyle \sigma ^{2}}which are proportional toSn−12{\displaystyle S_{n-1}^{2}}, and an appropriate choice can always give a lower mean squared error. If we define then we calculate: This is minimized when For aGaussian distribution, whereγ2=0{\displaystyle \gamma _{2}=0}, this means that the MSE is minimized when dividing the sum bya=n+1{\displaystyle a=n+1}. The minimum excess kurtosis isγ2=−2{\displaystyle \gamma _{2}=-2},[a]which is achieved by aBernoulli distributionwithp= 1/2 (a coin flip), and the MSE is minimized fora=n−1+2n.{\displaystyle a=n-1+{\tfrac {2}{n}}.}Hence regardless of the kurtosis, we get a "better" estimate (in the sense of having a lower MSE) by scaling down the unbiased estimator a little bit; this is a simple example of ashrinkage estimator: one "shrinks" the estimator towards zero (scales down the unbiased estimator). Further, while the corrected sample variance is thebest unbiased estimator(minimum mean squared error among unbiased estimators) of variance for Gaussian distributions, if the distribution is not Gaussian, then even among unbiased estimators, the best unbiased estimator of the variance may not beSn−12.{\displaystyle S_{n-1}^{2}.} The following table gives several estimators of the true parameters of the population, μ and σ2, for the Gaussian case.[9] An MSE of zero, meaning that the estimatorθ^{\displaystyle {\hat {\theta }}}predicts observations of the parameterθ{\displaystyle \theta }with perfect accuracy, is ideal (but typically not possible). Values of MSE may be used for comparative purposes. Two or morestatistical modelsmay be compared using their MSEs—as a measure of how well they explain a given set of observations: An unbiased estimator (estimated from a statistical model) with the smallest variance among all unbiased estimators is thebest unbiased estimatoror MVUE (Minimum-Variance Unbiased Estimator). Bothanalysis of varianceandlinear regressiontechniques estimate the MSE as part of the analysis and use the estimated MSE to determine thestatistical significanceof the factors or predictors under study. The goal ofexperimental designis to construct experiments in such a way that when the observations are analyzed, the MSE is close to zero relative to the magnitude of at least one of the estimated treatment effects. Inone-way analysis of variance, MSE can be calculated by the division of the sum of squared errors and the degree of freedom. Also, the f-value is the ratio of the mean squared treatment and the MSE. MSE is also used in severalstepwise regressiontechniques as part of the determination as to how many predictors from a candidate set to include in a model for a given set of observations. 
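The estimator comparisons above can be checked numerically. The following Monte Carlo sketch (synthetic Gaussian data, with the sample size and number of trials chosen arbitrarily) illustrates two claims from this section: that an estimator's MSE equals its variance plus its squared bias, and that dividing the sum of squared deviations by n + 1 rather than n − 1 lowers the MSE for Gaussian samples.

```python
# Monte Carlo sketch (illustrative, not taken from the article's references):
# (1) check that MSE = variance + bias^2 for an estimator,
# (2) compare variance estimators that divide the sum of squared deviations
#     by n-1, n, and n+1 on Gaussian data, where n+1 minimizes the MSE.
import numpy as np

rng = np.random.default_rng(42)
mu, sigma2, n, trials = 0.0, 4.0, 10, 200_000

samples = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))
ss = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

for a in (n - 1, n, n + 1):
    est = ss / a                                   # candidate variance estimator
    mse = np.mean((est - sigma2) ** 2)             # simulated MSE
    bias = est.mean() - sigma2
    var = est.var()
    print(f"divide by {a:>2}: MSE={mse:.4f}  var + bias^2={var + bias**2:.4f}")
```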
Minimizing MSE is a key criterion in selecting estimators; see minimum mean-square error. Among unbiased estimators, minimizing the MSE is equivalent to minimizing the variance, and the estimator that does this is the minimum variance unbiased estimator. However, a biased estimator may have lower MSE; see estimator bias. In statistical modelling the MSE can represent the difference between the actual observations and the observation values predicted by the model. In this context, it is used to determine the extent to which the model fits the data, as well as whether removing some explanatory variables is possible without significantly harming the model's predictive ability. In forecasting and prediction, the Brier score is a measure of forecast skill based on MSE. Squared error loss is one of the most widely used loss functions in statistics, though its widespread use stems more from mathematical convenience than from considerations of actual loss in applications. Carl Friedrich Gauss, who introduced the use of mean squared error, was aware of its arbitrariness and was in agreement with objections to it on these grounds.[3] The mathematical benefits of mean squared error are particularly evident in its use in analyzing the performance of linear regression, as it allows one to partition the variation in a dataset into variation explained by the model and variation explained by randomness. The unquestioning use of mean squared error has been criticized by the decision theorist James Berger: mean squared error is the negative of the expected value of one specific utility function, the quadratic utility function, which may not be the appropriate utility function to use under a given set of circumstances. There are, however, some scenarios where mean squared error can serve as a good approximation to a loss function occurring naturally in an application.[10] Like variance, mean squared error has the disadvantage of heavily weighting outliers.[11] This is a result of the squaring of each term, which effectively weights large errors more heavily than small ones. This property, undesirable in many applications, has led researchers to use alternatives such as the mean absolute error, or those based on the median.
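A small numerical sketch (with assumed toy predictions) makes this outlier sensitivity concrete: a single badly predicted point inflates the MSE far more than the mean absolute error.

```python
# Illustrative comparison on assumed synthetic numbers: one outlier
# inflates the MSE far more than the mean absolute error, because
# errors are squared before averaging.
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0, 4.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0, 4.0])

def mse(t, p):
    return np.mean((t - p) ** 2)

def mae(t, p):
    return np.mean(np.abs(t - p))

print("no outlier :  MSE =", mse(y_true, y_pred), " MAE =", mae(y_true, y_pred))

y_pred_outlier = y_pred.copy()
y_pred_outlier[-1] = 14.0          # one badly mis-predicted point
print("with outlier: MSE =", mse(y_true, y_pred_outlier),
      " MAE =", mae(y_true, y_pred_outlier))
```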
https://en.wikipedia.org/wiki/Mean_squared_error
Inmachine learning,support vector machines(SVMs, alsosupport vector networks[1]) aresupervisedmax-marginmodels with associated learningalgorithmsthat analyze data forclassificationandregression analysis. Developed atAT&T Bell Laboratories,[1][2]SVMs are one of the most studied models, being based on statistical learning frameworks ofVC theoryproposed byVapnik(1982, 1995) andChervonenkis(1974). In addition to performinglinear classification, SVMs can efficiently perform non-linear classification using thekernel trick, representing the data only through a set of pairwise similarity comparisons between the original data points using a kernel function, which transforms them into coordinates in a higher-dimensionalfeature space. Thus, SVMs use the kernel trick to implicitly map their inputs into high-dimensional feature spaces, where linear classification can be performed.[3]Being max-margin models, SVMs are resilient to noisy data (e.g., misclassified examples). SVMs can also be used forregressiontasks, where the objective becomesϵ{\displaystyle \epsilon }-sensitive. The support vector clustering[4]algorithm, created byHava SiegelmannandVladimir Vapnik, applies the statistics of support vectors, developed in the support vector machines algorithm, to categorize unlabeled data.[citation needed]These data sets requireunsupervised learningapproaches, which attempt to find naturalclustering of the datainto groups, and then to map new data according to these clusters. The popularity of SVMs is likely due to their amenability to theoretical analysis, and their flexibility in being applied to a wide variety of tasks, includingstructured predictionproblems. It is not clear that SVMs have better predictive performance than other linear models, such aslogistic regressionandlinear regression.[5] Classifying datais a common task inmachine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class anewdata pointwill be in. In the case of support vector machines, a data point is viewed as ap{\displaystyle p}-dimensional vector (a list ofp{\displaystyle p}numbers), and we want to know whether we can separate such points with a(p−1){\displaystyle (p-1)}-dimensionalhyperplane. This is called alinear classifier. There are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, ormargin, between the two classes. So we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as themaximum-margin hyperplaneand the linear classifier it defines is known as amaximum-margin classifier; or equivalently, theperceptron of optimal stability.[6] More formally, a support vector machine constructs ahyperplaneor set of hyperplanes in a high or infinite-dimensional space, which can be used forclassification,regression, or other tasks like outliers detection.[7]Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training-data point of any class (so-called functional margin), since in general the larger the margin, the lower thegeneralization errorof the classifier.[8]A lowergeneralization errormeans that the implementer is less likely to experienceoverfitting. Whereas the original problem may be stated in a finite-dimensional space, it often happens that the sets to discriminate are notlinearly separablein that space. 
For this reason, it was proposed[9]that the original finite-dimensional space be mapped into a much higher-dimensional space, presumably making the separation easier in that space. To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure thatdot productsof pairs of input data vectors may be computed easily in terms of the variables in the original space, by defining them in terms of akernel functionk(x,y){\displaystyle k(x,y)}selected to suit the problem.[10]The hyperplanes in the higher-dimensional space are defined as the set of points whose dot product with a vector in that space is constant, where such a set of vectors is an orthogonal (and thus minimal) set of vectors that defines a hyperplane. The vectors defining the hyperplanes can be chosen to be linear combinations with parametersαi{\displaystyle \alpha _{i}}of images offeature vectorsxi{\displaystyle x_{i}}that occur in the data base. With this choice of a hyperplane, the pointsx{\displaystyle x}in thefeature spacethat are mapped into the hyperplane are defined by the relation∑iαik(xi,x)=constant.{\displaystyle \textstyle \sum _{i}\alpha _{i}k(x_{i},x)={\text{constant}}.}Note that ifk(x,y){\displaystyle k(x,y)}becomes small asy{\displaystyle y}grows further away fromx{\displaystyle x}, each term in the sum measures the degree of closeness of the test pointx{\displaystyle x}to the corresponding data base pointxi{\displaystyle x_{i}}. In this way, the sum of kernels above can be used to measure the relative nearness of each test point to the data points originating in one or the other of the sets to be discriminated. Note the fact that the set of pointsx{\displaystyle x}mapped into any hyperplane can be quite convoluted as a result, allowing much more complex discrimination between sets that are not convex at all in the original space. SVMs can be used to solve various real-world problems: The original SVM algorithm was invented byVladimir N. VapnikandAlexey Ya. Chervonenkisin 1964.[citation needed]In 1992, Bernhard Boser,Isabelle GuyonandVladimir Vapniksuggested a way to create nonlinear classifiers by applying thekernel trickto maximum-margin hyperplanes.[9]The "soft margin" incarnation, as is commonly used in software packages, was proposed byCorinna Cortesand Vapnik in 1993 and published in 1995.[1] We are given a training dataset ofn{\displaystyle n}points of the form(x1,y1),…,(xn,yn),{\displaystyle (\mathbf {x} _{1},y_{1}),\ldots ,(\mathbf {x} _{n},y_{n}),}where theyi{\displaystyle y_{i}}are either 1 or −1, each indicating the class to which the pointxi{\displaystyle \mathbf {x} _{i}}belongs. Eachxi{\displaystyle \mathbf {x} _{i}}is ap{\displaystyle p}-dimensionalrealvector. We want to find the "maximum-margin hyperplane" that divides the group of pointsxi{\displaystyle \mathbf {x} _{i}}for whichyi=1{\displaystyle y_{i}=1}from the group of points for whichyi=−1{\displaystyle y_{i}=-1}, which is defined so that the distance between the hyperplane and the nearest pointxi{\displaystyle \mathbf {x} _{i}}from either group is maximized. Anyhyperplanecan be written as the set of pointsx{\displaystyle \mathbf {x} }satisfyingwTx−b=0,{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} -b=0,}wherew{\displaystyle \mathbf {w} }is the (not necessarily normalized)normal vectorto the hyperplane. This is much likeHesse normal form, except thatw{\displaystyle \mathbf {w} }is not necessarily a unit vector. 
The parameterb‖w‖{\displaystyle {\tfrac {b}{\|\mathbf {w} \|}}}determines the offset of the hyperplane from the origin along the normal vectorw{\displaystyle \mathbf {w} }. Warning: most of the literature on the subject defines the bias so thatwTx+b=0.{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} +b=0.} If the training data islinearly separable, we can select two parallel hyperplanes that separate the two classes of data, so that the distance between them is as large as possible. The region bounded by these two hyperplanes is called the "margin", and the maximum-margin hyperplane is the hyperplane that lies halfway between them. With a normalized or standardized dataset, these hyperplanes can be described by the equations and Geometrically, the distance between these two hyperplanes is2‖w‖{\displaystyle {\tfrac {2}{\|\mathbf {w} \|}}},[21]so to maximize the distance between the planes we want to minimize‖w‖{\displaystyle \|\mathbf {w} \|}. The distance is computed using thedistance from a point to a planeequation. We also have to prevent data points from falling into the margin, we add the following constraint: for eachi{\displaystyle i}eitherwTxi−b≥1,ifyi=1,{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b\geq 1\,,{\text{ if }}y_{i}=1,}orwTxi−b≤−1,ifyi=−1.{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b\leq -1\,,{\text{ if }}y_{i}=-1.}These constraints state that each data point must lie on the correct side of the margin. This can be rewritten as We can put this together to get the optimization problem: minimizew,b12‖w‖2subject toyi(w⊤xi−b)≥1∀i∈{1,…,n}{\displaystyle {\begin{aligned}&{\underset {\mathbf {w} ,\;b}{\operatorname {minimize} }}&&{\frac {1}{2}}\|\mathbf {w} \|^{2}\\&{\text{subject to}}&&y_{i}(\mathbf {w} ^{\top }\mathbf {x} _{i}-b)\geq 1\quad \forall i\in \{1,\dots ,n\}\end{aligned}}} Thew{\displaystyle \mathbf {w} }andb{\displaystyle b}that solve this problem determine the final classifier,x↦sgn⁡(wTx−b){\displaystyle \mathbf {x} \mapsto \operatorname {sgn}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} -b)}, wheresgn⁡(⋅){\displaystyle \operatorname {sgn}(\cdot )}is thesign function. An important consequence of this geometric description is that the max-margin hyperplane is completely determined by thosexi{\displaystyle \mathbf {x} _{i}}that lie nearest to it (explained below). Thesexi{\displaystyle \mathbf {x} _{i}}are calledsupport vectors. To extend SVM to cases in which the data are not linearly separable, thehinge lossfunction is helpfulmax(0,1−yi(wTxi−b)).{\displaystyle \max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right).} Note thatyi{\displaystyle y_{i}}is thei-th target (i.e., in this case, 1 or −1), andwTxi−b{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b}is thei-th output. This function is zero if the constraint in(1)is satisfied, in other words, ifxi{\displaystyle \mathbf {x} _{i}}lies on the correct side of the margin. For data on the wrong side of the margin, the function's value is proportional to the distance from the margin. 
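As a concrete illustration of the hinge loss just described, the following sketch (with an assumed weight vector, bias and toy points, using the same sign convention wᵀx − b as above) shows a zero loss for a point safely outside the margin, a small positive loss for a point inside it, and a larger loss for a misclassified point.

```python
# Toy illustration (assumed numbers) of the hinge loss described above:
# zero for points on the correct side of the margin, growing linearly
# with the amount by which a point violates it.
import numpy as np

w = np.array([2.0, -1.0])
b = 0.5

def hinge_loss(x, y):
    # max(0, 1 - y * (w.x - b)), with the sign convention w.x - b used above
    return max(0.0, 1.0 - y * (np.dot(w, x) - b))

points = [
    (np.array([ 2.0, 0.0]),  1),   # well outside the margin: loss 0
    (np.array([ 0.6, 0.0]),  1),   # inside the margin: small positive loss
    (np.array([-1.0, 0.0]),  1),   # wrong side of the hyperplane: larger loss
]

for x, y in points:
    print(x, y, "->", hinge_loss(x, y))
```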
The goal of the optimization then is to minimize: ‖w‖2+C[1n∑i=1nmax(0,1−yi(wTxi−b))],{\displaystyle \lVert \mathbf {w} \rVert ^{2}+C\left[{\frac {1}{n}}\sum _{i=1}^{n}\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right)\right],} where the parameterC>0{\displaystyle C>0}determines the trade-off between increasing the margin size and ensuring that thexi{\displaystyle \mathbf {x} _{i}}lie on the correct side of the margin (Note we can add a weight to either term in the equation above). By deconstructing the hinge loss, this optimization problem can be formulated into the following: minimizew,b,ζ‖w‖22+C∑i=1nζisubject toyi(w⊤xi−b)≥1−ζi,ζi≥0∀i∈{1,…,n}{\displaystyle {\begin{aligned}&{\underset {\mathbf {w} ,\;b,\;\mathbf {\zeta } }{\operatorname {minimize} }}&&\|\mathbf {w} \|_{2}^{2}+C\sum _{i=1}^{n}\zeta _{i}\\&{\text{subject to}}&&y_{i}(\mathbf {w} ^{\top }\mathbf {x} _{i}-b)\geq 1-\zeta _{i},\quad \zeta _{i}\geq 0\quad \forall i\in \{1,\dots ,n\}\end{aligned}}} Thus, for large values ofC{\displaystyle C}, it will behave similar to the hard-margin SVM, if the input data are linearly classifiable, but will still learn if a classification rule is viable or not. The original maximum-margin hyperplane algorithm proposed by Vapnik in 1963 constructed alinear classifier. However, in 1992,Bernhard Boser,Isabelle GuyonandVladimir Vapniksuggested a way to create nonlinear classifiers by applying thekernel trick(originally proposed by Aizerman et al.[22]) to maximum-margin hyperplanes.[9]The kernel trick, wheredot productsare replaced by kernels, is easily derived in the dual representation of the SVM problem. This allows the algorithm to fit the maximum-margin hyperplane in a transformedfeature space. The transformation may be nonlinear and the transformed space high-dimensional; although the classifier is a hyperplane in the transformed feature space, it may be nonlinear in the original input space. It is noteworthy that working in a higher-dimensional feature space increases thegeneralization errorof support vector machines, although given enough samples the algorithm still performs well.[23] Some common kernels include: The kernel is related to the transformφ(xi){\displaystyle \varphi (\mathbf {x} _{i})}by the equationk(xi,xj)=φ(xi)⋅φ(xj){\displaystyle k(\mathbf {x} _{i},\mathbf {x} _{j})=\varphi (\mathbf {x} _{i})\cdot \varphi (\mathbf {x} _{j})}. The valuewis also in the transformed space, withw=∑iαiyiφ(xi){\textstyle \mathbf {w} =\sum _{i}\alpha _{i}y_{i}\varphi (\mathbf {x} _{i})}. Dot products withwfor classification can again be computed by the kernel trick, i.e.w⋅φ(x)=∑iαiyik(xi,x){\textstyle \mathbf {w} \cdot \varphi (\mathbf {x} )=\sum _{i}\alpha _{i}y_{i}k(\mathbf {x} _{i},\mathbf {x} )}. Computing the (soft-margin) SVM classifier amounts to minimizing an expression of the form We focus on the soft-margin classifier since, as noted above, choosing a sufficiently small value forλ{\displaystyle \lambda }yields the hard-margin classifier for linearly classifiable input data. The classical approach, which involves reducing(2)to aquadratic programmingproblem, is detailed below. Then, more recent approaches such as sub-gradient descent and coordinate descent will be discussed. Minimizing(2)can be rewritten as a constrained optimization problem with a differentiable objective function in the following way. 
For eachi∈{1,…,n}{\displaystyle i\in \{1,\,\ldots ,\,n\}}we introduce a variableζi=max(0,1−yi(wTxi−b)){\displaystyle \zeta _{i}=\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right)}. Note thatζi{\displaystyle \zeta _{i}}is the smallest nonnegative number satisfyingyi(wTxi−b)≥1−ζi.{\displaystyle y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\geq 1-\zeta _{i}.} Thus we can rewrite the optimization problem as follows minimize1n∑i=1nζi+λ‖w‖2subject toyi(wTxi−b)≥1−ζiandζi≥0,for alli.{\displaystyle {\begin{aligned}&{\text{minimize }}{\frac {1}{n}}\sum _{i=1}^{n}\zeta _{i}+\lambda \|\mathbf {w} \|^{2}\\[0.5ex]&{\text{subject to }}y_{i}\left(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b\right)\geq 1-\zeta _{i}\,{\text{ and }}\,\zeta _{i}\geq 0,\,{\text{for all }}i.\end{aligned}}} This is called theprimalproblem. By solving for theLagrangian dualof the above problem, one obtains the simplified problem maximizef(c1…cn)=∑i=1nci−12∑i=1n∑j=1nyici(xiTxj)yjcj,subject to∑i=1nciyi=0,and0≤ci≤12nλfor alli.{\displaystyle {\begin{aligned}&{\text{maximize}}\,\,f(c_{1}\ldots c_{n})=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}(\mathbf {x} _{i}^{\mathsf {T}}\mathbf {x} _{j})y_{j}c_{j},\\&{\text{subject to }}\sum _{i=1}^{n}c_{i}y_{i}=0,\,{\text{and }}0\leq c_{i}\leq {\frac {1}{2n\lambda }}\;{\text{for all }}i.\end{aligned}}} This is called thedualproblem. Since the dual maximization problem is a quadratic function of theci{\displaystyle c_{i}}subject to linear constraints, it is efficiently solvable byquadratic programmingalgorithms. Here, the variablesci{\displaystyle c_{i}}are defined such that w=∑i=1nciyixi.{\displaystyle \mathbf {w} =\sum _{i=1}^{n}c_{i}y_{i}\mathbf {x} _{i}.} Moreover,ci=0{\displaystyle c_{i}=0}exactly whenxi{\displaystyle \mathbf {x} _{i}}lies on the correct side of the margin, and0<ci<(2nλ)−1{\displaystyle 0<c_{i}<(2n\lambda )^{-1}}whenxi{\displaystyle \mathbf {x} _{i}}lies on the margin's boundary. It follows thatw{\displaystyle \mathbf {w} }can be written as a linear combination of the support vectors. The offset,b{\displaystyle b}, can be recovered by finding anxi{\displaystyle \mathbf {x} _{i}}on the margin's boundary and solvingyi(wTxi−b)=1⟺b=wTxi−yi.{\displaystyle y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)=1\iff b=\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-y_{i}.} (Note thatyi−1=yi{\displaystyle y_{i}^{-1}=y_{i}}sinceyi=±1{\displaystyle y_{i}=\pm 1}.) Suppose now that we would like to learn a nonlinear classification rule which corresponds to a linear classification rule for the transformed data pointsφ(xi).{\displaystyle \varphi (\mathbf {x} _{i}).}Moreover, we are given a kernel functionk{\displaystyle k}which satisfiesk(xi,xj)=φ(xi)⋅φ(xj){\displaystyle k(\mathbf {x} _{i},\mathbf {x} _{j})=\varphi (\mathbf {x} _{i})\cdot \varphi (\mathbf {x} _{j})}. 
We know the classification vectorw{\displaystyle \mathbf {w} }in the transformed space satisfies w=∑i=1nciyiφ(xi),{\displaystyle \mathbf {w} =\sum _{i=1}^{n}c_{i}y_{i}\varphi (\mathbf {x} _{i}),} where, theci{\displaystyle c_{i}}are obtained by solving the optimization problem maximizef(c1…cn)=∑i=1nci−12∑i=1n∑j=1nyici(φ(xi)⋅φ(xj))yjcj=∑i=1nci−12∑i=1n∑j=1nyicik(xi,xj)yjcjsubject to∑i=1nciyi=0,and0≤ci≤12nλfor alli.{\displaystyle {\begin{aligned}{\text{maximize}}\,\,f(c_{1}\ldots c_{n})&=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}(\varphi (\mathbf {x} _{i})\cdot \varphi (\mathbf {x} _{j}))y_{j}c_{j}\\&=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}k(\mathbf {x} _{i},\mathbf {x} _{j})y_{j}c_{j}\\{\text{subject to }}\sum _{i=1}^{n}c_{i}y_{i}&=0,\,{\text{and }}0\leq c_{i}\leq {\frac {1}{2n\lambda }}\;{\text{for all }}i.\end{aligned}}} The coefficientsci{\displaystyle c_{i}}can be solved for using quadratic programming, as before. Again, we can find some indexi{\displaystyle i}such that0<ci<(2nλ)−1{\displaystyle 0<c_{i}<(2n\lambda )^{-1}}, so thatφ(xi){\displaystyle \varphi (\mathbf {x} _{i})}lies on the boundary of the margin in the transformed space, and then solve b=wTφ(xi)−yi=[∑j=1ncjyjφ(xj)⋅φ(xi)]−yi=[∑j=1ncjyjk(xj,xi)]−yi.{\displaystyle {\begin{aligned}b=\mathbf {w} ^{\mathsf {T}}\varphi (\mathbf {x} _{i})-y_{i}&=\left[\sum _{j=1}^{n}c_{j}y_{j}\varphi (\mathbf {x} _{j})\cdot \varphi (\mathbf {x} _{i})\right]-y_{i}\\&=\left[\sum _{j=1}^{n}c_{j}y_{j}k(\mathbf {x} _{j},\mathbf {x} _{i})\right]-y_{i}.\end{aligned}}} Finally, z↦sgn⁡(wTφ(z)−b)=sgn⁡([∑i=1nciyik(xi,z)]−b).{\displaystyle \mathbf {z} \mapsto \operatorname {sgn}(\mathbf {w} ^{\mathsf {T}}\varphi (\mathbf {z} )-b)=\operatorname {sgn} \left(\left[\sum _{i=1}^{n}c_{i}y_{i}k(\mathbf {x} _{i},\mathbf {z} )\right]-b\right).} Recent algorithms for finding the SVM classifier include sub-gradient descent and coordinate descent. Both techniques have proven to offer significant advantages over the traditional approach when dealing with large, sparse datasets—sub-gradient methods are especially efficient when there are many training examples, and coordinate descent when the dimension of the feature space is high. Sub-gradient descentalgorithms for the SVM work directly with the expression f(w,b)=[1n∑i=1nmax(0,1−yi(wTxi−b))]+λ‖w‖2.{\displaystyle f(\mathbf {w} ,b)=\left[{\frac {1}{n}}\sum _{i=1}^{n}\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right)\right]+\lambda \|\mathbf {w} \|^{2}.} Note thatf{\displaystyle f}is aconvex functionofw{\displaystyle \mathbf {w} }andb{\displaystyle b}. As such, traditionalgradient descent(orSGD) methods can be adapted, where instead of taking a step in the direction of the function's gradient, a step is taken in the direction of a vector selected from the function'ssub-gradient. 
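A minimal sub-gradient descent sketch for the objective f(w, b) above is given below; the toy data, fixed step size and iteration count are assumptions made for illustration, and practical solvers such as the PEGASOS-style methods mentioned later use more careful step-size schedules.

```python
# Minimal sub-gradient descent sketch for f(w, b) as written above.
# Data, step size and iteration count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Two (essentially) linearly separable Gaussian blobs, labels +1 / -1.
X = np.vstack([rng.normal(+2.0, 1.0, size=(50, 2)),
               rng.normal(-2.0, 1.0, size=(50, 2))])
y = np.array([1.0] * 50 + [-1.0] * 50)

lam, eta, n = 0.01, 0.1, len(y)
w, b = np.zeros(2), 0.0

for _ in range(500):
    margins = y * (X @ w - b)
    violated = margins < 1                      # points with nonzero hinge loss
    # Sub-gradient of (1/n) * sum of hinge losses + lam * ||w||^2
    grad_w = 2 * lam * w - (y[violated][:, None] * X[violated]).sum(axis=0) / n
    grad_b = y[violated].sum() / n              # d/db of 1 - y*(w.x - b) is +y
    w -= eta * grad_w
    b -= eta * grad_b

accuracy = np.mean(np.sign(X @ w - b) == y)
print("w =", w, " b =", b, " training accuracy =", accuracy)
```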
This approach has the advantage that, for certain implementations, the number of iterations does not scale withn{\displaystyle n}, the number of data points.[24] Coordinate descentalgorithms for the SVM work from the dual problem maximizef(c1…cn)=∑i=1nci−12∑i=1n∑j=1nyici(xi⋅xj)yjcj,subject to∑i=1nciyi=0,and0≤ci≤12nλfor alli.{\displaystyle {\begin{aligned}&{\text{maximize}}\,\,f(c_{1}\ldots c_{n})=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}(x_{i}\cdot x_{j})y_{j}c_{j},\\&{\text{subject to }}\sum _{i=1}^{n}c_{i}y_{i}=0,\,{\text{and }}0\leq c_{i}\leq {\frac {1}{2n\lambda }}\;{\text{for all }}i.\end{aligned}}} For eachi∈{1,…,n}{\displaystyle i\in \{1,\,\ldots ,\,n\}}, iteratively, the coefficientci{\displaystyle c_{i}}is adjusted in the direction of∂f/∂ci{\displaystyle \partial f/\partial c_{i}}. Then, the resulting vector of coefficients(c1′,…,cn′){\displaystyle (c_{1}',\,\ldots ,\,c_{n}')}is projected onto the nearest vector of coefficients that satisfies the given constraints. (Typically Euclidean distances are used.) The process is then repeated until a near-optimal vector of coefficients is obtained. The resulting algorithm is extremely fast in practice, although few performance guarantees have been proven.[25] The soft-margin support vector machine described above is an example of anempirical risk minimization(ERM) algorithm for thehinge loss. Seen this way, support vector machines belong to a natural class of algorithms for statistical inference, and many of its unique features are due to the behavior of the hinge loss. This perspective can provide further insight into how and why SVMs work, and allow us to better analyze their statistical properties. In supervised learning, one is given a set of training examplesX1…Xn{\displaystyle X_{1}\ldots X_{n}}with labelsy1…yn{\displaystyle y_{1}\ldots y_{n}}, and wishes to predictyn+1{\displaystyle y_{n+1}}givenXn+1{\displaystyle X_{n+1}}. To do so one forms ahypothesis,f{\displaystyle f}, such thatf(Xn+1){\displaystyle f(X_{n+1})}is a "good" approximation ofyn+1{\displaystyle y_{n+1}}. A "good" approximation is usually defined with the help of aloss function,ℓ(y,z){\displaystyle \ell (y,z)}, which characterizes how badz{\displaystyle z}is as a prediction ofy{\displaystyle y}. We would then like to choose a hypothesis that minimizes theexpected risk: ε(f)=E[ℓ(yn+1,f(Xn+1))].{\displaystyle \varepsilon (f)=\mathbb {E} \left[\ell (y_{n+1},f(X_{n+1}))\right].} In most cases, we don't know the joint distribution ofXn+1,yn+1{\displaystyle X_{n+1},\,y_{n+1}}outright. In these cases, a common strategy is to choose the hypothesis that minimizes theempirical risk: ε^(f)=1n∑k=1nℓ(yk,f(Xk)).{\displaystyle {\hat {\varepsilon }}(f)={\frac {1}{n}}\sum _{k=1}^{n}\ell (y_{k},f(X_{k})).} Under certain assumptions about the sequence of random variablesXk,yk{\displaystyle X_{k},\,y_{k}}(for example, that they are generated by a finite Markov process), if the set of hypotheses being considered is small enough, the minimizer of the empirical risk will closely approximate the minimizer of the expected risk asn{\displaystyle n}grows large. This approach is calledempirical risk minimization,or ERM. In order for the minimization problem to have a well-defined solution, we have to place constraints on the setH{\displaystyle {\mathcal {H}}}of hypotheses being considered. 
IfH{\displaystyle {\mathcal {H}}}is anormed space(as is the case for SVM), a particularly effective technique is to consider only those hypothesesf{\displaystyle f}for which‖f‖H<k{\displaystyle \lVert f\rVert _{\mathcal {H}}<k}. This is equivalent to imposing aregularization penaltyR(f)=λk‖f‖H{\displaystyle {\mathcal {R}}(f)=\lambda _{k}\lVert f\rVert _{\mathcal {H}}}, and solving the new optimization problem f^=argminf∈Hε^(f)+R(f).{\displaystyle {\hat {f}}=\mathrm {arg} \min _{f\in {\mathcal {H}}}{\hat {\varepsilon }}(f)+{\mathcal {R}}(f).} This approach is calledTikhonov regularization. More generally,R(f){\displaystyle {\mathcal {R}}(f)}can be some measure of the complexity of the hypothesisf{\displaystyle f}, so that simpler hypotheses are preferred. Recall that the (soft-margin) SVM classifierw^,b:x↦sgn⁡(w^Tx−b){\displaystyle {\hat {\mathbf {w} }},b:\mathbf {x} \mapsto \operatorname {sgn}({\hat {\mathbf {w} }}^{\mathsf {T}}\mathbf {x} -b)}is chosen to minimize the following expression: [1n∑i=1nmax(0,1−yi(wTx−b))]+λ‖w‖2.{\displaystyle \left[{\frac {1}{n}}\sum _{i=1}^{n}\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} -b)\right)\right]+\lambda \|\mathbf {w} \|^{2}.} In light of the above discussion, we see that the SVM technique is equivalent to empirical risk minimization with Tikhonov regularization, where in this case the loss function is thehinge loss ℓ(y,z)=max(0,1−yz).{\displaystyle \ell (y,z)=\max \left(0,1-yz\right).} From this perspective, SVM is closely related to other fundamentalclassification algorithmssuch asregularized least-squaresandlogistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with thesquare-loss,ℓsq(y,z)=(y−z)2{\displaystyle \ell _{sq}(y,z)=(y-z)^{2}}; logistic regression employs thelog-loss, ℓlog(y,z)=ln⁡(1+e−yz).{\displaystyle \ell _{\log }(y,z)=\ln(1+e^{-yz}).} The difference between the hinge loss and these other loss functions is best stated in terms oftarget functions -the function that minimizes expected risk for a given pair of random variablesX,y{\displaystyle X,\,y}. In particular, letyx{\displaystyle y_{x}}denotey{\displaystyle y}conditional on the event thatX=x{\displaystyle X=x}. In the classification setting, we have: yx={1with probabilitypx−1with probability1−px{\displaystyle y_{x}={\begin{cases}1&{\text{with probability }}p_{x}\\-1&{\text{with probability }}1-p_{x}\end{cases}}} The optimal classifier is therefore: f∗(x)={1ifpx≥1/2−1otherwise{\displaystyle f^{*}(x)={\begin{cases}1&{\text{if }}p_{x}\geq 1/2\\-1&{\text{otherwise}}\end{cases}}} For the square-loss, the target function is the conditional expectation function,fsq(x)=E[yx]{\displaystyle f_{sq}(x)=\mathbb {E} \left[y_{x}\right]}; For the logistic loss, it's the logit function,flog(x)=ln⁡(px/(1−px)){\displaystyle f_{\log }(x)=\ln \left(p_{x}/({1-p_{x}})\right)}. While both of these target functions yield the correct classifier, assgn⁡(fsq)=sgn⁡(flog)=f∗{\displaystyle \operatorname {sgn}(f_{sq})=\operatorname {sgn}(f_{\log })=f^{*}}, they give us more information than we need. In fact, they give us enough information to completely describe the distribution ofyx{\displaystyle y_{x}}. On the other hand, one can check that the target function for the hinge loss isexactlyf∗{\displaystyle f^{*}}. 
Thus, in a sufficiently rich hypothesis space—or equivalently, for an appropriately chosen kernel—the SVM classifier will converge to the simplest function (in terms ofR{\displaystyle {\mathcal {R}}}) that correctly classifies the data. This extends the geometric interpretation of SVM—for linear classification, the empirical risk is minimized by any function whose margins lie between the support vectors, and the simplest of these is the max-margin classifier.[26] SVMs belong to a family of generalizedlinear classifiersand can be interpreted as an extension of theperceptron.[27]They can also be considered a special case ofTikhonov regularization. A special property is that they simultaneously minimize the empiricalclassification errorand maximize thegeometric margin; hence they are also known asmaximummargin classifiers. A comparison of the SVM to other classifiers has been made by Meyer, Leisch and Hornik.[28] The effectiveness of SVM depends on the selection of kernel, the kernel's parameters, and soft margin parameterλ{\displaystyle \lambda }. A common choice is a Gaussian kernel, which has a single parameterγ{\displaystyle \gamma }. The best combination ofλ{\displaystyle \lambda }andγ{\displaystyle \gamma }is often selected by agrid searchwith exponentially growing sequences ofλ{\displaystyle \lambda }andγ{\displaystyle \gamma }, for example,λ∈{2−5,2−3,…,213,215}{\displaystyle \lambda \in \{2^{-5},2^{-3},\dots ,2^{13},2^{15}\}};γ∈{2−15,2−13,…,21,23}{\displaystyle \gamma \in \{2^{-15},2^{-13},\dots ,2^{1},2^{3}\}}. Typically, each combination of parameter choices is checked usingcross validation, and the parameters with best cross-validation accuracy are picked. Alternatively, recent work inBayesian optimizationcan be used to selectλ{\displaystyle \lambda }andγ{\displaystyle \gamma }, often requiring the evaluation of far fewer parameter combinations than grid search. The final model, which is used for testing and for classifying new data, is then trained on the whole training set using the selected parameters.[29] Potential drawbacks of the SVM include the following aspects: Multiclass SVM aims to assign labels to instances by using support vector machines, where the labels are drawn from a finite set of several elements. The dominant approach for doing so is to reduce the singlemulticlass probleminto multiplebinary classificationproblems.[30]Common methods for such reduction include:[30][31] Crammer and Singer proposed a multiclass SVM method which casts themulticlass classificationproblem into a single optimization problem, rather than decomposing it into multiple binary classification problems.[34]See also Lee, Lin and Wahba[35][36]and Van den Burg and Groenen.[37] Transductive support vector machines extend SVMs in that they could also treat partially labeled data insemi-supervised learningby following the principles oftransduction. Here, in addition to the training setD{\displaystyle {\mathcal {D}}}, the learner is also given a set D⋆={xi⋆∣xi⋆∈Rp}i=1k{\displaystyle {\mathcal {D}}^{\star }=\{\mathbf {x} _{i}^{\star }\mid \mathbf {x} _{i}^{\star }\in \mathbb {R} ^{p}\}_{i=1}^{k}} of test examples to be classified. 
Formally, a transductive support vector machine is defined by the following primal optimization problem:[38] Minimize (inw,b,y⋆{\displaystyle \mathbf {w} ,b,\mathbf {y} ^{\star }}) 12‖w‖2{\displaystyle {\frac {1}{2}}\|\mathbf {w} \|^{2}} subject to (for anyi=1,…,n{\displaystyle i=1,\dots ,n}and anyj=1,…,k{\displaystyle j=1,\dots ,k}) yi(w⋅xi−b)≥1,yj⋆(w⋅xj⋆−b)≥1,{\displaystyle {\begin{aligned}&y_{i}(\mathbf {w} \cdot \mathbf {x} _{i}-b)\geq 1,\\&y_{j}^{\star }(\mathbf {w} \cdot \mathbf {x} _{j}^{\star }-b)\geq 1,\end{aligned}}} and yj⋆∈{−1,1}.{\displaystyle y_{j}^{\star }\in \{-1,1\}.} Transductive support vector machines were introduced by Vladimir N. Vapnik in 1998. Structured support-vector machine is an extension of the traditional SVM model. While the SVM model is primarily designed for binary classification, multiclass classification, and regression tasks, structured SVM broadens its application to handle general structured output labels, for example parse trees, classification with taxonomies, sequence alignment and many more.[39] A version of SVM forregressionwas proposed in 1996 byVladimir N. Vapnik, Harris Drucker, Christopher J. C. Burges, Linda Kaufman and Alexander J. Smola.[40]This method is called support vector regression (SVR). The model produced by support vector classification (as described above) depends only on a subset of the training data, because the cost function for building the model does not care about training points that lie beyond the margin. Analogously, the model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data close to the model prediction. Another SVM version known asleast-squares support vector machine(LS-SVM) has been proposed by Suykens and Vandewalle.[41] Training the original SVR means solving[42] wherexi{\displaystyle x_{i}}is a training sample with target valueyi{\displaystyle y_{i}}. The inner product plus intercept⟨w,xi⟩+b{\displaystyle \langle w,x_{i}\rangle +b}is the prediction for that sample, andε{\displaystyle \varepsilon }is a free parameter that serves as a threshold: all predictions have to be within anε{\displaystyle \varepsilon }range of the true predictions. Slack variables are usually added into the above to allow for errors and to allow approximation in the case the above problem is infeasible. In 2011 it was shown by Polson and Scott that the SVM admits aBayesianinterpretation through the technique ofdata augmentation.[43]In this approach the SVM is viewed as agraphical model(where the parameters are connected via probability distributions). This extended view allows the application ofBayesiantechniques to SVMs, such as flexible feature modeling, automatichyperparametertuning, andpredictive uncertainty quantification. Recently, a scalable version of the Bayesian SVM was developed byFlorian Wenzel, enabling the application of Bayesian SVMs tobig data.[44]Florian Wenzel developed two different versions, a variational inference (VI) scheme for the Bayesian kernel support vector machine (SVM) and a stochastic version (SVI) for the linear Bayesian SVM.[45] The parameters of the maximum-margin hyperplane are derived by solving the optimization. There exist several specialized algorithms for quickly solving thequadratic programming(QP) problem that arises from SVMs, mostly relying on heuristics for breaking the problem down into smaller, more manageable chunks. 
Another approach is to use aninterior-point methodthat usesNewton-like iterations to find a solution of theKarush–Kuhn–Tucker conditionsof the primal and dual problems.[46]Instead of solving a sequence of broken-down problems, this approach directly solves the problem altogether. To avoid solving a linear system involving the large kernel matrix, a low-rank approximation to the matrix is often used in the kernel trick. Another common method is Platt'ssequential minimal optimization(SMO) algorithm, which breaks the problem down into 2-dimensional sub-problems that are solved analytically, eliminating the need for a numerical optimization algorithm and matrix storage. This algorithm is conceptually simple, easy to implement, generally faster, and has better scaling properties for difficult SVM problems.[47] The special case of linear support vector machines can be solved more efficiently by the same kind of algorithms used to optimize its close cousin,logistic regression; this class of algorithms includessub-gradient descent(e.g., PEGASOS[48]) andcoordinate descent(e.g., LIBLINEAR[49]). LIBLINEAR has some attractive training-time properties. Each convergence iteration takes time linear in the time taken to read the train data, and the iterations also have aQ-linear convergenceproperty, making the algorithm extremely fast. The general kernel SVMs can also be solved more efficiently usingsub-gradient descent(e.g. P-packSVM[50]), especially whenparallelizationis allowed. Kernel SVMs are available in many machine-learning toolkits, includingLIBSVM,MATLAB,SAS, SVMlight,kernlab,scikit-learn,Shogun,Weka,Shark,JKernelMachines,OpenCVand others. Preprocessing of data (standardization) is highly recommended to enhance accuracy of classification.[51]There are a few methods of standardization, such as min-max, normalization by decimal scaling, Z-score.[52]Subtraction of mean and division by variance of each feature is usually used for SVM.[53]
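The workflow recommended above — standardize the features, then tune the kernel parameters by cross-validated grid search — might look as follows in scikit-learn. The dataset and the exponentially spaced grid are illustrative choices, and scikit-learn's C parameter plays the role of the soft-margin trade-off (roughly inverse to the λ used earlier).

```python
# Illustrative scikit-learn sketch of the workflow described above:
# standardize features, then pick the RBF-kernel parameters C and gamma
# by cross-validated grid search. Dataset and grid values are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),          # zero mean, unit variance per feature
    ("svm", SVC(kernel="rbf")),
])

# Exponentially growing grids, as suggested for the soft-margin and kernel parameters.
param_grid = {
    "svm__C": [2.0 ** k for k in range(-5, 16, 4)],
    "svm__gamma": [2.0 ** k for k in range(-15, 4, 4)],
}

search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X_train, y_train)

print("best parameters:", search.best_params_)
print("test accuracy:  ", search.score(X_test, y_test))
```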
https://en.wikipedia.org/wiki/Support_vector_machine
Inmathematics,statistics,finance,[1]andcomputer science, particularly inmachine learningandinverse problems,regularizationis a process that converts theanswer to a problemto a simpler one. It is often used in solvingill-posed problemsor to preventoverfitting.[2] Although regularization procedures can be divided in many ways, the following delineation is particularly helpful: In explicit regularization, independent of the problem or model, there is always a data term, that corresponds to a likelihood of the measurement, and a regularization term that corresponds to a prior. By combining both using Bayesian statistics, one can compute a posterior, that includes both information sources and therefore stabilizes the estimation process. By trading off both objectives, one chooses to be more aligned to the data or to enforce regularization (to prevent overfitting). There is a whole research branch dealing with all possible regularizations. In practice, one usually tries a specific regularization and then figures out the probability density that corresponds to that regularization to justify the choice. It can also be physically motivated by common sense or intuition. Inmachine learning, the data term corresponds to the training data and the regularization is either the choice of the model or modifications to the algorithm. It is always intended to reduce thegeneralization error, i.e. the error score with the trained model on the evaluation set (testing data) and not the training data.[3] One of the earliest uses of regularization isTikhonov regularization(ridge regression), related to the method of least squares. Inmachine learning, a key challenge is enabling models to accurately predict outcomes on unseen data, not just on familiar training data. Regularization is crucial for addressingoverfitting—where a model memorizes training data details but cannot generalize to new data. The goal of regularization is to encourage models to learn the broader patterns within the data rather than memorizing it. Techniques likeearly stopping, L1 andL2 regularization, anddropoutare designed to prevent overfitting and underfitting, thereby enhancing the model's ability to adapt to and perform well with new data, thus improving model generalization.[4] Stops training when validation performance deteriorates, preventing overfitting by halting before the model memorizes training data.[4] Adds penalty terms to the cost function to discourage complex models: In the context of neural networks, the Dropout technique repeatedly ignores random subsets of neurons during training, which simulates the training of multiple neural network architectures at once to improve generalization.[4] Empirical learning of classifiers (from a finite data set) is always anunderdeterminedproblem, because it attempts to infer a function of anyx{\displaystyle x}given only examplesx1,x2,…,xn{\displaystyle x_{1},x_{2},\dots ,x_{n}}. A regularization term (or regularizer)R(f){\displaystyle R(f)}is added to aloss function:minf∑i=1nV(f(xi),yi)+λR(f){\displaystyle \min _{f}\sum _{i=1}^{n}V(f(x_{i}),y_{i})+\lambda R(f)}whereV{\displaystyle V}is an underlying loss function that describes the cost of predictingf(x){\displaystyle f(x)}when the label isy{\displaystyle y}, such as thesquare lossorhinge loss; andλ{\displaystyle \lambda }is a parameter which controls the importance of the regularization term.R(f){\displaystyle R(f)}is typically chosen to impose a penalty on the complexity off{\displaystyle f}. 
Concrete notions of complexity used include restrictions forsmoothnessand bounds on thevector space norm.[5][page needed] A theoretical justification for regularization is that it attempts to imposeOccam's razoron the solution (as depicted in the figure above, where the green function, the simpler one, may be preferred). From aBayesianpoint of view, many regularization techniques correspond to imposing certainpriordistributions on model parameters.[6] Regularization can serve multiple purposes, including learning simpler models, inducing models to be sparse and introducing group structure[clarification needed]into the learning problem. The same idea arose in many fields ofscience. A simple form of regularization applied tointegral equations(Tikhonov regularization) is essentially a trade-off between fitting the data and reducing a norm of the solution. More recently, non-linear regularization methods, includingtotal variation regularization, have become popular. Regularization can be motivated as a technique to improve the generalizability of a learned model. The goal of this learning problem is to find a function that fits or predicts the outcome (label) that minimizes the expected error over all possible inputs and labels. The expected error of a functionfn{\displaystyle f_{n}}is:I[fn]=∫X×YV(fn(x),y)ρ(x,y)dxdy{\displaystyle I[f_{n}]=\int _{X\times Y}V(f_{n}(x),y)\rho (x,y)\,dx\,dy}whereX{\displaystyle X}andY{\displaystyle Y}are the domains of input datax{\displaystyle x}and their labelsy{\displaystyle y}respectively. Typically in learning problems, only a subset of input data and labels are available, measured with some noise. Therefore, the expected error is unmeasurable, and the best surrogate available is the empirical error over theN{\displaystyle N}available samples:IS[fn]=1n∑i=1NV(fn(x^i),y^i){\displaystyle I_{S}[f_{n}]={\frac {1}{n}}\sum _{i=1}^{N}V(f_{n}({\hat {x}}_{i}),{\hat {y}}_{i})}Without bounds on the complexity of the function space (formally, thereproducing kernel Hilbert space) available, a model will be learned that incurs zero loss on the surrogate empirical error. If measurements (e.g. ofxi{\displaystyle x_{i}}) were made with noise, this model may suffer fromoverfittingand display poor expected error. Regularization introduces a penalty for exploring certain regions of the function space used to build the model, which can improve generalization. These techniques are named forAndrey Nikolayevich Tikhonov, who applied regularization tointegral equationsand made important contributions in many other areas. When learning a linear functionf{\displaystyle f}, characterized by an unknownvectorw{\displaystyle w}such thatf(x)=w⋅x{\displaystyle f(x)=w\cdot x}, one can add theL2{\displaystyle L_{2}}-norm of the vectorw{\displaystyle w}to the loss expression in order to prefer solutions with smaller norms. Tikhonov regularization is one of the most common forms. It is also known as ridge regression. It is expressed as:minw∑i=1nV(x^i⋅w,y^i)+λ‖w‖22,{\displaystyle \min _{w}\sum _{i=1}^{n}V({\hat {x}}_{i}\cdot w,{\hat {y}}_{i})+\lambda \left\|w\right\|_{2}^{2},}where(x^i,y^i),1≤i≤n,{\displaystyle ({\hat {x}}_{i},{\hat {y}}_{i}),\,1\leq i\leq n,}would represent samples used for training. 
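A small sketch of Tikhonov (ridge) regression on toy data, solving the penalized least-squares problem above through its normal equations; the λ values are arbitrary and the scaling of λ absorbs any 1/n factor. Larger λ visibly shrinks the norm of w:

import numpy as np

def ridge_fit(X, y, lam):
    """Solve min_w ||Xw - y||^2 + lam * ||w||_2^2 via the normal equations."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.5 * rng.normal(size=100)

for lam in (0.0, 1.0, 100.0):
    w = ridge_fit(X, y, lam)
    print(f"lambda={lam:6.1f}  ||w||_2 = {np.linalg.norm(w):.3f}")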
In the case of a general function, the norm of the function in itsreproducing kernel Hilbert spaceis:minf∑i=1nV(f(x^i),y^i)+λ‖f‖H2{\displaystyle \min _{f}\sum _{i=1}^{n}V(f({\hat {x}}_{i}),{\hat {y}}_{i})+\lambda \left\|f\right\|_{\mathcal {H}}^{2}} As theL2{\displaystyle L_{2}}norm isdifferentiable, learning can be advanced bygradient descent. The learning problem with theleast squaresloss function and Tikhonov regularization can be solved analytically. Written in matrix form, the optimalw{\displaystyle w}is the one for which the gradient of the loss function with respect tow{\displaystyle w}is 0.minw1n(X^w−Y)T(X^w−Y)+λ‖w‖22{\displaystyle \min _{w}{\frac {1}{n}}\left({\hat {X}}w-Y\right)^{\mathsf {T}}\left({\hat {X}}w-Y\right)+\lambda \left\|w\right\|_{2}^{2}}∇w=2nX^T(X^w−Y)+2λw{\displaystyle \nabla _{w}={\frac {2}{n}}{\hat {X}}^{\mathsf {T}}\left({\hat {X}}w-Y\right)+2\lambda w}0=X^T(X^w−Y)+nλw{\displaystyle 0={\hat {X}}^{\mathsf {T}}\left({\hat {X}}w-Y\right)+n\lambda w}w=(X^TX^+λnI)−1(X^TY){\displaystyle w=\left({\hat {X}}^{\mathsf {T}}{\hat {X}}+\lambda nI\right)^{-1}\left({\hat {X}}^{\mathsf {T}}Y\right)}where the third statement is afirst-order condition. By construction of the optimization problem, other values ofw{\displaystyle w}give larger values for the loss function. This can be verified by examining thesecond derivative∇ww{\displaystyle \nabla _{ww}}. During training, this algorithm takesO(d3+nd2){\displaystyle O(d^{3}+nd^{2})}time. The terms correspond to the matrix inversion and calculatingXTX{\displaystyle X^{\mathsf {T}}X}, respectively. Testing takesO(nd){\displaystyle O(nd)}time. Early stopping can be viewed as regularization in time. Intuitively, a training procedure such as gradient descent tends to learn more and more complex functions with increasing iterations. By regularizing for time, model complexity can be controlled, improving generalization. Early stopping is implemented using one data set for training, one statistically independent data set for validation and another for testing. The model is trained until performance on the validation set no longer improves and then applied to the test set. Consider the finite approximation ofNeumann seriesfor an invertible matrixAwhere‖I−A‖<1{\displaystyle \left\|I-A\right\|<1}:∑i=0T−1(I−A)i≈A−1{\displaystyle \sum _{i=0}^{T-1}\left(I-A\right)^{i}\approx A^{-1}} This can be used to approximate the analytical solution of unregularized least squares, ifγis introduced to ensure the norm is less than one.wT=γn∑i=0T−1(I−γnX^TX^)iX^TY^{\displaystyle w_{T}={\frac {\gamma }{n}}\sum _{i=0}^{T-1}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}} The exact solution to the unregularized least squares learning problem minimizes the empirical error, but may fail. By limitingT, the only free parameter in the algorithm above, the problem is regularized for time, which may improve its generalization. The algorithm above is equivalent to restricting the number of gradient descent iterations for the empirical riskIs[w]=12n‖X^w−Y^‖Rn2{\displaystyle I_{s}[w]={\frac {1}{2n}}\left\|{\hat {X}}w-{\hat {Y}}\right\|_{\mathbb {R} ^{n}}^{2}}with the gradient descent update:w0=0wt+1=(I−γnX^TX^)wt+γnX^TY^{\displaystyle {\begin{aligned}w_{0}&=0\\[1ex]w_{t+1}&=\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)w_{t}+{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\end{aligned}}} The base case is trivial. 
The inductive case is proved as follows:wT=(I−γnX^TX^)γn∑i=0T−2(I−γnX^TX^)iX^TY^+γnX^TY^=γn∑i=1T−1(I−γnX^TX^)iX^TY^+γnX^TY^=γn∑i=0T−1(I−γnX^TX^)iX^TY^{\displaystyle {\begin{aligned}w_{T}&=\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right){\frac {\gamma }{n}}\sum _{i=0}^{T-2}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}+{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\\[1ex]&={\frac {\gamma }{n}}\sum _{i=1}^{T-1}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}+{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\\[1ex]&={\frac {\gamma }{n}}\sum _{i=0}^{T-1}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\end{aligned}}} Assume that a dictionaryϕj{\displaystyle \phi _{j}}with dimensionp{\displaystyle p}is given such that a function in the function space can be expressed as:f(x)=∑j=1pϕj(x)wj{\displaystyle f(x)=\sum _{j=1}^{p}\phi _{j}(x)w_{j}} Enforcing a sparsity constraint onw{\displaystyle w}can lead to simpler and more interpretable models. This is useful in many real-life applications such ascomputational biology. An example is developing a simple predictive test for a disease in order to minimize the cost of performing medical tests while maximizing predictive power. A sensible sparsity constraint is theL0{\displaystyle L_{0}}norm‖w‖0{\displaystyle \|w\|_{0}}, defined as the number of non-zero elements inw{\displaystyle w}. Solving aL0{\displaystyle L_{0}}regularized learning problem, however, has been demonstrated to beNP-hard.[7] TheL1{\displaystyle L_{1}}norm(see alsoNorms) can be used to approximate the optimalL0{\displaystyle L_{0}}norm via convex relaxation. It can be shown that theL1{\displaystyle L_{1}}norm induces sparsity. In the case of least squares, this problem is known asLASSOin statistics andbasis pursuitin signal processing.minw∈Rp1n‖X^w−Y^‖2+λ‖w‖1{\displaystyle \min _{w\in \mathbb {R} ^{p}}{\frac {1}{n}}\left\|{\hat {X}}w-{\hat {Y}}\right\|^{2}+\lambda \left\|w\right\|_{1}} L1{\displaystyle L_{1}}regularization can occasionally produce non-unique solutions. A simple example is provided in the figure when the space of possible solutions lies on a 45 degree line. This can be problematic for certain applications, and is overcome by combiningL1{\displaystyle L_{1}}withL2{\displaystyle L_{2}}regularization inelastic net regularization, which takes the following form:minw∈Rp1n‖X^w−Y^‖2+λ(α‖w‖1+(1−α)‖w‖22),α∈[0,1]{\displaystyle \min _{w\in \mathbb {R} ^{p}}{\frac {1}{n}}\left\|{\hat {X}}w-{\hat {Y}}\right\|^{2}+\lambda \left(\alpha \left\|w\right\|_{1}+(1-\alpha )\left\|w\right\|_{2}^{2}\right),\alpha \in [0,1]} Elastic net regularization tends to have a grouping effect, where correlated input features are assigned equal weights. Elastic net regularization is commonly used in practice and is implemented in many machine learning libraries. While theL1{\displaystyle L_{1}}norm does not result in an NP-hard problem, theL1{\displaystyle L_{1}}norm is convex but is not strictly differentiable due to the kink at x = 0.Subgradient methodswhich rely on thesubderivativecan be used to solveL1{\displaystyle L_{1}}regularized learning problems. However, faster convergence can be achieved through proximal methods. 
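The early-stopping iteration derived above can be checked numerically. This sketch (NumPy, toy data; γ is chosen so that the norm condition on I − (γ/n)XᵀX holds) runs the update w_{t+1} = (I − (γ/n)XᵀX)w_t + (γ/n)XᵀY and shows how the number of iterations T controls how closely w_T approaches the unregularized least-squares solution:

import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 4
X = rng.normal(size=(n, d))
y = X @ np.array([2.0, -1.0, 0.0, 0.5]) + 0.3 * rng.normal(size=n)

# step size gamma chosen so that the spectral norm of I - (gamma/n) X^T X is below 1
gamma = n / np.linalg.norm(X.T @ X, 2)

w = np.zeros(d)
w_ls = np.linalg.lstsq(X, y, rcond=None)[0]    # unregularized least-squares solution

for t in range(1, 201):
    # same update as w = (I - (gamma/n) X^T X) w + (gamma/n) X^T y
    w = w - (gamma / n) * X.T @ (X @ w - y)
    if t in (1, 10, 50, 200):
        print(f"T={t:4d}  ||w_T - w_ls|| = {np.linalg.norm(w - w_ls):.4f}")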
For a problemminw∈HF(w)+R(w){\displaystyle \min _{w\in H}F(w)+R(w)}such thatF{\displaystyle F}is convex, continuous, differentiable, with Lipschitz continuous gradient (such as the least squares loss function), andR{\displaystyle R}is convex, continuous, and proper, then the proximal method to solve the problem is as follows. First define theproximal operatorproxR⁡(v)=argminw∈RD⁡{R(w)+12‖w−v‖2},{\displaystyle \operatorname {prox} _{R}(v)=\mathop {\operatorname {argmin} } _{w\in \mathbb {R} ^{D}}\left\{R(w)+{\frac {1}{2}}\left\|w-v\right\|^{2}\right\},}and then iteratewk+1=proxγ,R⁡(wk−γ∇F(wk)){\displaystyle w_{k+1}=\mathop {\operatorname {prox} } _{\gamma ,R}\left(w_{k}-\gamma \nabla F(w_{k})\right)} The proximal method iteratively performs gradient descent and then projects the result back into the space permitted byR{\displaystyle R}. WhenR{\displaystyle R}is theL1regularizer, the proximal operator is equivalent to the soft-thresholding operator,Sλ(v)f(n)={vi−λ,ifvi>λ0,ifvi∈[−λ,λ]vi+λ,ifvi<−λ{\displaystyle S_{\lambda }(v)f(n)={\begin{cases}v_{i}-\lambda ,&{\text{if }}v_{i}>\lambda \\0,&{\text{if }}v_{i}\in [-\lambda ,\lambda ]\\v_{i}+\lambda ,&{\text{if }}v_{i}<-\lambda \end{cases}}} This allows for efficient computation. Groups of features can be regularized by a sparsity constraint, which can be useful for expressing certain prior knowledge into an optimization problem. In the case of a linear model with non-overlapping known groups, a regularizer can be defined:R(w)=∑g=1G‖wg‖2,{\displaystyle R(w)=\sum _{g=1}^{G}\left\|w_{g}\right\|_{2},}where‖wg‖2=∑j=1|Gg|(wgj)2{\displaystyle \|w_{g}\|_{2}={\sqrt {\sum _{j=1}^{|G_{g}|}\left(w_{g}^{j}\right)^{2}}}} This can be viewed as inducing a regularizer over theL2{\displaystyle L_{2}}norm over members of each group followed by anL1{\displaystyle L_{1}}norm over groups. This can be solved by the proximal method, where the proximal operator is a block-wise soft-thresholding function: proxλ,R,g⁡(wg)={(1−λ‖wg‖2)wg,if‖wg‖2>λ0,if‖wg‖2≤λ{\displaystyle \operatorname {prox} \limits _{\lambda ,R,g}(w_{g})={\begin{cases}\left(1-{\dfrac {\lambda }{\left\|w_{g}\right\|_{2}}}\right)w_{g},&{\text{if }}\left\|w_{g}\right\|_{2}>\lambda \\[1ex]0,&{\text{if }}\|w_{g}\|_{2}\leq \lambda \end{cases}}} The algorithm described for group sparsity without overlaps can be applied to the case where groups do overlap, in certain situations. This will likely result in some groups with all zero elements, and other groups with some non-zero and some zero elements. If it is desired to preserve the group structure, a new regularizer can be defined:R(w)=inf{∑g=1G‖wg‖2:w=∑g=1Gw¯g}{\displaystyle R(w)=\inf \left\{\sum _{g=1}^{G}\|w_{g}\|_{2}:w=\sum _{g=1}^{G}{\bar {w}}_{g}\right\}} For eachwg{\displaystyle w_{g}},w¯g{\displaystyle {\bar {w}}_{g}}is defined as the vector such that the restriction ofw¯g{\displaystyle {\bar {w}}_{g}}to the groupg{\displaystyle g}equalswg{\displaystyle w_{g}}and all other entries ofw¯g{\displaystyle {\bar {w}}_{g}}are zero. The regularizer finds the optimal disintegration ofw{\displaystyle w}into parts. It can be viewed as duplicating all elements that exist in multiple groups. Learning problems with this regularizer can also be solved with the proximal method with a complication. The proximal operator cannot be computed in closed form, but can be effectively solved iteratively, inducing an inner iteration within the proximal method iteration. When labels are more expensive to gather than input examples, semi-supervised learning can be useful. 
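A minimal proximal-gradient (ISTA) sketch for the L1-regularized least-squares problem, using the soft-thresholding operator above as the proximal map; the data, λ, and the 1/(2n) scaling of the data term are illustrative choices:

import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(X, y, lam, n_iter=500):
    n, d = X.shape
    w = np.zeros(d)
    step = 1.0 / np.linalg.norm(X.T @ X / n, 2)   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n              # gradient of the smooth data term (1/2n)||Xw - y||^2
        w = soft_threshold(w - step * grad, step * lam)
    return w

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 20))
w_true = np.zeros(20)
w_true[:3] = [3.0, -2.0, 1.5]                     # sparse ground truth
y = X @ w_true + 0.1 * rng.normal(size=100)

w_hat = ista(X, y, lam=0.1)
print("indices of non-zero coefficients:", np.flatnonzero(np.abs(w_hat) > 1e-8))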
Regularizers have been designed to guide learning algorithms to learn models that respect the structure of unsupervised training samples. If a symmetric weight matrixW{\displaystyle W}is given, a regularizer can be defined:R(f)=∑i,jwij(f(xi)−f(xj))2{\displaystyle R(f)=\sum _{i,j}w_{ij}\left(f(x_{i})-f(x_{j})\right)^{2}} IfWij{\displaystyle W_{ij}}encodes the result of some distance metric for pointsxi{\displaystyle x_{i}}andxj{\displaystyle x_{j}}, it is desirable thatf(xi)≈f(xj){\displaystyle f(x_{i})\approx f(x_{j})}. This regularizer captures this intuition, and is equivalent to:R(f)=f¯TLf¯{\displaystyle R(f)={\bar {f}}^{\mathsf {T}}L{\bar {f}}}whereL=D−W{\displaystyle L=D-W}is theLaplacian matrixof the graph induced byW{\displaystyle W}. The optimization problemminf∈RmR(f),m=u+l{\displaystyle \min _{f\in \mathbb {R} ^{m}}R(f),m=u+l}can be solved analytically if the constraintf(xi)=yi{\displaystyle f(x_{i})=y_{i}}is applied for all supervised samples. The labeled part of the vectorf{\displaystyle f}is therefore obvious. The unlabeled part off{\displaystyle f}is solved for by:minfu∈RufTLf=minfu∈Ru{fuTLuufu+flTLlufu+fuTLulfl}{\displaystyle \min _{f_{u}\in \mathbb {R} ^{u}}f^{\mathsf {T}}Lf=\min _{f_{u}\in \mathbb {R} ^{u}}\left\{f_{u}^{\mathsf {T}}L_{uu}f_{u}+f_{l}^{\mathsf {T}}L_{lu}f_{u}+f_{u}^{\mathsf {T}}L_{ul}f_{l}\right\}}∇fu=2Luufu+2LulY{\displaystyle \nabla _{f_{u}}=2L_{uu}f_{u}+2L_{ul}Y}fu=Luu†(LulY){\displaystyle f_{u}=L_{uu}^{\dagger }\left(L_{ul}Y\right)}The pseudo-inverse can be taken becauseLul{\displaystyle L_{ul}}has the same range asLuu{\displaystyle L_{uu}}. In the case of multitask learning,T{\displaystyle T}problems are considered simultaneously, each related in some way. The goal is to learnT{\displaystyle T}functions, ideally borrowing strength from the relatedness of tasks, that have predictive power. This is equivalent to learning the matrixW:T×D{\displaystyle W:T\times D}. R(w)=∑i=1D‖W‖2,1{\displaystyle R(w)=\sum _{i=1}^{D}\left\|W\right\|_{2,1}} This regularizer defines an L2 norm on each column and an L1 norm over all columns. It can be solved by proximal methods. R(w)=‖σ(W)‖1{\displaystyle R(w)=\left\|\sigma (W)\right\|_{1}}whereσ(W){\displaystyle \sigma (W)}is theeigenvaluesin thesingular value decompositionofW{\displaystyle W}. R(f1⋯fT)=∑t=1T‖ft−1T∑s=1Tfs‖Hk2{\displaystyle R(f_{1}\cdots f_{T})=\sum _{t=1}^{T}\left\|f_{t}-{\frac {1}{T}}\sum _{s=1}^{T}f_{s}\right\|_{H_{k}}^{2}} This regularizer constrains the functions learned for each task to be similar to the overall average of the functions across all tasks. This is useful for expressing prior information that each task is expected to share with each other task. An example is predicting blood iron levels measured at different times of the day, where each task represents an individual. R(f1⋯fT)=∑r=1C∑t∈I(r)‖ft−1I(r)∑s∈I(r)fs‖Hk2{\displaystyle R(f_{1}\cdots f_{T})=\sum _{r=1}^{C}\sum _{t\in I(r)}\left\|f_{t}-{\frac {1}{I(r)}}\sum _{s\in I(r)}f_{s}\right\|_{H_{k}}^{2}}whereI(r){\displaystyle I(r)}is a cluster of tasks. This regularizer is similar to the mean-constrained regularizer, but instead enforces similarity between tasks within the same cluster. This can capture more complex prior information. This technique has been used to predictNetflixrecommendations. A cluster would correspond to a group of people who share similar preferences. More generally than above, similarity between tasks can be defined by a function. 
The regularizer encourages the model to learn similar functions for similar tasks.R(f1⋯fT)=∑t,s=1,t≠sT‖ft−fs‖2Mts{\displaystyle R(f_{1}\cdots f_{T})=\sum _{t,s=1,t\neq s}^{T}\left\|f_{t}-f_{s}\right\|^{2}M_{ts}}for a given symmetricsimilarity matrixM{\displaystyle M}. Bayesian learningmethods make use of aprior probabilitythat (usually) gives lower probability to more complex models. Well-known model selection techniques include theAkaike information criterion(AIC),minimum description length(MDL), and theBayesian information criterion(BIC). Alternative methods of controlling overfitting not involving regularization includecross-validation. Different methods of regularization can be applied to thelinear model; examples discussed above include ridge regression (an L2 penalty) and the lasso (an L1 penalty).
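Returning to the graph-Laplacian regularizer for semi-supervised learning described above, here is a small sketch (NumPy; the similarity weights come from a made-up Gaussian kernel) that fixes f on the labeled points and solves the first-order condition for the unlabeled part; note the minus sign that results from setting the gradient 2L_uu f_u + 2L_ul f_l to zero:

import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 10, size=30))          # 1-D inputs
W = np.exp(-(x[:, None] - x[None, :]) ** 2)       # symmetric similarity weights (toy Gaussian kernel)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W                    # graph Laplacian L = D - W

labeled = np.array([0, 10, 29])                   # indices with known labels f(x_i) = y_i
unlabeled = np.setdiff1d(np.arange(30), labeled)
f_l = np.array([0.0, 1.0, 0.0])

L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
# setting the gradient to zero gives L_uu f_u = -L_ul f_l; solve with a pseudo-inverse
f_u = np.linalg.lstsq(L_uu, -L_ul @ f_l, rcond=None)[0]
print(f_u[:5])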
https://en.wikipedia.org/wiki/Regularization_(mathematics)
Instatistics,multicollinearityorcollinearityis a situation where thepredictorsin aregression modelarelinearly dependent. Perfect multicollinearityrefers to a situation where thepredictive variableshave anexactlinear relationship. When there is perfect collinearity, thedesign matrixX{\displaystyle X}has less than fullrank, and therefore themoment matrixXTX{\displaystyle X^{\mathsf {T}}X}cannot beinverted. In this situation, theparameter estimatesof the regression are not well-defined, as the system of equations hasinfinitely many solutions. Imperfect multicollinearityrefers to a situation where thepredictive variableshave anearlyexact linear relationship. Contrary to popular belief, neither theGauss–Markov theoremnor the more commonmaximum likelihoodjustification forordinary least squaresrelies on any kind of correlation structure between dependent predictors[1][2][3](although perfect collinearity can cause problems with some software). There is no justification for the practice of removing collinear variables as part of regression analysis,[1][4][5][6][7]and doing so may constitutescientific misconduct. Including collinear variables does not reduce the predictive power orreliabilityof the model as a whole,[6]and does not reduce the accuracy of coefficient estimates.[1] High collinearity indicates that it is exceptionally important to include all collinear variables, as excluding any will cause worse coefficient estimates, strongconfounding, and downward-biased estimates ofstandard errors.[2] To address the high collinearity of a dataset, thevariance inflation factorcan be used to identify the collinearity of the predictor variables. Perfect multicollinearity refers to a situation where the predictors arelinearly dependent(one can be written as an exact linear function of the others).[8]Ordinary least squaresrequires inverting the matrixXTX{\displaystyle X^{\mathsf {T}}X}, whereX{\displaystyle X}is anN×(k+1){\displaystyle N\times (k+1)}matrix,N{\displaystyle N}is the number of observations,k{\displaystyle k}is the number of explanatory variables, andN≥k+1{\displaystyle N\geq k+1}. If there is an exact linear relationship among the independent variables, then at least one of the columns ofX{\displaystyle X}is a linear combination of the others, and so therankofX{\displaystyle X}(and therefore ofXTX{\displaystyle X^{\mathsf {T}}X}) is less thank+1{\displaystyle k+1}, and the matrixXTX{\displaystyle X^{\mathsf {T}}X}will not be invertible. Perfect collinearity is typically caused by including redundant variables in a regression. For example, a dataset may include variables for income, expenses, and savings. However, because income is equal to expenses plus savings by definition, it is incorrect to include all 3 variables in a regression simultaneously. Similarly, including adummy variablefor every category (e.g., summer, autumn, winter, and spring) as well as an intercept term will result in perfect collinearity. This is known as the dummy variable trap.[9] The other common cause of perfect collinearity is attempting to useordinary least squareswhen working with very wide datasets (those with more variables than observations). These require more advanced data analysis techniques likeBayesian hierarchical modelingto produce meaningful results.[citation needed] Sometimes, the variablesXj{\displaystyle X_{j}}are nearly collinear. In this case, the matrixXTX{\displaystyle X^{\mathsf {T}}X}has an inverse, but it isill-conditioned.
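A short NumPy illustration (with made-up income/expenses/savings data) of how an exact linear relationship among predictors makes the design matrix rank-deficient, and how near-collinearity shows up as a large condition number:

import numpy as np

rng = np.random.default_rng(5)
n = 100
income = rng.normal(50, 10, size=n)
savings = rng.normal(10, 2, size=n)
expenses = income - savings                       # exact linear relationship (perfect collinearity)

X_perfect = np.column_stack([np.ones(n), income, expenses, savings])
print(np.linalg.matrix_rank(X_perfect))           # 3, not 4: X^T X cannot be inverted

# imperfect collinearity: expenses measured with a little noise
X_near = np.column_stack([np.ones(n), income, expenses + rng.normal(0, 0.01, size=n), savings])
print(np.linalg.cond(X_near))                     # very large condition number: ill-conditioned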
A computer algorithm may or may not be able to compute an approximate inverse; even if it can, the resulting inverse may have largerounding errors. The standard measure ofill-conditioningin a matrix is the condition index. This determines if the inversion of the matrix is numerically unstable with finite-precision numbers, indicating the potential sensitivity of the computed inverse to small changes in the original matrix. The condition number is computed by finding the maximumsingular valuedivided by the minimum singular value of thedesign matrix.[10]In the context of collinear variables, thevariance inflation factoris the condition number for a particular coefficient. Numerical problems in estimating can be solved by applying standard techniques fromlinear algebrato estimate the equations more precisely: In addition to causing numerical problems, imperfect collinearity makes precise estimation of variables difficult. In other words, highly correlated variables lead to poor estimates and large standard errors. As an example, say that we notice Alice wears her boots whenever it is raining and that there are only puddles when it rains. Then, we cannot tell whether she wears boots to keep the rain from landing on her feet, or to keep her feet dry if she steps in a puddle. The problem with trying to identify how much each of the two variables matters is that they areconfoundedwith each other: our observations are explained equally well by either variable, so we do not know which one of them causes the observed correlations. There are two ways to discover this information: This confounding becomes substantially worse when researchersattempt to ignore or suppress itby excluding these variables from the regression (see#Misuse). Excluding multicollinear variables from regressions will invalidatecausal inferenceand produce worse estimates by removing important confounders. There are many ways to prevent multicollinearity from affecting results by planning ahead of time. However, these methods all require a researcher to decide on a procedure and analysisbeforedata has been collected (seepost hoc analysisandMulticollinearity § Misuse). Many regression methods are naturally "robust" to multicollinearity and generally perform better thanordinary least squaresregression, even when variables are independent.Regularized regressiontechniques such asridge regression,LASSO,elastic net regression, orspike-and-slab regressionare less sensitive to including "useless" predictors, a common cause of collinearity. These techniques can detect and remove these predictors automatically to avoid problems.Bayesian hierarchical models(provided by software likeBRMS) can perform such regularization automatically, learning informative priors from the data. Often, problems caused by the use offrequentist estimationare misunderstood or misdiagnosed as being related to multicollinearity.[3]Researchers are often frustrated not by multicollinearity, but by their inability to incorporate relevantprior informationin regressions. For example, complaints that coefficients have "wrong signs" or confidence intervals that "include unrealistic values" indicate there is important prior information that is not being incorporated into the model. 
When this information is available, it should be incorporated into the prior usingBayesian regressiontechniques.[3] Stepwise regression(the procedure of excluding "collinear" or "insignificant" variables) is especially vulnerable to multicollinearity, and is one of the few procedures wholly invalidated by it (with any collinearity resulting in heavily biased estimates and invalidated p-values).[2] When conducting experiments where researchers have control over the predictive variables, researchers can often avoid collinearity by choosing anoptimal experimental designin consultation with a statistician. While the above strategies work in some situations, estimates using advanced techniques may still produce large standard errors. In such cases, the correct response to multicollinearity is to "do nothing".[1]Thescientific processoften involvesnullor inconclusive results; not every experiment will be "successful" in the sense of decisively confirming the researcher's original hypothesis. Edward Leamer notes that "The solution to the weak evidence problem is more and better data. Within the confines of the given data set there is nothing that can be done about weak evidence".[3]Leamer notes that "bad" regression results that are often misattributed to multicollinearity instead indicate the researcher has chosen an unrealisticprior probability(generally theflat priorused inOLS).[3] Damodar Gujaratiwrites that "we should rightly accept [our data] are sometimes not very informative about parameters of interest".[1]Olivier Blanchardquips that "multicollinearity is God's will, not a problem withOLS";[7]in other words, when working withobservational data, researchers cannot "fix" multicollinearity, only accept it. Variance inflation factors are often misused as criteria instepwise regression(i.e. for variable inclusion/exclusion), a use that "lacks any logical basis but also is fundamentally misleading as a rule-of-thumb".[2] Excluding collinear variables leads to artificially small estimates for standard errors, but does not reduce the true (not estimated) standard errors for regression coefficients.[1]Excluding variables with a highvariance inflation factoralso invalidates the calculated standard errors and p-values, by turning the results of the regression into apost hoc analysis.[14] Because collinearity leads to large standard errors and p-values, which can make publishing articles more difficult, some researchers will try tosuppress inconvenient databy removing strongly-correlated variables from their regression. This procedure falls into the broader categories ofp-hacking,data dredging, andpost hoc analysis. Dropping (useful) collinear predictors will generally worsen the accuracy of the model and coefficient estimates. Similarly, trying many different models or estimation procedures (e.g.ordinary least squares, ridge regression, etc.) until finding one that can "deal with" the collinearity creates aforking paths problem. P-values and confidence intervals derived frompost hoc analysesare invalidated by ignoring the uncertainty in themodel selectionprocedure. It is reasonable to exclude unimportant predictors if they are known ahead of time to have little or no effect on the outcome; for example, local cheese production should not be used to predict the height of skyscrapers. However, this must be done when first specifying the model, prior to observing any data, and potentially-informative variables should always be included.
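As a diagnostic only (not as a variable-exclusion criterion, per the discussion above), the variance inflation factor for each predictor can be computed as 1/(1 − R²) from regressing that predictor on the others. A sketch with simulated data:

import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (no intercept column),
    computed as 1 / (1 - R^2) of the regression of that column on the others."""
    n, d = X.shape
    out = np.empty(d)
    for j in range(d):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1.0 - resid.var() / X[:, j].var()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(6)
x1 = rng.normal(size=200)
x2 = x1 + 0.1 * rng.normal(size=200)    # strongly correlated with x1
x3 = rng.normal(size=200)               # independent of the others
print(vif(np.column_stack([x1, x2, x3])))   # large VIFs for x1 and x2, near 1 for x3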
https://en.wikipedia.org/wiki/Multicollinearity#Degenerate_features
Instatistics,linear regressionis amodelthat estimates the relationship between ascalarresponse (dependent variable) and one or more explanatory variables (regressororindependent variable). A model with exactly one explanatory variable is asimple linear regression; a model with two or more explanatory variables is amultiple linear regression.[1]This term is distinct frommultivariate linear regression, which predicts multiplecorrelateddependent variables rather than a single dependent variable.[2] In linear regression, the relationships are modeled usinglinear predictor functionswhose unknown modelparametersareestimatedfrom thedata. Most commonly, theconditional meanof the response given the values of the explanatory variables (or predictors) is assumed to be anaffine functionof those values; less commonly, the conditionalmedianor some otherquantileis used. Like all forms ofregression analysis, linear regression focuses on theconditional probability distributionof the response given the values of the predictors, rather than on thejoint probability distributionof all of these variables, which is the domain ofmultivariate analysis. Linear regression is also a type ofmachine learningalgorithm, more specifically asupervisedalgorithm, that learns from the labelled datasets and maps the data points to the most optimized linear functions that can be used for prediction on new datasets.[3] Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications.[4]This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters and because the statistical properties of the resulting estimators are easier to determine. Linear regression has many practical uses. Most applications fall into one of the following two broad categories: Linear regression models are often fitted using theleast squaresapproach, but they may also be fitted in other ways, such as by minimizing the "lack of fit" in some othernorm(as withleast absolute deviationsregression), or by minimizing a penalized version of the least squarescost functionas inridge regression(L2-norm penalty) andlasso(L1-norm penalty). Use of theMean Squared Error(MSE) as the cost on a dataset that has many large outliers, can result in a model that fits the outliers more than the true data due to the higher importance assigned by MSE to large errors. So, cost functions that are robust to outliers should be used if the dataset has many largeoutliers. Conversely, the least squares approach can be used to fit models that are not linear models. Thus, although the terms "least squares" and "linear model" are closely linked, they are not synonymous. Given adata set{yi,xi1,…,xip}i=1n{\displaystyle \{y_{i},\,x_{i1},\ldots ,x_{ip}\}_{i=1}^{n}}ofnstatistical units, a linear regression model assumes that the relationship between the dependent variableyand the vector of regressorsxislinear. This relationship is modeled through adisturbance termorerror variableε—an unobservedrandom variablethat adds "noise" to the linear relationship between the dependent variable and regressors. Thus the model takes the formyi=β0+β1xi1+⋯+βpxip+εi=xiTβ+εi,i=1,…,n,{\displaystyle y_{i}=\beta _{0}+\beta _{1}x_{i1}+\cdots +\beta _{p}x_{ip}+\varepsilon _{i}=\mathbf {x} _{i}^{\mathsf {T}}{\boldsymbol {\beta }}+\varepsilon _{i},\qquad i=1,\ldots ,n,}whereTdenotes thetranspose, so thatxiTβis theinner productbetweenvectorsxiandβ. 
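A minimal sketch (NumPy, simulated data) of the model y_i = β_0 + β_1 x_i1 + β_2 x_i2 + ε_i, fitted by least squares using a design matrix whose first column of ones carries the intercept:

import numpy as np

rng = np.random.default_rng(7)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
beta_true = np.array([1.5, 2.0, -0.7])                 # [intercept, beta_1, beta_2]
y = beta_true[0] + beta_true[1] * x1 + beta_true[2] * x2 + 0.5 * rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])              # design matrix; first column for beta_0
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)                                        # close to beta_true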
Often thesenequations are stacked together and written inmatrix notationas where Fitting a linear model to a given data set usually requires estimating the regression coefficientsβ{\displaystyle {\boldsymbol {\beta }}}such that the error termε=y−Xβ{\displaystyle {\boldsymbol {\varepsilon }}=\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }}}is minimized. For example, it is common to use the sum of squared errors‖ε‖22{\displaystyle \|{\boldsymbol {\varepsilon }}\|_{2}^{2}}as a measure ofε{\displaystyle {\boldsymbol {\varepsilon }}}for minimization. Consider a situation where a small ball is being tossed up in the air and then we measure its heights of ascenthiat various moments in timeti. Physics tells us that, ignoring thedrag, the relationship can be modeled as whereβ1determines the initial velocity of the ball,β2is proportional to thestandard gravity, andεiis due to measurement errors. Linear regression can be used to estimate the values ofβ1andβ2from the measured data. This model is non-linear in the time variable, but it is linear in the parametersβ1andβ2; if we take regressorsxi= (xi1,xi2)  = (ti,ti2), the model takes on the standard form Standard linear regression models with standard estimation techniques make a number of assumptions about the predictor variables, the response variable and their relationship. Numerous extensions have been developed that allow each of these assumptions to be relaxed (i.e. reduced to a weaker form), and in some cases eliminated entirely. Generally these extensions make the estimation procedure more complex and time-consuming, and may also require more data in order to produce an equally precise model.[citation needed] The following are the major assumptions made by standard linear regression models with standard estimation techniques (e.g.ordinary least squares): Violations of these assumptions can result in biased estimations ofβ, biased standard errors, untrustworthy confidence intervals and significance tests. Beyond these assumptions, several other statistical properties of the data strongly influence the performance of different estimation methods: A fitted linear regression model can be used to identify the relationship between a single predictor variablexjand the response variableywhen all the other predictor variables in the model are "held fixed". Specifically, the interpretation ofβjis theexpectedchange inyfor a one-unit change inxjwhen the other covariates are held fixed—that is, the expected value of thepartial derivativeofywith respect toxj. This is sometimes called theunique effectofxjony. In contrast, themarginal effectofxjonycan be assessed using acorrelation coefficientorsimple linear regressionmodel relating onlyxjtoy; this effect is thetotal derivativeofywith respect toxj. Care must be taken when interpreting regression results, as some of the regressors may not allow for marginal changes (such asdummy variables, or the intercept term), while others cannot be held fixed (recall the example from the introduction: it would be impossible to "holdtifixed" and at the same time change the value ofti2). It is possible that the unique effect be nearly zero even when the marginal effect is large. This may imply that some other covariate captures all the information inxj, so that once that variable is in the model, there is no contribution ofxjto the variation iny. Conversely, the unique effect ofxjcan be large while its marginal effect is nearly zero. 
This would happen if the other covariates explained a great deal of the variation ofy, but they mainly explain variation in a way that is complementary to what is captured byxj. In this case, including the other variables in the model reduces the part of the variability ofythat is unrelated toxj, thereby strengthening the apparent relationship withxj. The meaning of the expression "held fixed" may depend on how the values of the predictor variables arise. If the experimenter directly sets the values of the predictor variables according to a study design, the comparisons of interest may literally correspond to comparisons among units whose predictor variables have been "held fixed" by the experimenter. Alternatively, the expression "held fixed" can refer to a selection that takes place in the context of data analysis. In this case, we "hold a variable fixed" by restricting our attention to the subsets of the data that happen to have a common value for the given predictor variable. This is the only interpretation of "held fixed" that can be used in anobservational study. The notion of a "unique effect" is appealing when studying acomplex systemwhere multiple interrelated components influence the response variable. In some cases, it can literally be interpreted as the causal effect of an intervention that is linked to the value of a predictor variable. However, it has been argued that in many cases multiple regression analysis fails to clarify the relationships between the predictor variables and the response variable when the predictors are correlated with each other and are not assigned following a study design.[9] Numerous extensions of linear regression have been developed, which allow some or all of the assumptions underlying the basic model to be relaxed. The simplest case of a singlescalarpredictor variablexand a single scalar response variableyis known assimple linear regression. The extension to multiple and/orvector-valued predictor variables (denoted with a capitalX) is known asmultiple linear regression, also known asmultivariable linear regression(not to be confused withmultivariate linear regression).[10] Multiple linear regression is a generalization ofsimple linear regressionto the case of more than one independent variable, and aspecial caseof general linear models, restricted to one dependent variable. The basic model for multiple linear regression is for each observationi=1,…,n{\textstyle i=1,\ldots ,n}. In the formula above we considernobservations of one dependent variable andpindependent variables. Thus,Yiis theithobservation of the dependent variable,Xijisithobservation of thejthindependent variable,j= 1, 2, ...,p. The valuesβjrepresent parameters to be estimated, andεiis theithindependent identically distributed normal error. In the more general multivariate linear regression, there is one equation of the above form for each ofm> 1 dependent variables that share the same set of explanatory variables and hence are estimated simultaneously with each other: for all observations indexed asi= 1, ... ,nand for all dependent variables indexed asj = 1, ... ,m. Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variableyis still a scalar. Another term,multivariate linear regression, refers to cases whereyis a vector, i.e., the same asgeneral linear regression. Model Assumptions to Check: 1. 
Linearity: Relationship between each predictor and outcome must be linear 2. Normality of residuals: Residuals should follow a normal distribution 3. Homoscedasticity: Constant variance of residuals across predicted values 4. Independence: Observations should be independent (not repeated measures) SPSS: Use partial plots, histograms, P-P plots, residual vs. predicted plots Thegeneral linear modelconsiders the situation when the response variable is not a scalar (for each observation) but a vector,yi. Conditional linearity ofE(y∣xi)=xiTB{\displaystyle E(\mathbf {y} \mid \mathbf {x} _{i})=\mathbf {x} _{i}^{\mathsf {T}}B}is still assumed, with a matrixBreplacing the vectorβof the classical linear regression model. Multivariate analogues ofordinary least squares(OLS) andgeneralized least squares(GLS) have been developed. "General linear models" are also called "multivariate linear models". These are not the same as multivariable linear models (also called "multiple linear models"). Various models have been created that allow forheteroscedasticity, i.e. the errors for different response variables may have differentvariances. For example,weighted least squaresis a method for estimating linear regression models when the response variables may have different error variances, possibly with correlated errors. (See alsoWeighted linear least squares, andGeneralized least squares.)Heteroscedasticity-consistent standard errorsis an improved method for use with uncorrelated but potentially heteroscedastic errors. TheGeneralized linear model(GLM) is a framework for modeling response variables that are bounded or discrete. This is used, for example: Generalized linear models allow for an arbitrarylink function,g, that relates themeanof the response variable(s) to the predictors:E(Y)=g−1(XB){\displaystyle E(Y)=g^{-1}(XB)}. The link function is often related to the distribution of the response, and in particular it typically has the effect of transforming between the(−∞,∞){\displaystyle (-\infty ,\infty )}range of the linear predictor and the range of the response variable. Some common examples of GLMs are: Single index models[clarification needed]allow some degree of nonlinearity in the relationship betweenxandy, while preserving the central role of the linear predictorβ′xas in the classical linear regression model. Under certain conditions, simply applying OLS to data from a single-index model will consistently estimateβup to a proportionality constant.[11] Hierarchical linear models(ormultilevel regression) organizes the data into a hierarchy of regressions, for example whereAis regressed onB, andBis regressed onC. It is often used where the variables of interest have a natural hierarchical structure such as in educational statistics, where students are nested in classrooms, classrooms are nested in schools, and schools are nested in some administrative grouping, such as a school district. The response variable might be a measure of student achievement such as a test score, and different covariates would be collected at the classroom, school, and school district levels. Errors-in-variables models(or "measurement error models") extend the traditional linear regression model to allow the predictor variablesXto be observed with error. This error causes standard estimators ofβto become biased. Generally, the form of bias is an attenuation, meaning that the effects are biased toward zero. 
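The attenuation bias just mentioned is easy to see in a quick simulation (NumPy, made-up numbers): adding measurement noise to the predictor shrinks the estimated slope toward zero:

import numpy as np

rng = np.random.default_rng(8)
n = 10_000
x_true = rng.normal(size=n)
y = 2.0 * x_true + rng.normal(size=n)              # true slope is 2

def slope(x, y):
    """OLS slope from a simple linear regression with intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(slope(x_true, y))                            # ~2.0 with the true predictor
x_noisy = x_true + rng.normal(scale=1.0, size=n)   # predictor observed with error
print(slope(x_noisy, y))                           # ~1.0: attenuated toward zero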
In a multiple linear regression model parameterβj{\displaystyle \beta _{j}}of predictor variablexj{\displaystyle x_{j}}represents the individual effect ofxj{\displaystyle x_{j}}. It has an interpretation as the expected change in the response variabley{\displaystyle y}whenxj{\displaystyle x_{j}}increases by one unit with other predictor variables held constant. Whenxj{\displaystyle x_{j}}is strongly correlated with other predictor variables, it is improbable thatxj{\displaystyle x_{j}}can increase by one unit with other variables held constant. In this case, the interpretation ofβj{\displaystyle \beta _{j}}becomes problematic as it is based on an improbable condition, and the effect ofxj{\displaystyle x_{j}}cannot be evaluated in isolation. For a group of predictor variables, say,{x1,x2,…,xq}{\displaystyle \{x_{1},x_{2},\dots ,x_{q}\}}, a group effectξ(w){\displaystyle \xi (\mathbf {w} )}is defined as a linear combination of their parameters wherew=(w1,w2,…,wq)⊺{\displaystyle \mathbf {w} =(w_{1},w_{2},\dots ,w_{q})^{\intercal }}is a weight vector satisfying∑j=1q|wj|=1{\textstyle \sum _{j=1}^{q}|w_{j}|=1}. Because of the constraint onwj{\displaystyle {w_{j}}},ξ(w){\displaystyle \xi (\mathbf {w} )}is also referred to as a normalized group effect. A group effectξ(w){\displaystyle \xi (\mathbf {w} )}has an interpretation as the expected change iny{\displaystyle y}when variables in the groupx1,x2,…,xq{\displaystyle x_{1},x_{2},\dots ,x_{q}}change by the amountw1,w2,…,wq{\displaystyle w_{1},w_{2},\dots ,w_{q}}, respectively, at the same time with other variables (not in the group) held constant. It generalizes the individual effect of a variable to a group of variables in that (i{\displaystyle i}) ifq=1{\displaystyle q=1}, then the group effect reduces to an individual effect, and (ii{\displaystyle ii}) ifwi=1{\displaystyle w_{i}=1}andwj=0{\displaystyle w_{j}=0}forj≠i{\displaystyle j\neq i}, then the group effect also reduces to an individual effect. A group effectξ(w){\displaystyle \xi (\mathbf {w} )}is said to be meaningful if the underlying simultaneous changes of theq{\displaystyle q}variables(x1,x2,…,xq)⊺{\displaystyle (x_{1},x_{2},\dots ,x_{q})^{\intercal }}is probable. Group effects provide a means to study the collective impact of strongly correlated predictor variables in linear regression models. Individual effects of such variables are not well-defined as their parameters do not have good interpretations. Furthermore, when the sample size is not large, none of their parameters can be accurately estimated by theleast squares regressiondue to themulticollinearityproblem. Nevertheless, there are meaningful group effects that have good interpretations and can be accurately estimated by the least squares regression. A simple way to identify these meaningful group effects is to use an all positive correlations (APC) arrangement of the strongly correlated variables under which pairwise correlations among these variables are all positive, and standardize allp{\displaystyle p}predictor variables in the model so that they all have mean zero and length one. To illustrate this, suppose that{x1,x2,…,xq}{\displaystyle \{x_{1},x_{2},\dots ,x_{q}\}}is a group of strongly correlated variables in an APC arrangement and that they are not strongly correlated with predictor variables outside the group. Lety′{\displaystyle y'}be the centredy{\displaystyle y}andxj′{\displaystyle x_{j}'}be the standardizedxj{\displaystyle x_{j}}. 
Then, the standardized linear regression model is Parametersβj{\displaystyle \beta _{j}}in the original model, includingβ0{\displaystyle \beta _{0}}, are simple functions ofβj′{\displaystyle \beta _{j}'}in the standardized model. The standardization of variables does not change their correlations, so{x1′,x2′,…,xq′}{\displaystyle \{x_{1}',x_{2}',\dots ,x_{q}'\}}is a group of strongly correlated variables in an APC arrangement and they are not strongly correlated with other predictor variables in the standardized model. A group effect of{x1′,x2′,…,xq′}{\displaystyle \{x_{1}',x_{2}',\dots ,x_{q}'\}}is and its minimum-variance unbiased linear estimator is whereβ^j′{\displaystyle {\hat {\beta }}_{j}'}is the least squares estimator ofβj′{\displaystyle \beta _{j}'}. In particular, the average group effect of theq{\displaystyle q}standardized variables is which has an interpretation as the expected change iny′{\displaystyle y'}when allxj′{\displaystyle x_{j}'}in the strongly correlated group increase by(1/q){\displaystyle (1/q)}th of a unit at the same time with variables outside the group held constant. With strong positive correlations and in standardized units, variables in the group are approximately equal, so they are likely to increase at the same time and in similar amount. Thus, the average group effectξA{\displaystyle \xi _{A}}is a meaningful effect. It can be accurately estimated by its minimum-variance unbiased linear estimatorξ^A=1q(β^1′+β^2′+⋯+β^q′){\textstyle {\hat {\xi }}_{A}={\frac {1}{q}}({\hat {\beta }}_{1}'+{\hat {\beta }}_{2}'+\dots +{\hat {\beta }}_{q}')}, even when individually none of theβj′{\displaystyle \beta _{j}'}can be accurately estimated byβ^j′{\displaystyle {\hat {\beta }}_{j}'}. Not all group effects are meaningful or can be accurately estimated. For example,β1′{\displaystyle \beta _{1}'}is a special group effect with weightsw1=1{\displaystyle w_{1}=1}andwj=0{\displaystyle w_{j}=0}forj≠1{\displaystyle j\neq 1}, but it cannot be accurately estimated byβ^1′{\displaystyle {\hat {\beta }}'_{1}}. It is also not a meaningful effect. In general, for a group ofq{\displaystyle q}strongly correlated predictor variables in an APC arrangement in the standardized model, group effects whose weight vectorsw{\displaystyle \mathbf {w} }are at or near the centre of the simplex∑j=1qwj=1{\textstyle \sum _{j=1}^{q}w_{j}=1}(wj≥0{\displaystyle w_{j}\geq 0}) are meaningful and can be accurately estimated by their minimum-variance unbiased linear estimators. Effects with weight vectors far away from the centre are not meaningful as such weight vectors represent simultaneous changes of the variables that violate the strong positive correlations of the standardized variables in an APC arrangement. As such, they are not probable. These effects also cannot be accurately estimated. Applications of the group effects include (1) estimation and inference for meaningful group effects on the response variable, (2) testing for "group significance" of theq{\displaystyle q}variables via testingH0:ξA=0{\displaystyle H_{0}:\xi _{A}=0}versusH1:ξA≠0{\displaystyle H_{1}:\xi _{A}\neq 0}, and (3) characterizing the region of the predictor variable space over which predictions by the least squares estimated model are accurate. A group effect of the original variables{x1,x2,…,xq}{\displaystyle \{x_{1},x_{2},\dots ,x_{q}\}}can be expressed as a constant times a group effect of the standardized variables{x1′,x2′,…,xq′}{\displaystyle \{x_{1}',x_{2}',\dots ,x_{q}'\}}. The former is meaningful when the latter is. 
Thus meaningful group effects of the original variables can be found through meaningful group effects of the standardized variables.[12] InDempster–Shafer theory, or alinear belief functionin particular, a linear regression model may be represented as a partially swept matrix, which can be combined with similar matrices representing observations and other assumed normal distributions and state equations. The combination of swept or unswept matrices provides an alternative method for estimating linear regression models. A large number of procedures have been developed forparameterestimation and inference in linear regression. These methods differ in computational simplicity of algorithms, presence of aclosed-form solution,robustnesswith respect to heavy-tailed distributions, and theoretical assumptions needed to validate desirable statistical properties such asconsistencyand asymptoticefficiency. Some of the more common estimation techniques for linear regression are summarized below. Assuming that the independent variables arexi→=[x1i,x2i,…,xmi]{\displaystyle {\vec {x_{i}}}=\left[x_{1}^{i},x_{2}^{i},\ldots ,x_{m}^{i}\right]}and the model's parameters areβ→=[β0,β1,…,βm]{\displaystyle {\vec {\beta }}=\left[\beta _{0},\beta _{1},\ldots ,\beta _{m}\right]}, then the model's prediction would be Ifxi→{\displaystyle {\vec {x_{i}}}}is extended toxi→=[1,x1i,x2i,…,xmi]{\displaystyle {\vec {x_{i}}}=\left[1,x_{1}^{i},x_{2}^{i},\ldots ,x_{m}^{i}\right]}thenyi{\displaystyle y_{i}}would become adot productof the parameter and the independent vectors, i.e. In the least-squares setting, the optimum parameter vector is defined as such that minimizes the sum of mean squared loss: Now putting the independent and dependent variables in matricesX{\displaystyle X}andY{\displaystyle Y}respectively, the loss function can be rewritten as: As the loss function isconvex, the optimum solution lies atgradientzero. The gradient of the loss function is (usingDenominator layout convention): Setting the gradient to zero produces the optimum parameter: Note:Theβ^{\displaystyle {\hat {\beta }}}obtained may indeed be the local minimum, one needs to differentiate once more to obtain theHessian matrixand show that it is positive definite. This is provided by theGauss–Markov theorem. Linear least squaresmethods include mainly: Maximum likelihood estimationcan be performed when the distribution of the error terms is known to belong to a certain parametric familyƒθofprobability distributions.[15]Whenfθis a normal distribution with zeromeanand variance θ, the resulting estimate is identical to the OLS estimate. GLS estimates are maximum likelihood estimates when ε follows a multivariate normal distribution with a knowncovariance matrix. Let's denote each data point by(xi→,yi){\displaystyle ({\vec {x_{i}}},y_{i})}and the regression parameters asβ→{\displaystyle {\vec {\beta }}}, and the set of all data byD{\displaystyle D}and the cost function byL(D,β→)=∑i(yi−β→⋅xi→)2{\displaystyle L(D,{\vec {\beta }})=\sum _{i}(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}})^{2}}. 
As shown below the same optimal parameter that minimizesL(D,β→){\displaystyle L(D,{\vec {\beta }})}achieves maximum likelihood too.[16]Here the assumption is that the dependent variabley{\displaystyle y}is a random variable that follows aGaussian distribution, where the standard deviation is fixed and the mean is a linear combination ofx→{\displaystyle {\vec {x}}}:H(D,β→)=∏i=1nPr(yi|xi→;β→,σ)=∏i=1n12πσexp⁡(−(yi−β→⋅xi→)22σ2){\displaystyle {\begin{aligned}H(D,{\vec {\beta }})&=\prod _{i=1}^{n}Pr(y_{i}|{\vec {x_{i}}}\,\,;{\vec {\beta }},\sigma )\\&=\prod _{i=1}^{n}{\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}}{2\sigma ^{2}}}\right)\end{aligned}}} Now, we need to look for a parameter that maximizes this likelihood function. Since the logarithmic function is strictly increasing, instead of maximizing this function, we can also maximize its logarithm and find the optimal parameter that way.[16] I(D,β→)=log⁡∏i=1nPr(yi|xi→;β→,σ)=log⁡∏i=1n12πσexp⁡(−(yi−β→⋅xi→)22σ2)=nlog⁡12πσ−12σ2∑i=1n(yi−β→⋅xi→)2{\displaystyle {\begin{aligned}I(D,{\vec {\beta }})&=\log \prod _{i=1}^{n}Pr(y_{i}|{\vec {x_{i}}}\,\,;{\vec {\beta }},\sigma )\\&=\log \prod _{i=1}^{n}{\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}}{2\sigma ^{2}}}\right)\\&=n\log {\frac {1}{{\sqrt {2\pi }}\sigma }}-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}\end{aligned}}} The optimal parameter is thus equal to:[16] arg maxβ→I(D,β→)=arg maxβ→(nlog⁡12πσ−12σ2∑i=1n(yi−β→⋅xi→)2)=arg minβ→∑i=1n(yi−β→⋅xi→)2=arg minβ→L(D,β→)=β^→{\displaystyle {\begin{aligned}{\underset {\vec {\beta }}{\mbox{arg max}}}\,I(D,{\vec {\beta }})&={\underset {\vec {\beta }}{\mbox{arg max}}}\left(n\log {\frac {1}{{\sqrt {2\pi }}\sigma }}-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}\right)\\&={\underset {\vec {\beta }}{\mbox{arg min}}}\sum _{i=1}^{n}\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}\\&={\underset {\vec {\beta }}{\mbox{arg min}}}\,L(D,{\vec {\beta }})\\&={\vec {\hat {\beta }}}\end{aligned}}} In this way, the parameter that maximizesH(D,β→){\displaystyle H(D,{\vec {\beta }})}is the same as the one that minimizesL(D,β→){\displaystyle L(D,{\vec {\beta }})}. This means that in linear regression, the result of the least squares method is the same as the result of the maximum likelihood estimation method.[16] Ridge regression[17][18][19]and other forms of penalized estimation, such asLasso regression,[5]deliberately introducebiasinto the estimation ofβin order to reduce thevariabilityof the estimate. The resulting estimates generally have lowermean squared errorthan the OLS estimates, particularly whenmulticollinearityis present or whenoverfittingis a problem. They are generally used when the goal is to predict the value of the response variableyfor values of the predictorsxthat have not yet been observed. These methods are not as commonly used when the goal is inference, since it is difficult to account for the bias. Least absolute deviation(LAD) regression is arobust estimationtechnique in that it is less sensitive to the presence of outliers than OLS (but is lessefficientthan OLS when no outliers are present). 
It is equivalent to maximum likelihood estimation under aLaplace distributionmodel forε.[20] If we assume that error terms areindependentof the regressors,εi⊥xi{\displaystyle \varepsilon _{i}\perp \mathbf {x} _{i}}, then the optimal estimator is the 2-step MLE, where the first step is used to non-parametrically estimate the distribution of the error term.[21] Linear regression is widely used in biological, behavioral and social sciences to describe possible relationships between variables. It ranks as one of the most important tools used in these disciplines. Atrend linerepresents a trend, the long-term movement intime seriesdata after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) have increased or decreased over the period of time. A trend line could simply be drawn by eye through a set of data points, but more properly their position and slope is calculated using statistical techniques like linear regression. Trend lines typically are straight lines, although some variations use higher degree polynomials depending on the degree of curvature desired in the line. Trend lines are sometimes used in business analytics to show changes in data over time. This has the advantage of being simple. Trend lines are often used to argue that a particular action or event (such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data. Early evidence relatingtobacco smokingto mortality andmorbiditycame fromobservational studiesemploying regression analysis. In order to reducespurious correlationswhen analyzing observational data, researchers usually include several variables in their regression models in addition to the variable of primary interest. For example, in a regression model in which cigarette smoking is the independent variable of primary interest and the dependent variable is lifespan measured in years, researchers might include education and income as additional independent variables, to ensure that any observed effect of smoking on lifespan is not due to those othersocio-economic factors. However, it is never possible to include all possibleconfoundingvariables in an empirical analysis. For example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason,randomized controlled trialsare often able to generate more compelling evidence of causal relationships than can be obtained using regression analyses of observational data. When controlled experiments are not feasible, variants of regression analysis such asinstrumental variablesregression may be used to attempt to estimate causal relationships from observational data. Thecapital asset pricing modeluses linear regression as well as the concept ofbetafor analyzing and quantifying the systematic risk of an investment. This comes directly from the beta coefficient of the linear regression model that relates the return on the investment to the return on all risky assets. Linear regression is the predominant empirical tool ineconomics. 
For example, it is used to predict consumption spending,[24] fixed investment spending, inventory investment, purchases of a country's exports,[25] spending on imports,[25] the demand to hold liquid assets,[26] labor demand,[27] and labor supply.[27]

Linear regression finds application in a wide range of environmental science settings, such as land use,[28] infectious diseases,[29] and air pollution.[30] For example, linear regression can be used to predict the changing effects of car pollution.[31] One notable example of this application in infectious diseases is the "flattening the curve" strategy emphasized early in the COVID-19 pandemic, where public health officials combined sparse data on infected individuals with sophisticated models of disease transmission to characterize the spread of COVID-19.[32]

Linear regression is commonly used in building science field studies to derive characteristics of building occupants. In a thermal comfort field study, building scientists usually ask occupants for their thermal sensation votes, which range from -3 (feeling cold) to 0 (neutral) to +3 (feeling hot), and measure occupants' surrounding temperature data. A neutral or comfort temperature can be calculated based on a linear regression between the thermal sensation vote and indoor temperature, setting the thermal sensation vote to zero. However, there has been a debate on the regression direction: regressing thermal sensation votes (y-axis) against indoor temperature (x-axis), or the opposite, regressing indoor temperature (y-axis) against thermal sensation votes (x-axis).[33]

Linear regression plays an important role in the subfield of artificial intelligence known as machine learning. The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties.[34]

Isaac Newton is credited with inventing "a certain technique known today as linear regression analysis" in his work on equinoxes in 1700, and wrote down the first of the two normal equations of the ordinary least squares method.[35][36] Least-squares linear regression, as a means of finding a good rough linear fit to a set of points, was performed by Legendre (1805) and Gauss (1809) for the prediction of planetary movement. Quetelet was responsible for making the procedure well-known and for using it extensively in the social sciences.[37]
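The equivalence between least squares and Gaussian maximum likelihood derived earlier in this section, and the normal equations mentioned in this historical note, can be checked numerically. The sketch below is an illustration under assumed conditions (synthetic data, a known fixed σ, NumPy and SciPy); none of the variable names come from the cited sources.

```python
# Sketch: the coefficient vector that solves the normal equations (least squares)
# also maximizes the Gaussian log-likelihood with fixed sigma. Synthetic data;
# all names are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])   # design matrix with intercept
beta_true = np.array([1.0, 2.0, -0.5, 0.3])
sigma = 0.5
y = X @ beta_true + rng.normal(scale=sigma, size=n)

# Least squares via the normal equations  X^T X beta = X^T y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Maximum likelihood: minimize the negative Gaussian log-likelihood in beta
def neg_log_likelihood(beta):
    r = y - X @ beta
    return n * np.log(np.sqrt(2.0 * np.pi) * sigma) + (r @ r) / (2.0 * sigma**2)

def neg_log_likelihood_grad(beta):
    return -(X.T @ (y - X @ beta)) / sigma**2

beta_mle = minimize(neg_log_likelihood, x0=np.zeros(p + 1),
                    jac=neg_log_likelihood_grad).x

print(np.allclose(beta_ols, beta_mle, atol=1e-5))             # True: identical estimates
```

Because the Gaussian negative log-likelihood is, up to an additive constant, the residual sum of squares divided by 2σ², both routes return the same coefficient vector, as derived earlier in this section.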
https://en.wikipedia.org/wiki/Linear_regression
Ininformation theory, thecross-entropybetween twoprobability distributionsp{\displaystyle p}andq{\displaystyle q}, over the same underlying set of events, measures the average number ofbitsneeded to identify an event drawn from the set when the coding scheme used for the set is optimized for an estimated probability distributionq{\displaystyle q}, rather than the true distributionp{\displaystyle p}. The cross-entropy of the distributionq{\displaystyle q}relative to a distributionp{\displaystyle p}over a given set is defined as follows: H(p,q)=−Ep⁡[log⁡q],{\displaystyle H(p,q)=-\operatorname {E} _{p}[\log q],} whereEp⁡[⋅]{\displaystyle \operatorname {E} _{p}[\cdot ]}is theexpected valueoperator with respect to the distributionp{\displaystyle p}. The definition may be formulated using theKullback–Leibler divergenceDKL(p∥q){\displaystyle D_{\mathrm {KL} }(p\parallel q)}, divergence ofp{\displaystyle p}fromq{\displaystyle q}(also known as therelative entropyofp{\displaystyle p}with respect toq{\displaystyle q}). H(p,q)=H(p)+DKL(p∥q),{\displaystyle H(p,q)=H(p)+D_{\mathrm {KL} }(p\parallel q),} whereH(p){\displaystyle H(p)}is theentropyofp{\displaystyle p}. Fordiscreteprobability distributionsp{\displaystyle p}andq{\displaystyle q}with the samesupportX{\displaystyle {\mathcal {X}}}, this means H(p,q)=−∑x∈Xp(x)log⁡q(x).{\displaystyle H(p,q)=-\sum _{x\in {\mathcal {X}}}p(x)\,\log q(x).}(Eq.1) The situation forcontinuousdistributions is analogous. We have to assume thatp{\displaystyle p}andq{\displaystyle q}areabsolutely continuouswith respect to some referencemeasurer{\displaystyle r}(usuallyr{\displaystyle r}is aLebesgue measureon aBorelσ-algebra). LetP{\displaystyle P}andQ{\displaystyle Q}be probability density functions ofp{\displaystyle p}andq{\displaystyle q}with respect tor{\displaystyle r}. Then −∫XP(x)log⁡Q(x)dx=Ep⁡[−log⁡Q],{\displaystyle -\int _{\mathcal {X}}P(x)\,\log Q(x)\,\mathrm {d} x=\operatorname {E} _{p}[-\log Q],} and therefore H(p,q)=−∫XP(x)log⁡Q(x)dx.{\displaystyle H(p,q)=-\int _{\mathcal {X}}P(x)\,\log Q(x)\,\mathrm {d} x.}(Eq.2) NB: The notationH(p,q){\displaystyle H(p,q)}is also used for a different concept, thejoint entropyofp{\displaystyle p}andq{\displaystyle q}. Ininformation theory, theKraft–McMillan theoremestablishes that any directly decodable coding scheme for coding a message to identify one valuexi{\displaystyle x_{i}}out of a set of possibilities{x1,…,xn}{\displaystyle \{x_{1},\ldots ,x_{n}\}}can be seen as representing an implicit probability distributionq(xi)=(12)ℓi{\displaystyle q(x_{i})=\left({\frac {1}{2}}\right)^{\ell _{i}}}over{x1,…,xn}{\displaystyle \{x_{1},\ldots ,x_{n}\}}, whereℓi{\displaystyle \ell _{i}}is the length of the code forxi{\displaystyle x_{i}}in bits. Therefore, cross-entropy can be interpreted as the expected message-length per datum when a wrong distributionq{\displaystyle q}is assumed while the data actually follows a distributionp{\displaystyle p}. 
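A minimal numerical illustration of the discrete definition (Eq. 1), and of the decomposition of cross-entropy into entropy plus KL divergence given above, is the following sketch; the two distributions are arbitrary examples chosen for this illustration, not taken from any source.

```python
# Sketch: discrete cross-entropy H(p, q) from Eq. 1 (in bits, using log base 2) and
# the identity H(p, q) = H(p) + D_KL(p || q). The two distributions are arbitrary
# examples over the same three-element support.
import numpy as np

p = np.array([0.5, 0.3, 0.2])   # true distribution
q = np.array([0.4, 0.4, 0.2])   # estimated (coding) distribution

cross_entropy = -np.sum(p * np.log2(q))       # H(p, q), about 1.52 bits
entropy = -np.sum(p * np.log2(p))             # H(p)
kl_divergence = np.sum(p * np.log2(p / q))    # D_KL(p || q)

print(cross_entropy)
print(np.isclose(cross_entropy, entropy + kl_divergence))    # True
```

For these example distributions the cross-entropy is about 1.52 bits, slightly above the entropy of p, with the gap equal to the KL divergence.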
This interpretation is why the expectation is taken over the true probability distribution p{\displaystyle p} rather than q{\displaystyle q}: the code lengths are optimized for q{\displaystyle q}, but messages are drawn from p{\displaystyle p}. Indeed, the expected message-length under the true distribution p{\displaystyle p} is

Ep⁡[ℓ]=−Ep⁡[ln⁡q(x)ln⁡(2)]=−Ep⁡[log2⁡q(x)]=−∑xip(xi)log2⁡q(xi)=−∑xp(x)log2⁡q(x)=H(p,q).{\displaystyle {\begin{aligned}\operatorname {E} _{p}[\ell ]&=-\operatorname {E} _{p}\left[{\frac {\ln {q(x)}}{\ln(2)}}\right]\\[1ex]&=-\operatorname {E} _{p}\left[\log _{2}{q(x)}\right]\\[1ex]&=-\sum _{x_{i}}p(x_{i})\,\log _{2}q(x_{i})\\[1ex]&=-\sum _{x}p(x)\,\log _{2}q(x)=H(p,q).\end{aligned}}}

There are many situations where cross-entropy needs to be measured but the true distribution p{\displaystyle p} is unknown. An example is language modeling, where a model is created based on a training set T{\displaystyle T}, and then its cross-entropy is measured on a test set to assess how accurate the model is in predicting the test data. In this example, p{\displaystyle p} is the true distribution of words in any corpus, and q{\displaystyle q} is the distribution of words as predicted by the model. Since the true distribution is unknown, cross-entropy cannot be directly calculated. In these cases, an estimate of cross-entropy is calculated using the following formula:

H(T,q)=−∑i=1N1Nlog2⁡q(xi){\displaystyle H(T,q)=-\sum _{i=1}^{N}{\frac {1}{N}}\log _{2}q(x_{i})}

where N{\displaystyle N} is the size of the test set, and q(x){\displaystyle q(x)} is the probability of event x{\displaystyle x} estimated from the training set. In other words, q(xi){\displaystyle q(x_{i})} is the probability estimate of the model that the i-th word of the text is xi{\displaystyle x_{i}}. The sum is averaged over the N{\displaystyle N} words of the test set. This is a Monte Carlo estimate of the true cross-entropy, where the test set is treated as samples from p(x){\displaystyle p(x)}.[citation needed]

Cross-entropy arises in classification problems when introducing a logarithm in the guise of the log-likelihood function.

This section is concerned with estimating the probability of different possible discrete outcomes. To this end, denote a parametrized family of distributions by qθ{\displaystyle q_{\theta }}, with θ{\displaystyle \theta } subject to the optimization effort.

Consider a given finite sequence of N{\displaystyle N} values xi{\displaystyle x_{i}} from a training set, obtained from conditionally independent sampling. The likelihood assigned to any considered parameter θ{\displaystyle \theta } of the model is then given by the product over all probabilities qθ(X=xi){\displaystyle q_{\theta }(X=x_{i})}. Repeated occurrences are possible, leading to equal factors in the product. If the count of occurrences of the value equal to xi{\displaystyle x_{i}} (for some index i{\displaystyle i}) is denoted by #xi{\displaystyle \#x_{i}}, then the frequency of that value equals #xi/N{\displaystyle \#x_{i}/N}. Denote the latter by p(X=xi){\displaystyle p(X=x_{i})}, as it may be understood as an empirical approximation to the probability distribution underlying the scenario. Further denote by PP:=eH(p,qθ){\displaystyle PP:={\mathrm {e} }^{H(p,q_{\theta })}} the perplexity, which can be seen to equal ∏xiqθ(X=xi)−p(X=xi){\textstyle \prod _{x_{i}}q_{\theta }(X=x_{i})^{-p(X=x_{i})}} by the calculation rules for the logarithm, where the product is taken over the values without double counting.
SoL(θ;x)=∏iqθ(X=xi)=∏xiqθ(X=xi)#xi=PP−N=e−N⋅H(p,qθ){\displaystyle {\mathcal {L}}(\theta ;{\mathbf {x} })=\prod _{i}q_{\theta }(X=x_{i})=\prod _{x_{i}}q_{\theta }(X=x_{i})^{\#x_{i}}=PP^{-N}={\mathrm {e} }^{-N\cdot H(p,q_{\theta })}}orlog⁡L(θ;x)=−N⋅H(p,qθ).{\displaystyle \log {\mathcal {L}}(\theta ;{\mathbf {x} })=-N\cdot H(p,q_{\theta }).}Since the logarithm is amonotonically increasing function, it does not affect extremization. So observe that thelikelihood maximizationamounts to minimization of the cross-entropy. Cross-entropy minimization is frequently used in optimization and rare-event probability estimation. When comparing a distributionq{\displaystyle q}against a fixed reference distributionp{\displaystyle p}, cross-entropy andKL divergenceare identical up to an additive constant (sincep{\displaystyle p}is fixed): According to theGibbs' inequality, both take on their minimal values whenp=q{\displaystyle p=q}, which is0{\displaystyle 0}for KL divergence, andH(p){\displaystyle \mathrm {H} (p)}for cross-entropy. In the engineering literature, the principle of minimizing KL divergence (Kullback's "Principle of Minimum Discrimination Information") is often called thePrinciple of Minimum Cross-Entropy(MCE), orMinxent. However, as discussed in the articleKullback–Leibler divergence, sometimes the distributionq{\displaystyle q}is the fixed prior reference distribution, and the distributionp{\displaystyle p}is optimized to be as close toq{\displaystyle q}as possible, subject to some constraint. In this case the two minimizations arenotequivalent. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by restating cross-entropy to beDKL(p∥q){\displaystyle D_{\mathrm {KL} }(p\parallel q)}, rather thanH(p,q){\displaystyle H(p,q)}. In fact, cross-entropy is another name forrelative entropy; see Cover and Thomas[1]and Good.[2]On the other hand,H(p,q){\displaystyle H(p,q)}does not agree with the literature and can be misleading. Cross-entropy can be used to define a loss function inmachine learningandoptimization. Mao, Mohri, and Zhong (2023) give an extensive analysis of the properties of the family of cross-entropy loss functions in machine learning, including theoretical learning guarantees and extensions toadversarial learning.[3]The true probabilitypi{\displaystyle p_{i}}is the true label, and the given distributionqi{\displaystyle q_{i}}is the predicted value of the current model. This is also known as thelog loss(orlogarithmic loss[4]orlogistic loss);[5]the terms "log loss" and "cross-entropy loss" are used interchangeably.[6] More specifically, consider abinary regressionmodel which can be used to classify observations into two possible classes (often simply labelled0{\displaystyle 0}and1{\displaystyle 1}). The output of the model for a given observation, given a vector of input featuresx{\displaystyle x}, can be interpreted as a probability, which serves as the basis for classifying the observation. Inlogistic regression, the probability is modeled using thelogistic functiong(z)=1/(1+e−z){\displaystyle g(z)=1/(1+e^{-z})}wherez{\displaystyle z}is some function of the input vectorx{\displaystyle x}, commonly just a linear function. 
The probability of the outputy=1{\displaystyle y=1}is given byqy=1=y^≡g(w⋅x)=11+e−w⋅x,{\displaystyle q_{y=1}={\hat {y}}\equiv g(\mathbf {w} \cdot \mathbf {x} )={\frac {1}{1+e^{-\mathbf {w} \cdot \mathbf {x} }}},}where the vector of weightsw{\displaystyle \mathbf {w} }is optimized through some appropriate algorithm such asgradient descent. Similarly, the complementary probability of finding the outputy=0{\displaystyle y=0}is simply given byqy=0=1−y^.{\displaystyle q_{y=0}=1-{\hat {y}}.} Having set up our notation,p∈{y,1−y}{\displaystyle p\in \{y,1-y\}}andq∈{y^,1−y^}{\displaystyle q\in \{{\hat {y}},1-{\hat {y}}\}}, we can use cross-entropy to get a measure of dissimilarity betweenp{\displaystyle p}andq{\displaystyle q}:H(p,q)=−∑ipilog⁡qi=−ylog⁡y^−(1−y)log⁡(1−y^).{\displaystyle {\begin{aligned}H(p,q)&=-\sum _{i}p_{i}\log q_{i}\\[1ex]&=-y\log {\hat {y}}-(1-y)\log(1-{\hat {y}}).\end{aligned}}} Logistic regression typically optimizes the log loss for all the observations on which it is trained, which is the same as optimizing the average cross-entropy in the sample. Other loss functions that penalize errors differently can be also used for training, resulting in models with different final test accuracy.[7]For example, suppose we haveN{\displaystyle N}samples with each sample indexed byn=1,…,N{\displaystyle n=1,\dots ,N}. Theaverageof the loss function is then given by: J(w)=1N∑n=1NH(pn,qn)=−1N∑n=1N[ynlog⁡y^n+(1−yn)log⁡(1−y^n)],{\displaystyle {\begin{aligned}J(\mathbf {w} )&={\frac {1}{N}}\sum _{n=1}^{N}H(p_{n},q_{n})\\&=-{\frac {1}{N}}\sum _{n=1}^{N}\ \left[y_{n}\log {\hat {y}}_{n}+(1-y_{n})\log(1-{\hat {y}}_{n})\right],\end{aligned}}} wherey^n≡g(w⋅xn)=1/(1+e−w⋅xn){\displaystyle {\hat {y}}_{n}\equiv g(\mathbf {w} \cdot \mathbf {x} _{n})=1/(1+e^{-\mathbf {w} \cdot \mathbf {x} _{n}})}, withg(z){\displaystyle g(z)}the logistic function as before. The logistic loss is sometimes called cross-entropy loss. It is also known as log loss.[duplication?](In this case, the binary label is often denoted by {−1,+1}.[8]) Remark:The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared-error loss forlinear regression. That is, define XT=(1x11…x1p1x21⋯x2p⋮⋮⋮1xn1⋯xnp)∈Rn×(p+1),{\displaystyle X^{\mathsf {T}}={\begin{pmatrix}1&x_{11}&\dots &x_{1p}\\1&x_{21}&\cdots &x_{2p}\\\vdots &\vdots &&\vdots \\1&x_{n1}&\cdots &x_{np}\\\end{pmatrix}}\in \mathbb {R} ^{n\times (p+1)},}yi^=f^(xi1,…,xip)=11+exp⁡(−β0−β1xi1−⋯−βpxip),{\displaystyle {\hat {y_{i}}}={\hat {f}}(x_{i1},\dots ,x_{ip})={\frac {1}{1+\exp(-\beta _{0}-\beta _{1}x_{i1}-\dots -\beta _{p}x_{ip})}},}L(β)=−∑i=1N[yilog⁡y^i+(1−yi)log⁡(1−y^i)].{\displaystyle L({\boldsymbol {\beta }})=-\sum _{i=1}^{N}\left[y_{i}\log {\hat {y}}_{i}+(1-y_{i})\log(1-{\hat {y}}_{i})\right].} Then we have the result ∂∂βL(β)=XT(Y^−Y).{\displaystyle {\frac {\partial }{\partial {\boldsymbol {\beta }}}}L({\boldsymbol {\beta }})=X^{T}({\hat {Y}}-Y).} The proof is as follows. 
For anyy^i{\displaystyle {\hat {y}}_{i}}, we have ∂∂β0ln⁡11+e−β0+k0=e−β0+k01+e−β0+k0,{\displaystyle {\frac {\partial }{\partial \beta _{0}}}\ln {\frac {1}{1+e^{-\beta _{0}+k_{0}}}}={\frac {e^{-\beta _{0}+k_{0}}}{1+e^{-\beta _{0}+k_{0}}}},}∂∂β0ln⁡(1−11+e−β0+k0)=−11+e−β0+k0,{\displaystyle {\frac {\partial }{\partial \beta _{0}}}\ln \left(1-{\frac {1}{1+e^{-\beta _{0}+k_{0}}}}\right)={\frac {-1}{1+e^{-\beta _{0}+k_{0}}}},}∂∂β0L(β)=−∑i=1N[yi⋅e−β0+k01+e−β0+k0−(1−yi)11+e−β0+k0]=−∑i=1N[yi−y^i]=∑i=1N(y^i−yi),{\displaystyle {\begin{aligned}{\frac {\partial }{\partial \beta _{0}}}L({\boldsymbol {\beta }})&=-\sum _{i=1}^{N}\left[{\frac {y_{i}\cdot e^{-\beta _{0}+k_{0}}}{1+e^{-\beta _{0}+k_{0}}}}-(1-y_{i}){\frac {1}{1+e^{-\beta _{0}+k_{0}}}}\right]\\&=-\sum _{i=1}^{N}\left[y_{i}-{\hat {y}}_{i}\right]=\sum _{i=1}^{N}({\hat {y}}_{i}-y_{i}),\end{aligned}}}∂∂β1ln⁡11+e−β1xi1+k1=xi1ek1eβ1xi1+ek1,{\displaystyle {\frac {\partial }{\partial \beta _{1}}}\ln {\frac {1}{1+e^{-\beta _{1}x_{i1}+k_{1}}}}={\frac {x_{i1}e^{k_{1}}}{e^{\beta _{1}x_{i1}}+e^{k_{1}}}},}∂∂β1ln⁡[1−11+e−β1xi1+k1]=−xi1eβ1xi1eβ1xi1+ek1,{\displaystyle {\frac {\partial }{\partial \beta _{1}}}\ln \left[1-{\frac {1}{1+e^{-\beta _{1}x_{i1}+k_{1}}}}\right]={\frac {-x_{i1}e^{\beta _{1}x_{i1}}}{e^{\beta _{1}x_{i1}}+e^{k_{1}}}},}∂∂β1L(β)=−∑i=1Nxi1(yi−y^i)=∑i=1Nxi1(y^i−yi).{\displaystyle {\frac {\partial }{\partial \beta _{1}}}L({\boldsymbol {\beta }})=-\sum _{i=1}^{N}x_{i1}(y_{i}-{\hat {y}}_{i})=\sum _{i=1}^{N}x_{i1}({\hat {y}}_{i}-y_{i}).} In a similar way, we eventually obtain the desired result. It may be beneficial to train an ensemble of models that have diversity, such that when they are combined, their predictive accuracy is augmented.[9][10]Assuming a simple ensemble ofK{\displaystyle K}classifiers is assembled via averaging the outputs, then the amended cross-entropy is given byek=H(p,qk)−λK∑j≠kH(qj,qk){\displaystyle e^{k}=H(p,q^{k})-{\frac {\lambda }{K}}\sum _{j\neq k}H(q^{j},q^{k})}whereek{\displaystyle e^{k}}is the cost function of thekth{\displaystyle k^{th}}classifier,qk{\displaystyle q^{k}}is the output probability of thekth{\displaystyle k^{th}}classifier,p{\displaystyle p}is the true probability to be estimated, andλ{\displaystyle \lambda }is a parameter between 0 and 1 that defines the 'diversity' that we would like to establish among the ensemble. Whenλ=0{\displaystyle \lambda =0}we want each classifier to do its best regardless of the ensemble and whenλ=1{\displaystyle \lambda =1}we would like the classifier to be as diverse as possible.
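The gradient identity for the logistic (cross-entropy) loss proved above can also be checked numerically. The sketch below assumes a small synthetic data set; the design matrix X has a leading column of ones, and all helper names are placeholders.

```python
# Sketch: binary cross-entropy (log) loss for logistic regression and a finite-
# difference check that its gradient equals X^T (y_hat - y), as derived above.
# X is the n-by-(p+1) design matrix with a leading column of ones; data and names
# are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([0.5, -1.0, 2.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(beta):                      # L(beta) = -sum[y log y_hat + (1-y) log(1-y_hat)]
    y_hat = sigmoid(X @ beta)
    return -np.sum(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat))

def grad(beta):                      # closed form: X^T (y_hat - y)
    return X.T @ (sigmoid(X @ beta) - y)

beta = np.array([0.1, 0.2, -0.3])    # an arbitrary evaluation point
eps = 1e-6
numeric = np.array([(loss(beta + eps * e) - loss(beta - eps * e)) / (2.0 * eps)
                    for e in np.eye(p + 1)])
print(np.allclose(numeric, grad(beta), atol=1e-4))            # True
```

The finite-difference gradient agrees with the closed form X^T (y_hat - y), which is the result established in the proof above.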
https://en.wikipedia.org/wiki/Cross_entropy