Inmathematicsandstatistics,deviationserves as a measure to quantify the disparity between anobserved valueof a variable and another designated value, frequently the mean of that variable. Deviations with respect to thesample meanand thepopulation mean(or "true value") are callederrorsandresiduals, respectively. Thesignof the deviation reports the direction of that difference: the deviation is positive when the observed value exceeds the reference value. Theabsolute valueof the deviation indicates the size or magnitude of the difference. In a givensample, there are as many deviations assample points.Summary statisticscan be derived from a set of deviations, such as thestandard deviationand themean absolute deviation, measures ofdispersion, and themean signed deviation, a measure ofbias.[1] The deviation of each data point is calculated by subtracting the mean of the data set from the individual data point. Mathematically, the deviationdof a data pointxin a data set with respect to the meanmis given by the difference: This calculation represents the "distance" of a data point from the mean and provides information about how much individual values vary from the average. Positive deviations indicate values above the mean, while negative deviations indicate values below the mean.[1] The sum of squared deviations is a key component in the calculation ofvariance, another measure of the spread or dispersion of a data set. Variance is calculated by averaging the squared deviations. Deviation is a fundamental concept in understanding the distribution and variability of data points in statistical analysis.[1] A deviation that is a difference between an observed value and thetrue valueof a quantity of interest (wheretrue valuedenotes the Expected Value, such as the population mean) is an error.[2] A deviation that is the difference between the observed value and an estimate of the true value (e.g. the sample mean) is aresidual. These concepts are applicable for data at theintervalandratiolevels of measurement.[3] Di=|xi−m(X)|,{\displaystyle D_{i}=|x_{i}-m(X)|,}where The average absolute deviation (AAD) in statistics is a measure of the dispersion or spread of a set of data points around a central value, usually the mean or median. It is calculated by taking the average of the absolute differences between each data point and the chosen central value. AAD provides a measure of the typical magnitude of deviations from the central value in a dataset, giving insights into the overall variability of the data.[5] Least absolute deviation (LAD) is a statistical method used inregression analysisto estimate the coefficients of a linear model. Unlike the more common least squares method, which minimizes the sum of squared vertical distances (residuals) between the observed and predicted values, the LAD method minimizes the sum of the absolute vertical distances. In the context of linear regression, if (x1,y1), (x2,y2), ... are the data points, andaandbare the coefficients to be estimated for the linear model y=b+(a∗x){\displaystyle y=b+(a*x)} the least absolute deviation estimates (aandb) are obtained by minimizing the sum. 
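As a concrete illustration of the definitions above, the following Python sketch (assuming NumPy is available; the data values are made up for illustration) computes the signed deviations from the mean and the average absolute deviation about both the mean and the median.

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # illustrative data

mean = x.mean()
deviations = x - mean             # signed deviations d_i = x_i - m
print(deviations)                 # positive above the mean, negative below

# Average absolute deviation (AAD) about a chosen central value
aad_mean = np.mean(np.abs(x - mean))             # about the mean
aad_median = np.mean(np.abs(x - np.median(x)))   # about the median
print(aad_mean, aad_median)
```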
The LAD method is less sensitive to outliers compared to the least squares method, making it a robust regression technique in the presence of skewed or heavy-tailed residual distributions.[6] For anunbiased estimator, the average of the signed deviations across the entire set of all observations from the unobserved population parameter value averages zero over an arbitrarily large number of samples. However, by construction the average of signed deviations of values from the sample mean value is always zero, though the average signed deviation from another measure of central tendency, such as the sample median, need not be zero. Mean Signed Deviation is a statistical measure used to assess the average deviation of a set of values from a central point, usually the mean. It is calculated by taking the arithmetic mean of the signed differences between each data point and the mean of the dataset. The term "signed" indicates that the deviations are considered with their respective signs, meaning whether they are above or below the mean. Positive deviations (above the mean) and negative deviations (below the mean) are included in the calculation. The mean signed deviation provides a measure of the average distance and direction of data points from the mean, offering insights into the overall trend and distribution of the data.[3] Statistics of the distribution of deviations are used as measures ofstatistical dispersion. Deviations, which measure the difference between observed values and some reference point, inherently carry units corresponding to the measurement scale used. For example, if lengths are being measured, deviations would be expressed in units like meters or feet. To make deviations unitless and facilitate comparisons across different datasets, one cannondimensionalize. One common method involves dividing deviations by a measure of scale(statistical dispersion), with the population standard deviation used for standardizing or the sample standard deviation forstudentizing(e.g.,Studentized residual). Another approach to nondimensionalization focuses on scaling by location rather than dispersion. The percent deviation offers an illustration of this method, calculated as the difference between the observed value and the accepted value, divided by the accepted value, and then multiplied by 100%. By scaling the deviation based on the accepted value, this technique allows for expressing deviations in percentage terms, providing a clear perspective on the relative difference between the observed and accepted values. Both methods of nondimensionalization serve the purpose of making deviations comparable and interpretable beyond the specific measurement units.[10] In one example, a series of measurements of the speed are taken of sound in a particular medium. The accepted or expected value for the speed of sound in this medium, based on theoretical calculations, is 343 meters per second. Now, during an experiment, multiple measurements are taken by different researchers. Researcher A measures the speed of sound as 340 meters per second, resulting in a deviation of −3 meters per second from the expected value. Researcher B, on the other hand, measures the speed as 345 meters per second, resulting in a deviation of +2 meters per second. In this scientific context, deviation helps quantify how individual measurements differ from the theoretically predicted or accepted value. 
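The claim that signed deviations from the sample mean always average to zero, while signed deviations from another centre such as the median need not, can be checked directly; this is a minimal sketch with arbitrary example data.

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0, 3.0, 10.0])

msd_about_mean = np.mean(x - x.mean())        # always 0 (up to rounding error)
msd_about_median = np.mean(x - np.median(x))  # generally nonzero

print(msd_about_mean)    # ~0.0 by construction
print(msd_about_median)  # 1.6 here: the data are skewed above the median
```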
It provides insights into the accuracy and precision of experimental results, allowing researchers to assess the reliability of their data and potentially identify factors contributing to discrepancies. In another example, suppose a chemical reaction is expected to yield 100 grams of a specific compound based on stoichiometry. However, in an actual laboratory experiment, several trials are conducted with different conditions. In Trial 1, the actual yield is measured to be 95 grams, resulting in a deviation of −5 grams from the expected yield. In Trial 2, the actual yield is measured to be 102 grams, resulting in a deviation of +2 grams. These deviations from the expected value provide valuable information about the efficiency and reproducibility of the chemical reaction under different conditions. Scientists can analyze these deviations to optimize reaction conditions, identify potential sources of error, and improve the overall yield and reliability of the process. The concept of deviation is crucial in assessing the accuracy of experimental results and making informed decisions to enhance the outcomes of scientific experiments.
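The percent-deviation calculation described earlier can be reproduced for the speed-of-sound example; a small sketch, assuming the 343 m/s reference and the two measurements quoted in the text.

```python
def percent_deviation(observed: float, accepted: float) -> float:
    """(observed - accepted) / accepted, expressed as a percentage."""
    return (observed - accepted) / accepted * 100.0

accepted = 343.0  # m/s, accepted speed of sound in the medium
for name, measured in [("Researcher A", 340.0), ("Researcher B", 345.0)]:
    print(name, measured - accepted, "m/s,",
          round(percent_deviation(measured, accepted), 2), "%")
# Researcher A: -3.0 m/s deviation, about -0.87 %
# Researcher B: +2.0 m/s deviation, about +0.58 %
```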
https://en.wikipedia.org/wiki/Absolute_deviation
Instatistics, themedian absolute deviation(MAD) is arobustmeasure of thevariabilityof aunivariatesample ofquantitative data. It can also refer to thepopulationparameterthat isestimatedby the MAD calculated from a sample.[1] For a univariate data setX1,X2, ...,Xn, the MAD is defined as themedianof theabsolute deviationsfrom the data's medianX~=median⁡(X){\displaystyle {\tilde {X}}=\operatorname {median} (X)}: that is, starting with theresiduals(deviations) from the data's median, the MAD is themedianof theirabsolute values. Consider the data (1, 1, 2,2, 4, 6, 9). It has a median value of 2. The absolute deviations about 2 are (1, 1, 0, 0, 2, 4, 7) which in turn have a median value of 1 (because the sorted absolute deviations are (0, 0, 1,1, 2, 4, 7)). So the median absolute deviation for this data is 1. The median absolute deviation is a measure ofstatistical dispersion. Moreover, the MAD is arobust statistic, being more resilient to outliers in a data set than thestandard deviation. In the standard deviation, the distances from themeanare squared, so large deviations are weighted more heavily, and thus outliers can heavily influence it. In the MAD, the deviations of a small number of outliers are irrelevant. Because the MAD is a more robust estimator of scale than the samplevarianceorstandard deviation, it works better with distributions without a mean or variance, such as theCauchy distribution. The MAD may be used similarly to how one would use the deviation for the average. In order to use the MAD as aconsistent estimatorfor theestimationof thestandard deviationσ{\displaystyle \sigma }, one takes wherek{\displaystyle k}is a constantscale factor, which depends on the distribution.[2] Fornormally distributeddatak{\displaystyle k}is taken to be i.e., thereciprocalof thequantile functionΦ−1{\displaystyle \Phi ^{-1}}(also known as the inverse of thecumulative distribution function) for thestandard normal distributionZ=(X−μ)/σ{\displaystyle Z=(X-\mu )/\sigma }.[3][4] The argument 3/4 is such that±MAD{\displaystyle \pm \operatorname {MAD} }covers 50% (between 1/4 and 3/4) of the standard normalcumulative distribution function, i.e. Therefore, we must have that Noticing that we have thatMAD⁡/σ=Φ−1(3/4)=0.67449{\displaystyle \operatorname {MAD} /\sigma =\Phi ^{-1}(3/4)=0.67449}, from which we obtain the scale factork=1/Φ−1(3/4)=1.4826{\displaystyle k=1/\Phi ^{-1}(3/4)=1.4826}. Another way of establishing the relationship is noting that MAD equals thehalf-normal distributionmedian: This form is used in, e.g., theprobable error. In the case ofcomplexvalues (X+iY), the relation of MAD to the standard deviation is unchanged for normally distributed data. Analogously to how themediangeneralizes to thegeometric median(GM) inmultivariate data, MAD can be generalized to themedian of distances to GM(MADGM) inndimensions. This is done by replacing the absolute differences in one dimension byEuclidean distancesof the data points to the geometric median inndimensions.[5]This gives the identical result as the univariate MAD in one dimension and generalizes to any number of dimensions. MADGM needs the geometric median to be found, which is done by an iterative process. The population MAD is defined analogously to the sample MAD, but is based on the complete population rather than on a sample. For a symmetric distribution with zero mean, the population MAD is the 75thpercentileof the distribution. Unlike thevariance, which may be infinite or undefined, the population MAD is always a finite number. 
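The worked example in this article, the data (1, 1, 2, 2, 4, 6, 9) with median 2 and MAD 1, and the normal-consistency scale factor k ≈ 1.4826 can both be reproduced with a few lines of NumPy; a sketch, where SciPy's norm.ppf is used only to recompute k = 1/Φ⁻¹(3/4).

```python
import numpy as np
from scipy.stats import norm

def mad(x):
    """Median absolute deviation from the median."""
    med = np.median(x)
    return np.median(np.abs(x - med))

data = np.array([1, 1, 2, 2, 4, 6, 9])
print(mad(data))              # 1.0, as in the worked example

k = 1.0 / norm.ppf(0.75)      # 1 / Phi^{-1}(3/4) ≈ 1.4826
print(k)
print(k * mad(data))          # MAD-based estimate of sigma for normal data
```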
For example, the standard Cauchy distribution has undefined variance, but its MAD is 1. The earliest known mention of the concept of the MAD occurred in 1816, in a paper by Carl Friedrich Gauss on the determination of the accuracy of numerical observations.[6][7]
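The robustness claim, that the MAD remains well behaved for distributions such as the Cauchy whose variance is undefined, can be illustrated by simulation; a sketch, with an arbitrary sample size and seed.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_cauchy(100_000)  # standard Cauchy: variance undefined, population MAD = 1

sample_mad = np.median(np.abs(x - np.median(x)))
print(sample_mad)   # close to 1, the population MAD of the standard Cauchy
print(np.var(x))    # erratic and huge: the sample variance does not settle
```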
https://en.wikipedia.org/wiki/Median_absolute_deviation
Inrobust statistics,robust regressionseeks to overcome some limitations of traditionalregression analysis. A regression analysis models the relationship between one or moreindependent variablesand adependent variable. Standard types of regression, such asordinary least squares, have favourable properties if their underlying assumptions are true, but can give misleading results otherwise (i.e. are notrobustto assumption violations). Robust regression methods are designed to limit the effect that violations of assumptions by the underlying data-generating process have on regression estimates. For example,least squaresestimates forregression modelsare highly sensitive tooutliers: an outlier with twice the error magnitude of a typical observation contributes four (two squared) times as much to the squared errorloss, and therefore has moreleverageover the regression estimates. TheHuber lossfunction is a robust alternative to standard square error loss that reduces outliers' contributions to the squared error loss, thereby limiting their impact on regression estimates. One instance in which robust estimation should be considered is when there is a strong suspicion ofheteroscedasticity. In thehomoscedasticmodel, it is assumed that the variance of the error term is constant for all values ofx. Heteroscedasticity allows the variance to be dependent onx, which is more accurate for many real scenarios. For example, the variance of expenditure is often larger for individuals with higher income than for individuals with lower incomes. Software packages usually default to a homoscedastic model, even though such a model may be less accurate than a heteroscedastic model. One simple approach (Tofallis, 2008) is to apply least squares to percentage errors, as this reduces the influence of the larger values of the dependent variable compared to ordinary least squares. Another common situation in which robust estimation is used occurs when the data contain outliers. In the presence of outliers that do not come from the same data-generating process as the rest of the data, least squares estimation isinefficientand can be biased. Because the least squares predictions are dragged towards the outliers, and because the variance of the estimates is artificially inflated, the result is that outliers can be masked. (In many situations, including some areas ofgeostatisticsand medical statistics, it is precisely the outliers that are of interest.) Although it is sometimes claimed that least squares (or classical statistical methods in general) are robust, they are only robust in the sense that thetype I error ratedoes not increase under violations of the model. In fact, the type I error rate tends to be lower than the nominal level when outliers are present, and there is often a dramatic increase in thetype II error rate. The reduction of the type I error rate has been labelled as theconservatismof classical methods. Despite their superior performance over least squares estimation in many situations, robust methods for regression are still not widely used. Several reasons may help explain their unpopularity (Hampel et al. 1986, 2005). One possible reason is that there are several competing methods[citation needed]and the field got off to many false starts. Also, robust estimates are much more computationally intensive than least squares estimation[citation needed]; in recent years, however, this objection has become less relevant, as computing power has increased greatly. 
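To make the comparison with squared-error loss concrete, here is a minimal sketch of a Huber-type loss as described above: quadratic for small residuals and linear beyond a threshold, so that a residual twice as large no longer contributes four times as much. The threshold value delta is a commonly used tuning constant, chosen here purely for illustration.

```python
import numpy as np

def huber_loss(r, delta=1.345):
    """Quadratic for |r| <= delta, linear beyond, so outliers get less weight."""
    r = np.asarray(r, dtype=float)
    small = np.abs(r) <= delta
    return np.where(small, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta))

residuals = np.array([0.5, 1.0, 2.0, 10.0])
print(0.5 * residuals**2)     # squared-error loss: the outlier dominates
print(huber_loss(residuals))  # Huber loss: the outlier's contribution grows only linearly
```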
Another reason may be that some popular statistical software packages failed to implement the methods (Stromberg, 2004). Perhaps the most important reason for the unpopularity of robust regression methods is that when the error variance is quite large or does not exist, for any given dataset, any estimate of the regression coefficients, robust or otherwise, will likely be practically worthless unless the sample is quite large. Although uptake of robust methods has been slow, modern mainstream statistics text books often include discussion of these methods (for example,the books by Seber and Lee, and by Faraway[vague]; for a good general description of how the various robust regression methods developed from one another seeAndersen's book[vague]). Also, modern statistical software packages such asR,SAS, Statsmodels,StataandS-PLUSinclude considerable functionality for robust estimation (see, for example,the books by Venables and Ripley, and by Maronna et al.[vague]). The simplest methods of estimating parameters in a regression model that are less sensitive to outliers than the least squares estimates, is to useleast absolute deviations. Even then, gross outliers can still have a considerable impact on the model, motivating research into even more robust approaches. In 1964, Huber introducedM-estimationfor regression. The M in M-estimation stands for "maximum likelihood type". The method is robust to outliers in the response variable, but turned out not to be resistant to outliers in theexplanatory variables(leveragepoints). In fact, when there are outliers in the explanatory variables, the method has no advantage over least squares. In the 1980s, several alternatives to M-estimation were proposed as attempts to overcome the lack of resistance. Seethe book byRousseeuwand Leroy[vague]for a very practical review.Least trimmed squares(LTS) is a viable alternative and is currently (2007) the preferred choice of Rousseeuw and Ryan (1997, 2008). TheTheil–Sen estimatorhas a lower breakdown point than LTS but is statistically efficient and popular. Another proposed solution was S-estimation. This method finds a line (plane or hyperplane) that minimizes a robust estimate of the scale (from which the method gets the S in its name) of the residuals. This method is highly resistant to leverage points and is robust to outliers in the response. However, this method was also found to be inefficient. MM-estimationattempts to retain the robustness and resistance of S-estimation, whilst gaining the efficiency of M-estimation. The method proceeds by finding a highly robust and resistant S-estimate that minimizes an M-estimate of the scale of the residuals (the first M in the method's name). The estimated scale is then held constant whilst a close by M-estimate of the parameters is located (the second M). Another approach to robust estimation of regression models is to replace the normal distribution with a heavy-tailed distribution. At-distributionwith 4–6 degrees of freedom has been reported to be a good choice in various practical situations. Bayesian robust regression, being fully parametric, relies heavily on such distributions. Under the assumption oft-distributed residuals, the distribution is a location-scale family. That is,x←(x−μ)/σ{\displaystyle x\leftarrow (x-\mu )/\sigma }. The degrees of freedom of thet-distribution is sometimes called thekurtosis parameter. Lange, Little and Taylor (1989) discuss this model in some depth from a non-Bayesian point of view. 
A Bayesian account appears in Gelman et al. (2003). An alternative parametric approach is to assume that the residuals follow amixtureof normal distributions (Daemi et al. 2019); in particular, acontaminated normal distributionin which the majority of observations are from a specified normal distribution, but a small proportion are from a normal distribution with much higher variance. That is, residuals have probability1−ε{\displaystyle 1-\varepsilon }of coming from a normal distribution with varianceσ2{\displaystyle \sigma ^{2}}, whereε{\displaystyle \varepsilon }is small, and probabilityε{\displaystyle \varepsilon }of coming from a normal distribution with variancecσ2{\displaystyle c\sigma ^{2}}for somec>1{\displaystyle c>1}: Typically,ε<0.1{\displaystyle \varepsilon <0.1}. This is sometimes called theε{\displaystyle \varepsilon }-contamination model. Parametric approaches have the advantage that likelihood theory provides an "off-the-shelf" approach to inference (although for mixture models such as theε{\displaystyle \varepsilon }-contamination model, the usual regularity conditions might not apply), and it is possible to build simulation models from the fit. However, such parametric models still assume that the underlying model is literally true. As such, they do not account for skewed residual distributions or finite observation precisions. Another robust method is the use ofunit weights(Wainer& Thissen, 1976), a method that can be applied when there are multiple predictors of a single outcome.Ernest Burgess(1928) used unit weights to predict success on parole. He scored 21 positive factors as present (e.g., "no prior arrest" = 1) or absent ("prior arrest" = 0), then summed to yield a predictor score, which was shown to be a useful predictor of parole success.Samuel S. Wilks(1938) showed that nearly all sets of regression weights sum to composites that are very highly correlated with one another, including unit weights, a result referred to asWilks' theorem(Ree, Carretta, & Earles, 1998).Robyn Dawes(1979) examined decision making in applied settings, showing that simple models with unit weights often outperformed human experts. Bobko, Roth, and Buster (2007) reviewed the literature on unit weights and concluded that decades of empirical studies show that unit weights perform similar to ordinary regression weights on cross validation. TheBUPAliver data have been studied by various authors, including Breiman (2001). The data can be found at theclassic data setspage, and there is some discussion in the article on theBox–Cox transformation. A plot of the logs of ALT versus the logs of γGT appears below. The two regression lines are those estimated by ordinary least squares (OLS) and by robust MM-estimation. The analysis was performed inRusing software made available by Venables and Ripley (2002). The two regression lines appear to be very similar (and this is not unusual in a data set of this size). However, the advantage of the robust approach comes to light when the estimates of residual scale are considered. For ordinary least squares, the estimate of scale is 0.420, compared to 0.373 for the robust method. Thus, the relative efficiency of ordinary least squares to MM-estimation in this example is 1.266. This inefficiency leads to loss of power in hypothesis tests and to unnecessarily wide confidence intervals on estimated parameters. 
Another consequence of the inefficiency of the ordinary least squares fit is that several outliers are masked because the estimate of residual scale is inflated; the scaled residuals are pushed closer to zero than when a more appropriate estimate of scale is used. The plots of the scaled residuals from the two models appear below. The variable on the x axis is just the observation number as it appeared in the data set. Rousseeuw and Leroy (1986) contains many such plots. The horizontal reference lines are at 2 and −2, so that any observed scaled residual beyond these boundaries can be considered to be an outlier. Clearly, the least squares method leads to many interesting observations being masked. Whilst in one or two dimensions outlier detection using classical methods can be performed manually, with large data sets and in high dimensions the problem of masking can make identification of many outliers impossible. Robust methods automatically detect these observations, offering a serious advantage over classical methods when outliers are present.
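An analysis in the spirit of the OLS-versus-robust comparison above can be sketched in Python with statsmodels, one of the packages named in this article. This is not the BUPA analysis itself: it uses synthetic data with a few injected outliers, and an M-estimator with the Huber norm rather than MM-estimation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 0.5, 100)
y[:5] += 8.0                 # a few gross outliers in the response

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()
rlm_fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()

print(ols_fit.params)   # intercept and slope pulled towards the outliers
print(rlm_fit.params)   # robust fit stays close to (2.0, 0.5)
print(rlm_fit.scale)    # robust estimate of residual scale
```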
https://en.wikipedia.org/wiki/Robust_regression
Inmathematics,Chebyshev distance(orTchebychev distance),maximum metric, orL∞metric[1]is ametricdefined on areal coordinate spacewhere thedistancebetween twopointsis the greatest of their differences along any coordinate dimension.[2]It is named afterPafnuty Chebyshev. It is also known aschessboard distance, since in the game ofchessthe minimum number of moves needed by akingto go from one square on achessboardto another equals the Chebyshev distance between the centers of the squares, if the squares have side length one, as represented in 2-D spatial coordinates with axes aligned to the edges of the board.[3]For example, the Chebyshev distance between f6 and e2 equals 4. The Chebyshev distance between two vectors or pointsxandy, with standard coordinatesxi{\displaystyle x_{i}}andyi{\displaystyle y_{i}}, respectively, is This equals the limit of theLpmetrics: hence it is also known as the L∞metric. Mathematically, the Chebyshev distance is ametricinduced by thesupremum normoruniform norm. It is an example of aninjective metric. In two dimensions, i.e.plane geometry, if the pointspandqhaveCartesian coordinates(x1,y1){\displaystyle (x_{1},y_{1})}and(x2,y2){\displaystyle (x_{2},y_{2})}, their Chebyshev distance is Under this metric, acircleofradiusr, which is the set of points with Chebyshev distancerfrom a center point, is a square whose sides have the length 2rand are parallel to the coordinate axes. On a chessboard, where one is using adiscreteChebyshev distance, rather than a continuous one, the circle of radiusris a square of side lengths 2r,measuring from the centers of squares, and thus each side contains 2r+1 squares; for example, the circle of radius 1 on a chess board is a 3×3 square. In one dimension, all Lpmetrics are equal – they are just the absolute value of the difference. The two dimensionalManhattan distancehas "circles" i.e.level setsin the form of squares, with sides of length√2r, oriented at an angle of π/4 (45°) to the coordinate axes, so the planar Chebyshev distance can be viewed as equivalent by rotation and scaling to (i.e. alinear transformationof) the planar Manhattan distance. However, this geometric equivalence between L1and L∞metrics does not generalize to higher dimensions. Asphereformed using the Chebyshev distance as a metric is acubewith each face perpendicular to one of the coordinate axes, but a sphere formed usingManhattan distanceis anoctahedron: these aredual polyhedra, but among cubes, only the square (and 1-dimensional line segment) areself-dualpolytopes. Nevertheless, it is true that in all finite-dimensional spaces the L1and L∞metrics are mathematically dual to each other. On a grid (such as a chessboard), the points at a Chebyshev distance of 1 of a point are theMoore neighborhoodof that point. The Chebyshev distance is the limiting case of the order-p{\displaystyle p}Minkowski distance, whenp{\displaystyle p}reachesinfinity. The Chebyshev distance is sometimes used inwarehouselogistics,[4]as it effectively measures the time anoverhead cranetakes to move an object (as the crane can move on the x and y axes at the same time but at the same speed along each axis). It is also widely used in electroniccomputer-aided manufacturing(CAM) applications, in particular, in optimization algorithms for these. For thesequence spaceof infinite-length sequences of real or complex numbers, the Chebyshev distance generalizes to theℓ∞{\displaystyle \ell ^{\infty }}-norm; this norm is sometimes called the Chebyshev norm. 
For the space of (real or complex-valued) functions, the Chebyshev distance generalizes to the uniform norm.
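A short sketch of the definition: the Chebyshev distance is the largest coordinate-wise absolute difference, and it is the limit of the Lp distances as p grows. The example points are arbitrary.

```python
import numpy as np

def chebyshev(x, y):
    """L-infinity distance: greatest absolute difference along any coordinate."""
    return np.max(np.abs(np.asarray(x) - np.asarray(y)))

p1, p2 = (1.0, 5.0, 2.0), (4.0, 1.0, 3.0)
print(chebyshev(p1, p2))    # 4.0

# The L_p distance approaches the Chebyshev distance as p increases
for p in (1, 2, 8, 64):
    d = np.sum(np.abs(np.asarray(p1) - np.asarray(p2)) ** p) ** (1 / p)
    print(p, d)
```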
https://en.wikipedia.org/wiki/Chebyshev_distance
Incoding theory, theLee distanceis adistancebetween twostringsx1x2…xn{\displaystyle x_{1}x_{2}\dots x_{n}}andy1y2…yn{\displaystyle y_{1}y_{2}\dots y_{n}}of equal lengthnover theq-aryalphabet{0, 1, …,q− 1} of sizeq≥ 2. It is ametric[1]defined as∑i=1nmin(|xi−yi|,q−|xi−yi|).{\displaystyle \sum _{i=1}^{n}\min(|x_{i}-y_{i}|,\,q-|x_{i}-y_{i}|).}Ifq= 2orq= 3the Lee distance coincides with theHamming distance, because both distances are 0 for two single equal symbols and 1 for two single non-equal symbols. Forq> 3this is not the case anymore; the Lee distance between single letters can become bigger than 1. However, there exists aGray isometry(weight-preserving bijection) betweenZ4{\displaystyle \mathbb {Z} _{4}}with the Lee weight andZ22{\displaystyle \mathbb {Z} _{2}^{2}}with theHamming weight.[2] Considering the alphabet as the additive groupZq, the Lee distance between two single lettersx{\displaystyle x}andy{\displaystyle y}is the length of shortest path in theCayley graph(which is circular since the group is cyclic) between them.[3]More generally, the Lee distance between two strings of lengthnis the length of the shortest path between them in the Cayley graph ofZqn{\displaystyle \mathbf {Z} _{q}^{n}}. This can also be thought of as thequotient metricresulting from reducingZnwith theManhattan distancemodulo thelatticeqZn. The analogous quotient metric on a quotient ofZnmodulo an arbitrary lattice is known as aMannheim metricorMannheim distance.[4][5] Themetric spaceinduced by the Lee distance is a discrete analog of theelliptic space.[1] Ifq= 6, then the Lee distance between 3140 and 2543 is1 + 2 + 0 + 3 = 6. The Lee distance is named after William Chi Yuan Lee (李始元). It is applied for phasemodulationwhile the Hamming distance is used in case of orthogonal modulation. TheBerlekamp codeis an example of code in the Lee metric.[6]Other significant examples are thePreparata codeandKerdock code; these codes are non-linear when considered over a field, but arelinear over a ring.[2]
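The definition and the worked example (q = 6, strings 3140 and 2543, distance 1 + 2 + 0 + 3 = 6) translate directly into code; a minimal sketch.

```python
def lee_distance(x, y, q):
    """Sum over positions of min(|xi - yi|, q - |xi - yi|) for q-ary strings."""
    assert len(x) == len(y)
    return sum(min(abs(a - b), q - abs(a - b)) for a, b in zip(x, y))

print(lee_distance([3, 1, 4, 0], [2, 5, 4, 3], q=6))   # 1 + 2 + 0 + 3 = 6

# For q = 2 or q = 3 the Lee distance coincides with the Hamming distance
print(lee_distance([0, 1, 1, 0], [1, 1, 0, 0], q=2))   # 2
```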
https://en.wikipedia.org/wiki/Lee_distance
Ingeometry, a setK⊂Rdis defined to beorthogonally convexif, for everylineLthat is parallel to one ofstandard basisvectors, theintersectionofKwithLis empty, a point, or a singlesegment. The term "orthogonal" refers to correspondingCartesianbasis and coordinates inEuclidean space, where different basis vectors areperpendicular, as well as corresponding lines. Unlike ordinaryconvex sets, an orthogonally convex set is not necessarilyconnected. Theorthogonal convex hullof a setK⊂Rdis the intersection of all connected orthogonally convex supersets ofK. These definitions are made by analogy with the classical theory of convexity, in whichKisconvexif, for every lineL, the intersection ofKwithLis empty, a point, or a single segment. Orthogonal convexity restricts the lines for which this property is required to hold, so every convex set is orthogonally convex but not vice versa. For the same reason, the orthogonal convex hull itself is a subset of theconvex hullof the same point set. A pointpbelongs to the orthogonal convex hull ofKif and only ifeach of the closed axis-alignedorthantshavingpas apex has a nonempty intersection withK. The orthogonal convex hull is also known as therectilinear convex hull, or, intwo dimensions, thex-yconvex hull. The figure shows a set of 16 points in the plane and the orthogonal convex hull of these points. As can be seen in the figure, the orthogonal convex hull is apolygonwith some degenerate edges connecting extreme vertices in each coordinate direction. For a discrete point set such as this one, all orthogonal convex hull edges are horizontal or vertical. In this example, the orthogonal convex hull is connected. In contrast with the classical convexity where there exist several equivalent definitions of the convex hull, definitions of the orthogonal convex hull made by analogy to those of the convex hull result in different geometric objects. So far, researchers have explored the following four definitions of the orthogonal convex hull of a setK⊂Rd{\displaystyle K\subset \mathbb {R} ^{d}}: In the figures on the right, the top figure shows a set of six points in the plane. The classical orthogonal convex hull of the point set is the point set itself. From top to bottom, the second to the fourth figures show respectively, the maximal, the connected, and the functional orthogonal convex hull of the point set. As can be seen, the orthogonal convex hull is apolygonwith some degenerate "edges", namely, orthogonally convex alternatingpolygonal chainswith interior angle90∘{\displaystyle 90^{\circ }}connecting extreme vertices. The classical orthogonal convex hull can be equivalently defined as the smallest orthogonally convex superset of a setK⊂R2{\displaystyle K\subset \mathbb {R} ^{2}}, by analogy to the following definition of the convex hull:the convex hull ofK{\displaystyle K}is the smallest convex superset ofK{\displaystyle K}. The classical orthogonal convex hull might be disconnected. If a point set has no pair of points on a line parallel to one of the standard basis vectors, the classical orthogonal convex hull of such point set is equal to the point set itself. A well known property of convex hulls is derived from theCarathéodory's theorem: A pointx∈Rd{\displaystyle x\in \mathbb {R} ^{d}}is in the interior of the convex hull of a point setK⊂Rd{\displaystyle K\subset \mathbb {R} ^{d}}if, and only if, it is already in the convex hull ofd+1{\displaystyle d+1}or fewer points ofK{\displaystyle K}. This property is also valid for classical orthogonal convex hulls. 
By definition, the connected orthogonal convex hull is always connected. However, it is not unique. Consider for example a pair of points in the plane not lying on an horizontal or a vertical line. The connected orthogonal convex hull of such points is an orthogonally convex alternating polygonal chain with interior angle90∘{\displaystyle 90^{\circ }}connecting the points. Any such polygonal chain has the same length, so there are infinitely many connected orthogonal convex hulls for the point set. For point sets in the plane, the connected orthogonal convex hull can be easily obtained from the maximal orthogonal convex hull. If the maximal orthogonal convex hull of a point setK⊂R2{\displaystyle K\subset \mathbb {R} ^{2}}is connected, then it is equal to the connected orthogonal convex hull ofK{\displaystyle K}. If this is not the case, then there are infinitely many connected orthogonal convex hulls forK{\displaystyle K}, and each one can be obtained by joining the connected components of the maximal orthogonal convex hull ofK{\displaystyle K}with orthogonally convex alternating polygonal chains with interior angle90∘{\displaystyle 90^{\circ }}. The functional orthogonal convex hull is not defined using properties of sets, but properties of functions about sets. Namely, it restricts the notion ofconvex functionas follows. A functionf:Rd→R{\displaystyle f:\mathbb {R} ^{d}\rightarrow \mathbb {R} }is called orthogonally convex if its restriction to each line parallel to a non-zero of the standard basis vectors is a convex function. Several authors have studied algorithms for constructing orthogonal convex hulls:Montuno & Fournier (1982);Nicholl et al. (1983);Ottmann, Soisalon-Soininen & Wood (1984);Karlsson & Overmars (1988). By the results of these authors, the orthogonal convex hull ofnpoints in the plane may be constructed in timeO(nlogn), or possibly faster using integer searching data structures for points withintegercoordinates. It is natural to generalize orthogonal convexity torestricted-orientation convexity, in which a setKis defined to be convex if all lines having one of afinite setof slopes must intersectKin connected subsets; see e.g.Rawlins (1987), Rawlins and Wood (1987,1988), or Fink and Wood (1996,1998). In addition, thetight spanof a finitemetric spaceis closely related to the orthogonal convex hull. If a finite point set in the plane has a connected orthogonal convex hull, that hull is the tight span for theManhattan distanceon the point set. However, orthogonal hulls and tight spans differ for point sets with disconnected orthogonal hulls, or in higher-dimensionalLpspaces. O'Rourke (1993)describes several other results about orthogonal convexity and orthogonalvisibility.
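The characterization quoted above, that a point p belongs to the classical orthogonal convex hull of K exactly when every closed axis-aligned quadrant with apex p meets K, gives a direct, if naive, membership test in the plane. This is a brute-force sketch for finite point sets, not one of the O(n log n) algorithms cited above.

```python
def in_orthogonal_hull_2d(p, points):
    """True if every closed axis-aligned quadrant with apex p contains a point of `points`."""
    px, py = p
    quadrants = [
        lambda x, y: x >= px and y >= py,   # upper right
        lambda x, y: x <= px and y >= py,   # upper left
        lambda x, y: x <= px and y <= py,   # lower left
        lambda x, y: x >= px and y <= py,   # lower right
    ]
    return all(any(q(x, y) for (x, y) in points) for q in quadrants)

pts = [(0, 0), (2, 1), (1, 3), (3, 3)]
print(in_orthogonal_hull_2d((1, 1), pts))   # True: every quadrant at (1, 1) meets the set
print(in_orthogonal_hull_2d((3, 0), pts))   # False: the quadrant x >= 3, y <= 0 is empty
```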
https://en.wikipedia.org/wiki/Orthogonal_convex_hull
In mathematical analysis, the staircase paradox is a pathological example showing that limits of curves do not necessarily preserve their length.[1] It consists of a sequence of "staircase" polygonal chains in a unit square, formed from horizontal and vertical line segments of decreasing length, so that these staircases converge uniformly to the diagonal of the square.[2] However, each staircase has length two, while the length of the diagonal is the square root of 2, so the sequence of staircase lengths does not converge to the length of the diagonal.[3][4] Martin Gardner calls this "an ancient geometrical paradox".[5] It shows that, for curves under uniform convergence, the length of a curve is not a continuous function of the curve.[6] For any smooth curve, polygonal chains with segment lengths decreasing to zero, connecting consecutive vertices along the curve, always converge to the arc length. The failure of the staircase curves to converge to the correct length can be explained by the fact that some of their vertices do not lie on the diagonal.[7] In higher dimensions, the Schwarz lantern provides an analogous example showing that polyhedral surfaces that converge pointwise to a curved surface do not necessarily converge to its area, even when the vertices all lie on the surface.[8] As well as highlighting the need for careful definitions of arc length in mathematics education,[9] the paradox has applications in digital geometry, where it motivates methods of estimating the perimeter of pixelated shapes that do not merely sum the lengths of boundaries between pixels.[10]
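The numbers behind the paradox are easy to check: each n-step staircase has total length 2 (it covers one unit horizontally and one vertically in total), while the distance of its vertices from the diagonal shrinks to zero. A small sketch, assuming a unit square with the diagonal from (0, 0) to (1, 1).

```python
import math

def staircase_length_and_gap(n):
    """Length of the n-step staircase and its maximum distance from the diagonal y = x."""
    step = 1.0 / n
    length = n * (step + step)      # n horizontal + n vertical segments, each 1/n long
    gap = step / math.sqrt(2)       # the step corners lie step/sqrt(2) away from y = x
    return length, gap

for n in (1, 2, 4, 8, 16):
    print(n, staircase_length_and_gap(n))   # length stays 2.0 while the gap shrinks

print(math.sqrt(2))   # length of the diagonal, which the staircase lengths never approach
```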
https://en.wikipedia.org/wiki/Staircase_paradox
In anyquantitative science, the termsrelative changeandrelative differenceare used to compare twoquantitieswhile taking into account the "sizes" of the things being compared, i.e. dividing by astandardorreferenceorstartingvalue.[1]The comparison is expressed as aratioand is aunitlessnumber. By multiplying these ratios by 100 they can be expressed aspercentagesso the termspercentage change,percent(age) difference, orrelative percentage differenceare also commonly used. The terms "change" and "difference" are used interchangeably.[2] Relative change is often used as a quantitative indicator ofquality assuranceandquality controlfor repeated measurements where the outcomes are expected to be the same. A special case of percent change (relative change expressed as a percentage) calledpercent erroroccurs in measuring situations where the reference value is the accepted or actual value (perhaps theoretically determined) and the value being compared to it is experimentally determined (by measurement). The relative change formula is not well-behaved under many conditions. Various alternative formulas, calledindicators of relative change, have been proposed in the literature. Several authors have foundlog changeandlog pointsto be satisfactory indicators, but these have not seen widespread use.[3] Given two numerical quantities,vrefandvwithvrefsomereference value,theiractual change,actual difference, orabsolute changeis The termabsolute differenceis sometimes also used even though the absolute value is not taken; the sign ofΔtypically is uniform, e.g. across an increasing data series. If the relationship of the value with respect to the reference value (that is, larger or smaller) does not matter in a particular application, the absolute value may be used in place of the actual change in the above formula to produce a value for the relative change which is always non-negative. The actual difference is not usually a good way to compare the numbers, in particular because it depends on the unit of measurement. For instance,1mis the same as100cm, but the absolute difference between2 and 1 mis 1 while the absolute difference between200 and 100 cmis 100, giving the impression of a larger difference.[4]But even with constant units, the relative change helps judge the importance of the respective change. For example, an increase in price of$100of a valuable is considered big if changing from$50 to 150but rather small when changing from$10,000 to 10,100. We can adjust the comparison to take into account the "size" of the quantities involved, by defining, for positive values ofvref: relative change(vref,v)=actual changereference value=Δvvref=vvref−1.{\displaystyle {\text{relative change}}(v_{\text{ref}},v)={\frac {\text{actual change}}{\text{reference value}}}={\frac {\Delta v}{v_{\text{ref}}}}={\frac {v}{v_{\text{ref}}}}-1.} The relative change is independent of the unit of measurement employed; for example, the relative change from2 to 1mis−50%, the same as for200 to 100 cm. The relative change is not defined if the reference value (vref) is zero, and gives negative values for positive increases ifvrefis negative, hence it is not usually defined for negative reference values either. For example, we might want to calculate the relative change of −10 to −6. The above formula gives⁠(−6) − (−10)/−10⁠=⁠4/−10⁠= −0.4, indicating a decrease, yet in fact the reading increased. Measures of relative change areunitlessnumbers expressed as afraction. 
Corresponding values of percent change would be obtained by multiplying these values by 100 (and appending the % sign to indicate that the value is a percentage). The domain restriction of relative change to positive numbers often poses a constraint. To avoid this problem it is common to take the absolute value, so that the relative change formula works correctly for all nonzero values ofvref: Relative change(vref,v)=v−vref|vref|.{\displaystyle {\text{Relative change}}(v_{\text{ref}},v)={\frac {v-v_{\text{ref}}}{|v_{\text{ref}}|}}.} This still does not solve the issue when the reference is zero. It is common to instead use an indicator of relative change, and take the absolute values of bothvandvreference{\displaystyle v_{\text{reference}}}. Then the only problematic case isv=vreference=0{\displaystyle v=v_{\text{reference}}=0}, which can usually be addressed by appropriately extending the indicator. For example, for arithmetic mean this formula may be used:[5]dr(x,y)=|x−y|(|x|+|y|)/2,dr(0,0)=0{\displaystyle d_{r}(x,y)={\frac {|x-y|}{(|x|+|y|)/2}},\ d_{r}(0,0)=0} Apercentage changeis a way to express a change in a variable. It represents the relative change between the old value and the new one.[6] For example, if a house is worth $100,000 today and the year after its value goes up to $110,000, the percentage change of its value can be expressed as110000−100000100000=0.1=10%.{\displaystyle {\frac {110000-100000}{100000}}=0.1=10\%.} It can then be said that the worth of the house went up by 10%. More generally, ifV1represents the old value andV2the new one,Percentage change=ΔVV1=V2−V1V1×100%.{\displaystyle {\text{Percentage change}}={\frac {\Delta V}{V_{1}}}={\frac {V_{2}-V_{1}}{V_{1}}}\times 100\%.} Some calculators directly support this via a%CHorΔ%function. When the variable in question is a percentage itself, it is better to talk about its change by usingpercentage points, to avoid confusion betweenrelative differenceandabsolute difference. Thepercent erroris a special case of the percentage form of relative change calculated from the absolute change between the experimental (measured) and theoretical (accepted) values, and dividing by the theoretical (accepted) value. %Error=|Experimental−Theoretical||Theoretical|×100.{\displaystyle \%{\text{ Error}}={\frac {|{\text{Experimental}}-{\text{Theoretical}}|}{|{\text{Theoretical}}|}}\times 100.} The terms "Experimental" and "Theoretical" used in the equation above are commonly replaced with similar terms. Other terms used forexperimentalcould be "measured," "calculated," or "actual" and another term used fortheoreticalcould be "accepted." Experimental value is what has been derived by use of calculation and/or measurement and is having its accuracy tested against the theoretical value, a value that is accepted by the scientific community or a value that could be seen as a goal for a successful result. Although it is common practice to use the absolute value version of relative change when discussing percent error, in some situations, it can be beneficial to remove the absolute values to provide more information about the result. Thus, if an experimental value is less than the theoretical value, the percent error will be negative. This negative result provides additional information about the experimental result. For example, experimentally calculating the speed of light and coming up with a negative percent error says that the experimental value is a velocity that is less than the speed of light. 
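The percentage-change and percent-error formulas above are straightforward to compute; a sketch that reproduces the house-price example (10%) and keeps the sign when requested, so that an experimental value below the accepted one yields a negative percent error. The 340 and 343 values are just illustrative measured and accepted figures.

```python
def percent_change(old: float, new: float) -> float:
    return (new - old) / old * 100.0

def percent_error(experimental: float, theoretical: float, signed: bool = False) -> float:
    diff = experimental - theoretical
    if not signed:
        diff = abs(diff)
    return diff / abs(theoretical) * 100.0

print(percent_change(100_000, 110_000))           # 10.0: the house example
print(percent_error(340.0, 343.0))                # ~0.87 (absolute form)
print(percent_error(340.0, 343.0, signed=True))   # ~-0.87: below the accepted value
```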
This is a big difference from getting a positive percent error, which means the experimental value is a velocity that is greater than the speed of light (violating thetheory of relativity) and is a newsworthy result. The percent error equation, when rewritten by removing the absolute values, becomes:%Error=Experimental−Theoretical|Theoretical|×100.{\displaystyle \%{\text{ Error}}={\frac {{\text{Experimental}}-{\text{Theoretical}}}{|{\text{Theoretical}}|}}\times 100.} It is important to note that the two values in thenumeratordo notcommute. Therefore, it is vital to preserve the order as above: subtract the theoretical value from the experimental value and not vice versa. Suppose that carMcosts $50,000 and carLcosts $40,000. We wish to compare these costs.[7]With respect to carL, the absolute difference is$10,000 = $50,000 − $40,000. That is, carMcosts $10,000 more than carL. The relative difference is,$10,000$40,000=0.25=25%,{\displaystyle {\frac {\$10,000}{\$40,000}}=0.25=25\%,}and we say that carMcosts 25%more thancarL. It is also common to express the comparison as a ratio, which in this example is,$50,000$40,000=1.25=125%,{\displaystyle {\frac {\$50,000}{\$40,000}}=1.25=125\%,}and we say that carMcosts 125%ofthe cost of carL. In this example the cost of carLwas considered the reference value, but we could have made the choice the other way and considered the cost of carMas the reference value. The absolute difference is now−$10,000 = $40,000 − $50,000since carLcosts $10,000 less than carM. The relative difference,−$10,000$50,000=−0.20=−20%{\displaystyle {\frac {-\$10,000}{\$50,000}}=-0.20=-20\%}is also negative since carLcosts 20%less thancarM. The ratio form of the comparison,$40,000$50,000=0.8=80%{\displaystyle {\frac {\$40,000}{\$50,000}}=0.8=80\%}says that carLcosts 80%ofwhat carMcosts. It is the use of the words "of" and "less/more than" that distinguish between ratios and relative differences.[8] If a bank were to raise the interest rate on a savings account from 3% to 4%, the statement that "the interest rate was increased by 1%" would be incorrect and misleading. The absolute change in this situation is 1 percentage point (4% − 3%), but the relative change in the interest rate is:4%−3%3%=0.333…=3313%.{\displaystyle {\frac {4\%-3\%}{3\%}}=0.333\ldots =33{\frac {1}{3}}\%.} In general, the term "percentage point(s)" indicates an absolute change or difference of percentages, while the percent sign or the word "percentage" refers to the relative change or difference.[9] The (classical) relative change above is but one of the possible measures/indicators of relative change. Anindicator of relative changefromx(initial or reference value) toy(new value)R(x,y){\displaystyle R(x,y)}is a binary real-valued function defined for the domain of interest which satisfies the following properties:[10] The normalization condition is motivated by the observation thatRscaled by a constantc>0{\displaystyle c>0}still satisfies the other conditions besides normalization. Furthermore, due to the independence condition, everyRcan be written as a single argument functionHof the ratioy/x{\displaystyle y/x}.[11]The normalization condition is then thatH′(1)=1{\displaystyle H'(1)=1}. This implies all indicators behave like the classical one wheny/x{\displaystyle y/x}is close to1. 
Usually the indicator of relative change is presented as the actual change Δ scaled by some function of the valuesxandy, sayf(x,y).[2] Relative change(x,y)=Actual changeΔf(x,y)=y−xf(x,y).{\displaystyle {\text{Relative change}}(x,y)={\frac {{\text{Actual change}}\,\Delta }{f(x,y)}}={\frac {y-x}{f(x,y)}}.} As with classical relative change, the general relative change is undefined iff(x,y)is zero. Various choices for the functionf(x,y)have been proposed:[12] As can be seen in the table, all but the first two indicators have, as denominator amean. One of the properties of a mean functionm(x,y){\displaystyle m(x,y)}is:[12]m(x,y)=m(y,x){\displaystyle m(x,y)=m(y,x)}, which means that all such indicators have a "symmetry" property that the classical relative change lacks:R(x,y)=−R(y,x){\displaystyle R(x,y)=-R(y,x)}. This agrees with intuition that a relative change fromxtoyshould have the same magnitude as a relative change in the opposite direction,ytox, just like the relationyx=1xy{\displaystyle {\frac {y}{x}}={\frac {1}{\frac {x}{y}}}}suggests. Maximum mean change has been recommended when comparingfloating pointvalues inprogramming languagesforequalitywith a certain tolerance.[13]Another application is in the computation ofapproximation errorswhen the relative error of a measurement is required.[citation needed]Minimum mean change has been recommended for use in econometrics.[14][15]Logarithmic change has been recommended as a general-purpose replacement for relative change and is discussed more below. Tenhunen defines a general relative difference function fromL(reference value) toK:[16]H(K,L)={∫1K/Ltc−1dtwhenK>L−∫K/L1tc−1dtwhenK<L{\displaystyle H(K,L)={\begin{cases}\int _{1}^{K/L}t^{c-1}dt&{\text{when }}K>L\\-\int _{K/L}^{1}t^{c-1}dt&{\text{when }}K<L\end{cases}}} which leads to H(K,L)={1c⋅((K/L)c−1)c≠0ln⁡(K/L)c=0,K>0,L>0{\displaystyle H(K,L)={\begin{cases}{\frac {1}{c}}\cdot ((K/L)^{c}-1)&c\neq 0\\\ln(K/L)&c=0,K>0,L>0\end{cases}}} In particular for the special casesc=±1{\displaystyle c=\pm 1}, H(K,L)={(K−L)/Kc=−1(K−L)/Lc=1{\displaystyle H(K,L)={\begin{cases}(K-L)/K&c=-1\\(K-L)/L&c=1\end{cases}}} Of these indicators of relative change, the most natural arguably is thenatural logarithm(ln) of the ratio of the two numbers (final and initial), calledlog change.[2]Indeed, when|V1−V0V0|≪1{\displaystyle \left|{\frac {V_{1}-V_{0}}{V_{0}}}\right|\ll 1}, the following approximation holds:ln⁡V1V0=∫V0V1dVV≈∫V0V1dVV0=V1−V0V0=classical relative change{\displaystyle \ln {\frac {V_{1}}{V_{0}}}=\int _{V_{0}}^{V_{1}}{\frac {{\mathrm {d} }V}{V}}\approx \int _{V_{0}}^{V_{1}}{\frac {{\mathrm {d} }V}{V_{0}}}={\frac {V_{1}-V_{0}}{V_{0}}}={\text{classical relative change}}} In the same way that relative change is scaled by 100 to get percentages,ln⁡V1V0{\displaystyle \ln {\frac {V_{1}}{V_{0}}}}can be scaled by 100 to get what is commonly calledlog points.[17]Log points are equivalent to the unitcentinepers(cNp) when measured for root-power quantities.[18][19]This quantity has also been referred to as a log percentage and denotedL%.[2]Since the derivative of the natural log at 1 is 1, log points are approximately equal to percent change for small differences – for example an increase of 1% equals an increase of 0.995 cNp, and a 5% increase gives a 4.88 cNp increase. This approximation property does not hold for other choices of logarithm base, which introduce a scaling factor due to the derivative not being 1. 
Log points can thus be used as a replacement for percent change.[20][18] Using log change has the advantages of additivity compared to relative change.[2][18]Specifically, when using log change, the total change after a series of changes equals the sum of the changes. With percent, summing the changes is only an approximation, with larger error for larger changes.[18]For example: Note that in the above table, sincerelative change 0(respectivelyrelative change 1) has the same numerical value aslog change 0(respectivelylog change 1), it does not correspond to the same variation. The conversion between relative and log changes may be computed aslog change=ln⁡(1+relative change){\displaystyle {\text{log change}}=\ln(1+{\text{relative change}})}. By additivity,ln⁡V1V0+ln⁡V0V1=0{\displaystyle \ln {\frac {V_{1}}{V_{0}}}+\ln {\frac {V_{0}}{V_{1}}}=0}, and therefore additivity implies a sort of symmetry property, namelyln⁡V1V0=−ln⁡V0V1{\displaystyle \ln {\frac {V_{1}}{V_{0}}}=-\ln {\frac {V_{0}}{V_{1}}}}and thus themagnitudeof a change expressed in log change is the same whetherV0orV1is chosen as the reference.[18]In contrast, for relative change,V1−V0V0≠−V0−V1V1{\displaystyle {\frac {V_{1}-V_{0}}{V_{0}}}\neq -{\frac {V_{0}-V_{1}}{V_{1}}}}, with the difference(V1−V0)2V0V1{\displaystyle {\frac {(V_{1}-V_{0})^{2}}{V_{0}V_{1}}}}becoming larger asV1orV0approaches 0 while the other remains fixed. For example: Here 0+means taking thelimit from abovetowards 0. The log change is the unique two-variable function that is additive, and whose linearization matches relative change. There is a family of additive difference functionsFλ(x,y){\displaystyle F_{\lambda }(x,y)}for anyλ∈R{\displaystyle \lambda \in \mathbb {R} }, such that absolute change isF0{\displaystyle F_{0}}and log change isF1{\displaystyle F_{1}}.[21]
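The additivity and symmetry properties of log change claimed above, and the conversion log change = ln(1 + relative change), can be verified numerically; a short sketch with arbitrary values.

```python
import math

def relative_change(v0, v1):
    return v1 / v0 - 1.0

def log_change(v0, v1):
    return math.log(v1 / v0)

v0, v1, v2 = 100.0, 110.0, 99.0

# Additivity: the log change over two steps is the sum of the per-step log changes
print(log_change(v0, v1) + log_change(v1, v2), log_change(v0, v2))

# Symmetry: reversing the direction only flips the sign
print(log_change(v0, v1), -log_change(v1, v0))

# Conversion between the two indicators
print(math.log(1.0 + relative_change(v0, v1)), log_change(v0, v1))

# Relative changes, by contrast, do not add up exactly
print(relative_change(v0, v1) + relative_change(v1, v2), relative_change(v0, v2))
```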
https://en.wikipedia.org/wiki/Relative_change_and_difference
Theroot mean square deviation(RMSD) orroot mean square error(RMSE) is either one of two closely related and frequently used measures of the differences between true or predicted values on the one hand and observed values or anestimatoron the other. Thedeviationis typically simply a differences ofscalars; it can also be generalized to thevector lengthsof adisplacement, as in thebioinformaticsconcept ofroot mean square deviation of atomic positions. The RMSD of asampleis thequadratic meanof the differences between the observed values and predicted ones. Thesedeviationsare calledresidualswhen the calculations are performed over the data sample that was used for estimation (and are therefore always in reference to an estimate) and are callederrors(or prediction errors) when computed out-of-sample (aka on the full set, referencing a true value rather than an estimate). The RMSD serves to aggregate the magnitudes of the errors in predictions for various data points into a single measure of predictive power. RMSD is a measure ofaccuracy, to compare forecasting errors of different models for a particular dataset and not between datasets, as it is scale-dependent.[1] RMSD is always non-negative, and a value of 0 (almost never achieved in practice) would indicate a perfect fit to the data. In general, a lower RMSD is better than a higher one. However, comparisons across different types of data would be invalid because the measure is dependent on the scale of the numbers used. RMSD is the square root of the average of squared errors. The effect of each error on RMSD is proportional to the size of the squared error; thus larger errors have a disproportionately large effect on RMSD. Consequently, RMSD is sensitive tooutliers.[2][3] The RMSD of anestimatorθ^{\displaystyle {\hat {\theta }}}with respect to an estimated parameterθ{\displaystyle \theta }is defined as the square root of themean squared error: For anunbiased estimator, the RMSD is the square root of thevariance, known as thestandard deviation. IfX1, ...,Xnis a sample of a population with true mean valuex0{\displaystyle x_{0}}, then the RMSD of the sample is The RMSD of predicted valuesy^t{\displaystyle {\hat {y}}_{t}}for timestof aregression'sdependent variableyt,{\displaystyle y_{t},}with variables observed overTtimes, is computed forTdifferent predictions as the square root of the mean of the squares of the deviations: (For regressions oncross-sectional data, the subscripttis replaced byiandTis replaced byn.) In some disciplines, the RMSD is used to compare differences between two things that may vary, neither of which is accepted as the "standard". For example, when measuring the average difference between two time seriesx1,t{\displaystyle x_{1,t}}andx2,t{\displaystyle x_{2,t}}, the formula becomes Normalizing the RMSD facilitates the comparison between datasets or models with different scales. Though there is no consistent means of normalization in the literature, common choices are the mean or the range (defined as the maximum value minus the minimum value) of the measured data:[4] This value is commonly referred to as thenormalized root mean square deviationorerror(NRMSD or NRMSE), and often expressed as a percentage, where lower values indicate less residual variance. This is also calledCoefficient of VariationorPercent RMS. In many cases, especially for smaller samples, the sample range is likely to be affected by the size of sample which would hamper comparisons. 
Another possible method to make the RMSD a more useful comparison measure is to divide the RMSD by the interquartile range (IQR). Dividing the RMSD by the IQR makes the normalized value less sensitive to extreme values in the target variable: NRMSD = RMSD/IQR, where IQR = Q3 − Q1, with Q1 = CDF−1(0.25) and Q3 = CDF−1(0.75), and CDF−1 is the quantile function. When normalizing by the mean value of the measurements, the term coefficient of variation of the RMSD, CV(RMSD), may be used to avoid ambiguity.[5] This is analogous to the coefficient of variation, with the RMSD taking the place of the standard deviation. Some researchers[who?] have recommended[where?] the use of the mean absolute error (MAE) instead of the root mean square deviation. MAE possesses advantages in interpretability over RMSD: it is the average of the absolute values of the errors, which is fundamentally easier to understand than the square root of the average of squared errors. Furthermore, each error influences MAE in direct proportion to the absolute value of the error, which is not the case for RMSD.[2]
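A compact sketch of the quantities defined in this article: RMSD, the range- and mean-normalized versions, and the MAE that some authors prefer. The observed and predicted arrays are arbitrary illustrative values.

```python
import numpy as np

observed  = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
predicted = np.array([2.5, 3.5, 6.0, 9.0,  9.0])

errors = predicted - observed
rmsd = np.sqrt(np.mean(errors ** 2))     # root mean square deviation
mae = np.mean(np.abs(errors))            # mean absolute error

nrmsd_range = rmsd / (observed.max() - observed.min())   # normalized by the range
cv_rmsd = rmsd / observed.mean()                         # normalized by the mean, CV(RMSD)

print(rmsd, mae, nrmsd_range, cv_rmsd)
```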
https://en.wikipedia.org/wiki/Root_mean_squared_error
In signal processing and related disciplines, aliasing is a phenomenon in which a signal reconstructed from samples contains low-frequency components that are not present in the original signal. It occurs when the original signal contains components at frequencies above a certain frequency called the Nyquist frequency, {\textstyle f_{s}/2}, where {\textstyle f_{s}} is the sampling frequency (undersampling). The reason is that typical reconstruction methods select the lowest-frequency interpretation of the samples, while many different frequency components, called aliases, produce exactly the same set of samples. The term also often refers to the distortion or artifact that results when a signal reconstructed from samples differs from the original continuous signal.

Aliasing can occur in signals sampled in time, for instance in digital audio or the stroboscopic effect, and is referred to as temporal aliasing. Aliasing in spatially sampled signals (e.g., moiré patterns in digital images) is referred to as spatial aliasing. Aliasing is generally avoided by applying low-pass filters or anti-aliasing filters (AAF) to the input signal before sampling and when converting a signal from a higher to a lower sampling rate. Suitable reconstruction filtering should then be used when restoring the sampled signal to the continuous domain or converting a signal from a lower to a higher sampling rate. For spatial anti-aliasing, the types of anti-aliasing include fast approximate anti-aliasing (FXAA), multisample anti-aliasing, and supersampling.

When a digital image is viewed, a reconstruction is performed by a display or printer device, and by the eyes and the brain. If the image data is processed incorrectly during sampling or reconstruction, the reconstructed image will differ from the original image, and an alias is seen. An example of spatial aliasing is the moiré pattern observed in a poorly pixelized image of a brick wall. Spatial anti-aliasing techniques avoid such poor pixelizations. Aliasing can be caused either by the sampling stage or the reconstruction stage; these may be distinguished by calling sampling aliasing prealiasing and reconstruction aliasing postaliasing.[1]

Temporal aliasing is a major concern in the sampling of video and audio signals. Music, for instance, may contain high-frequency components that are inaudible to humans. If a piece of music is sampled at 32,000 samples per second (Hz), any frequency components at or above 16,000 Hz (the Nyquist frequency for this sampling rate) will cause aliasing when the music is reproduced by a digital-to-analog converter (DAC). The high frequencies in the analog signal will appear as lower frequencies (wrong aliases) in the recorded digital sample and, hence, cannot be reproduced by the DAC. To prevent this, an anti-aliasing filter is used to remove components above the Nyquist frequency prior to sampling.

In video or cinematography, temporal aliasing results from the limited frame rate, and causes the wagon-wheel effect, whereby a spoked wheel appears to rotate too slowly or even backwards. Aliasing has changed its apparent frequency of rotation. A reversal of direction can be described as a negative frequency. Temporal aliasing frequencies in video and cinematography are determined by the frame rate of the camera, but the relative intensity of the aliased frequencies is determined by the shutter timing (exposure time) or the use of a temporal aliasing reduction filter during filming.[2]
Like the video camera, most sampling schemes are periodic; that is, they have a characteristicsampling frequencyin time or in space. Digital cameras provide a certain number of samples (pixels) per degree or per radian, or samples per mm in the focal plane of the camera. Audio signals are sampled (digitized) with ananalog-to-digital converter, which produces a constant number of samples per second. Some of the most dramatic and subtle examples of aliasing occur when the signal being sampled also has periodic content. Actual signals have a finite duration and their frequency content, as defined by theFourier transform, has no upper bound. Some amount of aliasing always occurs when such continuous functions over time are sampled. Functions whose frequency content is bounded (bandlimited) have an infinite duration in the time domain. If sampled at a high enough rate, determined by thebandwidth, the original function can, in theory, be perfectly reconstructed from the infinite set of samples. Sometimes aliasing is used intentionally on signals with no low-frequency content, calledbandpasssignals.Undersampling, which creates low-frequency aliases, can produce the same result, with less effort, as frequency-shifting the signal to lower frequencies before sampling at the lower rate. Some digital channelizers exploit aliasing in this way for computational efficiency.[3](SeeSampling (signal processing),Nyquist rate (relative to sampling), andFilter bank.) Sinusoidsare an important type of periodic function, because realistic signals are often modeled as the summation of many sinusoids of different frequencies and different amplitudes (for example, with aFourier seriesortransform). Understanding what aliasing does to the individual sinusoids is useful in understanding what happens to their sum. When sampling a function at frequencyfs(i.e., the sampling interval is1/fs), the following functions of time(t)yield identical sets of samples if the sampling starts fromt=0{\textstyle t=0}such thatt=1fsn{\displaystyle t={\frac {1}{f_{s}}}n}wheren=0,1,2,3{\textstyle n=0,1,2,3}, and so on: {sin⁡(2π(f+Nfs)t+φ),N=0,±1,±2,±3,…}.{\displaystyle \{\sin(2\pi (f+Nf_{s})t+\varphi ),N=0,\pm 1,\pm 2,\pm 3,\ldots \}.} Afrequency spectrumof the samples produces equally strong responses at all those frequencies. Without collateral information, the frequency of the original function is ambiguous. So, the functions and their frequencies are said to bealiasesof each other. Noting the sine functions as odd functions: thus, we can write all the alias frequencies as positive values:fN(f)≜|f+Nfs|{\displaystyle f_{_{N}}(f)\triangleq \left|f+Nf_{\rm {s}}\right|}. For example, a snapshot of the lower right frame of Fig.2 shows a component at the actual frequencyf{\displaystyle f}and another component at aliasf−1(f){\displaystyle f_{_{-1}}(f)}. Asf{\displaystyle f}increases during the animation,f−1(f){\displaystyle f_{_{-1}}(f)}decreases. The point at which they are equal(f=fs/2){\displaystyle (f=f_{s}/2)}is an axis of symmetry called thefolding frequency, also known asNyquist frequency. Aliasing matters when one attempts to reconstruct the original waveform from its samples. The most common reconstruction technique produces the smallest of thefN(f){\displaystyle f_{_{N}}(f)}frequencies. So, it is usually important thatf0(f){\displaystyle f_{0}(f)}be the unique minimum. A necessary and sufficient condition for that isfs/2>|f|,{\displaystyle f_{s}/2>|f|,}called theNyquist condition. 
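The family of alias frequencies described above can be checked numerically. The sketch below is a minimal Python/NumPy illustration (the chosen values fs = 100 Hz and f = 0.6·fs are arbitrary assumptions, not from the original text): it samples a sinusoid at frequency f and its N = −1 alias at f − fs, shows that the two yield essentially identical sample sets, and lists the positive alias frequencies |f + N·fs|, of which the smallest lies below the folding frequency fs/2.

```python
import numpy as np

fs = 100.0                 # sampling frequency (Hz), arbitrary example
f  = 0.6 * fs              # signal frequency above the folding frequency fs/2
n  = np.arange(32)         # sample indices
t  = n / fs                # sampling instants t = n / fs

x_original = np.sin(2 * np.pi * f * t)          # component at f = 60 Hz
x_alias    = np.sin(2 * np.pi * (f - fs) * t)   # N = -1 alias at f - fs = -40 Hz

# The two continuous sinusoids differ, but their samples coincide (up to round-off).
print("max sample difference:", np.max(np.abs(x_original - x_alias)))

# All alias frequencies written as positive values f_N(f) = |f + N*fs|.
aliases = sorted(abs(f + N * fs) for N in range(-3, 4))
print("alias frequencies (Hz):", aliases)
print("lowest alias, below fs/2:", aliases[0], "Hz")   # 40 Hz, the 'folded' frequency
```

Since f = 60 Hz exceeds fs/2 = 50 Hz, the usual reconstruction would pick the 40 Hz alias rather than the true 60 Hz component, which is exactly the ambiguity the Nyquist condition rules out.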
The lower left frame of Fig.2 depicts the typical reconstruction result of the available samples. Untilf{\displaystyle f}exceeds the Nyquist frequency, the reconstruction matches the actual waveform (upper left frame). After that, it is the low frequency alias of the upper frame. The figures below offer additional depictions of aliasing, due to sampling. A graph of amplitude vs frequency (not time) for a single sinusoid at frequency0.6fsand some of its aliases at0.4fs,1.4fs,and1.6fswould look like the 4 black dots in Fig.3. The red lines depict the paths (loci) of the 4 dots if we were to adjust the frequency and amplitude of the sinusoid along the solid red segment (betweenfs/2andfs). No matter what function we choose to change the amplitude vs frequency, the graph will exhibit symmetry between 0 andfs.Folding is often observed in practice when viewing thefrequency spectrumof real-valued samples, such as Fig.4. Complex sinusoidsare waveforms whose samples arecomplex numbers(z=Aeiθ=A(cos⁡θ+isin⁡θ){\textstyle z=Ae^{i\theta }=A(\cos \theta +i\sin \theta )}), and the concept ofnegative frequencyis necessary to distinguish them. In that case, the frequencies of the aliases are given by just:fN(f) =f+N fs.(In real sinusoids, as shown in the above, all alias frequencies can be written as positive frequenciesfN(f)≜|f+Nfs|{\displaystyle f_{_{N}}(f)\triangleq \left|f+Nf_{\rm {s}}\right|}because of sine functions as odd functions.) Therefore, asfincreases from0tofs,f−1(f)also increases (from–fsto 0). Consequently, complex sinusoids do not exhibitfolding. When the conditionfs/2 >fis met for the highest frequency component of the original signal, then it is met for all the frequency components, a condition called theNyquist criterion. That is typically approximated by filtering the original signal to attenuate high frequency components before it is sampled. These attenuated high frequency components still generate low-frequency aliases, but typically at low enough amplitudes that they do not cause problems. A filter chosen in anticipation of a certain sample frequency is called ananti-aliasing filter. The filtered signal can subsequently be reconstructed, by interpolation algorithms, without significant additional distortion. Most sampled signals are not simply stored and reconstructed. But the fidelity of a theoretical reconstruction (via theWhittaker–Shannon interpolation formula) is a customary measure of the effectiveness of sampling. Historically the termaliasingevolved from radio engineering because of the action ofsuperheterodyne receivers. When the receiver shifts multiple signals down to lower frequencies, fromRFtoIFbyheterodyning, an unwanted signal, from an RF frequency equally far from thelocal oscillator(LO) frequency as the desired signal, but on the wrong side of the LO, can end up at the same IF frequency as the wanted one. If it is strong enough it can interfere with reception of the desired signal. This unwanted signal is known as animageoraliasof the desired signal. The first written use of the terms "alias" and "aliasing" in signal processing appears to be in a 1949 unpublished Bell Laboratories technical memorandum[4]byJohn TukeyandRichard Hamming. That paper includes an example of frequency aliasing dating back to 1922. The firstpublisheduse of the term "aliasing" in this context is due toBlackmanand Tukey in 1958.[5]In their preface to the Dover reprint[6]of this paper, they point out that the idea of aliasing had been illustrated graphically by Stumpf[7]ten years prior. 
The 1949 Bell technical report refers to aliasing as though it is a well-known concept, but does not offer a source for the term.Gwilym JenkinsandMaurice Priestleycredit Tukey with introducing it in this context,[8]though ananalogous concept of aliasinghad been introduced a few years earlier[9]infractional factorial designs. While Tukey did significant work in factorial experiments[10]and was certainly aware of aliasing in fractional designs,[11]it cannot be determined whether his use of "aliasing" in signal processing was consciously inspired by such designs. Aliasing occurs whenever the use of discrete elements to capture or produce a continuous signal causes frequency ambiguity. Spatial aliasing, particular of angular frequency, can occur when reproducing alight fieldor sound field with discrete elements, as in3D displaysorwave field synthesisof sound.[12] This aliasing is visible in images such as posters withlenticular printing: if they have low angular resolution, then as one moves past them, say from left-to-right, the 2D image does not initially change (so it appears to move left), then as one moves to the next angular image, the image suddenly changes (so it jumps right) – and the frequency and amplitude of this side-to-side movement corresponds to the angular resolution of the image (and, for frequency, the speed of the viewer's lateral movement), which is the angular aliasing of the 4D light field. The lack ofparallaxon viewer movement in 2D images and in3-D filmproduced bystereoscopicglasses (in 3D films the effect is called "yawing", as the image appears to rotate on its axis) can similarly be seen as loss of angular resolution, all angular frequencies being aliased to 0 (constant). The qualitative effects of aliasing can be heard in the following audio demonstration. Sixsawtooth wavesare played in succession, with the first two sawtooths having afundamental frequencyof 440 Hz (A4), the second two having fundamental frequency of 880 Hz (A5), and the final two at 1760 Hz (A6). The sawtooths alternate betweenbandlimited(non-aliased) sawtooths and aliased sawtooths and the sampling rate is 22050 Hz. The bandlimited sawtooths are synthesized from the sawtooth waveform'sFourier seriessuch that no harmonics above theNyquist frequency(11025 Hz = 22050 Hz / 2 here) are present. The aliasing distortion in the lower frequencies is increasingly obvious with higher fundamental frequencies, and while the bandlimited sawtooth is still clear at 1760 Hz, the aliased sawtooth is degraded and harsh with a buzzing audible at frequencies lower than the fundamental. A form of spatial aliasing can also occur in antenna arrays or microphone arrays used to estimate the direction of arrival of a wave signal, as in geophysical exploration by seismic waves. Waves must be sampled more densely than two points perwavelength, or the wave arrival direction becomes ambiguous.[13]
https://en.wikipedia.org/wiki/Aliasing
In probability theory, Boole's inequality, also known as the union bound, says that for any finite or countable set of events, the probability that at least one of the events happens is no greater than the sum of the probabilities of the individual events. This inequality provides an upper bound on the probability of occurrence of at least one of a countable number of events in terms of the individual probabilities of the events. Boole's inequality is named for its discoverer, George Boole.[1]

Formally, for a countable set of events A1, A2, A3, ..., we have {\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{\infty }A_{i}\right)\leq \sum _{i=1}^{\infty }\mathbb {P} (A_{i}).}

In measure-theoretic terms, Boole's inequality follows from the fact that a measure (and certainly any probability measure) is σ-sub-additive. Thus Boole's inequality holds not only for probability measures {\displaystyle \mathbb {P} }, but more generally when {\displaystyle \mathbb {P} } is replaced by any finite measure.

Boole's inequality may be proved for finite collections of n events using the method of induction. For the n = 1 case, it follows that {\displaystyle \mathbb {P} (A_{1})\leq \mathbb {P} (A_{1}).} For the case n, suppose that {\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{n}A_{i}\right)\leq \sum _{i=1}^{n}\mathbb {P} (A_{i}).} Since {\displaystyle \mathbb {P} (A\cup B)=\mathbb {P} (A)+\mathbb {P} (B)-\mathbb {P} (A\cap B),} and because the union operation is associative, we have {\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{n+1}A_{i}\right)=\mathbb {P} \left(\bigcup _{i=1}^{n}A_{i}\right)+\mathbb {P} (A_{n+1})-\mathbb {P} \left(\bigcup _{i=1}^{n}A_{i}\cap A_{n+1}\right).} Since, by the first axiom of probability, we have {\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{n}A_{i}\cap A_{n+1}\right)\geq 0,} and therefore {\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{n+1}A_{i}\right)\leq \mathbb {P} \left(\bigcup _{i=1}^{n}A_{i}\right)+\mathbb {P} (A_{n+1})\leq \sum _{i=1}^{n+1}\mathbb {P} (A_{i}).}

Let events A1, A2, A3, … in our probability space be given. The countable additivity of the measure {\displaystyle \mathbb {P} } states that if B1, B2, B3, … are pairwise disjoint events, then {\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{\infty }B_{i}\right)=\sum _{i=1}^{\infty }\mathbb {P} (B_{i}).} Set {\displaystyle B_{1}=A_{1}\quad {\text{and}}\quad B_{i}=A_{i}-\bigcup _{j=1}^{i-1}A_{j}{\text{ for }}i>1.} Then B1, B2, B3, … are pairwise disjoint. We claim that {\displaystyle \bigcup _{i=1}^{\infty }B_{i}=\bigcup _{i=1}^{\infty }A_{i}.} One inclusion is clear. Indeed, since {\displaystyle B_{i}\subset A_{i}} for all i, we have {\displaystyle \bigcup _{i=1}^{\infty }B_{i}\subset \bigcup _{i=1}^{\infty }A_{i}}. For the other inclusion, let {\displaystyle x\in \bigcup _{i=1}^{\infty }A_{i}} be given. Write k for the minimum positive integer such that {\displaystyle x\in A_{k}}. Then {\displaystyle x\in A_{k}-\bigcup _{j=1}^{k-1}A_{j}=B_{k}}. Thus {\displaystyle x\in \bigcup _{i=1}^{\infty }B_{i}}. Therefore {\displaystyle \bigcup _{i=1}^{\infty }A_{i}\subset \bigcup _{i=1}^{\infty }B_{i}}. Therefore {\displaystyle \mathbb {P} \left(\bigcup _{i=1}^{\infty }A_{i}\right)=\mathbb {P} \left(\bigcup _{i=1}^{\infty }B_{i}\right)=\sum _{i=1}^{\infty }\mathbb {P} (B_{i})\leq \sum _{i=1}^{\infty }\mathbb {P} (A_{i}),} where the last inequality holds because {\displaystyle B_{i}\subset A_{i}} implies that {\displaystyle \mathbb {P} (B_{i})\leq \mathbb {P} (A_{i})} for all i.

Boole's inequality for a finite number of events may be generalized to certain upper and lower bounds on the probability of finite unions of events.[2] These bounds are known as Bonferroni inequalities, after Carlo Emilio Bonferroni; see Bonferroni (1936). Let {\displaystyle S_{k}:=\sum _{1\leq i_{1}<\cdots <i_{k}\leq n}\mathbb {P} (A_{i_{1}}\cap \cdots \cap A_{i_{k}})} for all integers k in {1, ..., n}. Then, when {\displaystyle K\leq n} is odd, {\displaystyle \sum _{k=1}^{K}(-1)^{k-1}S_{k}\geq \mathbb {P} \left(\bigcup _{i=1}^{n}A_{i}\right)} holds, and when {\displaystyle K\leq n} is even, {\displaystyle \sum _{k=1}^{K}(-1)^{k-1}S_{k}\leq \mathbb {P} \left(\bigcup _{i=1}^{n}A_{i}\right)} holds. The inequalities follow from the inclusion–exclusion principle, and Boole's inequality is the special case of K = 1. Since the proof of the inclusion–exclusion principle requires only the finite additivity (and nonnegativity) of {\displaystyle \mathbb {P} }, the Bonferroni inequalities hold more generally when {\displaystyle \mathbb {P} } is replaced by any finite content, in the sense of measure theory.

Let {\displaystyle E=\bigcap _{i=1}^{n}B_{i}}, where {\displaystyle B_{i}\in \{A_{i},A_{i}^{c}\}} for each {\displaystyle i=1,\dots ,n}.
The events E of this form partition the sample space, and for each such E and every i, E is either contained in {\displaystyle A_{i}} or disjoint from it. If {\displaystyle E=\bigcap _{i=1}^{n}A_{i}^{c}}, then E contributes 0 to both sides of the inequality.

Otherwise, assume E is contained in exactly L of the {\displaystyle A_{i}}. Then E contributes exactly {\displaystyle \mathbb {P} (E)} to the right side of the inequality, while it contributes {\displaystyle \mathbb {P} (E)\sum _{k=1}^{K}(-1)^{k-1}{\binom {L}{k}}} to the left side of the inequality. However, by Pascal's rule, this is equal to {\displaystyle \mathbb {P} (E)\sum _{k=1}^{K}(-1)^{k-1}\left[{\binom {L-1}{k-1}}+{\binom {L-1}{k}}\right],} which telescopes to {\displaystyle \mathbb {P} (E)\left(1+(-1)^{K-1}{\binom {L-1}{K}}\right)\geq \mathbb {P} (E)\quad {\text{for odd }}K.} Thus, the inequality holds for all events E, and so by summing over E, we obtain the desired inequality: {\displaystyle \sum _{k=1}^{K}(-1)^{k-1}S_{k}\geq \mathbb {P} \left(\bigcup _{i=1}^{n}A_{i}\right).} The proof for even K is nearly identical.[3]

Suppose that you are estimating five parameters based on a random sample, and you can control each parameter separately. If you want your estimates of all five parameters to be good with probability at least 95%, what should you do for each parameter? Making each parameter's estimate good with probability 95% is not enough, because "all are good" is a subset of each event "estimate i is good". We can use Boole's inequality to solve this problem. By considering the complement of the event "all five are good", the requirement becomes {\displaystyle \mathbb {P} ({\text{at least one estimate is bad}})\leq \sum _{i=1}^{5}\mathbb {P} ({\text{estimate }}i{\text{ is bad}})\leq 0.05.} One way to satisfy this is to make each term equal to 0.05/5 = 0.01, that is, 1%. In other words, each estimate must be good with probability 99% (for example, by constructing a 99% confidence interval) in order to ensure that all five estimates are good simultaneously with probability at least 95%. This is called the Bonferroni method of simultaneous inference.

This article incorporates material from Bonferroni inequalities on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
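The five-parameter argument can be checked with a rough Monte Carlo sketch (plain Python/NumPy; the simulation setup, with five independent normal means and known unit variance, is an assumed toy example, not from the original text). It estimates the probability that all five 99% confidence intervals simultaneously cover their true parameters and compares it with the union-bound guarantee of at least 95%.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, n_obs, n_sims = 5, 50, 20_000
z99 = 2.5758  # two-sided 99% standard-normal quantile

all_covered = 0
for _ in range(n_sims):
    # Five independent parameters, each estimated from its own sample of size n_obs.
    data = rng.normal(loc=0.0, scale=1.0, size=(n_params, n_obs))
    means = data.mean(axis=1)
    half_width = z99 / np.sqrt(n_obs)          # 99% CI half-width with known sigma = 1
    covered = np.abs(means - 0.0) <= half_width  # does each CI contain the true mean 0?
    all_covered += covered.all()

print("estimated P(all five 99% CIs cover):", all_covered / n_sims)
# With independent intervals this is about 0.99**5 ≈ 0.951, consistent with
# the union-bound guarantee of at least 1 - 5*0.01 = 0.95.
```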
https://en.wikipedia.org/wiki/Bonferroni_inequalities
Cherry picking,suppressing evidence, or thefallacy of incomplete evidenceis the act of pointing to individual cases or data that seem to confirm a particular position while ignoring a significant portion of related and similar cases or data that maycontradictthat position. Cherry picking may be committed intentionally or unintentionally.[2] The term is based on the perceived process of harvesting fruit, such ascherries. The picker would be expected to select only the ripest and healthiest fruits. An observer who sees only the selected fruit may thus wrongly conclude that most, or even all, of the tree's fruit is in a likewise good condition. This can also give a false impression of the quality of the fruit (since it is only a sample and is not arepresentative sample). A concept sometimes confused with cherry picking is the idea of gathering only the fruit that is easy to harvest, while ignoring other fruit that is higher up on the tree and thus more difficult to obtain (seelow-hanging fruit). Cherry picking has a negative connotation as the practice neglects, overlooks or directly suppresses evidence that could lead to a complete picture. Cherry picking can be found in manylogical fallacies. For example, the "fallacy ofanecdotal evidence" tends to overlook large amounts of data in favor of that known personally, "selective use of evidence" rejects material unfavorable to an argument, while afalse dichotomypicks only two options when more are available. Some scholars classify cherry-picking as afallacyof selective attention, the most common example of which is theconfirmation bias.[3]Cherry picking can refer to the selection of data or data sets so a study or survey will give desired, predictable results which may be misleading or even completely contrary to reality.[4] A story about the 5th centuryBCEatheist philosopherDiagoras of Melossays how, when shown the votive gifts of people who had supposedly escaped death by shipwreck by praying to gods, he pointed out that many peoplehaddied at sea in spite of their prayers, yet these cases were not likewise commemorated[5](this is an example ofsurvivorship bias).Michel de Montaigne(1533–1592) in hisessay on propheciescomments on people willing to believe in the validity of supposed seers: I see some who are mightily given to study and comment upon their almanacs, and produce them to us as an authority when anything has fallen out pat; and, for that matter, it is hardly possible but that these alleged authorities sometimes stumble upon a truth amongst an infinite number of lies. ... I think never the better of them for some such accidental hit. ... [N]obody records their flimflams and false prognostics, forasmuch as they are infinite and common; but if they chop upon one truth, that carries a mighty report, as being rare, incredible, and prodigious.[6] Cherry picking is one of the epistemological characteristics ofdenialismand widely used by different sciencedenialiststo seemingly contradict scientific findings. 
For example, it is used inclimate change denial,evolution denialby creationists, denial of the negative health effects of consumingtobacco productsand passive smoking.[1] Choosing to make selective choices among competing evidence, so as to emphasize those results that support a given position, while ignoring or dismissing any findings that do not support it, is a practice known as "cherry picking" and is a hallmark of poor science or pseudo-science.[7] Rigorous science looks at all the evidence (rather than cherry picking only favorable evidence), controls for variables as to identify what is actually working, uses blinded observations so as to minimize the effects of bias, and uses internally consistent logic."[8] In a 2002 study, a review of previous medical data found cherry picking in tests of anti-depression medication: [researchers] reviewed 31 antidepressant efficacy trials to identify the primary exclusion criteria used in determining eligibility for participation. Their findings suggest that patients in current antidepressant trials represent only a minority of patients treated in routine clinical practice for depression. Excluding potential clinical trial subjects with certain profiles means that the ability to generalize the results of antidepressant efficacy trials lacks empirical support, according to the authors.[9] In argumentation, the practice of "quote mining" is a form of cherry picking,[7]in which the debater selectively picks some quotes supporting a position (or exaggerating an opposing position) while ignoring those that moderate the original quote or put it into a different context. Cherry picking in debates is a large problem as the facts themselves are true but need to be put in context. Because research cannot be done live and is often untimely, cherry-picked facts or quotes usually stick in the public mainstream and, even when corrected, lead to widespread misrepresentation of groups targeted. Aone-sided argument(also known ascard stacking,stacking the deck,ignoring the counterevidence,slanting, andsuppressed evidence)[10]is aninformal fallacythat occurs when only the reasons supporting a proposition are supplied, while all reasons opposing it are omitted. Philosophy professorPeter Suberhas written: The one-sidedness fallacy does not make an argument invalid. It may not even make the argument unsound. The fallacy consists in persuading readers, and perhaps ourselves, that we have said enough to tilt the scale of evidence and therefore enough to justify a judgment. If we have been one-sided, though, then we haven't yet said enough to justify a judgment. The arguments on the other side may be stronger than our own. We won't know until we examine them. So the one-sidedness fallacy doesn't mean that your premises are false or irrelevant, only that they are incomplete. […] You might think that one-sidedness is actually desirable when your goal is winning rather than discovering a complex and nuanced truth. If this is true, then it's true of every fallacy. If winning is persuading a decision-maker, then any kind of manipulation or deception that actually works is desirable. But in fact, while winning may sometimes be served by one-sidedness, it is usually better served by two-sidedness. If your argument (say) in court is one-sided, then you are likely to be surprised by a strong counter-argument for which you are unprepared. The lesson is to cultivate two-sidedness in your thinking about any issue. 
Beware of any job that requires you to truncate your own understanding.[11] Card stackingis apropagandatechnique that seeks to manipulate audience perception of an issue by emphasizing one side and repressing another.[12]Such emphasis may be achieved throughmedia biasor the use ofone-sidedtestimonials, or by simplycensoringthe voices of critics. The technique is commonly used in persuasive speeches by political candidates to discredit their opponents and to make themselves seem more worthy.[13] The term originates from themagician's gimmick of "stacking the deck", which involves presenting adeck of cardsthat appears to have been randomly shuffled but which is, in fact, 'stacked' in a specific order. The magician knows the order and is able to control the outcome of the trick. In poker, cards can be stacked so that certain hands are dealt to certain players.[14] The phenomenon can be applied to any subject and has wide applications. Whenever a broad spectrum of information exists, appearances can be rigged by highlighting some facts and ignoring others. Card stacking can be a tool of advocacy groups or of those groups with specific agendas.[15]For example, an enlistment poster might focus upon an impressive picture, with words such as "travel" and "adventure", while placing the words, "enlist for two to four years" at the bottom in a smaller and less noticeable point size.[16]
https://en.wikipedia.org/wiki/Cherry_picking
The garden of forking paths is a problem in frequentist hypothesis testing through which researchers can unintentionally produce false positives for a tested hypothesis by leaving themselves too many degrees of freedom. In contrast to fishing expeditions such as data dredging, where only expected or apparently significant results are published, this allows a similar effect even when only one experiment is run, through a series of choices about how to implement methods and analyses, which are themselves informed by the data as it is observed and processed.[1]

Exploring a forking decision tree while analyzing data was at one point grouped with the multiple comparisons problem as an example of poor statistical method. However, Gelman and Loken demonstrated[2] that this can happen implicitly even to researchers aware of best practices who make only a single comparison and evaluate their data only once. The fallacy is believing an analysis to be free of multiple comparisons despite having had enough degrees of freedom in choosing the method, after seeing some or all of the data, to produce similarly grounded false positives. Degrees of freedom can include choosing among main effects or interactions, methods for data exclusion, whether to combine different studies, and the method of data analysis.

A multiverse analysis is an approach that acknowledges the multitude of analytical paths available when analyzing data. The concept is inspired by the metaphorical "garden of forking paths," which represents the multitude of potential analyses that could be conducted on a single dataset. In a multiverse analysis, researchers systematically vary their analytical choices to explore the range of possible outcomes from the same raw data.[3][4][5] This involves altering variables such as data inclusion/exclusion criteria, variable transformations, outlier handling, statistical models, and hypothesis tests to generate a spectrum of results that could have been obtained given different analytic decisions; a minimal sketch is given below. The key benefit of a multiverse analysis is that it makes transparent how strongly conclusions depend on these analytic decisions. This approach is valuable in fields where research findings are sensitive to the methods of data analysis, such as psychology,[4] neuroscience,[5] economics, and the social sciences. Multiverse analysis aims to mitigate issues related to reproducibility and replicability by revealing how different analytical choices can lead to different conclusions from the same dataset. Thus, it encourages a more nuanced understanding of data analysis, promoting integrity and credibility in scientific research. Concepts that are closely related to multiverse analysis are specification-curve analysis[6] and the assessment of vibration of effects.[7]
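The following is a minimal Python/NumPy sketch of the idea (the dataset, the set of analytic choices, and the effect-size computation are all illustrative assumptions, not taken from the original text): it runs the "same" two-group comparison under every combination of a few defensible choices and reports the spread of resulting estimates and p-values.

```python
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Illustrative two-group dataset with a small true difference and a few extreme values.
group_a = np.concatenate([rng.normal(0.0, 1.0, 80), [6.0, 7.5]])
group_b = np.concatenate([rng.normal(0.3, 1.0, 80), [5.5]])

# A few defensible analytic choices ("forks" in the garden of forking paths).
outlier_rules = {
    "keep all":       lambda x: x,
    "drop |z| > 3":   lambda x: x[np.abs((x - x.mean()) / x.std()) <= 3.0],
    "drop |z| > 2.5": lambda x: x[np.abs((x - x.mean()) / x.std()) <= 2.5],
}
scales = ["raw", "ranked"]   # analyze raw values or pooled ranks

results = []
for (o_name, o_rule), scale in itertools.product(outlier_rules.items(), scales):
    a, b = o_rule(group_a), o_rule(group_b)
    if scale == "ranked":
        pooled = stats.rankdata(np.concatenate([a, b]))  # one common rank scale
        a, b = pooled[:len(a)], pooled[len(a):]
    t_stat, p_val = stats.ttest_ind(a, b, equal_var=False)
    results.append((o_name, scale, b.mean() - a.mean(), p_val))

# One row per "universe": the same data, different defensible choices, different answers.
for o_name, scale, diff, p in results:
    print(f"{o_name:14s} | {scale:6s} | effect = {diff:+.3f} | p = {p:.3f}")
```

If only the single most favorable row were reported as though it had been the one pre-planned analysis, the result would carry the false-positive risk the garden-of-forking-paths argument describes.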
https://en.wikipedia.org/wiki/Garden_of_forking_paths_fallacy
In statistics, circular analysis is the selection of the details of a data analysis using the data that is being analysed. It is often referred to as double dipping, as one uses the same data twice. Circular analysis unjustifiably inflates the apparent statistical strength of any results reported and, at the most extreme, can lead to an apparently significant result being found in data that consists only of noise.

In particular, where an experiment is implemented to study a postulated effect, it is a misuse of statistics to initially reduce the complete dataset by selecting a subset of data in ways that are aligned to the effects being studied. A second misuse occurs where the performance of a fitted model or classification rule is reported as a raw result, without allowing for the effects of model selection and the tuning of parameters based on the data being analyzed.

In its simplest form, circular analysis can include the decision to remove outliers after noticing that this might help improve the analysis of an experiment. The effect can be more subtle. In functional magnetic resonance imaging (fMRI) data, for example, considerable amounts of pre-processing are often needed, and these might be applied incrementally until the analysis 'works'. Similarly, the classifiers used in a multivoxel pattern analysis of fMRI data require parameters, which could be tuned to maximise the classification accuracy. In geology, the potential for circular analysis has been noted[1] in the case of maps of geological faults, where these may be drawn on the basis of an assumption that faults develop and propagate in a particular way, with those maps later being used as evidence that faults do actually develop in that way.

Careful design of the analysis one plans to perform, prior to collecting the data, means the analysis choice is not affected by the data collected. Alternatively, one might decide to perfect the classification on one or two participants, and then use the analysis on the remaining participant data. Regarding the selection of classification parameters, a common method is to divide the data into two sets, find the optimum parameter using one set, and then test using this parameter value on the second set. This is a standard technique used (for example) by the Princeton MVPA classification library.[2]
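A minimal sketch of the split-the-data remedy described above (plain Python/NumPy; the data, the toy classification rule, and its single tuning parameter are illustrative assumptions, not from the original text): the threshold is tuned only on the first half of the data and then evaluated once on the untouched second half, so the reported accuracy is not inflated by the tuning.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: one feature, two classes with overlapping distributions.
n = 200
labels = rng.integers(0, 2, size=n)
feature = rng.normal(loc=labels * 0.8, scale=1.0)

# Split once, before any tuning, so the same data is never used twice.
half = n // 2
x_tune, y_tune = feature[:half], labels[:half]
x_test, y_test = feature[half:], labels[half:]

def accuracy(threshold, x, y):
    """Accuracy of the toy rule 'predict class 1 if x > threshold'."""
    return np.mean((x > threshold).astype(int) == y)

# Tune the threshold on the first half only.
candidates = np.linspace(-2, 2, 81)
best = max(candidates, key=lambda th: accuracy(th, x_tune, y_tune))

print("tuned threshold        :", round(best, 2))
print("accuracy on tuning half:", accuracy(best, x_tune, y_tune))  # optimistically biased
print("accuracy on held-out   :", accuracy(best, x_test, y_test))  # honest estimate
```

Reporting only the tuning-half accuracy would be the double dipping the article warns against; the held-out figure is the one that generalizes.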
https://en.wikipedia.org/wiki/Circular_analysis
HARKing(hypothesizing after the results are known) is an acronym coined by social psychologistNorbert Kerr[1]that refers to the questionable research practice of "presenting a post hochypothesisin the introduction of a research report as if it were ana priorihypothesis".[1][2]Hence, a key characteristic of HARKing is that post hoc hypothesizing is falsely portrayed as a priori hypothesizing.[3]HARKing may occur when a researcher tests an a priori hypothesis but then omits that hypothesis from their research report after they find out the results of their test.Post hoc analysisorpost hoc theorizingthen may lead to a post hoc hypothesis. Several types of HARKing have been distinguished, including: Concerns about HARKing appear to be increasing in the scientific community, as shown by the increasing number of citations to Kerr's seminal article.[7]A 2017 review of six surveys found that an average of 43% of researchers surveyed (mainly psychologists) self-reported HARKing "at least once".[5]This figure may be an underestimate if researchers are concerned about reporting questionable research practices, do not perceive themselves to be responsible for HARKing that is proposed by editors and reviewers (i.e., passive HARKing), and/or do not recognize their HARKing due tohindsightorconfirmation biases. HARKing appears to be motivated by a desire to publish research in a publication environment that values a priori hypotheses over post hoc hypotheses and contains apublication biasagainstnull results. In order to improve their chances of publishing their results, researchers may secretly suppress any a priori hypotheses that failed to yield significant results, construct or retrieve post hoc hypotheses that account for any unexpected significant results, and then present these new post hoc hypotheses in their research reports as if they are a priori hypotheses.[1][8][9][5][10] HARKing is associated with the debate regarding prediction and accommodation.[11]In the case of prediction, hypotheses are deduced from a priori theory and evidence. In the case of accommodation, hypotheses are induced from the current research results.[7]One view is that HARKing represents a form of accommodation in which researchers induce ad hoc hypotheses from their current results.[1][3]Another view is that HARKing represents a form of prediction in which researchers deduce hypotheses from a priori theory and evidence after they know their current results.[7] Potential costs of HARKing include:[1]: 211 In 2022, Rubin provided a critical analysis of Kerr's 12 costs of HARKing. He concluded that these costs "are either misconceived, misattributed to HARKing, lacking evidence, or that they do not take into account pre- and post-publication peer review and public availability to research materials and data."[7] Some of the costs of HARKing are thought to have led to thereplication crisisin science.[4]Hence, Bishop described HARKing as one of "the four horsemen of the reproducibility apocalypse," with publication bias, lowstatistical power, andp-hacking[12]being the other three.[13]An alternative view is that it is premature to conclude that HARKing has contributed to the replication crisis.[7][5][14] Thepreregistrationof research hypotheses prior to data collection has been proposed as a method of identifying and deterring HARKing. However, the use of preregistration to prevent HARKing is controversial.[3] Kerr pointed out that "HARKing can entail concealment. 
The question then becomes whether what is concealed in HARKing can be a useful part of the 'truth' ...or is instead basically uninformative (and may, therefore, be safely ignored at an author's discretion)".[1]: 209Three different positions about the ethics of HARKing depend on whether HARKing conceals "a useful part of the 'truth'". The first position is that all HARKing is unethical under all circumstances because it violates a fundamental principle of communicating scientific research honestly and completely.[1]: 209According to this position, HARKing always conceals a useful part of the truth. A second position is that HARKing falls into a "gray zone" of ethical practice.[1][15]According to this position, some forms of HARKing are more or less ethical under some circumstances.[16][5][17][7]Hence, only some forms of HARKing conceal a useful part of the truth under some conditions. Consistent with this view, a 2018 survey of 119 USA researchers found that HARKing ("reporting an unexpected result as having been hypothesized from the start") was associated with "ambiguously unethical" research practices more than with "unambiguously unethical" research practices.[18] A third position is that HARKing is acceptable provided that hypotheses are explicitly deduced from a priori theory and evidence, as explained in a theoretical rationale, and readers have access to the relevant research data and materials.[7]According to this position, HARKing does not prevent readers from making an adequately informed evaluation of the theoretical quality and plausibility of the HARKed hypotheses and the methodological rigor with which the hypotheses have been tested.[7][17]In this case, HARKing does not conceal a useful part of the truth. Furthermore, researchers may claim that a priori theory and evidence predict their results even if the prediction is deduced after they know their results.[7][19]
https://en.wikipedia.org/wiki/HARKing
There are many coincidences with the assassinations ofU.S. presidentsAbraham LincolnandJohn F. Kennedy, and these have become a piece of Americanfolklore. The list ofcoincidencesappeared in the mainstreamAmerican pressin 1964, a year after theassassination of John F. Kennedy, having appeared prior to that in theGOPCongressional Committee Newsletter.[1][2]In the 1970s,Martin Gardnerexamined the list in an article inScientific American(later reprinted in his 1985 book,The Magic Numbers ofDr. Matrix), pointing out that several of the claimed coincidences were based onmisinformation.[3][4]Gardner's version of the list contained 16 items; many subsequent versions have circulated much longer lists. A 1999 examination bySnopesfound that the listed "coincidences are easily explained as the simple product of mere chance."[5]In 1992, theSkeptical Inquirerran a "Spooky Presidential Coincidences Contest." One winner found a series of sixteen similar coincidences between Kennedy and formerMexican presidentÁlvaro Obregón. Another winner came up with similar lists for twenty-one pairs of U.S. presidents.[6]For example, there were 13 similarities found betweenThomas JeffersonandAndrew Jackson.[7] The following are the list of "coincidences" that are commonly associated with the conspiracy, some of which are not true statements: Some urban folklorists have postulated that the list provided a way for people to make sense of two tragic events in American history by seeking out patterns.[5][48]Gardner and others have said that it is relatively easy to find seemingly meaningful patterns relating any two people or events. The psychological phenomenon ofapophenia– defined as "the tendency to perceive order in random configurations" – has been proposed as a possible reason for the lists' enduring popularity.[4] Most of the items listed above are true, such as the year in which Lincoln and Kennedy were each elected president, but this is not so unusual given that presidential elections are held only every four years. A few of the items are simply untrue: for example, Lincoln never had a secretary named Kennedy; Lincoln's secretaries wereJohn HayandJohn G. Nicolay.[5]However, Lincoln's footman,William H. Crook, did advise Lincoln not to go that night to Ford's Theatre.[49][50]David Mikkelson ofSnopespoints out many ways in which Lincoln and Kennedy do not match, to show the superficial nature of the alleged coincidences: For example, Lincoln was born in 1809 but Kennedy in 1917. Lincoln and Kennedy were both elected in '60, but Lincoln was already in his second term when he was assassinated; Kennedy was not. Also, neither the years, months, nor dates of their assassinations match. Although both were shot on Fridays, Lincoln did not die from his injuries until Saturday.[5] Buddy Starcherwrote a song, "History Repeats Itself," recounting many of these coincidences and parallels between the two presidents' careers and deaths. The song became anAmerican Top 40hit during the spring of 1966,[51]and reached number two on theCountrychart.Cab Callowayalso scored a minor chart hit with the song that same year.
https://en.wikipedia.org/wiki/Lincoln%E2%80%93Kennedy_coincidences_urban_legend
The look-elsewhere effect is a phenomenon in the statistical analysis of scientific experiments where an apparently statistically significant observation may have actually arisen by chance because of the sheer size of the parameter space to be searched.[1][2][3][4][5] Once the possibility of look-elsewhere error in an analysis is acknowledged, it can be compensated for by careful application of standard mathematical techniques.[6][7][8] More generally known in statistics as the problem of multiple comparisons, the term gained some media attention in 2011, in the context of the search for the Higgs boson at the Large Hadron Collider.[9]

Many statistical tests deliver a p-value, the probability that a given result could be obtained by chance, assuming the hypothesis one seeks to prove is in fact false. When asking "does X affect Y?", it is common to vary X and see if there is significant variation in Y as a result. If this p-value is less than some predetermined statistical significance threshold α, one considers the result "significant". However, if one is performing multiple tests ("looking elsewhere" if the first test fails), then a p-value as small as 1/n is expected to occur about once per n tests. For example, when there is no real effect, an event with p < 0.05 will still occur once, on average, for each 20 tests performed.

In order to compensate for this, one can divide the threshold α by the number of tests n, so that a result is significant when p < α/n; or, equivalently, multiply the observed p-value by the number of tests (significant when np < α). This is a simplified case; the number n is really the number of degrees of freedom in the tests, or the number of effectively independent tests. If they are not fully independent, the number may be lower than the number of tests.

The look-elsewhere effect is a frequent cause of "significance inflation" when the number of independent tests n is underestimated because failed tests are not published. One paper may fail to mention alternative hypotheses considered, or a paper producing no result may simply not be published at all, leading to journals dominated by statistical outliers.
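The inflation, and the α/n correction described above, are easy to see in simulation. A minimal Python/NumPy sketch (the choice of 20 tests, 30 observations per test, and 10,000 repetitions is an arbitrary illustration, not from the original text): with no real effect anywhere, the chance that at least one of the n tests crosses p < 0.05 is far above 5%, while the corrected threshold α/n keeps the family-wise error rate near 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
alpha, n_tests, n_sims, n_obs = 0.05, 20, 10_000, 30

any_naive, any_corrected = 0, 0
for _ in range(n_sims):
    # n_tests independent null experiments: the true mean is 0 in every one of them.
    data = rng.normal(0.0, 1.0, size=(n_tests, n_obs))
    t_stats = data.mean(axis=1) / (data.std(axis=1, ddof=1) / np.sqrt(n_obs))
    p_vals = 2 * stats.t.sf(np.abs(t_stats), df=n_obs - 1)   # two-sided one-sample t-tests

    any_naive     += (p_vals < alpha).any()            # look everywhere, no correction
    any_corrected += (p_vals < alpha / n_tests).any()  # divide the threshold by n

print("P(at least one 'significant' result | no effect anywhere):")
print("  naive threshold     :", any_naive / n_sims)      # roughly 1 - 0.95**20 ≈ 0.64
print("  corrected threshold :", any_corrected / n_sims)  # close to 0.05
```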
https://en.wikipedia.org/wiki/Look-elsewhere_effect
Metascience(also known asmeta-research) is the use ofscientific methodologyto studyscienceitself. Metascience seeks to increase the quality of scientific research while reducinginefficiency. It is also known as "research on research" and "the science of science", as it usesresearch methodsto study howresearchis done and find where improvements can be made. Metascience concerns itself with all fields of research and has been described as "abird's eye viewof science".[1]In the words ofJohn Ioannidis, "Science is the best thing that has happened to human beings... but we can do it better."[2] In 1966, an early meta-research paper examined thestatistical methodsof 295 papers published in ten high-profile medical journals.[3]It found that "in almost 73% of the reports read... conclusions were drawn when the justification for these conclusions was invalid." Meta-research in the following decades found many methodological flaws, inefficiencies, and poor practices in research across numerous scientific fields. Many scientific studies could not bereproduced, particularly inmedicineand thesoft sciences. The term "replication crisis" was coined in the early 2010s as part of a growing awareness of the problem.[4] Measures have been implemented to address the issues revealed by metascience. These measures include thepre-registrationof scientific studies andclinical trialsas well as the founding of organizations such asCONSORTand theEQUATOR Networkthat issue guidelines for methodology and reporting. There are continuing efforts to reduce themisuse of statistics, to eliminateperverse incentivesfrom academia, to improve thepeer reviewprocess, to systematically collect data about the scholarly publication system,[5]to combatbiasin scientific literature, and to increase the overall quality and efficiency of the scientific process. As such, metascience is a big part of methods underlying theOpen ScienceMovement. In 1966, an early meta-research paper examined thestatistical methodsof 295 papers published in ten high-profile medical journals. It found that, "in almost 73% of the reports read ... conclusions were drawn when the justification for these conclusions was invalid."[7]A paper in 1976 called for funding for meta-research: "Because the very nature of research on research, particularly if it is prospective, requires long periods of time, we recommend that independent, highly competent groups be established with ample, long term support to conduct and support retrospective and prospective research on the nature of scientific discovery".[8]In 2005,John Ioannidispublished a paper titled "Why Most Published Research Findings Are False", which argued that a majority of papers in the medical field produce conclusions that are wrong.[6]The paper went on to become the most downloaded paper in thePublic Library of Science[9][10]and is considered foundational to the field of metascience.[11]In a related study withJeremy HowickandDespina Koletsi, Ioannidis showed that only a minority of medical interventions are supported by 'high quality' evidence according toThe Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.[12]Later meta-research identified widespread difficulty inreplicatingresults in many scientific fields, includingpsychologyandmedicine. This problem was termed "the replication crisis". 
Metascience has grown as a reaction to the replication crisis and to concerns about waste in research.[13] Many prominent publishers are interested in meta-research and in improving the quality of their publications. Top journals such asScience,The Lancet, andNature,provide ongoing coverage of meta-research and problems with reproducibility.[14]In 2012PLOS ONElaunched a Reproducibility Initiative. In 2015Biomed Centralintroduced a minimum-standards-of-reporting checklist to four titles. The first international conference in the broad area of meta-research was the Research Waste/EQUATORconference held in Edinburgh in 2015; the first international conference on peer review was thePeer Review Congressheld in 1989.[15]In 2016,Research Integrity and Peer Reviewwas launched. The journal's opening editorial called for "research that will increase our understanding and suggest potential solutions to issues related to peer review, study reporting, and research and publication ethics".[16] Metascience can be categorized into five major areas of interest: Methods, Reporting, Reproducibility, Evaluation, and Incentives. These correspond, respectively, with how to perform, communicate, verify, evaluate, and reward research.[1] Metascience seeks to identify poor research practices, includingbiasesin research, poor study design,abuse of statistics, and to find methods to reduce these practices.[1]Meta-research has identified numerous biases in scientific literature.[17]Of particular note is the widespreadmisuse of p-valuesand abuse ofstatistical significance.[18][19] Scientific data science is the use ofdata scienceto analyse research papers. It encompasses bothqualitativeandquantitativemethods. Research in scientific data science includesfraud detection[20]andcitation networkanalysis.[21] Journalology, also known as publication science, is the scholarly study of all aspects of theacademic publishingprocess.[22][23]The field seeks to improve the quality of scholarly research by implementingevidence-based practicesin academic publishing.[24]The term "journalology" was coined byStephen Lock, the formereditor-in-chiefofThe BMJ. The first Peer Review Congress, held in 1989 inChicago,Illinois, is considered a pivotal moment in the founding of journalology as a distinct field.[24]The field of journalology has been influential in pushing for studypre-registrationin science, particularly inclinical trials.Clinical-trial registrationis now expected in most countries.[24] Meta-research has identified poor practices in reporting, explaining, disseminating and popularizing research, particularly within the social and health sciences. Poor reporting makes it difficult to accurately interpret the results of scientific studies, toreplicatestudies, and to identify biases and conflicts of interest in the authors. Solutions include the implementation of reporting standards, and greater transparency in scientific studies (including better requirements for disclosure of conflicts of interest). 
There is an attempt to standardize reporting of data and methodology through the creation of guidelines by reporting agencies such asCONSORTand the largerEQUATOR Network.[1] The replication crisis is an ongoingmethodologicalcrisis in which it has been found that many scientific studies are difficult or impossible toreplicate.[25][26]While the crisis has its roots in the meta-research of the mid- to late 20th century, the phrase "replication crisis" was not coined until the early 2010s[27]as part of a growing awareness of the problem.[1]The replication crisis has been closely studied inpsychology(especiallysocial psychology) andmedicine,[28][29]including cancer research.[30][31]Replication is an essential part of the scientific process, and the widespread failure of replication puts into question the reliability of affected fields.[32] Moreover, replication of research (or failure to replicate) is considered less influential than original research, and is less likely to be published in many fields. This discourages the reporting of, and even attempts to replicate, studies.[33][34] Metascience seeks to create a scientific foundation for peer review. Meta-research evaluatespeer reviewsystems includingpre-publicationpeer review,post-publicationpeer review, andopen peer review. It also seeks to develop better research funding criteria.[1] Metascience seeks to promote better research through better incentive systems. This includes studying the accuracy, effectiveness, costs, and benefits of different approaches to ranking and evaluating research and those who perform it.[1]Critics argue thatperverse incentiveshave created apublish-or-perishenvironment in academia which promotes the production ofjunk science, low quality research, andfalse positives.[35][36]According toBrian Nosek, "The problem that we face is that the incentive system is focused almost entirely on getting research published, rather than on getting research right."[37]Proponents of reform seek to structure the incentive system to favor higher-quality results.[38]For example, by quality being judged on the basis of narrative expert evaluations ("rather than [only or mainly] indices"), institutional evaluation criteria, guaranteeing of transparency, and professional standards.[39] Studies proposed machine-readable standards and (a taxonomy of)badgesfor science publication management systems that hones in on contributorship – who has contributed what and how much of the research labor – rather that using traditional concept of plainauthorship– who was involved in any way creation of a publication.[40][41][42][43]A study pointed out one of the problems associated with the ongoing neglect of contribution nuanciation – it found that "the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers".[44] Factors other than a submission's merits can substantially influence peer reviewers' evaluations.[45]Such factors may however also be important such as the use of track-records about the veracity of a researchers' prior publications and its alignment with public interests. 
Nevertheless, evaluation systems – include those of peer-review – may substantially lack mechanisms and criteria that are oriented or well-performingly oriented towards merit, real-world positive impact, progress and public usefulness rather than analytical indicators such as number of citations or altmetrics even when such can be used as partial indicators of such ends.[46][47]Rethinking of the academic reward structure "to offer more formal recognition for intermediate products, such as data" could have positive impacts and reduce data withholding.[48] A commentary noted that academic rankings don't consider where (country and institute) the respective researchers were trained.[49] Scientometrics concerns itself with measuringbibliographic datain scientific publications. Major research issues include the measurement of the impact of research papers and academic journals, the understanding of scientific citations, and the use of such measurements in policy and management contexts.[50]Studies suggest that "metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades" and have to some degrees "ceased" to be good measures,[44][19]leading to issues such as "overproduction, unnecessary fragmentations, overselling, predatory journals (pay and publish), clever plagiarism, and deliberate obfuscation of scientific results so as to sell and oversell".[51] Novel tools in this area include systems to quantify how much the cited-node informs the citing-node.[52]This can be used to convert unweighted citation networks to a weighted one and then forimportanceassessment, deriving "impact metrics for the various entities involved, like the publications, authors etc"[53]as well as, among other tools, for search engine- andrecommendation systems. Science fundingandscience governancecan also be explored and informed by metascience.[54] Various interventions such asprioritizationcan be important.For instance, the concept ofdifferential technological developmentrefers to deliberately developing technologies – e.g. control-, safety- and policy-technologies versusrisky biotechnologies– at different precautionary paces to decrease risks, mainlyglobal catastrophic risk, by influencing the sequence in which technologies are developed.[55][56]Relying only on the established form of legislation and incentives to ensure the right outcomes may not be adequate as these may often be too slow[57]or inappropriate. Other incentives to govern science and related processes, including via metascience-based reforms, may include ensuring accountability to the public (in terms of e.g. accessibility of, especially publicly-funded, research or of it addressing various research topics of public interest in serious manners), increasing the qualified productive scientific workforce, improving the efficiency of science to improveproblem-solvingin general, and facilitating that unambiguous societal needs based on solid scientific evidence – such as about human physiology – are adequately prioritized and addressed. Such interventions, incentives and intervention-designs can be subjects of metascience. Scientific awards are one category of science incentives. Metascience can explore existing and hypothetical systems of science awards. For instance, it found that work honored byNobel Prizesclustered in only a fewscientific fieldswith only 36/71 having received at least one Nobel Prize. 
Of the 114/849 domains science could be divided into their DC2 and DC3 classification systems, five were shown to comprise over half of the Nobel Prizes awarded between 1995 and 2017 (particle physics [14%], cell biology [12.1%], atomic physics [10.9%], neuroscience [10.1%], molecular chemistry [5.3%]).[59][60] A study found that delegation of responsibility bypolicy-makers – a centralized authority-based top-down approach – for knowledge production and appropriate funding to science with science subsequently somehow delivering "reliable and useful knowledge to society" is too simple.[54] Measurements show that allocation of bio-medical resources can be more strongly correlated to previous allocations and research than toburden of diseases.[61] A study suggests that "[i]f peer review is maintained as the primary mechanism of arbitration in the competitive selection of research reports and funding, then thescientific communityneeds to make sure it is not arbitrary".[45] Studies indicate there to is a need to "reconsider how we measure success" (see#Factors of success and progress).[44] Funding information from grant databases and funding acknowledgment sections can be sources of data for scientometrics studies, e.g. for investigating or recognition of the impact of funding entities on the development of science and technology.[62] It has been argued that "science has two fundamental attributes that underpin its value as a global public good: that knowledge claims and the evidence on which they are based are made openly available to scrutiny, and that the results of scientific research are communicated promptly and efficiently".[63]Metascientific research is exploring topics ofscience communicationsuch asmedia coverage of science,science journalismand online communication of results by science educators and scientists.[64][65][66][67]A study found that the "main incentive academics are offered for using social media is amplification" and that it should be "moving towards an institutional culture that focuses more on how these [or such] platforms can facilitate real engagement with research".[68]Science communication may also involve the communication of societal needs, concerns and requests to scientists. Alternative metrics tools can be used not only for help in assessment (performance and impact)[61]and findability, but also aggregate many of the public discussions about a scientific paper in social media such asreddit,citations on Wikipedia, and reports about the study in the news media which can then in turn be analyzed in metascience or provided and used by related tools.[69]In terms of assessment and findability, altmetrics rate publications' performance or impact by the interactions they receive through social media or other online platforms,[70]which can for example be used for sorting recent studies by measured impact, including before other studies are citing them. The specific procedures of established altmetrics are not transparent[70]and the used algorithms can not be customized or altered by the user as open source software can. 
A study has described various limitations of altmetrics and points "toward avenues for continued research and development".[71]They are also limited in their use as a primary tool for researchers to find received constructive feedback.(seeabove) It has been suggested that it may benefit science if "intellectual exchange—particularly regarding the societal implications and applications of science and technology—are better appreciated and incentivized in the future".[61] Primary studies "without context, comparison or summary are ultimately of limited value" and various types[additional citation(s) needed]of research syntheses and summaries integrate primary studies.[72]Progress in key social-ecological challenges of the global environmental agenda is "hampered by a lack ofintegrationand synthesis of existing scientific evidence", with a "fast-increasing volume of data", compartmentalized information and generally unmet evidence synthesis challenges.[73]According to Khalil, researchers are facing the problem oftoo many papers– e.g. in March 2014 more than 8,000 papers were submitted toarXiv– and to "keep up with the huge amount of literature, researchers use reference manager software, they make summaries andnotes, and they rely on review papers to provide an overview of a particular topic". He notes that review papers are usually (only)" for topics in which many papers were written already, and they can get outdated quickly" and suggests "wiki-review papers" that get continuously updated with new studies on a topic and summarize many studies' results and suggest future research.[74]A study suggests that if a scientific publication is being cited in a Wikipedia article this could potentially be considered as an indicator of some form of impact for this publication,[70]for example as this may, over time, indicate that the reference has contributed to a high-level of summary of the given topic. Science journalistsplay an important role in the scientific ecosystem and in science communication to the public and need to "know how to use, relevant information when deciding whether to trust a research finding, and whether and how to report on it", vetting the findings that get transmitted to the public.[75] Some studies investigatescience education, e.g. the teaching about selectedscientific controversies[76]and historical discovery process of major scientific conclusions,[77]and commonscientific misconceptions.[78]Education can also be a topic more generally such as how to improve the quality of scientific outputs and reduce the time needed before scientific work or how to enlarge and retain various scientific workforces. Many students have misconceptions about what science is and how it works.[79]Anti-scienceattitudes and beliefs are also a subject of research.[80][81]Hotez suggests antiscience "has emerged as a dominant and highly lethal force, and one that threatens global security", and that there is a need for "new infrastructure" that mitigates it.[82] Metascience can investigate how scientific processes evolve over time. 
A study found that teams are growing in size, "increasing by an average of 17% per decade".[61] (see labor advantage below) It was found that prevalent forms of non-open-access publication and the prices charged for many conventional journals – even for publicly funded papers – are unwarranted, unnecessary – or suboptimal – and detrimental barriers to scientific progress.[63][85][86][87] Open access can save considerable amounts of financial resources, which could be used otherwise, and level the playing field for researchers in developing countries.[88] There are substantial expenses for subscriptions, gaining access to specific studies, and for article processing charges. Paywall: The Business of Scholarship is a documentary on such issues.[89] Another topic is the established styles of scientific communication (e.g. long text-form studies and reviews) and the scientific publishing practices – there are concerns about a "glacial pace" of conventional publishing.[90] The use of preprint servers to publish study drafts early is increasing, and open peer review,[91] new tools to screen studies,[92] and improved matching of submitted manuscripts to reviewers[93] are among the proposals to speed up publication. Studies have various kinds of metadata which can be utilized, complemented and made accessible in useful ways. OpenAlex is a free online index of over 200 million scientific documents that integrates and provides metadata such as sources, citations, author information, scientific fields and research topics. Its API and open-source website can be used for metascience, scientometrics and novel tools that query this semantic web of papers.[95][96][97] Another project under development, Scholia, uses metadata of scientific publications for various visualizations and aggregation features, such as providing a simple user interface summarizing the literature about a specific feature of the SARS-CoV-2 virus using Wikidata's "main subject" property.[98] Beyond metadata explicitly assigned to studies by humans, natural language processing and AI can be used to assign research publications to topics – one study investigating the impact of science awards used such methods to associate a paper's text (not just keywords) with the linguistic content of Wikipedia's scientific topic pages ("pages are created and updated by scientists and users through crowdsourcing"), creating meaningful and plausible classifications of high-fidelity scientific topics for further analysis or navigability.[99] Metascience research is investigating the growth of science overall, using e.g. data on the number of publications in bibliographic databases. A study found that segments with different growth rates appear related to phases of "economic (e.g., industrialization)" – money is considered a necessary input to the science system – "and/or political developments (e.g., Second World War)". It also confirmed a recent exponential growth in the volume of scientific literature and calculated an average doubling period of 17.3 years.[101] However, others have pointed out that it is difficult to measure scientific progress in meaningful ways, partly because it is hard to accurately evaluate how important any given scientific discovery is.
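The growth figures above rest on simple counting and curve fitting over bibliographic metadata of the kind OpenAlex exposes. The sketch below (Python, standard library only) groups indexed works by publication year and estimates a doubling period from a fitted exponential growth rate; the endpoint, the group_by parameter and the response field names are assumptions based on a reading of the public OpenAlex documentation and should be verified, and the mailto value is a placeholder:

```python
# Sketch: count indexed works per publication year via the OpenAlex API and
# estimate a doubling period from the fitted exponential growth rate.
# Endpoint, parameters and response fields are assumed from the public docs.
import json
import math
import urllib.request

url = "https://api.openalex.org/works?group_by=publication_year&mailto=you@example.org"
with urllib.request.urlopen(url) as resp:
    groups = json.load(resp)["group_by"]   # expected: [{"key": "1990", "count": ...}, ...]

counts = {int(g["key"]): g["count"] for g in groups if str(g["key"]).isdigit()}
years = sorted(y for y in counts if 1950 <= y <= 2020 and counts[y] > 0)

# Least-squares fit of log(count) = a + r*year; doubling period = ln(2)/r.
n = len(years)
x_mean = sum(years) / n
y_mean = sum(math.log(counts[y]) for y in years) / n
r = (sum((y - x_mean) * (math.log(counts[y]) - y_mean) for y in years)
     / sum((y - x_mean) ** 2 for y in years))
print(f"growth rate {r:.3f}/year, doubling period ~ {math.log(2) / r:.1f} years")
```

A doubling period of roughly 17 years, as reported above, corresponds to a fitted growth rate of about 0.04 per year.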
A variety of perspectives on the trajectories of science overall (impact, number of major discoveries, etc.) have been described in books and articles, including that science is becoming harder (per dollar or hour spent), that if science is "slowing today, it is because science has remained too focused on established fields", that papers and patents are increasingly less likely to be "disruptive" in terms of breaking with the past as measured by the "CD index",[83] and that there is a great stagnation – possibly as part of a larger trend[102] – whereby e.g. "things haven't changed nearly as much since the 1970s" when excluding the computer and the Internet. Better understanding of potential slowdowns according to some measures could be a major opportunity to improve humanity's future.[103] For example, emphasis on citations in the measurement of scientific productivity, information overloads,[102] reliance on a narrower set of existing knowledge (which may include narrow specialization and related contemporary practices) based on three "use of previous knowledge" indicators,[83] and risk-avoidant funding structures[104] may have pushed research "toward incremental science and away from exploratory projects that are more likely to fail".[105] The study that introduced the "CD index" suggests that the overall number of papers has risen while the total number of "highly disruptive" papers as measured by the index has not (notably, the 1998 discovery of the accelerating expansion of the universe has a CD index of 0). Their results also suggest scientists and inventors "may be struggling to keep up with the pace of knowledge expansion".[106][83] Various ways of measuring the "novelty" of studies, novelty metrics,[105] have been proposed to balance a potential anti-novelty bias – such as textual analysis[105] or measuring whether a study makes first-time-ever combinations of referenced journals, taking into account the difficulty.[107] Other approaches include pro-actively funding risky projects.[61] (see above) Science maps could show the main interrelated topics within a certain scientific domain, their change over time, and their key actors (researchers, institutions, journals). They may help find factors that determine the emergence of new scientific fields and the development of interdisciplinary areas, and could be relevant for science policy purposes.[108] (see above) Theories of scientific change could guide "the exploration and interpretation of visualized intellectual structures and dynamic patterns".[109] The maps can show the intellectual, social or conceptual structure of a research field.[110] Beyond visual maps, expert survey-based studies and similar approaches could identify understudied or neglected societally important areas, topic-level problems (such as stigma or dogma), or potential misprioritizations.[additional citation(s) needed] Examples of such are studies about policy in relation to public health[111] and about the social science of climate change mitigation,[112] where it has been estimated that only 0.12% of all funding for climate-related research is spent on the social science of mitigation, despite the most urgent puzzle at the current juncture being how to mitigate climate change, whereas the natural science of climate change is already well established.[112] There are also studies that map a scientific field or a topic, such as the study of the use of research evidence in policy and practice, partly using surveys.[113] Some research is investigating scientific controversy or controversies, and may identify currently ongoing major debates (e.g.
open questions), and disagreement between scientists or studies.[additional citation(s) needed] One study suggests the level of disagreement was highest in the social sciences and humanities (0.61%), followed by biomedical and health sciences (0.41%), life and earth sciences (0.29%), physical sciences and engineering (0.15%), and mathematics and computer science (0.06%).[114] Such research may also show where the disagreements are, especially if they cluster – including visually, such as with cluster diagrams. Studies about a specific research question or research topic are often reviewed in the form of higher-level overviews in which results from various studies are integrated, compared, critically analyzed and interpreted. Examples of such works are scientific reviews and meta-analyses. These and related practices face various challenges and are a subject of metascience. Various issues with the included or available studies, such as heterogeneity of the methods used, may lead to faulty conclusions of a meta-analysis.[115] Various problems require swift integration of new and existing science-based knowledge. Settings where there are a large number of loosely related projects and initiatives benefit especially from a common ground or "commons".[98] Evidence synthesis can be applied to important and, notably, both relatively urgent and certain global challenges: "climate change, energy transitions, biodiversity loss, antimicrobial resistance, poverty eradication and so on". It was suggested that a better system would keep summaries of research evidence up to date via living systematic reviews – e.g. as living documents. While the number of scientific papers and data (or information and online knowledge) has risen substantially,[additional citation(s) needed] the number of published academic systematic reviews has risen from "around 6,000 in 2011 to more than 45,000 in 2021".[116] An evidence-based approach is important for progress in science, policy, medical and other practices. For example, meta-analyses can quantify what is known and identify what is not yet known,[72] and can place "truly innovative and highly interdisciplinary ideas" into the context of established knowledge, which may enhance their impact.[61] (see above) It has been hypothesized that a deeper understanding of the factors behind successful science could "enhance prospects of science as a whole to more effectively address societal problems".[61][19] Two metascientists reported that "structures fostering disruptive scholarship and focusing attention on novel ideas" could be important, as in a growing scientific field citation flows disproportionately consolidate to already well-cited papers, possibly slowing and inhibiting canonical progress.[117][118] A study concluded that to enhance the impact of truly innovative and highly interdisciplinary novel ideas, they should be placed in the context of established knowledge.[61] Other researchers reported that the most successful protégés – in terms of "likelihood of prizewinning, National Academy of Science (NAS) induction, or superstardom" – studied under mentors who published research for which they were conferred a prize after the protégés' mentorship. Studying original topics rather than these mentors' research topics was also positively associated with success.[119][120] Highly productive partnerships are also a topic of research – e.g.
"super-ties" of frequent co-authorship of two individuals who can complement skills, likely also the result of other factors such as mutual trust, conviction, commitment and fun.[121][61] The emergence or origin of ideas by successful scientists is also a topic of research, for example reviewing existing ideas on howMendelmade hisdiscoveries,[122]– or more generally, the process of discovery by scientists. Science is a "multifaceted process of appropriation,copying, extending, or combining ideas andinventions" [and other types of knowledge or information], and not an isolated process.[61]There are also few studies investigating scientists' habits, common modes of thinking, reading habits, use of information sources,digital literacyskills, andworkflows.[123][124][125][126][127] A study theorized that in many disciplines, larger scientific productivity or success byelite universitiescan be explained by their larger pool of available funded laborers.[128][129]The study found that university prestige was only associated with higher productivity for faculty with group members, not for faculty publishing alone or the group members themselves. This is presented as evidence that the outsize productivity of elite researchers is not from a more rigorous selection of talent by top universities, but from labor advantages accrued through greater access to funding and the attraction of prestige to graduate and postdoctoral researchers. Success in science (as indicated in tenure review processes) is often measured in terms of metrics like citations, not in terms of the eventual or potential impact on lives and society,[130]which awards(seeabove)sometimes do. Problems with such metrics are roughly outlined elsewhere in this article and include thatreviewsreplace citations to primary studies.[72]There are also proposals for changes to the academic incentives systems that increase the recognition of societal impact in the research process.[131] A proposed field of "Progress Studies" could investigate how scientists (or funders or evaluators of scientists) should be acting, "figuring out interventions" and studyprogressitself.[132]The field was explicitly proposed in a 2019 essay and described as anapplied sciencethat prescribes action.[133] A study suggests that improving the way science is done could accelerate the rate of scientific discovery and its applications which could be useful for finding urgent solutions to humanity's problems, improve humanity's conditions, and enhance understanding of nature. Metascientific studies can seek to identify aspects of science that need improvement, and develop ways to improve them.[74]If science is accepted as the fundamental engine of economic growth and social progress, this could raise "the question of what we – as a society – can do to accelerate science, and to direct science toward solving society's most important problems."[134]However, one of the authors clarified that a one-size-fits-all approach is not thought to be right answer – for example, in funding, DARPA models, curiosity-driven methods, allowing "a single reviewer to champion a project even if his or her peers do not agree", and various other approaches all have their uses. Nevertheless, evaluation of them can help build knowledge of what works or works best.[104] Meta-research identifying flaws in scientific practice has inspired reforms in science. These reforms seek to address and fix problems in scientific practice which lead to low-quality or inefficient research. 
A 2015 study lists "fragmented" efforts in meta-research.[1] The practice of registering a scientific study before it is conducted is calledpre-registration. It arose as a means to address thereplication crisis. Pregistration requires the submission of a registered report, which is then accepted for publication or rejected by a journal based on theoretical justification, experimental design, and the proposed statistical analysis. Pre-registration of studies serves to preventpublication bias(e.g. not publishing negative results), reducedata dredging, and increase replicability.[135][136] Studies showing poor consistency and quality of reporting have demonstrated the need for reporting standards and guidelines in science, which has led to the rise of organisations that produce such standards, such asCONSORT(Consolidated Standards of Reporting Trials) and theEQUATOR Network. The EQUATOR (Enhancing theQUAlity andTransparencyOf healthResearch)[137]Network is an international initiative aimed at promoting transparent and accurate reporting of health research studies to enhance the value and reliability ofmedical researchliterature.[138]The EQUATOR Network was established with the goals of raising awareness of the importance of good reporting of research, assisting in the development, dissemination and implementation of reporting guidelines for different types of study designs, monitoring the status of the quality of reporting of research studies in the health sciences literature, and conducting research relating to issues that impact the quality of reporting of health research studies.[139]The Network acts as an "umbrella" organisation, bringing together developers of reporting guidelines, medical journal editors and peer reviewers, research funding bodies, and other key stakeholders with a mutual interest in improving the quality of research publications and research itself. Metascience is used in the creation and improvement of technical systems (ICTs) and standards of science evaluation, incentivation, communication, commissioning, funding, regulation, production, management, use and publication. Such can be called "applied metascience"[140][better source needed]and may seek to explore ways to increase quantity, quality and positive impact of research. One example for such is thedevelopment of alternative metrics.[61] Various websites or tools also identify inappropriate studies and/or enable feedback such asPubPeer,Cochrane's Risk of Bias Tool[141]andRetractionWatch. Medical and academic disputes are as ancient as antiquity and a study calls for research into "constructive and obsessive criticism" and into policies to "help strengthen social media into a vibrant forum for discussion, and not merely an arena for gladiator matches".[142]Feedback to studies can be found via altmetrics which is often integrated at the website of the study – most often as an embeddedAltmetricsbadge – but may often be incomplete, such as only showing social media discussions that link to the study directly but not those that link to news reports about the study.(seeabove) Tools may get developed with metaresearch or can be used or investigated by such. 
Notable examples may include: According to a study "a simple way to check how often studies have been repeated, and whether or not the original findings are confirmed" is needed due to reproducibility issues in science.[151][152]A study suggests a tool for screening studies for early warning signs for research fraud.[153] Clinical research in medicine is often of low quality, and many studies cannot be replicated.[154][155]An estimated 85% of research funding is wasted.[156]Additionally, the presence of bias affects research quality.[157]Thepharmaceutical industryexerts substantial influence on the design and execution of medical research. Conflicts of interest are common among authors of medical literature[158]and among editors of medical journals. While almost all medical journals require their authors to disclose conflicts of interest, editors are not required to do so.[159]Financialconflicts of interesthave been linked to higher rates of positive study results. In antidepressant trials, pharmaceutical sponsorship is the best predictor of trial outcome.[160] Blindingis another focus of meta-research, as error caused by poor blinding is a source ofexperimental bias. Blinding is not well reported in medical literature, and widespread misunderstanding of the subject has resulted in poor implementation of blinding inclinical trials.[161]Furthermore,failure of blindingis rarely measured or reported.[162]Research showing the failure of blinding inantidepressanttrials has led some scientists to argue that antidepressants are no better thanplacebo.[163][164]In light of meta-research showing failures of blinding,CONSORTstandards recommend that all clinical trials assess and report the quality of blinding.[165] Studies have shown that systematic reviews of existing research evidence are sub-optimally used in planning a new research or summarizing the results.[166]Cumulative meta-analyses of studies evaluating the effectiveness of medical interventions have shown that many clinical trials could have been avoided if a systematic review of existing evidence was done prior to conducting a new trial.[167][168][169]For example, Lau et al.[167]analyzed 33 clinical trials (involving 36974 patients) evaluating the effectiveness of intravenousstreptokinaseforacute myocardial infarction. Their cumulative meta-analysis demonstrated that 25 of 33 trials could have been avoided if a systematic review was conducted prior to conducting a new trial. In other words, randomizing 34542 patients was potentially unnecessary. One study[170]analyzed 1523 clinical trials included in 227meta-analysesand concluded that "less than one quarter of relevant prior studies" were cited. They also confirmed earlier findings that most clinical trial reports do not present systematic review to justify the research or summarize the results.[170] Many treatments used in modern medicine have been proven to be ineffective, or even harmful. A 2007 study by John Ioannidis found that it took an average of ten years for the medical community to stop referencing popular practices after their efficacy was unequivocally disproven.[171][172] Metascience has revealed significant problems in psychological research. 
The field suffers from high bias, lowreproducibility, and widespreadmisuse of statistics.[173][174][175]The replication crisis affectspsychologymore strongly than any other field; as many as two-thirds of highly publicized findings may be impossible to replicate.[176]Meta-research finds that 80-95% of psychological studies support their initial hypotheses, which strongly implies the existence ofpublication bias.[177] The replication crisis has led to renewed efforts to re-test important findings.[178][179]In response to concerns aboutpublication biasandp-hacking, more than 140 psychology journals have adoptedresult-blind peer review, in which studies arepre-registeredand published without regard for their outcome.[180]An analysis of these reforms estimated that 61 percent of result-blind studies producenull results, in contrast with 5 to 20 percent in earlier research. This analysis shows that result-blind peer review substantially reduces publication bias.[177] Psychologists routinely confusestatistical significancewith practical importance, enthusiastically reporting great certainty in unimportant facts.[181]Some psychologists have responded with an increased use ofeffect sizestatistics, rather than sole reliance on thepvalues.[citation needed] Richard Feynmannoted that estimates ofphysical constantswere closer to published values than would be expected by chance. This was believed to be the result ofconfirmation bias: results that agreed with existing literature were more likely to be believed, and therefore published. Physicists now implement blinding to prevent this kind of bias.[182] Web measurement studies are essential for understanding the workings of the modern Web, particularly in the fields of security and privacy. However, these studies often require custom-built or modified crawling setups, leading to a plethora of analysis tools for similar tasks. In a paper by Nurullah Demir et al., the authors surveyed 117 recent research papers to derive best practices for Web-based measurement studies and establish criteria for reproducibility and replicability. They found that experimental setups and other critical information for reproducing and replicating results are often missing. In a large-scale Web measurement study on 4.5 million pages with 24 different measurement setups, the authors demonstrated the impact of slight differences in experimental setups on the overall results, emphasizing the need for accurate and comprehensive documentation.[183] There are several organizations and universities across the globe which work on meta-research – these include the Meta-Research Innovation Center at Berlin,[184]theMeta-Research Innovation Center at Stanford,[185][186]theMeta-Research Center at Tilburg University, the Meta-research & Evidence Synthesis Unit, The George Institute for Global Health at India andCenter for Open Science. Organizations that develop tools for metascience includeOurResearch,Center for Scientific Integrityandaltmetrics companies. There is an annual Metascience Conference hosted by the Association for Interdisciplinary Meta-Research and Open Science (AIMOS) and biannual conference hosted by the Centre for Open Science.[187][188]
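The cumulative meta-analyses discussed earlier, Lau et al.'s streptokinase analysis in particular, rest on a simple pooling rule: combine trial estimates in chronological order with inverse-variance weights and watch when the pooled confidence interval first excludes "no effect". A minimal sketch follows; the trial effect sizes and standard errors are invented for illustration:

```python
# Sketch of a cumulative fixed-effect meta-analysis: pool trials chronologically
# with inverse-variance weights and watch when the pooled 95% confidence
# interval first excludes "no effect" (log odds ratio 0).
# The trial effects and standard errors below are invented for illustration.
import math

trials = [  # (year, log_odds_ratio, standard_error) - hypothetical numbers
    (1970, -0.40, 0.45), (1973, -0.10, 0.35), (1977, -0.35, 0.30),
    (1980, -0.25, 0.20), (1983, -0.30, 0.15), (1986, -0.22, 0.10),
]

sum_w = sum_wy = 0.0
for year, effect, se in trials:
    w = 1.0 / se ** 2                   # inverse-variance weight
    sum_w += w
    sum_wy += w * effect
    pooled = sum_wy / sum_w
    half_ci = 1.96 / math.sqrt(sum_w)   # half-width of the pooled 95% CI
    settled = abs(pooled) > half_ci
    print(f"{year}: pooled log-OR {pooled:+.2f} +/- {half_ci:.2f}"
          + ("   <- CI already excludes 0" if settled else ""))
# With these numbers the interval excludes 0 from 1983 on, i.e. the final trial
# adds little to what a timely synthesis would already have shown.
```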
https://en.wikipedia.org/wiki/Metascience
Statistics, when used in a misleading fashion, can trick the casual observer into believing something other than what thedatashows. That is, amisuse of statisticsoccurs when a statistical argument asserts afalsehood. In some cases, the misuse may be accidental. In others, it is purposeful and for the gain of the perpetrator. When the statistical reason involved is false or misapplied, this constitutes astatisticalfallacy. The consequences of such misinterpretations can be quite severe. For example, in medical science, correcting a falsehood may take decades and cost lives. Misuses can be easy to fall into. Professional scientists, mathematicians and even professional statisticians, can be fooled by even some simple methods, even if they are careful to check everything. Scientists have been known to fool themselves with statistics due to lack of knowledge ofprobability theoryand lack ofstandardizationof theirtests. One usable definition is: "Misuse of Statistics: Using numbers in such a manner that – either by intent or through ignorance or carelessness – the conclusions are unjustified or incorrect."[1]The "numbers" includemisleading graphicsdiscussed in other sources. The term is not commonly encountered in statistics texts and there is no single authoritative definition. It is a generalization oflying with statisticswhich was richly described by examples from statisticians 60 years ago. The definition confronts some problems (some are addressed by the source):[2] How to Lie with Statisticsacknowledges that statistics canlegitimatelytake many forms. Whether the statistics show that a product is "light and economical" or "flimsy and cheap" can be debated whatever the numbers. Some object to the substitution of statistical correctness for moral leadership (for example) as an objective. Assigning blame for misuses is often difficult because scientists, pollsters, statisticians and reporters are often employees or consultants. An insidious misuse of statistics is completed by the listener, observer, audience, or juror. The supplier provides the "statistics" as numbers or graphics (or before/after photographs), allowing the consumer to draw conclusions that may be unjustified or incorrect. The poor state of publicstatistical literacyand the non-statistical nature of human intuition make it possible to mislead without explicitly producing faulty conclusion. The definition is weak on the responsibility of the consumer of statistics. A historian listed over 100 fallacies in a dozen categories including those of generalization and those of causation.[3]A few of the fallacies are explicitly or potentially statistical including sampling, statistical nonsense, statistical probability, false extrapolation, false interpolation and insidious generalization. All of the technical/mathematical problems of applied probability would fit in the single listed fallacy of statistical probability. Many of the fallacies could be coupled to statistical analysis, allowing the possibility of a false conclusion flowing from a statistically sound analysis. An example use of statistics is in the analysis of medical research. The process includes[4][5]experimental planning, the conduct of the experiment, data analysis, drawing the logical conclusions and presentation/reporting. The report is summarized by the popular press and by advertisers. Misuses of statistics can result from problems at any step in the process. 
The statistical standards ideally imposed on the scientific report are much different than those imposed on the popular press and advertisers; however, cases exist of advertising disguised as science, such asAustralasian Journal of Bone & Joint Medicine. The definition of the misuse of statistics is weak on the required completeness of statistical reporting. The opinion is expressed that newspapers must provide at least the source for the statistics reported. Many misuses of statistics occur because To promote a neutral (useless) product, a company must find or conduct, for example, 40 studies with a confidence level of 95%. If the product is useless, this would produce one study showing the product was beneficial, one study showing it was harmful, and thirty-eight inconclusive studies (38 is 95% of 40). This tactic becomes more effective when there are more studies available. Organizations that do not publish every study they carry out, such as tobacco companies denying a link between smoking and cancer, anti-smoking advocacy groups and media outlets trying to prove a link between smoking and various ailments, or miracle pill vendors, are likely to use this tactic. Ronald Fisherconsidered this issue in his famouslady tasting teaexample experiment (from his 1935 book,The Design of Experiments). Regarding repeated experiments, he said, "It would be illegitimate and would rob our calculation of its basis if unsuccessful results were not all brought into the account." Another term related to this concept ischerry picking. Multivariable datasets have two or morefeatures/dimensions. If too few of these features are chosen for analysis (for example, if just one feature is chosen andsimple linear regressionis performed instead ofmultiple linear regression), the results can be misleading. This leaves the analyst vulnerable to any of variousstatistical paradoxes, or in some (not all) cases false causality as below. The answers to surveys can often be manipulated by wording the question in such a way as to induce a prevalence towards a certain answer from the respondent. For example, in polling support for a war, the questions: will likely result in data skewed in different directions, although they are both polling about the support for the war. A better way of wording the question could be "Do you support the current US military action abroad?" A still more nearly neutral way to put that question is "What is your view about the current US military action abroad?" The point should be that the person being asked has no way of guessing from the wording what the questioner might want to hear. Another way to do this is to precede the question by information that supports the "desired" answer. For example, more people will likely answer "yes" to the question "Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?" than to the question "Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?" The proper formulation of questions can be very subtle, but nonetheless can yield significant differences in results. 
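The 40-studies arithmetic above is easy to reproduce by simulation. The sketch below tests a product with no real effect in 40 independent studies at the 5% level; the sample size and random seed are arbitrary choices:

```python
# Simulation of the selective-reporting arithmetic above: a useless product is
# tested in 40 independent studies, each a two-sided comparison at the 5% level.
# On average roughly one study looks "beneficial", one looks "harmful", and the
# other ~38 are inconclusive - and only the flattering one need be publicised.
import random
import statistics

random.seed(1)

def one_study(n=50):
    """Treatment vs control when the true effect is exactly zero."""
    treat = [random.gauss(0, 1) for _ in range(n)]
    ctrl = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(treat) - statistics.mean(ctrl)
    se = (statistics.variance(treat) / n + statistics.variance(ctrl) / n) ** 0.5
    z = diff / se
    return "beneficial" if z > 1.96 else "harmful" if z < -1.96 else "inconclusive"

results = [one_study() for _ in range(40)]
print({label: results.count(label) for label in ("beneficial", "harmful", "inconclusive")})
```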
Additionally, the responses to two questions can vary dramatically depending on the order in which they are asked.[15] "A survey that asked about 'ownership of stock' found that most Texas ranchers owned stock, though probably not the kind traded on the New York Stock Exchange."[16] Overgeneralization is a fallacy occurring when a statistic about a particular population is asserted to hold among members of a group for which the original population is not a representative sample. For example, suppose 100% of apples are observed to be red in summer. The assertion "All apples are red" would be an instance of overgeneralization because the original statistic was true only of a specific subset of apples (those in summer), which is not expected to be representative of the population of apples as a whole. A real-world example of the overgeneralization fallacy can be observed as an artifact of modern polling techniques, which prohibit calling cell phones for over-the-phone political polls. As young people are more likely than other demographic groups to lack a conventional "landline" phone, a telephone poll that exclusively surveys respondents reached on landline phones may cause the poll results to undersample the views of young people, if no other measures are taken to account for this skewing of the sampling. Thus, a poll examining the voting preferences of young people using this technique may not be an accurate representation of young people's true voting preferences as a whole; treating it as such would be overgeneralizing, because the sample used excludes young people who carry only cell phones, who may or may not have voting preferences that differ from the rest of the population. Overgeneralization often occurs when information is passed through nontechnical sources, in particular mass media. Scientists have learned at great cost that gathering good experimental data for statistical analysis is difficult. Example: the placebo effect (mind over body) is very powerful. 100% of subjects developed a rash when exposed to an inert substance that was falsely called poison ivy, while few developed a rash to a "harmless" object that really was poison ivy.[17] Researchers combat this effect by double-blind randomized comparative experiments. Statisticians typically worry more about the validity of the data than the analysis. This is reflected in a field of study within statistics known as the design of experiments. Pollsters have learned at great cost that gathering good survey data for statistical analysis is difficult. The selective effect of cellular telephones on data collection (discussed in the Overgeneralization section) is one potential example; if young people with traditional telephones are not representative, the sample can be biased. Sample surveys have many pitfalls and require great care in execution.[18] One effort required almost 3,000 telephone calls to get 1,000 answers. The simple random sample of the population "isn't simple and may not be random."[19] If a research team wants to know how 300 million people feel about a certain topic, it would be impractical to ask all of them. However, if the team picks a random sample of about 1,000 people, they can be fairly certain that the results given by this group are representative of what the larger group would have said if they had all been asked. This confidence can actually be quantified by the central limit theorem and other mathematical results.
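That quantification can be made concrete. Under the usual simple-random-sampling assumption with a worst-case 50/50 split, the margin of error is z·sqrt(0.25/n); the short sketch below reproduces the kind of "plus or minus" figures discussed in the next paragraphs, including the larger margin that applies to a 100-person subgroup:

```python
# Margin of error for a simple random sample with a worst-case 50/50 split:
#   margin = z * sqrt(0.25 / n), with z the standard normal quantile.
import math

def margin_of_error(n, z):
    return z * math.sqrt(0.25 / n)

Z95, Z99 = 1.96, 2.576          # ~95% and ~99% two-sided quantiles
for n in (1000, 100):
    m95, m99 = margin_of_error(n, Z95), margin_of_error(n, Z99)
    print(f"n={n:4d}: +/-{m95:.1%} at 95% confidence, +/-{m99:.1%} at 99%")
# n=1000 gives about +/-3.1% and +/-4.1%; n=100 (a typical subgroup) about
# +/-9.8% and +/-12.9%. The 99%/95% ratio is 2.576/1.96, roughly 1.31-1.32,
# the rule of thumb quoted in the next paragraph.
```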
Confidence is expressed as a probability of the true result (for the larger group) being within a certain range of the estimate (the figure for the smaller group). This is the "plus or minus" figure often quoted for statistical surveys. The probability part of the confidence level is usually not mentioned; if so, it is assumed to be a standard number like 95%. The two numbers are related. If a survey has an estimated error of ±5% at 95% confidence, it also has an estimated error of ±6.6% at 99% confidence. ±x% at 95% confidence is always ±1.32x% at 99% confidence for a normally distributed population. The smaller the estimated error, the larger the required sample, at a given confidence level; for example, at 95.4% confidence (about two standard errors), the required sample size grows roughly as 1/(margin of error)², so a ±1% margin calls for about 10,000 respondents while a ±10% margin needs only about 100. People may assume, because the confidence figure is omitted, that there is a 100% certainty that the true result is within the estimated error. This is not mathematically correct. Many people may not realize that the randomness of the sample is very important. In practice, many opinion polls are conducted by phone, which distorts the sample in several ways, including exclusion of people who do not have phones, favoring the inclusion of people who have more than one phone, favoring the inclusion of people who are willing to participate in a phone survey over those who refuse, etc. Non-random sampling makes the estimated error unreliable. On the other hand, people may consider that statistics are inherently unreliable because not everybody is called, or because they themselves are never polled. People may think that it is impossible to get data on the opinion of dozens of millions of people by just polling a few thousand. This is also inaccurate.[a] A poll with perfect unbiased sampling and truthful answers has a mathematically determined margin of error, which only depends on the number of people polled. However, often only one margin of error is reported for a survey. When results are reported for population subgroups, a larger margin of error will apply, but this may not be made clear. For example, a survey of 1,000 people may contain 100 people from a certain ethnic or economic group. The results focusing on that group will be much less reliable than results for the full population. If the margin of error for the full sample was 4%, say, then the margin of error for such a subgroup could be around 13%. There are also many other measurement problems in population surveys. The problems mentioned above apply to all statistical experiments, not just population surveys. When a statistical test shows a correlation between A and B, there are usually six possibilities. The sixth possibility – that the correlation is pure coincidence – can be quantified by statistical tests that can calculate the probability that the correlation observed would be as large as it is just by chance if, in fact, there is no relationship between the variables. However, even if that possibility has a small probability, there are still the five others. If the number of people buying ice cream at the beach is statistically related to the number of people who drown at the beach, then nobody would claim ice cream causes drowning, because it is obvious that it isn't so. (In this case, both drowning and ice cream buying are clearly related by a third factor: the number of people at the beach.) This fallacy can be used, for example, to prove that exposure to a chemical causes cancer.
Replace "number of people buying ice cream" with "number of people exposed to chemical X", and "number of people who drown" with "number of people who get cancer", and many people will believe you. In such a situation, there may be a statistical correlation even if there is no real effect. For example, if there is a perception that a chemical site is "dangerous" (even if it really isn't) property values in the area will decrease, which will entice more low-income families to move to that area. If low-income families are more likely to get cancer than high-income families (due to a poorer diet, for example, or less access to medical care) then rates of cancer will go up, even though the chemical itself is not dangerous. It is believed[22]that this is exactly what happened with some of the early studies showing a link between EMF (electromagnetic fields) from power lines andcancer.[23] In well-designed studies, the effect of false causality can be eliminated by assigning some people into a "treatment group" and some people into a "control group" at random, and giving the treatment group the treatment and not giving the control group the treatment. In the above example, a researcher might expose one group of people to chemical X and leave a second group unexposed. If the first group had higher cancer rates, the researcher knows that there is no third factor that affected whether a person was exposed because he controlled who was exposed or not, and he assigned people to the exposed and non-exposed groups at random. However, in many applications, actually doing an experiment in this way is either prohibitively expensive, infeasible, unethical, illegal, or downright impossible. For example, it is highly unlikely that anIRBwould accept an experiment that involved intentionally exposing people to a dangerous substance in order to test its toxicity. The obvious ethical implications of such types of experiments limit researchers' ability to empirically test causation. In a statistical test, thenull hypothesis(H0{\displaystyle H_{0}}) is considered valid until enough data proves it wrong. ThenH0{\displaystyle H_{0}}is rejected and the alternative hypothesis (HA{\displaystyle H_{A}}) is considered to be proven as correct. By chance this can happen, althoughH0{\displaystyle H_{0}}is true, with a probability denotedα{\displaystyle \alpha }(the significance level). This can be compared to the judicial process, where the accused is considered innocent (H0{\displaystyle H_{0}}) until proven guilty (HA{\displaystyle H_{A}}) beyond reasonable doubt (α{\displaystyle \alpha }). But if data does not give us enough proof to reject thatH0{\displaystyle H_{0}}, this does not automatically prove thatH0{\displaystyle H_{0}}is correct. If, for example, a tobacco producer wishes to demonstrate that its products are safe, it can easily conduct a test with a small sample of smokers versus a small sample of non-smokers. It is unlikely that any of them will develop lung cancer (and even if they do, the difference between the groups has to be very big in order to rejectH0{\displaystyle H_{0}}). Therefore, it is likely—even when smoking is dangerous—that our test will not rejectH0{\displaystyle H_{0}}. IfH0{\displaystyle H_{0}}is accepted, it does not automatically follow that smoking is proven harmless. The test has insufficient power to rejectH0{\displaystyle H_{0}}, so the test is useless and the value of the "proof" ofH0{\displaystyle H_{0}}is also null. 
This can—using the judicial analogue above—be compared with the truly guilty defendant who is released just because the proof is not enough for a guilty verdict. This does not prove the defendant's innocence, but only that there is not proof enough for a guilty verdict. "...the null hypothesis is never proved or established, but it is possibly disproved, in the course of experimentation. Every experiment may be said to exist only in order to give the facts a chance of disproving the null hypothesis." (Fisher inThe Design of Experiments) Many reasons for confusion exist including the use of double negative logic and terminology resulting from the merger of Fisher's "significance testing" (where the null hypothesis is never accepted) with "hypothesis testing" (where some hypothesis is always accepted). Statistical significance is a measure of probability; practical significance is a measure of effect.[24]A baldness cure is statistically significant if a sparse peach-fuzz usually covers the previously naked scalp. The cure is practically significant when a hat is no longer required in cold weather and the barber asks how much to take off the top. The bald want a cure that is both statistically and practically significant; It will probably work and if it does, it will have a big hairy effect. Scientific publication often requires only statistical significance. This has led to complaints (for the last 50 years) that statistical significance testing is a misuse of statistics.[25] Data dredgingis an abuse ofdata mining. In data dredging, large compilations of data are examined in order to find a correlation, without any pre-defined choice of ahypothesisto be tested. Since the requiredconfidence intervalto establish a relationship between two parameters is usually chosen to be 95% (meaning that there is a 95% chance that the relationship observed is not due to random chance), there is thus a 5% chance of finding a correlation between any two sets of completely random variables. Given that data dredging efforts typically examine large datasets with many variables, and hence even larger numbers of pairs of variables, spurious but apparently statistically significant results are almost certain to be found by any such study. Note that data dredging is a valid way offindinga possible hypothesis but that hypothesismustthen be tested with data not used in the original dredging. The misuse comes in when that hypothesis is stated as fact without further validation. "You cannot legitimately test a hypothesis on the same data that first suggested that hypothesis. The remedy is clear. Once you have a hypothesis, design a study to search specifically for the effect you now think is there. If the result of this test is statistically significant, you have real evidence at last."[26] Informally called "fudging the data," this practice includes selective reporting (see alsopublication bias) and even simply making up false data. Examples of selective reporting abound. The easiest and most common examples involve choosing a group of results that follow a patternconsistentwith the preferredhypothesiswhile ignoring other results or "data runs" that contradict the hypothesis. Scientists, in general, question the validity of study results that cannot be reproduced by other investigators. However, some scientists refuse to publish their data and methods.[27] Data manipulation is a serious issue/consideration in the most honest of statistical analyses. 
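Returning to the tobacco example above, the claim that such a test "has insufficient power" can be checked by simulation. In the sketch below the exposure genuinely triples a rare risk, yet the test almost never reaches significance; the baseline risks, group size and seed are invented for illustration:

```python
# The tobacco example above, simulated: the exposure really does triple a rare
# risk, but with only 100 people per group the test almost never rejects H0.
# Failing to reject therefore says nothing about safety.
import random

random.seed(2)
P_CONTROL, P_EXPOSED, N, RUNS = 0.005, 0.015, 100, 10_000

def rejects_h0():
    cases_c = sum(random.random() < P_CONTROL for _ in range(N))
    cases_e = sum(random.random() < P_EXPOSED for _ in range(N))
    # Two-proportion z-test at the 5% level (normal approximation).
    p_pool = (cases_c + cases_e) / (2 * N)
    se = (2 * p_pool * (1 - p_pool) / N) ** 0.5
    return se > 0 and abs(cases_e - cases_c) / N / se > 1.96

power = sum(rejects_h0() for _ in range(RUNS)) / RUNS
print(f"Chance of detecting the (real) harm: {power:.1%}")  # only a few percent
```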
Outliers, missing data and non-normality can all adversely affect the validity of statistical analysis. It is appropriate to study the data and repair real problems before analysis begins. "[I]n any scatter diagram there will be some points more or less detached from the main part of the cloud: these points should be rejected only for cause."[28] Pseudoreplicationis a technical error associated withanalysis of variance. Complexity hides the fact that statistical analysis is being attempted on a single sample (N=1). For this degenerate case the variance cannot be calculated (division by zero). An (N=1) will always give the researcher the highest statistical correlation between intent bias and actual findings. Thegambler's fallacyassumes that an event for which a future likelihood can be measured had the same likelihood of happening once it has already occurred. Thus, if someone had already tossed 9 coins and each has come up heads, people tend to assume that the likelihood of a tenth toss also being heads is 1023 to 1 against (which it was before the first coin was tossed) when in fact the chance of the tenth head is 50% (assuming the coin is unbiased). Theprosecutor's fallacy[29]assumes that the probability of an apparently criminal event being random chance is equal to the chance that the suspect is innocent. A prominent example in the UK is the wrongful conviction ofSally Clarkfor killing her two sons who appeared to have died ofSudden Infant Death Syndrome(SIDS). In his expert testimony, now discredited Professor SirRoy Meadowclaimed that due to the rarity of SIDS, the probability of Clark being innocent was 1 in 73 million. This was later questioned by theRoyal Statistical Society;[30]assuming Meadows figure was accurate, one has to weigh up all the possible explanations against each other to make a conclusion on which most likely caused the unexplained death of the two children. Available data suggest that the odds would be in favour of double SIDS compared to double homicide by a factor of nine.[31]The 1 in 73 million figure was also misleading as it was reached by finding the probability of a baby from an affluent, non-smoking family dying from SIDS andsquaringit: this erroneously treats each death asstatistically independent, assuming that there is no factor, such as genetics, that would make it more likely for two siblings to die from SIDS.[32][33]This is also an example of theecological fallacyas it assumes the probability of SIDS in Clark's family was the same as the average of all affluent, non-smoking families; social class is a highly complex and multifaceted concept, with numerous other variables such as education, line of work, and many more. Assuming that an individual will have the same attributes as the rest of a given group fails to account for the effects of other variables which in turn can be misleading.[33]The conviction ofSally Clarkwas eventually overturned and Meadow was struck from the medical register.[34] Theludic fallacy. Probabilities are based on simple models that ignore real (if remote) possibilities. Poker players do not consider that an opponent may draw a gun rather than a card. The insured (and governments) assume that insurers will remain solvent, but seeAIGandsystemic risk. 
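The Sally Clark figures above make the prosecutor's fallacy easy to state numerically. The sketch below squares a per-baby risk of about 1 in 8,543 (inferred as the value whose square gives the quoted "1 in 73 million", so treat the exact figure as an inference from the text) and then turns to the comparison that actually matters once two deaths have occurred, using the factor-of-nine estimate quoted above:

```python
# The Sally Clark arithmetic made explicit. Squaring a per-baby risk assumes the
# two deaths are independent; and even then the tiny number answers the wrong
# question - what matters is how the competing explanations compare once two
# unexplained deaths have actually occurred. The 1-in-8,543 figure is inferred
# from the quoted "1 in 73 million"; the factor of nine is the double-SIDS vs
# double-homicide estimate quoted in the text.
p_single = 1 / 8_543
print(f"Squared, assuming independence: 1 in {1 / p_single ** 2:,.0f}")  # ~73 million

odds_sids_vs_homicide = 9      # double SIDS about 9x more likely than double homicide
share_sids = odds_sids_vs_homicide / (odds_sids_vs_homicide + 1)
print(f"Among those two explanations, double SIDS accounts for ~{share_sids:.0%}")
```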
Other misuses include comparingapples and oranges, using the wrong average,[35]regression toward the mean,[36]and the umbrella phrasegarbage in, garbage out.[37]Some statistics are simply irrelevant to an issue.[38] Certain advertising phrasing such as "[m]ore than 99 in 100," may be misinterpreted as 100%.[39] Anscombe's quartetis a made-up dataset that exemplifies the shortcomings of simpledescriptive statistics(and the value ofdata plottingbefore numerical analysis).
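Anscombe's quartet itself is small enough to reproduce. The sketch below uses the dataset as it is usually tabulated (treat the transcription as an assumption) and prints the summary statistics that nearly coincide across all four sets, even though plots of the four look nothing alike; it needs Python 3.10+ for statistics.correlation and statistics.linear_regression:

```python
# Anscombe's quartet, as usually tabulated (transcription assumed). The printed
# summaries nearly coincide across all four sets even though their scatter
# plots look completely different - the case for plotting before summarizing.
import statistics

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
ys = [
    [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68],
    [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74],
    [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73],
    [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89],
]
for i, y in enumerate(ys, start=1):
    x = x4 if i == 4 else x123
    fit = statistics.linear_regression(x, y)   # least-squares slope and intercept
    print(f"set {i}: mean_y={statistics.mean(y):.2f}  var_y={statistics.variance(y):.2f}  "
          f"r={statistics.correlation(x, y):.2f}  y = {fit.intercept:.2f} + {fit.slope:.3f}x")
```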
https://en.wikipedia.org/wiki/Misuse_of_statistics
Pareidolia(/ˌpærɪˈdoʊliə,ˌpɛər-/;[1]alsoUS:/ˌpɛəraɪ-/)[2]is the tendency forperceptionto impose a meaningful interpretation on a nebulousstimulus, usually visual, so that one detects an object, pattern, or meaning where there is none. Pareidolia is a specific but common type ofapophenia(the tendency to perceive meaningful connections between unrelated things or ideas). Common examples includeperceived imagesof animals, faces, or objects in cloud formations; seeing faces in inanimate objects; orlunar pareidolialike theMan in the Moonor theMoon rabbit. The concept of pareidolia may extend to includehidden messagesin recorded music played in reverse or at higher- or lower-than-normal speeds, and hearing voices (mainly indistinct) or music in random noise, such as that produced by air conditioners or by fans.[3][4]Face pareidolia has also been demonstrated inrhesus macaques.[5] The word derives from the Greek wordspará(παρά, "beside, alongside, instead [of]") and the nouneídōlon(εἴδωλον, "image, form, shape").[6] The German wordPareidoliewas used in articles byKarl Ludwig Kahlbaum—for example in his 1866 paper "Die Sinnesdelierien"[7]("On Delusion of the Senses"). When Kahlbaum's paper was reviewed the following year (1867) inThe Journal of Mental Science, Volume 13,Pareidoliewas translated into English as "pareidolia", and noted to be synonymous with the terms "...changing hallucination, partial hallucination, [and] perception of secondary images."[8] Pareidolia correlates with age and is frequent among patients withParkinson's diseaseanddementia with Lewy bodies.[9] Pareidolia can cause people to interpret random images, or patterns of light and shadow, as faces.[10]A 2009magnetoencephalographystudy found that objects perceived as faces evoke an early (165ms) activation of thefusiform face areaat a time and location similar to that evoked by faces, whereas other common objects do not evoke such activation. This activation is similar to a slightly faster time (130 ms) that is seen for images of real faces. The authors suggest that face perception evoked by face-like objects is a relatively early process, and not a late cognitive reinterpretation phenomenon.[11] Afunctional magnetic resonance imaging(fMRI) study in 2011 similarly showed that repeated presentation of novel visual shapes that were interpreted as meaningful led to decreased fMRI responses for real objects. These results indicate that the interpretation of ambiguous stimuli depends upon processes similar to those elicited by known objects.[12] Pareidolia was found to affect brain function and brain waves. In a 2022 study, EEG records show that responses in the frontal and occipitotemporal cortexes begin prior to when one recognizes faces and later, when they are not recognized.[13]By displaying these proactive brain waves, scientists can then have a basis for data rather than relying on self-reported sightings.[clarification needed] These studies help to explain why people generally identify a few lines and a circle as a "face" so quickly and without hesitation.Cognitive processesare activated by the "face-like" object which alerts the observer to both the emotional state andidentityof the subject, even before the conscious mind begins to process or even receive the information. A "stick figure face", despite its simplicity, can convey mood information, and be drawn to indicate emotions such as happiness or anger. 
This robust and subtle capability is hypothesized to be the result ofnatural selectionfavoring people most able to quickly identify the mental state, for example, of threatening people, thus providing the individual an opportunity to flee or attack preemptively.[14]This ability, though highly specialized for the processing andrecognition ofhumanemotions, also functions to determine the demeanor of wildlife.[15][self-published source?] A mimetolithic pattern is a pattern created on rocks that may come to mimic recognizable forms through the random processes of formation,weatheringanderosion. A well-known example is theFace on Mars, a rock formation on Mars that resembled a human face in certain satellite photos. Most mimetoliths are much larger than the subjects they resemble, such as a cliff profile that looks like a human face. Picture jaspersexhibit combinations of patterns, such as banding from flow or depositional patterns (from water or wind), or dendritic or color variations, resulting in what appear to be miniature scenes on a cut section, which is then used for jewelry. Chertnodules,concretions, or pebbles may in certain cases be mistakenly identified as skeletal remains, egg fossils, or other antiquities of organic origin by amateur enthusiasts. In the late 1970s and early 1980s, Japanese researcherChonosuke Okamuraself-published a series of reports titledOriginal Report of the Okamura Fossil Laboratory, in which he described tiny inclusions in polishedlimestonefrom theSilurianperiod (425mya) as being preservedfossilremains of tiny humans, gorillas, dogs, dragons, dinosaurs and other organisms, all of them only millimeters long, leading him to claim, "There have been no changes in the bodies of mankind since the Silurian period... except for a growth in stature from 3.5 mm to 1,700 mm."[16][17]Okamura's research earned him anIg Nobel Prize(a parody of the Nobel Prize) inbiodiversityin 1996.[18][19] Some sources describe various mimetolithic features onPluto, including aheart-shaped region.[20][21][22] Seeing shapes in cloud patterns is another example of this phenomenon. Rogowitz and Voss (1990) showed a relationship between seeing shapes in cloud patterns andfractaldimension. They varied the fractal dimension of the boundary contour from 1.2 to 1.8, and found that the lower the fractal dimension, the more likely people were to report seeing nameable shapes of animals, faces, and fantasy creatures.[23]From above, pareidolia may be perceived in satellite imagery of tropical cyclones. Notably hurricanesMatthewandMiltongained much attention for resembling a human face or skull when viewed from the side.[24] A notable example of pareidolia occurred in 1877, when observers using telescopes to view the surface of Mars thought that they saw faint straight lines, which were then interpreted by some as canals. It was theorized that the canals were possibly created by sentient beings. This created a sensation. 
In the next few years better photographic techniques and stronger telescopes were developed and applied, which resulted in new images in which the faint lines disappeared, and the canal theory was debunked as an example of pareidolia.[25][26] Many cultures recognize pareidolic images in the disc of thefull moon, including the human face known as theMan in the Moonin manyNorthern Hemispherecultures[27][28]and theMoon rabbitin East Asian and indigenous American cultures.[29][30]Other cultures see a walking figure carrying a wide burden on their back,[31]including inGermanic tradition,[32]Haida mythology,[33]andLatvian mythology.[34] TheRorschach inkblot testuses pareidolia in an attempt to gain insight into a person's mental state. The Rorschach is aprojective testthat elicits thoughts or feelings of respondents that are "projected" onto the ambiguous inkblot images.[35]Rorschach inkblots have low-fractal-dimension boundary contours, which may elicit general shape-naming behaviors, serving as vehicles for projected meanings.[23] Owing to the way designs areengravedand printed, occurrences of pareidolia have occasionally been reported in banknotes. One example is the 1954Canadian LandscapeCanadian dollarbanknote series, known among collectors as the "Devil's Head" variety of the initial print runs. The obverse of the notes features what appears to be an exaggerated grinning face, formed from patterns in the hair ofQueen Elizabeth II. The phenomenon generated enough attention for revised designs to be issued in 1956, which removed the effect.[36] Renaissance authors have shown a particular interest in pareidolia. InWilliam Shakespeare's playHamlet, for example,Prince Hamletpoints at the sky and "demonstrates" his supposed madness in this exchange withPolonius:[37][38] HAMLETDo you see yonder cloud that's almost in the shape of a camel?POLONIUSBy th'Mass and 'tis, like a camel indeed.HAMLETMethinks it is a weasel.POLONIUSIt is backed like a weasel.HAMLETOr a whale.POLONIUSVery like a whale. Nathaniel Hawthornewrote a short story called "The Great Stone Face" in which a face seen in the side of a mountain (based on the real-lifeThe Old Man of the Mountain) is revered by a village.[39] Renaissance artistsoften used pareidolia in paintings and drawings:Andrea Mantegna,Leonardo da Vinci,Giotto,Hans Holbein,Giuseppe Arcimboldo, and many more have shown images—often human faces—that due to pareidolia appear in objects or clouds.[40] In his notebooks,Leonardo da Vinciwrote of pareidolia as a device for painters, writing: If you look at any walls spotted with various stains or with a mixture of different kinds of stones, if you are about to invent some scene you will be able to see in it a resemblance to various different landscapes adorned with mountains, rivers, rocks, trees, plains, wide valleys, and various groups of hills. You will also be able to see divers combats and figures in quick movement, and strange expressions of faces, and outlandish costumes, and an infinite number of things which you can then reduce into separate and well conceived forms.[41] Salem, a 1908 painting bySydney Curnow Vosper, gained notoriety due to a rumour that it contained a hidden face, that of the devil. This led many commentators to visualize a demonic face depicted in the shawl of the main figure, despite the artist's denial that any faces had deliberately been painted into the shawl.[42][43] Surrealistartists such asSalvador Dalíwould intentionally use pareidolia in their works, often in the form of ahidden face. 
Two 13th-century edifices in Turkey display architectural use of shadows of stone carvings at the entrance. Outright pictures are avoided in Islam but tessellations and calligraphic pictures were allowed, so designed "accidental" silhouettes of carved stone tessellations became a creative escape. There have been many instances of perceptions of religious imagery and themes, especially the faces of religious figures, in ordinary phenomena. Many involve images ofJesus,[35]theVirgin Mary,[48]the wordAllah,[49]or other religious phenomena: in September 2007 inSingapore, for example, acalluson a tree resembled amonkey, leading believers to pay homage to the "Monkey god" (eitherSun WukongorHanuman) in the monkey tree phenomenon.[50] Publicity surrounding sightings of religious figures and other surprising images in ordinary objects has spawned a market for such items on online auctions likeeBay. One famous instance was a grilled cheese sandwich with the face of the Virgin Mary.[51] During theSeptember 11 attacks, television viewers supposedly saw the face ofSatanin clouds of smoke billowing out of theWorld Trade Centerafter it was struck bythe airplane.[52]Another example of face recognition pareidolia originated in thefire at Notre Dame Cathedral, when a few observers claimed to see Jesus in the flames.[53] While attempting to validate the imprint of acrucifiedman on theShroud of TurinasJesus, a variety of objects have been described as being visible on thelinen. These objects include a number of plant species, a coin withRoman numerals, and multiple insect species.[54]In an experimental setting using a picture of plain linen cloth, participants who had been told that there could possibly be visible words in the cloth, collectively saw 2 religious words. Those told that the cloth was of some religious importance saw 12 religious words, and those who were also told that it was of religious importance, but also given suggestions of possible religious words, saw 37 religious words.[55]The researchers posit that the reason the Shroud has been said to have so many different symbols and objects is because it was already deemed to have the imprint of Jesus prior to the search for symbols and other imprints in the cloth, and therefore it was simply pareidolia at work.[54] Pareidolia can occur incomputer vision,[56]specifically inimage recognitionprograms, in which vague clues can spuriously detect images orfeatures. In the case of anartificial neural network, higher-level features correspond to more recognizable features, and enhancing these features brings out what the computer sees. These examples of pareidolia reflect the training set of images that the network has "seen" previously. Striking visuals can be produced in this way, notably in theDeepDreamsoftware, which falsely detects and then exaggerates features such as eyes and faces in any image. The features can be further exaggerated by creating afeedback loopwhere the output is used as the input for the network. (The adjacent image was created by iterating the loop 50 times.) Additionally the output can be modified such as slightly zooming in to create an animation of the images perspective flying through the surrealistic imagery. In 1971Konstantīns RaudivewroteBreakthrough, detailing what he believed was the discovery ofelectronic voice phenomena(EVP). 
EVP has been described as auditory pareidolia.[35]Allegations ofbackmaskingin popular music, in which a listener claims a message has been recorded backward onto a track meant to be played forward, have also been described as auditory pareidolia.[35][57]In 1995, the psychologistDiana Deutschinvented an algorithm for producing phantom words and phrases with the sounds coming from two stereo loudspeakers, one to the listener's left and the other to his right, producing a phase offset in time between the speakers. After listening for a while, phantom words and phrases suddenly emerge, and these often appear to reflect what is on the listener's mind.[58][59] Medical educators sometimes teach medical students and resident physicians (doctors in training) to use pareidolia and patternicity to learn to recognize human anatomy on radiology imaging studies. Examples include assessing radiographs (X-ray images) of the human vertebral spine. Patrick Foye, M.D., professor ofphysical medicine and rehabilitationatRutgers University,New Jersey Medical School, has written that pareidolia is used to teach medical trainees to assess for spinal fractures and spinal malignancies (cancers).[60]When viewing spinal radiographs, normal bony anatomic structures resemble the face of an owl. (The spinal pedicles resemble an owl's eyes and the spinous process resembles an owl's beak.) But when cancer erodes the bony spinal pedicle, the radiographic appearance changes such that now that eye of the owl seems missing or closed, which is called the "winking owl sign". Another common pattern is a "Scottie dog sign" on a spinal X-ray.[61] In 2021, Foye again published in the medical literature on this topic, in a medical journal article called "Baby Yoda: Pareidolia and Patternicity in Sacral MRI and CT Scans".[62]Here, he introduced a novel way of visualizing thesacrumwhen viewing MRImagnetic resonance imagingandCT scans(computed tomography scans). He noted that in certain image slices the human sacral anatomy resembles the face of "Baby Yoda" (also calledGrogu), a fictional character from the television showThe Mandalorian. Sacral openings for exiting nerves (sacral foramina) resemble Baby Yoda's eyes, while the sacral canal resembles Baby Yoda's mouth.[63] In January 2017, an anonymous user placed aneBayauction of aCheetothat looked like the gorillaHarambe. Bidding began atUS$11.99, but the Cheeto was eventually sold forUS$99,000.[64] Starting from 2021, anInternet memeemerged around the online gameAmong Us, where users presented everyday items such as dogs, statues, garbage cans, big toes, and pictures of theBoomerang Nebulathat looked like the game's "crewmate" protagonists.[65][66]In May 2021, aneBayuser named Tav listed aChicken McNuggetshaped like a crewmate fromAmong Usforonline auction. The Chicken McNugget was sold forUS$99,997to an anonymous buyer.[67] Ashadow person(also known as a shadow figure, shadow being or black mass) is often attributed to pareidolia. It is the perception of a patch of shadow as a living, humanoid figure, particularly as interpreted by believers in theparanormalorsupernaturalas the presence of a spirit or other entity.[68] Pareidolia is also what some skeptics believe causes people to believe that they have seenghosts.[69]
https://en.wikipedia.org/wiki/Pareidolia
In a scientific study, post hoc analysis (from Latin post hoc, "after this") consists of statistical analyses that were specified after the data were seen.[1][2] They are usually used to uncover specific differences between three or more group means when an analysis of variance (ANOVA) test is significant.[3] This typically creates a multiple testing problem because each potential analysis is effectively a statistical test. Multiple testing procedures are sometimes used to compensate, but that is often difficult or impossible to do precisely. Post hoc analysis that is conducted and interpreted without adequate consideration of this problem is sometimes called data dredging (p-hacking) by critics because the statistical associations that it finds are often spurious.[4] Post hoc analyses are not inherently bad or good;[5]: 12–13 rather, the main requirement for their ethical use is simply that their results not be misrepresented as the original hypothesis.[5]: 12–13 Modern editions of scientific manuals have clarified this point; for example, APA style now specifies that "hypotheses should now be stated in three groupings: preplanned–primary, preplanned–secondary, and exploratory (post hoc). Exploratory hypotheses are allowable, and there should be no pressure to disguise them as if they were preplanned."[5]: 12–13 Some common post hoc tests include:[6][7] However, with the exception of Scheffé's method, these tests should be specified "a priori" despite being called "post hoc" in conventional usage. For example, a difference between means could be significant with the Holm–Bonferroni method but not with the Tukey test, and vice versa. It would be poor practice for a data analyst to choose which of these tests to report based on which gave the desired result. Sometimes the temptation to engage in post hoc analysis is motivated by a desire to produce positive results or to see a project as successful. In the case of pharmaceutical research, there may be significant financial consequences to a failed trial.[citation needed]
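To make the multiple-testing problem concrete, the sketch below (an illustration only, not drawn from the cited sources) simulates repeated post hoc pairwise comparisons on groups with no true differences: without adjustment, the chance of at least one "significant" p-value is far above the nominal 5%, while a hand-rolled Holm–Bonferroni step-down procedure restores the family-wise error rate. The group counts and sizes are arbitrary choices for the demonstration.

```python
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def holm_bonferroni(p_values, alpha=0.05):
    """Boolean array of rejections under Holm's step-down procedure."""
    p = np.asarray(p_values)
    order = np.argsort(p)
    m = len(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= alpha / (m - rank):   # compare k-th smallest p to alpha/(m-k+1)
            reject[idx] = True
        else:
            break                          # stop at the first non-rejection
    return reject

n_sim, n_groups, n_per_group = 2000, 6, 20
raw_fwer = adj_fwer = 0
for _ in range(n_sim):
    groups = [rng.normal(0.0, 1.0, n_per_group) for _ in range(n_groups)]  # no real differences
    pvals = [stats.ttest_ind(a, b).pvalue for a, b in itertools.combinations(groups, 2)]
    raw_fwer += bool(np.any(np.asarray(pvals) < 0.05))   # naive, unadjusted post hoc testing
    adj_fwer += holm_bonferroni(pvals).any()             # Holm-adjusted testing

print(f"family-wise error, unadjusted: {raw_fwer / n_sim:.3f}")   # far above 0.05
print(f"family-wise error, Holm:       {adj_fwer / n_sim:.3f}")   # close to 0.05
```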
https://en.wikipedia.org/wiki/Post_hoc_analysis
Instatistics,hypotheses suggested by a given dataset, when tested with the same dataset that suggested them, are likely to be accepted even when they are not true. This is becausecircular reasoning(double dipping) would be involved: something seems true in the limited data set; therefore we hypothesize that it is true in general; therefore we wrongly test it on the same, limited data set, which seems to confirm that it is true. Generating hypotheses based on data already observed, in the absence of testing them on new data, is referred to aspost hoctheorizing(fromLatinpost hoc, "after this"). The correct procedure is to test any hypothesis on a data set that was not used to generate the hypothesis. Testing a hypothesis suggested by the data can very easily result in false positives (type I errors). If one looks long enough and in enough different places, eventually data can be found to support any hypothesis. Yet, these positive data do not by themselves constituteevidencethat the hypothesis is correct. The negative test data that were thrown out are just as important, because they give one an idea of how common the positive results are compared to chance. Running an experiment, seeing a pattern in the data, proposing a hypothesis from that pattern, then using thesameexperimental data as evidence for the new hypothesis is extremely suspect, because data from all other experiments, completed or potential, has essentially been "thrown out" by choosing to look only at the experiments that suggested the new hypothesis in the first place. A large set of tests as described above greatly inflates theprobabilityoftype I erroras all but the data most favorable to thehypothesisis discarded. This is a risk, not only inhypothesis testingbut in allstatistical inferenceas it is often problematic to accurately describe the process that has been followed in searching and discardingdata. In other words, one wants to keep all data (regardless of whether they tend to support or refute the hypothesis) from "good tests", but it is sometimes difficult to figure out what a "good test" is. It is a particular problem instatistical modelling, where many different models are rejected bytrial and errorbefore publishing a result (see alsooverfitting,publication bias). The error is particularly prevalent indata miningandmachine learning. It also commonly occurs inacademic publishingwhere only reports of positive, rather than negative, results tend to be accepted, resulting in the effect known aspublication bias. All strategies for sound testing of hypotheses suggested by the data involve including a wider range of tests in an attempt to validate or refute the new hypothesis. These include: Henry Scheffé's simultaneous testof all contrasts inmultiple comparisonproblems is the most[citation needed]well-known remedy in the case ofanalysis of variance.[1]It is a method designed for testing hypotheses suggested by the data while avoiding the fallacy described above.
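A small simulation makes the point concrete (synthetic data, arbitrary sizes, invented variable names): pick the predictor that looks most correlated with a pure-noise outcome in one half of the data, then test that "hypothesis suggested by the data" on the held-out half, where the apparent effect typically vanishes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n, p = 200, 50
X = rng.normal(size=(n, p))        # 50 candidate predictors
y = rng.normal(size=n)             # outcome is pure noise: no predictor is truly related

# Split once: one half suggests the hypothesis, the other half tests it.
explore, confirm = slice(0, n // 2), slice(n // 2, n)

# "Data dredging": pick the predictor with the strongest correlation in the exploratory half.
corrs = [abs(stats.pearsonr(X[explore, j], y[explore])[0]) for j in range(p)]
best = int(np.argmax(corrs))

r_explore, p_explore = stats.pearsonr(X[explore, best], y[explore])
r_confirm, p_confirm = stats.pearsonr(X[confirm, best], y[confirm])

print(f"picked predictor {best}")
print(f"exploratory half:  r = {r_explore:+.2f}, p = {p_explore:.3f}  (often looks 'significant')")
print(f"confirmatory half: r = {r_confirm:+.2f}, p = {p_confirm:.3f}  (usually does not replicate)")
```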
https://en.wikipedia.org/wiki/Post_hoc_theorizing
TheTexas sharpshooter fallacyis aninformal fallacywhich is committed when differences in data are ignored, but similarities are overemphasized. From this reasoning, a false conclusion is inferred.[1]This fallacy is the philosophical or rhetorical application of themultiple comparisonsproblem (in statistics) andapophenia(in cognitive psychology). It is related to theclustering illusion, which is the tendency in humancognitionto interpret patterns where none actually exist. The name comes from a metaphor about a person fromTexaswho fires a gun at the side of a barn, then paints ashooting targetcentered on the tightestclusterof shots and claims to be asharpshooter.[2][3][4] The Texas sharpshooter fallacy often arises when a person has a large amount of data at their disposal but only focuses on a small subset of that data. Some factor other than the one attributed may give all the elements in that subset some kind of common property (or pair of common properties, when arguing for correlation). If the person attempts to account for the likelihood of findingsomesubset in the large data withsomecommon property by a factor other than its actual cause, then that person is likely committing a Texas sharpshooter fallacy. The fallacy is characterized by a lack of a specific hypothesis prior to the gathering of data, or the formulation of a hypothesis only after data have already been gathered and examined.[5]Thus, it typically does not apply if one had anex ante, or prior, expectation of the particular relationship in question before examining the data. For example, one might, prior to examining the information, have in mind a specific physical mechanism implying the particular relationship. One could then use the information to give support or cast doubt on the presence of that mechanism. Alternatively, if a second set of additional information can be generated using the same process as the original information, one can use the first (original) set of information to construct a hypothesis, and then test the hypothesis on the second (new) set of information. (Seehypothesis testing.) However, after constructing a hypothesis on a set of data, one would be committing the Texas sharpshooter fallacy if they then tested that hypothesis on the same data (seehypotheses suggested by the data). A Swedish study in 1992 tried to determine whetherpower lines caused some kind of poor health effects.[6]The researchers surveyed people living within 300 metres ofhigh-voltage power linesover 25 years and looked for statistically significant increases in rates of over 800 ailments. The study found that the incidence of childhood leukemia was four times higher among those who lived closest to the power lines, which spurred calls to action by the Swedish government.[7]The problem with the conclusion, however, was that the number of potential ailments, i.e., over 800, was so large that it created a high probability that at least one ailment would have a statistically significant correlation with living distance from power lines by chance alone, a situation known as themultiple comparisons problem. Subsequent studies failed to show any association between power lines and childhood leukemia.[8] The fallacy is often found in modern-day interpretations of thequatrainsofNostradamus. 
Nostradamus's quatrains are often liberally translated from their original (archaic) French versions, in which their historical context is often lost, and then applied to support the erroneous conclusion that Nostradamus predicted a given modern-day event after the event actually occurred.[9]
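For the Swedish power-line study described above, a rough calculation, assuming purely for illustration that the roughly 800 ailment comparisons behaved like independent tests at the 5% level, shows why at least one "significant" association was practically guaranteed even if power lines had no effect at all:

```latex
P(\text{at least one spurious association}) = 1 - (1 - 0.05)^{800} = 1 - 0.95^{800}
  \approx 1 - 1.5\times 10^{-18} \approx 1,
\qquad
E[\text{false positives}] = 800 \times 0.05 = 40 .
```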
https://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy
Relief is an algorithm developed by Kira and Rendell in 1992 that takes a filter-method approach to feature selection that is notably sensitive to feature interactions.[1][2] It was originally designed for application to binary classification problems with discrete or numerical features. Relief calculates a score for each feature which can then be applied to rank and select top-scoring features for feature selection. Alternatively, these scores may be applied as feature weights to guide downstream modeling. Relief feature scoring is based on the identification of feature value differences between nearest-neighbor instance pairs. If a feature value difference is observed in a neighboring instance pair with the same class (a 'hit'), the feature score decreases. Alternatively, if a feature value difference is observed in a neighboring instance pair with different class values (a 'miss'), the feature score increases. The original Relief algorithm has since inspired a family of Relief-based feature selection algorithms (RBAs), including the ReliefF[3] algorithm. Beyond the original Relief algorithm, RBAs have been adapted to (1) perform more reliably in noisy problems,[4] (2) generalize to multi-class problems,[4] (3) generalize to numerical outcome (i.e. regression) problems,[5] and (4) make them robust to incomplete (i.e. missing) data.[4] To date, the development of RBA variants and extensions has focused on four areas: (1) improving performance of the 'core' Relief algorithm, i.e. examining strategies for neighbor selection and instance weighting, (2) improving scalability of the 'core' Relief algorithm to larger feature spaces through iterative approaches, (3) methods for flexibly adapting Relief to different data types, and (4) improving Relief run efficiency.[6] The strengths of RBAs are that they are not dependent on heuristics, they run in low-order polynomial time, and they are noise-tolerant and robust to feature interactions, as well as being applicable to binary or continuous data; however, Relief does not discriminate between redundant features, and low numbers of training instances can fool the algorithm. Take a data set with n instances of p features, belonging to two known classes. Within the data set, each feature should be scaled to the interval [0, 1] (binary data should remain as 0 and 1). The algorithm will be repeated m times. Start with a p-long weight vector (W) of zeros. At each iteration, take the feature vector (X) belonging to one random instance, and the feature vectors of the instance closest to X (by Euclidean distance) from each class. The closest same-class instance is called the 'near-hit', and the closest different-class instance is called the 'near-miss'. Update the weight vector such that W_i = W_i − (x_i − nearHit_i)^2 + (x_i − nearMiss_i)^2, where i indexes the components and runs from 1 to p. Thus the weight of any given feature decreases if it differs from that feature in nearby instances of the same class more than in nearby instances of the other class, and increases in the reverse case. After m iterations, divide each element of the weight vector by m. This becomes the relevance vector. Features are selected if their relevance is greater than a threshold τ. Kira and Rendell's experiments[2] showed a clear contrast between relevant and irrelevant features, allowing τ to be determined by inspection.
However, it can also be determined by Chebyshev's inequality for a given confidence level (α) that aτof 1/sqrt(α*m) is good enough to make the probability of a Type I error less thanα, although it is stated thatτcan be much smaller than that. Relief was also described as generalizable to multinomial classification by decomposition into a number of binary problems. Kononenko et al. propose a number of updates to Relief.[3]Firstly, they find the near-hit and near-miss instances using theManhattan (L1) normrather than theEuclidean (L2) norm, although the rationale is not specified. Furthermore, they found taking the absolute differences between xiand near-hiti, and xiand near-missito be sufficient when updating the weight vector (rather than the square of those differences). Rather than repeating the algorithmmtimes, implement it exhaustively (i.e.ntimes, once for each instance) for relatively smalln(up to one thousand). Furthermore, rather than finding the single nearest hit and single nearest miss, which may cause redundant and noisy attributes to affect the selection of the nearest neighbors, ReliefF searches forknearest hits and misses and averages their contribution to the weights of each feature.kcan be tuned for any individual problem. In ReliefF, the contribution of missing values to the feature weight is determined using the conditional probability that two values should be the same or different, approximated with relative frequencies from the data set. This can be calculated if one or both features are missing. Rather than use Kira and Rendell's proposed decomposition of a multinomial classification into a number of binomial problems, ReliefF searches forknear misses from each different class and averages their contributions for updating W, weighted with the prior probability of each class. The following RBAs are arranged chronologically from oldest to most recent.[6]They include methods for improving (1) the core Relief algorithm concept, (2) iterative approaches for scalability, (3) adaptations to different data types, (4) strategies for computational efficiency, or (5) some combination of these goals. For more on RBAs see these book chapters[7][8][9]or this most recent review paper.[6] Robnik-Šikonja and Kononenko propose further updates to ReliefF, making it appropriate for regression.[5] Introduced deterministic neighbor selection approach and a new approach for incomplete data handling.[10] Implemented method to address bias against non-monotonic features. Introduced the first iterative Relief approach. For the first time, neighbors were uniquely determined by a radius threshold and instances were weighted by their distance from the target instance.[11] Introduced sigmoidal weighting based on distance from target instance.[12][13]All instance pairs (not just a defined subset of neighbors) contributed to score updates. Proposed an on-line learning variant of Relief. Extended the iterative Relief concept. Introduced local-learning updates between iterations for improved convergence.[14] Specifically sought to address noise in large feature spaces through the recursive elimination of features and the iterative application of ReliefF.[15] Similarly seeking to address noise in large feature spaces. 
Utilized an iterative `evaporative' removal of lowest quality features using ReliefF scores in association with mutual information.[16] Addressing issues related to incomplete and multi-class data.[17] Dramatically improves the efficiency of detecting 2-way feature interactions in very large feature spaces by scoring random feature subsets rather than the entire feature space.[18] Introduced calculation of feature weights relative to average feature 'diff' between instance pairs.[19] SURF identifies nearest neighbors (both hits and misses) based on a distance threshold from the target instance defined by the average distance between all pairs of instances in the training data.[20]Results suggest improved power to detect 2-way epistatic interactions over ReliefF. SURF*[21]extends the SURF[20]algorithm to not only utilized 'near' neighbors in scoring updates, but 'far' instances as well, but employing inverted scoring updates for 'far instance pairs. Results suggest improved power to detect 2-way epistatic interactions over SURF, but an inability to detect simple main effects (i.e. univariate associations).[22] SWRF* extends the SURF* algorithm adopting sigmoid weighting to take distance from the threshold into account. Also introduced a modular framework for further developing RBAs called MoRF.[23] MultiSURF*[24]extends the SURF*[21]algorithm adapting the near/far neighborhood boundaries based on the average and standard deviation of distances from the target instance to all others. MultiSURF* uses the standard deviation to define a dead-band zone where 'middle-distance' instances do not contribute to scoring. Evidence suggests MultiSURF* performs best in detecting pure 2-way feature interactions.[22] Introduces a feature-wise adaptive k parameter for more flexibly detecting univariate effects and interaction effects.[25] MultiSURF[22]simplifies the MultiSURF*[24]algorithm by preserving the dead-band zone, and target-instance-centric neighborhood determination, but eliminating the 'far' scoring. Evidence suggests MultiSURF to be a well rounded option, able to detect 2-way and 3-way interactions, as well as simple univariate associations.[22]Also introduced the RBA software package called ReBATE that includes implementations of (Relief, ReliefF, SURF, SURF*, MultiSURF*, MultiSURF, and TuRF). STIR[26][27]reformulates and slightly adjusts the original Relief formula by incorporating sample variance of the nearest neighbor distances into the attribute importance estimation. This variance permits the calculation of statistical significance of features and adjustment for multiple testing of Relief-based scores. Currently, STIR supports binary outcome variable but will soon be extended to multi-state and continuous outcome. Different RBAs have been applied to feature selection in a variety of problem domains.
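The core scoring loop of the original Relief algorithm described above (a random target instance, its nearest hit and nearest miss by Euclidean distance, and squared-difference weight updates averaged over m iterations) can be sketched as follows. This is only a minimal illustration for binary classes with features pre-scaled to [0, 1]; the function name and the toy data are invented for the example, and none of the later RBA refinements are included.

```python
import numpy as np

def relief(X, y, n_iter=100, seed=0):
    """Original Relief feature scoring for a binary classification problem.

    X : (n, p) array with features scaled to [0, 1]; y : (n,) array of class labels.
    Returns a p-vector of relevance weights (higher = more relevant).
    """
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X, float), np.asarray(y)
    n, p = X.shape
    W = np.zeros(p)

    for _ in range(n_iter):
        i = rng.integers(n)                             # random target instance
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf                               # exclude the instance itself

        same = np.where(y == y[i])[0]
        diff = np.where(y != y[i])[0]
        near_hit = same[np.argmin(dists[same])]         # closest same-class instance
        near_miss = diff[np.argmin(dists[diff])]        # closest other-class instance

        # Penalize features that differ from the near-hit,
        # reward features that differ from the near-miss.
        W -= (X[i] - X[near_hit]) ** 2
        W += (X[i] - X[near_miss]) ** 2

    return W / n_iter                                   # relevance vector


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 300
    informative = rng.random((n, 2))                    # two relevant features
    y = (informative.sum(axis=1) > 1.0).astype(int)
    noise = rng.random((n, 3))                          # three irrelevant features
    X = np.hstack([informative, noise])
    print(np.round(relief(X, y, n_iter=200), 3))        # first two weights should stand out
```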
https://en.wikipedia.org/wiki/Relief_(feature_selection)
Instatistics,devianceis agoodness-of-fitstatistic for astatistical model; it is often used forstatistical hypothesis testing. It is a generalization of the idea of using thesum of squares of residuals(SSR) inordinary least squaresto cases where model-fitting is achieved bymaximum likelihood. It plays an important role inexponential dispersion modelsandgeneralized linear models. Deviance can be related toKullback-Leibler divergence.[1] The unit deviance[2][3]d(y,μ){\displaystyle d(y,\mu )}is a bivariate function that satisfies the following conditions: The total devianceD(y,μ^){\displaystyle D(\mathbf {y} ,{\hat {\boldsymbol {\mu }}})}of a model with predictionsμ^{\displaystyle {\hat {\boldsymbol {\mu }}}}of the observationy{\displaystyle \mathbf {y} }is the sum of its unit deviances:D(y,μ^)=∑id(yi,μ^i){\textstyle D(\mathbf {y} ,{\hat {\boldsymbol {\mu }}})=\sum _{i}d(y_{i},{\hat {\mu }}_{i})}. The (total) deviance for a modelM0with estimatesμ^=E[Y|θ^0]{\displaystyle {\hat {\mu }}=E[Y|{\hat {\theta }}_{0}]}, based on a datasety, may be constructed by its likelihood as:[4][5]D(y,μ^)=2(log⁡[p(y∣θ^s)]−log⁡[p(y∣θ^0)]).{\displaystyle D(y,{\hat {\mu }})=2\left(\log \left[p(y\mid {\hat {\theta }}_{s})\right]-\log \left[p(y\mid {\hat {\theta }}_{0})\right]\right).} Hereθ^0{\displaystyle {\hat {\theta }}_{0}}denotes the fitted values of the parameters in the modelM0, whileθ^s{\displaystyle {\hat {\theta }}_{s}}denotes the fitted parameters for thesaturated model: both sets of fitted values are implicitly functions of the observationsy. Here, thesaturated modelis a model with a parameter for every observation so that the data are fitted exactly. This expression is simply 2 times thelog-likelihood ratioof the full model compared to the reduced model. The deviance is used to compare two models – in particular in the case ofgeneralized linear models(GLM) where it has a similar role to residual sum of squares fromANOVAin linear models (RSS). Suppose in the framework of the GLM, we have twonested models,M1andM2. In particular, suppose thatM1contains the parameters inM2, andkadditional parameters. Then, under the null hypothesis thatM2is the true model, the difference between the deviances for the two models follows, based onWilks' theorem, an approximatechi-squared distributionwithk-degrees of freedom.[5]This can be used for hypothesis testing on the deviance. Some usage of the term "deviance" can be confusing. According to Collett:[6] However, since the principal use is in the form of the difference of the deviances of two models, this confusion in definition is unimportant. The unit deviance for thePoisson distributionisd(y,μ)=2(ylog⁡yμ−y+μ){\displaystyle d(y,\mu )=2\left(y\log {\frac {y}{\mu }}-y+\mu \right)}, the unit deviance for thenormal distributionwith unit variance is given byd(y,μ)=(y−μ)2{\displaystyle d(y,\mu )=\left(y-\mu \right)^{2}}.
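As a small worked illustration of the two equivalent definitions above, the sketch below computes the total Poisson deviance of a constant-mean model both as a sum of unit deviances and as twice the log-likelihood ratio against the saturated model; the counts are invented for the example.

```python
import numpy as np
from scipy import stats

def poisson_unit_deviance(y, mu):
    # d(y, mu) = 2 * (y * log(y / mu) - y + mu), with y * log(y / mu) taken as 0 when y == 0
    y, mu = np.asarray(y, float), np.asarray(mu, float)
    term = np.where(y > 0, y * np.log(y / mu), 0.0)
    return 2.0 * (term - y + mu)

# Toy Poisson counts (made up for illustration).
y = np.array([1, 2, 1, 3, 4, 4, 6, 5, 8, 9], dtype=float)

# Model M0: a single constant mean, fitted by maximum likelihood (the sample mean).
mu_hat = np.full_like(y, y.mean())

# Total deviance as the sum of unit deviances.
D = poisson_unit_deviance(y, mu_hat).sum()

# The same quantity from the likelihood definition:
# 2 * (log-likelihood of the saturated model, mu = y, minus log-likelihood of M0).
loglik_saturated = stats.poisson.logpmf(y, y).sum()
loglik_model = stats.poisson.logpmf(y, mu_hat).sum()
D_from_likelihood = 2.0 * (loglik_saturated - loglik_model)

print(f"total deviance from unit deviances: {D:.4f}")
print(f"2 * log-likelihood ratio:           {D_from_likelihood:.4f}")   # identical
```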
https://en.wikipedia.org/wiki/Deviance_(statistics)
Instatistics, ageneralized linear model(GLM) is a flexible generalization of ordinarylinear regression. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via alink functionand by allowing the magnitude of the variance of each measurement to be a function of its predicted value. Generalized linear models were formulated byJohn NelderandRobert Wedderburnas a way of unifying various other statistical models, includinglinear regression,logistic regressionandPoisson regression.[1]They proposed aniteratively reweighted least squaresmethodformaximum likelihood estimation(MLE) of the model parameters. MLE remains popular and is the default method on many statistical computing packages. Other approaches, includingBayesian regressionandleast squares fittingtovariance stabilizedresponses, have been developed. Ordinary linear regression predicts theexpected valueof a given unknown quantity (theresponse variable, arandom variable) as alinear combinationof a set of observed values (predictors). This implies that a constant change in a predictor leads to a constant change in the response variable (i.e. alinear-response model). This is appropriate when the response variable can vary, to a good approximation, indefinitely in either direction, or more generally for any quantity that only varies by a relatively small amount compared to the variation in the predictive variables, e.g. human heights. However, these assumptions are inappropriate for some types of response variables. For example, in cases where the response variable is expected to be always positive and varying over a wide range, constant input changes lead to geometrically (i.e. exponentially) varying, rather than constantly varying, output changes. As an example, suppose a linear prediction model learns from some data (perhaps primarily drawn from large beaches) that a 10 degree temperature decrease would lead to 1,000 fewer people visiting the beach. This model is unlikely to generalize well over differently-sized beaches. More specifically, the problem is that if the model is used to predict the new attendance with a temperature drop of 10 for a beach that regularly receives 50 beachgoers, it would predict an impossible attendance value of −950. Logically, a more realistic model would instead predict a constantrateof increased beach attendance (e.g. an increase of 10 degrees leads to a doubling in beach attendance, and a drop of 10 degrees leads to a halving in attendance). Such a model is termed anexponential-response model(orlog-linear model, since thelogarithmof the response is predicted to vary linearly). Similarly, a model that predicts a probability of making a yes/no choice (aBernoulli variable) is even less suitable as a linear-response model, since probabilities are bounded on both ends (they must be between 0 and 1). Imagine, for example, a model that predicts the likelihood of a given person going to the beach as a function of temperature. A reasonable model might predict, for example, that a change in 10 degrees makes a person two times more or less likely to go to the beach. But what does "twice as likely" mean in terms of a probability? It cannot literally mean to double the probability value (e.g. 50% becomes 100%, 75% becomes 150%, etc.). Rather, it is theoddsthat are doubling: from 2:1 odds, to 4:1 odds, to 8:1 odds, etc. Such a model is alog-odds orlogisticmodel. 
Generalized linear models cover all these situations by allowing for response variables that have arbitrary distributions (rather than simplynormal distributions), and for an arbitrary function of the response variable (thelink function) to vary linearly with the predictors (rather than assuming that the response itself must vary linearly). For example, the case above of predicted number of beach attendees would typically be modeled with aPoisson distributionand a log link, while the case of predicted probability of beach attendance would typically be modelled with aBernoulli distribution(orbinomial distribution, depending on exactly how the problem is phrased) and a log-odds (orlogit) link function. In a generalized linear model (GLM), each outcomeYof thedependent variablesis assumed to be generated from a particulardistributionin anexponential family, a large class ofprobability distributionsthat includes thenormal,binomial,Poissonandgammadistributions, among others. The conditional meanμof the distribution depends on the independent variablesXthrough: where E(Y|X) is theexpected valueofYconditionalonX;Xβis thelinear predictor, a linear combination of unknown parametersβ;gis the link function. In this framework, the variance is typically a function,V, of the mean: It is convenient ifVfollows from an exponential family of distributions, but it may simply be that the variance is a function of the predicted value. The unknown parameters,β, are typically estimated withmaximum likelihood, maximumquasi-likelihood, orBayesiantechniques. The GLM consists of three elements: Anoverdispersed exponential familyof distributions is a generalization of anexponential familyand theexponential dispersion modelof distributions and includes those families of probability distributions, parameterized byθ{\displaystyle {\boldsymbol {\theta }}}andτ{\displaystyle \tau }, whose density functionsf(orprobability mass function, for the case of adiscrete distribution) can be expressed in the form Thedispersion parameter,τ{\displaystyle \tau }, typically is known and is usually related to the variance of the distribution. The functionsh(y,τ){\displaystyle h(\mathbf {y} ,\tau )},b(θ){\displaystyle \mathbf {b} ({\boldsymbol {\theta }})},T(y){\displaystyle \mathbf {T} (\mathbf {y} )},A(θ){\displaystyle A({\boldsymbol {\theta }})}, andd(τ){\displaystyle d(\tau )}are known. Many common distributions are in this family, including the normal, exponential, gamma, Poisson, Bernoulli, and (for fixed number of trials) binomial, multinomial, and negative binomial. For scalary{\displaystyle \mathbf {y} }andθ{\displaystyle {\boldsymbol {\theta }}}(denotedy{\displaystyle y}andθ{\displaystyle \theta }in this case), this reduces to θ{\displaystyle {\boldsymbol {\theta }}}is related to the mean of the distribution. Ifb(θ){\displaystyle \mathbf {b} ({\boldsymbol {\theta }})}is the identity function, then the distribution is said to be incanonical form(ornatural form). Note that any distribution can be converted to canonical form by rewritingθ{\displaystyle {\boldsymbol {\theta }}}asθ′{\displaystyle {\boldsymbol {\theta }}'}and then applying the transformationθ=b(θ′){\displaystyle {\boldsymbol {\theta }}=\mathbf {b} ({\boldsymbol {\theta }}')}. It is always possible to convertA(θ){\displaystyle A({\boldsymbol {\theta }})}in terms of the new parametrization, even ifb(θ′){\displaystyle \mathbf {b} ({\boldsymbol {\theta }}')}is not aone-to-one function; see comments in the page onexponential families. 
If, in addition,T(y){\displaystyle \mathbf {T} (\mathbf {y} )}andb(θ){\displaystyle \mathbf {b} ({\boldsymbol {\theta }})}are the identity, thenθ{\displaystyle {\boldsymbol {\theta }}}is called thecanonical parameter(ornatural parameter) and is related to the mean through For scalary{\displaystyle \mathbf {y} }andθ{\displaystyle {\boldsymbol {\theta }}}, this reduces to Under this scenario, the variance of the distribution can be shown to be[2] For scalary{\displaystyle \mathbf {y} }andθ{\displaystyle {\boldsymbol {\theta }}}, this reduces to The linear predictor is the quantity which incorporates the information about the independent variables into the model. The symbolη(Greek"eta") denotes a linear predictor. It is related to theexpected valueof the data through the link function. ηis expressed as linear combinations (thus, "linear") of unknown parametersβ. The coefficients of the linear combination are represented as the matrix of independent variablesX.ηcan thus be expressed as The link function provides the relationship between the linear predictor and themeanof the distribution function. There are many commonly used link functions, and their choice is informed by several considerations. There is always a well-definedcanonicallink function which is derived from the exponential of the response'sdensity function. However, in some cases it makes sense to try to match thedomainof the link function to therangeof the distribution function's mean, or use a non-canonical link function for algorithmic purposes, for exampleBayesian probit regression. When using a distribution function with a canonical parameterθ,{\displaystyle \theta ,}the canonical link function is the function that expressesθ{\displaystyle \theta }in terms ofμ,{\displaystyle \mu ,}i.e.θ=g(μ).{\displaystyle \theta =g(\mu ).}For the most common distributions, the meanμ{\displaystyle \mu }is one of the parameters in the standard form of the distribution'sdensity function, and theng(μ){\displaystyle g(\mu )}is the function as defined above that maps the density function into its canonical form. When using the canonical link function,g(μ)=θ=Xβ,{\displaystyle g(\mu )=\theta =\mathbf {X} {\boldsymbol {\beta }},}which allowsXTY{\displaystyle \mathbf {X} ^{\rm {T}}\mathbf {Y} }to be asufficient statisticforβ{\displaystyle {\boldsymbol {\beta }}}. Following is a table of several exponential-family distributions in common use and the data they are typically used for, along with the canonical link functions and their inverses (sometimes referred to as the mean function, as done here). In the cases of the exponential and gamma distributions, the domain of the canonical link function is not the same as the permitted range of the mean. In particular, the linear predictor may be positive, which would give an impossible negative mean. When maximizing the likelihood, precautions must be taken to avoid this. An alternative is to use a noncanonical link function. In the case of the Bernoulli, binomial, categorical and multinomial distributions, the support of the distributions is not the same type of data as the parameter being predicted. In all of these cases, the predicted parameter is one or more probabilities, i.e. real numbers in the range[0,1]{\displaystyle [0,1]}. The resulting model is known aslogistic regression(ormultinomial logistic regressionin the case thatK-way rather than binary values are being predicted). 
For the Bernoulli and binomial distributions, the parameter is a single probability, indicating the likelihood of occurrence of a single event. The Bernoulli still satisfies the basic condition of the generalized linear model in that, even though a single outcome will always be either 0 or 1, theexpected valuewill nonetheless be a real-valued probability, i.e. the probability of occurrence of a "yes" (or 1) outcome. Similarly, in a binomial distribution, the expected value isNp, i.e. the expected proportion of "yes" outcomes will be the probability to be predicted. For categorical and multinomial distributions, the parameter to be predicted is aK-vector of probabilities, with the further restriction that all probabilities must add up to 1. Each probability indicates the likelihood of occurrence of one of theKpossible values. For the multinomial distribution, and for the vector form of the categorical distribution, the expected values of the elements of the vector can be related to the predicted probabilities similarly to the binomial and Bernoulli distributions. Themaximum likelihoodestimates can be found using aniteratively reweighted least squaresalgorithm or aNewton's methodwith updates of the form: whereJ(β(t)){\displaystyle {\mathcal {J}}({\boldsymbol {\beta }}^{(t)})}is theobserved information matrix(the negative of theHessian matrix) andu(β(t)){\displaystyle u({\boldsymbol {\beta }}^{(t)})}is thescore function; or aFisher's scoringmethod: whereI(β(t)){\displaystyle {\mathcal {I}}({\boldsymbol {\beta }}^{(t)})}is theFisher informationmatrix. Note that if the canonical link function is used, then they are the same.[3] In general, theposterior distributioncannot be found inclosed formand so must be approximated, usually usingLaplace approximationsor some type ofMarkov chain Monte Carlomethod such asGibbs sampling. A possible point of confusion has to do with the distinction between generalized linear models andgeneral linear models, two broad statistical models. Co-originatorJohn Nelderhas expressed regret over this terminology.[4] The general linear model may be viewed as a special case of the generalized linear model with identity link and responses normally distributed. As most exact results of interest are obtained only for the general linear model, the general linear model has undergone a somewhat longer historical development. Results for the generalized linear model with non-identity link areasymptotic(tending to work well with large samples). A simple, very important example of a generalized linear model (also an example of a general linear model) islinear regression. In linear regression, the use of theleast-squaresestimator is justified by theGauss–Markov theorem, which does not assume that the distribution is normal. From the perspective of generalized linear models, however, it is useful to suppose that the distribution function is the normal distribution with constant variance and the link function is the identity, which is the canonical link if the variance is known. Under these assumptions, the least-squares estimator is obtained as the maximum-likelihood parameter estimate. For the normal distribution, the generalized linear model has aclosed formexpression for the maximum-likelihood estimates, which is convenient. Most other GLMs lackclosed formestimates. 
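The Newton / Fisher-scoring iteration described earlier in this section can be written in a few lines. The sketch below, on synthetic data with a single covariate, fits a Poisson GLM with its canonical log link, in which case the observed and expected information coincide and the two methods give identical updates; it is an illustration of the update rule, not a production GLM fitter.

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=50, tol=1e-10):
    """Fisher scoring for a Poisson GLM with the canonical log link.

    Update: beta <- beta + I(beta)^{-1} u(beta), with
      u(beta) = X^T (y - mu)        (score)
      I(beta) = X^T diag(mu) X      (Fisher information; equals the observed
                                     information because the link is canonical)
    """
    X, y = np.asarray(X, float), np.asarray(y, float)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                 # inverse link applied to the linear predictor
        score = X.T @ (y - mu)
        info = X.T @ (X * mu[:, None])
        step = np.linalg.solve(info, score)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
    true_beta = np.array([0.3, 0.7])
    y = rng.poisson(np.exp(X @ true_beta))
    print(fit_poisson_glm(X, y))              # should be close to [0.3, 0.7]
```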
When the response data,Y, are binary (taking on only values 0 and 1), the distribution function is generally chosen to be theBernoulli distributionand the interpretation ofμiis then the probability,p, ofYitaking on the value one. There are several popular link functions for binomial functions. The most typical link function is the canonicallogitlink: GLMs with this setup arelogistic regressionmodels (orlogit models). Alternatively, the inverse of any continuouscumulative distribution function(CDF) can be used for the link since the CDF's range is[0,1]{\displaystyle [0,1]}, the range of the binomial mean. Thenormal CDFΦ{\displaystyle \Phi }is a popular choice and yields theprobit model. Its link is The reason for the use of the probit model is that a constant scaling of the input variable to a normal CDF (which can be absorbed through equivalent scaling of all of the parameters) yields a function that is practically identical to the logit function, but probit models are more tractable in some situations than logit models. (In a Bayesian setting in which normally distributedprior distributionsare placed on the parameters, the relationship between the normal priors and the normal CDF link function means that aprobit modelcan be computed usingGibbs sampling, while a logit model generally cannot.) The complementary log-log function may also be used: This link function is asymmetric and will often produce different results from the logit and probit link functions.[5]The cloglog model corresponds to applications where we observe either zero events (e.g., defects) or one or more, where the number of events is assumed to follow thePoisson distribution.[6]The Poisson assumption means that whereμis a positive number denoting the expected number of events. Ifprepresents the proportion of observations with at least one event, its complement and then A linear model requires the response variable to take values over the entire real line. Sinceμmust be positive, we can enforce that by taking the logarithm, and letting log(μ) be a linear model. This produces the "cloglog" transformation The identity linkg(p) = pis also sometimes used for binomial data to yield alinear probability model. However, the identity link can predict nonsense "probabilities" less than zero or greater than one. This can be avoided by using a transformation like cloglog, probit or logit (or any inverse cumulative distribution function). A primary merit of the identity link is that it can be estimated using linear math—and other standard link functions are approximately linear matching the identity link nearp= 0.5. Thevariance functionfor "quasibinomial" data is: where the dispersion parameterτis exactly 1 for the binomial distribution. Indeed, the standard binomial likelihood omitsτ. When it is present, the model is called "quasibinomial", and the modified likelihood is called aquasi-likelihood, since it is not generally the likelihood corresponding to any real family of probability distributions. Ifτexceeds 1, the model is said to exhibitoverdispersion. The binomial case may be easily extended to allow for amultinomial distributionas the response (also, a Generalized Linear Model for counts, with a constrained total). There are two ways in which this is usually done: If the response variable isordinal, then one may fit a model function of the form: form> 2. Different linksglead toordinal regressionmodels likeproportional odds modelsorordered probitmodels. 
If the response variable is anominal measurement, or the data do not satisfy the assumptions of an ordered model, one may fit a model of the following form: form> 2. Different linksglead tomultinomial logitormultinomial probitmodels. These are more general than the ordered response models, and more parameters are estimated. Another example of generalized linear models includesPoisson regressionwhich modelscount datausing thePoisson distribution. The link is typically the logarithm, the canonical link. The variance function is proportional to the mean where the dispersion parameterτis typically fixed at exactly one. When it is not, the resultingquasi-likelihoodmodel is often described as Poisson withoverdispersionorquasi-Poisson. The standard GLM assumes that the observations areuncorrelated. Extensions have been developed to allow forcorrelationbetween observations, as occurs for example inlongitudinal studiesand clustered designs: Generalized additive models(GAMs) are another extension to GLMs in which the linear predictorηis not restricted to be linear in the covariatesXbut is the sum ofsmoothing functionsapplied to thexis: The smoothing functionsfiare estimated from the data. In general this requires a large number of data points and is computationally intensive.[9][10]
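Returning to the binary-response links discussed above (logit, probit, and complementary log-log), the short sketch below evaluates each link and its inverse at a few probabilities, mainly to make the asymmetry of the cloglog transform visible; it assumes nothing beyond NumPy and SciPy and is purely illustrative.

```python
import numpy as np
from scipy.stats import norm

def logit(p):         return np.log(p / (1 - p))
def probit(p):        return norm.ppf(p)               # inverse normal CDF
def cloglog(p):       return np.log(-np.log(1 - p))    # complementary log-log

def inv_logit(eta):   return 1 / (1 + np.exp(-eta))
def inv_probit(eta):  return norm.cdf(eta)
def inv_cloglog(eta): return 1 - np.exp(-np.exp(eta))

p = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
print("p       :", p)
print("logit   :", np.round(logit(p), 3))     # symmetric about p = 0.5
print("probit  :", np.round(probit(p), 3))    # symmetric about p = 0.5
print("cloglog :", np.round(cloglog(p), 3))   # asymmetric in p

# Round-tripping through the inverse links recovers the probabilities.
assert np.allclose(inv_logit(logit(p)), p)
assert np.allclose(inv_probit(probit(p)), p)
assert np.allclose(inv_cloglog(cloglog(p)), p)
```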
https://en.wikipedia.org/wiki/Generalized_linear_model
Innon-parametric statistics, theTheil–Sen estimatoris a method forrobustlyfitting a lineto sample points in the plane (simple linear regression) by choosing themedianof theslopesof all lines through pairs of points. It has also been calledSen's slope estimator,[1][2]slope selection,[3][4]thesingle median method,[5]theKendall robust line-fit method,[6]and theKendall–Theil robust line.[7]It is named afterHenri TheilandPranab K. Sen, who published papers on this method in 1950 and 1968 respectively,[8]and afterMaurice Kendallbecause of its relation to theKendall tau rank correlation coefficient.[9] Theil–Sen regression has several advantages overOrdinary least squaresregression. It is insensitive tooutliers. It can be used for significance tests even when residuals are not normally distributed.[10]It can be significantly more accurate thannon-robust simple linear regression(least squares) forskewedandheteroskedasticdata, and competes well against least squares even fornormally distributeddata in terms ofstatistical power.[11]It has been called "the most popular nonparametric technique for estimating a linear trend".[2]There are fast algorithms for efficiently computing the parameters. As defined byTheil (1950), the Theil–Sen estimator of a set of two-dimensional points(xi,yi)is the medianmof the slopes(yj−yi)/(xj−xi)determined by all pairs of sample points.Sen (1968)extended this definition to handle the case in which two data points have the samexcoordinate. In Sen's definition, one takes the median of the slopes defined only from pairs of points having distinctxcoordinates.[8] Once the slopemhas been determined, one may determine a line from the sample points by setting they-interceptbto be the median of the valuesyi−mxi. The fit line is then the liney=mx+bwith coefficientsmandbinslope–intercept form.[12]As Sen observed, this choice of slope makes theKendall tau rank correlation coefficientbecome approximately zero, when it is used to compare the valuesxiwith their associatedresidualsyi−mxi−b. Intuitively, this suggests that how far the fit line passes above or below a data point is not correlated with whether that point is on the left or right side of the data set. The choice ofbdoes not affect the Kendall coefficient, but causes the median residual to become approximately zero; that is, the fit line passes above and below equal numbers of points.[9] Aconfidence intervalfor the slope estimate may be determined as the interval containing the middle 95% of the slopes of lines determined by pairs of points[13]and may be estimated quickly by sampling pairs of points and determining the 95% interval of the sampled slopes. According to simulations, approximately 600 sample pairs are sufficient to determine an accurate confidence interval.[11] A variation of the Theil–Sen estimator, therepeated median regressionofSiegel (1982), determines for each sample point(xi,yi), the medianmiof the slopes(yj−yi)/(xj−xi)of lines through that point, and then determines the overall estimator as the median of these medians. It can tolerate a greater number of outliers than the Theil–Sen estimator, but known algorithms for computing it efficiently are more complicated and less practical.[14] A different variant pairs up sample points by the rank of theirx-coordinates: the point with the smallest coordinate is paired with the first point above the median coordinate, the second-smallest point is paired with the next point above the median, and so on. 
It then computes the median of the slopes of the lines determined by these pairs of points, gaining speed by examining significantly fewer pairs than the Theil–Sen estimator.[15] Variations of the Theil–Sen estimator based on weighted medians have also been studied, based on the principle that pairs of samples whose x-coordinates differ more greatly are more likely to have an accurate slope and therefore should receive a higher weight.[16] For seasonal data, it may be appropriate to smooth out seasonal variations in the data by considering only pairs of sample points that both belong to the same month or the same season of the year, and finding the median of the slopes of the lines determined by this more restrictive set of pairs.[17] The Theil–Sen estimator is an unbiased estimator of the true slope in simple linear regression.[18] For many distributions of the response error, this estimator has high asymptotic efficiency relative to least-squares estimation.[19] Estimators with low efficiency require more independent observations to attain the same sample variance as efficient unbiased estimators. The Theil–Sen estimator is more robust than the least-squares estimator because it is much less sensitive to outliers. It has a breakdown point of 1 − 1/√2 ≈ 29.3%, meaning that it can tolerate arbitrary corruption of up to 29.3% of the input data points without degradation of its accuracy.[12] However, the breakdown point decreases for higher-dimensional generalizations of the method.[20] A higher breakdown point, 50%, holds for a different robust line-fitting algorithm, the repeated median estimator of Siegel.[12] The Theil–Sen estimator is equivariant under every linear transformation of its response variable, meaning that transforming the data first and then fitting a line, or fitting a line first and then transforming it in the same way, both produce the same result.[21] However, it is not equivariant under affine transformations of both the predictor and response variables.[20] The median slope of a set of n sample points may be computed exactly by computing all O(n²) lines through pairs of points, and then applying a linear-time median finding algorithm. Alternatively, it may be estimated by sampling pairs of points. This problem is equivalent, under projective duality, to the problem of finding the crossing point in an arrangement of lines that has the median x-coordinate among all such crossing points.[22] The problem of performing slope selection exactly but more efficiently than the brute-force quadratic-time algorithm has been extensively studied in computational geometry.
Several different methods are known for computing the Theil–Sen estimator exactly inO(nlogn)time, either deterministically[3]or usingrandomized algorithms.[4]Siegel's repeated median estimator can also be constructed in the same time bound.[23]In models of computation in which the input coordinates are integers and in whichbitwise operationson integers take constant time, the Theil–Sen estimator can be constructed even more quickly, in randomized expected timeO(nlog⁡n){\displaystyle O(n{\sqrt {\log n}})}.[24] An estimator for the slope with approximately median rank, having the same breakdown point as the Theil–Sen estimator, may be maintained in thedata stream model(in which the sample points are processed one by one by an algorithm that does not have enough persistent storage to represent the entire data set) using an algorithm based onε-nets.[25] In theRstatistics package, both the Theil–Sen estimator and Siegel's repeated median estimator are available through themblmlibrary.[26]A free standaloneVisual Basicapplication for Theil–Sen estimation,KTRLine, has been made available by theUS Geological Survey.[27]The Theil–Sen estimator has also been implemented inPythonas part of theSciPyandscikit-learnlibraries.[28] Theil–Sen estimation has been applied toastronomydue to its ability to handlecensored regression models.[29]Inbiophysics,Fernandes & Leblanc (2005)suggest its use forremote sensingapplications such as the estimation of leaf area from reflectance data due to its "simplicity in computation, analytical estimates of confidence intervals, robustness to outliers, testable assumptions regarding residuals and ... limited a priori information regarding measurement errors".[30]For measuring seasonal environmental data such aswater quality, a seasonally adjusted variant of the Theil–Sen estimator has been proposed as preferable to least squares estimation due to its high precision in the presence of skewed data.[17]Incomputer science, the Theil–Sen method has been used to estimate trends insoftware aging.[31]Inmeteorologyandclimatology, it has been used to estimate the long-term trends of wind occurrence and speed.[32]
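The definition above translates directly into a brute-force O(n²) implementation: take the median of all pairwise slopes (skipping pairs with equal x-coordinates, per Sen's definition), then take the median of y_i − m x_i for the intercept. The sketch below is only that naive version, ignoring the faster algorithms just described, with invented data to show the robustness to outliers.

```python
import itertools
import numpy as np

def theil_sen(x, y):
    """Brute-force Theil-Sen line fit: returns (slope, intercept)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in itertools.combinations(range(len(x)), 2)
              if x[j] != x[i]]                 # Sen: skip pairs with equal x
    m = np.median(slopes)
    b = np.median(y - m * x)                   # intercept: median of the residual offsets
    return m, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.arange(50, dtype=float)
    y = 2.0 * x + 1.0 + rng.normal(0, 1, 50)
    y[:5] += 100                               # a few gross outliers
    m, b = theil_sen(x, y)
    ls = np.polyfit(x, y, 1)                   # least squares for comparison
    print(f"Theil-Sen:     slope {m:.2f}, intercept {b:.2f}")          # stays near 2 and 1
    print(f"Least squares: slope {ls[0]:.2f}, intercept {ls[1]:.2f}")  # pulled by the outliers
```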
https://en.wikipedia.org/wiki/Theil%E2%80%93Sen_estimator
Multiverse analysis is a scientific method that specifies and then runs a set of plausible alternative models or statistical tests for a single hypothesis.[1] It is a method to address the issue that the "scientific process confronts researchers with a multiplicity of seemingly minor, yet nontrivial, decision points, each of which may introduce variability in research outcomes",[2] a problem also known as researcher degrees of freedom[3] or as the garden of forking paths. It is a method arising in response to the credibility and replication crisis taking place in science, because it can diagnose the fragility or robustness of a study's findings. Multiverse analyses have been used in the fields of psychology[4] and neuroscience.[5] It is also a form of meta-analysis allowing researchers to provide evidence on how different model specifications impact results for the same hypothesis, and thus can point scientists toward where they might need better theory or causal models. Multiverse analysis most often produces a large number of results that tend to go in all directions. This means that most studies do not offer consensus or specific rejection of a hypothesis. Its strongest uses thus far are instead to provide evidence against conclusions based on findings from single studies, or to provide evidence about which model specifications are more or less likely to produce larger or more robust effect sizes. Evidence against single studies or statistical models is useful in identifying potential false positive results. For example, a now-infamous study concluded that hurricanes with female names are more deadly than hurricanes with male names.[6] In a follow-up study,[7] researchers ran thousands of models using the same hurricane data, but making various plausible adjustments to the regression model. By plotting a density curve of all regression coefficients, they showed that the coefficient of the original study was an extreme outlier. In a study of birth order effects,[8] researchers visualized a multiverse of plausible models using a specification curve, which allows researchers to visually inspect a plot of all model outcomes against various model specifications. They could show that their findings supported previous research on the effect of birth order on intellect, but provided evidence against an effect on life satisfaction and various personality traits.
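A minimal sketch of the idea, assuming nothing beyond NumPy: fit the same hypothesis under every plausible choice of covariate adjustment, then inspect the spread of the coefficient of interest across specifications. The data set, the specification grid, and the names used here are all hypothetical.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Hypothetical data: outcome y, predictor of interest x, and two optional covariates.
x  = rng.normal(size=n)
c1 = rng.normal(size=n)
c2 = rng.normal(size=n)
y  = 0.3 * x + 0.5 * c1 + rng.normal(size=n)

# The "multiverse": every plausible choice of which covariates to adjust for.
covariate_sets = [list(s) for r in range(3) for s in itertools.combinations([0, 1], r)]
covariates = np.column_stack([c1, c2])

coefs = []
for cols in covariate_sets:
    design = np.column_stack([np.ones(n), x] + [covariates[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)   # ordinary least squares fit
    coefs.append(beta[1])                               # coefficient on the predictor of interest

# Inspecting the spread of estimates across specifications is the core of the method.
print(sorted(round(c, 3) for c in coefs))
```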
https://en.wikipedia.org/wiki/Multiverse_analysis
Thegrowth function, also called theshatter coefficientor theshattering number, measures the richness of aset familyor class of functions. It is especially used in the context ofstatistical learning theory, where it is used to study properties of statistical learning methods. The term 'growth function' was coined by Vapnik and Chervonenkis in their 1968 paper, where they also proved many of its properties.[1]It is a basic concept inmachine learning.[2][3] LetH{\displaystyle H}be aset family(a set of sets) andC{\displaystyle C}a set. Theirintersectionis defined as the following set-family: Theintersection-size(also called theindex) ofH{\displaystyle H}with respect toC{\displaystyle C}is|H∩C|{\displaystyle |H\cap C|}. If a setCm{\displaystyle C_{m}}hasm{\displaystyle m}elements then the index is at most2m{\displaystyle 2^{m}}. If the index is exactly 2mthen the setC{\displaystyle C}is said to beshatteredbyH{\displaystyle H}, becauseH∩C{\displaystyle H\cap C}contains all the subsets ofC{\displaystyle C}, i.e.: The growth function measures the size ofH∩C{\displaystyle H\cap C}as a function of|C|{\displaystyle |C|}. Formally: Equivalently, letH{\displaystyle H}be a hypothesis-class (a set of binary functions) andC{\displaystyle C}a set withm{\displaystyle m}elements. TherestrictionofH{\displaystyle H}toC{\displaystyle C}is the set of binary functions onC{\displaystyle C}that can be derived fromH{\displaystyle H}:[3]: 45 The growth function measures the size ofHC{\displaystyle H_{C}}as a function of|C|{\displaystyle |C|}:[3]: 49 1.The domain is the real lineR{\displaystyle \mathbb {R} }. The set-familyH{\displaystyle H}contains all thehalf-lines(rays) from a given number to positive infinity, i.e., all sets of the form{x>x0∣x∈R}{\displaystyle \{x>x_{0}\mid x\in \mathbb {R} \}}for somex0∈R{\displaystyle x_{0}\in \mathbb {R} }. For any setC{\displaystyle C}ofm{\displaystyle m}real numbers, the intersectionH∩C{\displaystyle H\cap C}containsm+1{\displaystyle m+1}sets: the empty set, the set containing the largest element ofC{\displaystyle C}, the set containing the two largest elements ofC{\displaystyle C}, and so on. Therefore:Growth⁡(H,m)=m+1{\displaystyle \operatorname {Growth} (H,m)=m+1}.[1]: Ex.1The same is true whetherH{\displaystyle H}contains open half-lines, closed half-lines, or both. 2.The domain is the segment[0,1]{\displaystyle [0,1]}. The set-familyH{\displaystyle H}contains all the open sets. For any finite setC{\displaystyle C}ofm{\displaystyle m}real numbers, the intersectionH∩C{\displaystyle H\cap C}contains all possible subsets ofC{\displaystyle C}. There are2m{\displaystyle 2^{m}}such subsets, soGrowth⁡(H,m)=2m{\displaystyle \operatorname {Growth} (H,m)=2^{m}}.[1]: Ex.2 3.The domain is the Euclidean spaceRn{\displaystyle \mathbb {R} ^{n}}. The set-familyH{\displaystyle H}contains all thehalf-spacesof the form:x⋅ϕ≥1{\displaystyle x\cdot \phi \geq 1}, whereϕ{\displaystyle \phi }is a fixed vector. ThenGrowth⁡(H,m)=Comp⁡(n,m){\displaystyle \operatorname {Growth} (H,m)=\operatorname {Comp} (n,m)}, where Comp is thenumber of components in a partitioning of an n-dimensional space by m hyperplanes.[1]: Ex.3 4.The domain is the real lineR{\displaystyle \mathbb {R} }. The set-familyH{\displaystyle H}contains all the real intervals, i.e., all sets of the form{x∈[x0,x1]|x∈R}{\displaystyle \{x\in [x_{0},x_{1}]|x\in \mathbb {R} \}}for somex0,x1∈R{\displaystyle x_{0},x_{1}\in \mathbb {R} }. 
For any setC{\displaystyle C}ofm{\displaystyle m}real numbers, the intersectionH∩C{\displaystyle H\cap C}contains all runs of between 0 andm{\displaystyle m}consecutive elements ofC{\displaystyle C}. The number of such runs is(m+12)+1{\displaystyle {m+1 \choose 2}+1}, soGrowth⁡(H,m)=(m+12)+1{\displaystyle \operatorname {Growth} (H,m)={m+1 \choose 2}+1}. The main property that makes the growth function interesting is that it can be either polynomial or exponential - nothing in-between. The following is a property of the intersection-size:[1]: Lem.1 This implies the following property of the Growth function.[1]: Th.1For every familyH{\displaystyle H}there are two cases: For any finiteH{\displaystyle H}: since for everyC{\displaystyle C}, the number of elements inH∩C{\displaystyle H\cap C}is at most|H|{\displaystyle |H|}. Therefore, the growth function is mainly interesting whenH{\displaystyle H}is infinite. For any nonemptyH{\displaystyle H}: I.e, the growth function has an exponential upper-bound. We say that a set-familyH{\displaystyle H}shattersa setC{\displaystyle C}if their intersection contains all possible subsets ofC{\displaystyle C}, i.e.H∩C=2C{\displaystyle H\cap C=2^{C}}. IfH{\displaystyle H}shattersC{\displaystyle C}of sizem{\displaystyle m}, thenGrowth⁡(H,C)=2m{\displaystyle \operatorname {Growth} (H,C)=2^{m}}, which is the upper bound. Define the Cartesian intersection of two set-families as: Then:[2]: 57 For every two set-families:[2]: 58 TheVC dimensionofH{\displaystyle H}is defined according to these two cases: SoVCDim⁡(H)≥d{\displaystyle \operatorname {VCDim} (H)\geq d}if-and-only-ifGrowth⁡(H,d)=2d{\displaystyle \operatorname {Growth} (H,d)=2^{d}}. The growth function can be regarded as a refinement of the concept of VC dimension. The VC dimension only tells us whetherGrowth⁡(H,d){\displaystyle \operatorname {Growth} (H,d)}is equal to or smaller than2d{\displaystyle 2^{d}}, while the growth function tells us exactly howGrowth⁡(H,m){\displaystyle \operatorname {Growth} (H,m)}changes as a function ofm{\displaystyle m}. Another connection between the growth function and the VC dimension is given by theSauer–Shelah lemma:[3]: 49 In particular, This upper bound is tight, i.e., for allm>d{\displaystyle m>d}there existsH{\displaystyle H}with VC dimensiond{\displaystyle d}such that:[2]: 56 While the growth-function is related to themaximumintersection-size, theentropyis related to theaverageintersection size:[1]: 272–273 The intersection-size has the following property. For every set-familyH{\displaystyle H}: Hence: Moreover, the sequenceEntropy⁡(H,m)/m{\displaystyle \operatorname {Entropy} (H,m)/m}converges to a constantc∈[0,1]{\displaystyle c\in [0,1]}whenm→∞{\displaystyle m\to \infty }. Moreover, the random-variablelog2⁡|H∩Cm|/m{\displaystyle \log _{2}{|H\cap C_{m}|/m}}is concentrated nearc{\displaystyle c}. LetΩ{\displaystyle \Omega }be a set on which aprobability measurePr{\displaystyle \Pr }is defined. LetH{\displaystyle H}be family of subsets ofΩ{\displaystyle \Omega }(= a family of events). Suppose we choose a setCm{\displaystyle C_{m}}that containsm{\displaystyle m}elements ofΩ{\displaystyle \Omega }, where each element is chosen at random according to the probability measureP{\displaystyle P}, independently of the others (i.e., with replacements). For each eventh∈H{\displaystyle h\in H}, we compare the following two quantities: We are interested in the difference,D(h,Cm):=||h∩Cm|/m−Pr[h]|{\displaystyle D(h,C_{m}):={\big |}|h\cap C_{m}|/m-\Pr[h]{\big |}}. 
This difference satisfies an upper bound that depends on the growth function of H.[1]: Th.2 In words: the probability that, for all events in H, the relative frequency is near the probability is lower-bounded by an expression that depends on the growth function of H. A corollary of this is that, if the growth function is polynomial in m (i.e., there exists some n such that Growth(H, m) ≤ m^n + 1), then the above probability approaches 1 as m → ∞. That is, the family H enjoys uniform convergence in probability.
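Example 1 above (half-lines on the real line) is small enough to check by brute force. The sketch below, a plain Python enumeration with illustrative sample points, counts the distinct intersections H ∩ C and recovers Growth(H, m) = m + 1.

```python
from itertools import chain

def halfline_intersections(C):
    """Distinct subsets of C of the form {x in C : x > x0}, over all thresholds x0.
    For half-lines it suffices to try a threshold below every point and one at each point."""
    thresholds = chain([min(C) - 1.0], C)
    return {frozenset(x for x in C if x > t) for t in thresholds}

C = [0.5, 1.7, 3.2, 4.0, 9.9]                 # m = 5 sample points
print(len(halfline_intersections(C)))         # 6, matching Growth(H, m) = m + 1
```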
https://en.wikipedia.org/wiki/Growth_function
Incombinatorial mathematicsandextremal set theory, theSauer–Shelah lemmastates that everyfamily of setswith smallVC dimensionconsists of a small number of sets. It is named afterNorbert SauerandSaharon Shelah, who published it independently of each other in 1972.[1][2]The same result was also published slightly earlier and again independently, byVladimir VapnikandAlexey Chervonenkis, after whom the VC dimension is named.[3]In his paper containing the lemma, Shelah gives credit also toMicha Perles,[2]and for this reason the lemma has also been called thePerles–Sauer–Shelah lemmaand theSauer–Shelah–Perles lemma.[4][5] Buzaglo et al. call this lemma "one of the most fundamental results on VC-dimension",[4]and it has applications in many areas. Sauer's motivation was in thecombinatoricsof set systems,[1]while Shelah's was inmodel theory[2]and that of Vapnik and Chervonenkis was instatistics.[3]It has also been applied indiscrete geometry[6]andgraph theory.[7] IfF={S1,S2,…}{\displaystyle \textstyle {\mathcal {F}}=\{S_{1},S_{2},\dots \}}is a family of sets andT{\displaystyle T}is a set, thenT{\displaystyle T}is said to beshatteredbyF{\displaystyle {\mathcal {F}}}if every subset ofT{\displaystyle T}(including theempty setandT{\displaystyle T}itself) can be obtained as the intersectionT∩Si{\displaystyle T\cap S_{i}}ofT{\displaystyle T}with some setSi{\displaystyle S_{i}}in the family. TheVC dimensionofF{\displaystyle {\mathcal {F}}}is the largestcardinalityof a set shattered byF{\displaystyle {\mathcal {F}}}.[6] In terms of these definitions, the Sauer–Shelah lemma states that if the VC dimension ofF{\displaystyle {\mathcal {F}}}isk{\displaystyle k}, and the union ofF{\displaystyle {\mathcal {F}}}hasn{\displaystyle n}elements, thenF{\displaystyle {\mathcal {F}}}can consist of at most∑i=0k(ni)=O(nk){\displaystyle \sum _{i=0}^{k}{\binom {n}{i}}=O(n^{k})}sets, as expressed usingbig O notation. Equivalently, if the number of sets in the family,|F|{\displaystyle |{\mathcal {F}}|}, obeys the inequality|F|>∑i=0k−1(ni),{\displaystyle |{\mathcal {F}}|>\sum _{i=0}^{k-1}{\binom {n}{i}},}thenF{\displaystyle {\mathcal {F}}}shatters a set of sizek{\displaystyle k}.[6] The bound of the lemma is tight: Let the familyF{\displaystyle {\mathcal {F}}}be composed of all subsets of{1,2,…,n}{\displaystyle \{1,2,\dots ,n\}}with size less thank{\displaystyle k}. Then the number of sets inF{\displaystyle {\mathcal {F}}}is exactly∑i=0k−1(ni){\textstyle \sum _{i=0}^{k-1}{\binom {n}{i}}}but it does not shatter any set of sizek{\displaystyle k}.[8] A strengthening of the Sauer–Shelah lemma, due toPajor (1985), states that every finite set familyF{\displaystyle {\mathcal {F}}}shatters at least|F|{\displaystyle |{\mathcal {F}}|}sets.[9]This immediately implies the Sauer–Shelah lemma, because only∑i=0k−1(ni){\textstyle \sum _{i=0}^{k-1}{\tbinom {n}{i}}}of the subsets of ann{\displaystyle n}-item universe have cardinality less thank{\displaystyle k}. 
Thus, when|F|>∑i=0k−1(ni),{\displaystyle |{\mathcal {F}}|>\sum _{i=0}^{k-1}{\binom {n}{i}},}there are not enough small sets to be shattered, so one of the shattered sets must have cardinality at leastk{\displaystyle k}.[10] For a restricted type of shattered set, called an order-shattered set, the number of shattered sets always equals the cardinality of the set family.[11] Pajor's variant of the Sauer–Shelah lemma may be proved bymathematical induction; the proof has variously been credited toNoga Alon[12]or toRon Aharoniand Ron Holzman.[11] A different proof of the Sauer–Shelah lemma in its original form, byPéter FranklandJános Pach, is based onlinear algebraand theinclusion–exclusion principle.[6][8]This proof extends to other settings such as families of vector spaces and, more generally,geometric lattices.[5] The original application of the lemma, by Vapnik and Chervonenkis, was in showing that every probability distribution can be approximated (with respect to a family of events of a given VC dimension) by a finite set of sample points whosecardinalitydepends only on the VC dimension of the family of events. In this context, there are two important notions of approximation, both parameterized by a numberε{\displaystyle \varepsilon }: a setS{\displaystyle S}of samples, and a probability distribution onS{\displaystyle S}, is said to be anε{\displaystyle \varepsilon }-approximation of the original distribution if the probability of each event with respect toS{\displaystyle S}differs from its original probability by at mostε{\displaystyle \varepsilon }. A setS{\displaystyle S}of (unweighted) samples is said to be anε{\displaystyle \varepsilon }-netif every event with probability at leastε{\displaystyle \varepsilon }includes at least one point ofS{\displaystyle S}. Anε{\displaystyle \varepsilon }-approximation must also be anε{\displaystyle \varepsilon }-net but not necessarily vice versa. Vapnik and Chervonenkis used the lemma to show that set systems of VC dimensiond{\displaystyle d}always haveε{\displaystyle \varepsilon }-approximations of cardinalityO(dε2log⁡dε).{\displaystyle O({\tfrac {d}{\varepsilon ^{2}}}\log {\tfrac {d}{\varepsilon }}).}Later authors includingHaussler & Welzl (1987)[13]andKomlós, Pach & Woeginger (1992)[14]similarly showed that there always existε{\displaystyle \varepsilon }-nets of cardinalityO(dεlog⁡1ε){\displaystyle O({\tfrac {d}{\varepsilon }}\log {\tfrac {1}{\varepsilon }})}, and more precisely of cardinality at most[6]dεln⁡1ε+2dεln⁡ln⁡1ε+6dε.{\displaystyle {\tfrac {d}{\varepsilon }}\ln {\tfrac {1}{\varepsilon }}+{\tfrac {2d}{\varepsilon }}\ln \ln {\tfrac {1}{\varepsilon }}+{\tfrac {6d}{\varepsilon }}.}The main idea of the proof of the existence of smallε{\displaystyle \varepsilon }-nets is to choose a random samplex{\displaystyle x}of cardinalityO(dεlog⁡1ε){\textstyle O({\tfrac {d}{\varepsilon }}\log {\tfrac {1}{\varepsilon }})}and a second independent random sampley{\displaystyle y}of cardinalityO(dεlog2⁡1ε){\textstyle O({\tfrac {d}{\varepsilon }}\log ^{2}{\tfrac {1}{\varepsilon }})}, and to bound the probability thatx{\displaystyle x}is missed by some large eventE{\displaystyle E}by the probability thatx{\displaystyle x}is missed and simultaneously the intersection ofy{\displaystyle y}withE{\displaystyle E}is larger than its median value. 
For any particularE{\displaystyle E}, the probability thatx{\displaystyle x}is missed whiley{\displaystyle y}is larger than its median is very small, and the Sauer–Shelah lemma (applied tox∪y{\displaystyle x\cup y}) shows that only a small number of distinct eventsE{\displaystyle E}need to be considered, so by theunion bound, with nonzero probability,x{\displaystyle x}is anε{\displaystyle \varepsilon }-net.[6] In turn,ε{\displaystyle \varepsilon }-nets andε{\displaystyle \varepsilon }-approximations, and the likelihood that a random sample of large enough cardinality has these properties, have important applications inmachine learning, in the area ofprobably approximately correct learning.[15]Incomputational geometry, they have been applied torange searching,[13]derandomization,[16]andapproximation algorithms.[17][18] Kozma & Moran (2013)use generalizations of the Sauer–Shelah lemma to prove results ingraph theorysuch as that the number ofstrong orientationsof a given graph is sandwiched between its numbers ofconnectedand2-edge-connectedsubgraphs.[7]
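For small ground sets, the quantities in the lemma can be checked by brute force. The sketch below is plain Python with an illustrative family: all subsets of size at most one of a four-element universe, which is the tightness example mentioned above. Its VC dimension is 1, and the bound Σ_{i≤1} C(4, i) = 5 is met with equality.

```python
from itertools import combinations
from math import comb

def shatters(family, T):
    """True if every subset of T arises as T ∩ S for some S in the family."""
    T = frozenset(T)
    traces = {T & S for S in family}
    return len(traces) == 2 ** len(T)

def vc_dimension(family, universe):
    """Largest cardinality of a subset of the universe shattered by the family."""
    return max(k for k in range(len(universe) + 1)
               if any(shatters(family, T) for T in combinations(universe, k)))

n = 4
universe = range(n)
family = [frozenset(s) for size in (0, 1) for s in combinations(universe, size)]

d = vc_dimension(family, universe)                 # d = 1 here
bound = sum(comb(n, i) for i in range(d + 1))      # Sauer–Shelah bound for VC dimension d
print(len(family), "<=", bound)                    # 5 <= 5, the bound is met with equality
```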
https://en.wikipedia.org/wiki/Sauer%E2%80%93Shelah_lemma
Adaptive coordinate descent[1] is an improvement of the coordinate descent algorithm for non-separable optimization by the use of adaptive encoding.[2] The adaptive coordinate descent approach gradually builds a transformation of the coordinate system such that the new coordinates are as decorrelated as possible with respect to the objective function. Adaptive coordinate descent was shown to be competitive with state-of-the-art evolutionary algorithms, and it is invariant under monotonic transformations of the objective function (scaling) as well as under orthogonal transformations of the search space (rotation). A CMA-like adaptive encoding update, based mostly on principal component analysis, is used to extend the coordinate descent method to the optimization of non-separable problems. The adaptation of an appropriate coordinate system allows adaptive coordinate descent to outperform coordinate descent on non-separable functions. The convergence of both algorithms on the 2-dimensional Rosenbrock function, up to a target function value of 10^(−10) and starting from the initial point x0 = (−3, −4), illustrates the difference: the adaptive coordinate descent method reaches the target value after only 325 function evaluations (about 70 times faster than coordinate descent), which is comparable to gradient-based methods. The algorithm has linear time complexity if the coordinate system is updated every D iterations; it is also suitable for large-scale (D >> 100) non-linear optimization. The first approaches to optimization using an adaptive coordinate system were proposed as early as the 1960s (see, e.g., Rosenbrock's method). The principal axis (PRAXIS) algorithm, also referred to as Brent's algorithm, is a derivative-free algorithm which assumes a quadratic form of the optimized function and repeatedly updates a set of conjugate search directions.[3] The algorithm, however, is not invariant to scaling of the objective function and may fail under certain rank-preserving transformations of it (e.g., those that lead to a non-quadratic shape of the objective function). A recent analysis of PRAXIS can be found in [4]. For practical applications see [5], where an adaptive coordinate descent approach with step-size adaptation and local coordinate system rotation was proposed for robot-manipulator path planning in 3D space with static polygonal obstacles.
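The principle that decorrelating the coordinates is what makes coordinate descent effective on non-separable problems can be illustrated with an idealized sketch. This is not the published adaptive encoding update, which builds the transformation gradually from the search history; here the exact eigenbasis of a known quadratic stands in for it, and all names and data are illustrative.

```python
import numpy as np

# Ill-conditioned, non-separable quadratic f(x) = 0.5 x^T A x with correlated variables.
A = np.array([[100.0, 99.0],
              [ 99.0, 100.0]])
f = lambda x: 0.5 * x @ A @ x

def exact_coordinate_descent(H, x0, sweeps):
    """Exact coordinate descent on the quadratic 0.5 x^T H x: along axis i the
    minimizer is available in closed form, x_i <- x_i - (H x)_i / H_ii."""
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] -= (H[i] @ x) / H[i, i]
    return x

x0 = np.array([1.0, -1.5])

# In the original, correlated coordinates, progress per sweep is slow
# (the error shrinks only by a factor of about (99/100)^2 per sweep).
print(f(exact_coordinate_descent(A, x0, sweeps=10)))

# Idealized "adaptive encoding": rotate into the eigenbasis of A, where the problem
# becomes separable; a single sweep of coordinate descent then solves it exactly.
eigvals, B = np.linalg.eigh(A)                  # columns of B span the new coordinate system
z = exact_coordinate_descent(np.diag(eigvals), B.T @ x0, sweeps=1)
print(f(B @ z))                                 # 0.0 (up to round-off)
```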
https://en.wikipedia.org/wiki/Adaptive_coordinate_descent
Inmathematics, theconjugate gradient methodis analgorithmfor thenumerical solutionof particularsystems of linear equations, namely those whose matrix ispositive-semidefinite. The conjugate gradient method is often implemented as aniterative algorithm, applicable tosparsesystems that are too large to be handled by a direct implementation or other direct methods such as theCholesky decomposition. Large sparse systems often arise when numerically solvingpartial differential equationsor optimization problems. The conjugate gradient method can also be used to solve unconstrainedoptimizationproblems such asenergy minimization. It is commonly attributed toMagnus HestenesandEduard Stiefel,[1][2]who programmed it on theZ4,[3]and extensively researched it.[4][5] Thebiconjugate gradient methodprovides a generalization to non-symmetric matrices. Variousnonlinear conjugate gradient methodsseek minima of nonlinear optimization problems. Suppose we want to solve thesystem of linear equations for the vectorx{\displaystyle \mathbf {x} }, where the knownn×n{\displaystyle n\times n}matrixA{\displaystyle \mathbf {A} }issymmetric(i.e.,AT=A{\displaystyle \mathbf {A} ^{\mathsf {T}}=\mathbf {A} }),positive-definite(i.e.xTAx>0{\displaystyle \mathbf {x} ^{\mathsf {T}}\mathbf {Ax} >0}for all non-zero vectorsx{\displaystyle \mathbf {x} }inRn{\displaystyle \mathbb {R} ^{n}}), andreal, andb{\displaystyle \mathbf {b} }is known as well. We denote the unique solution of this system byx∗{\displaystyle \mathbf {x} _{*}}. The conjugate gradient method can be derived from several different perspectives, including specialization of the conjugate direction method for optimization, and variation of theArnoldi/Lanczositeration foreigenvalueproblems. Despite differences in their approaches, these derivations share a common topic—proving the orthogonality of the residuals and conjugacy of the search directions. These two properties are crucial to developing the well-known succinct formulation of the method. We say that two non-zero vectorsu{\displaystyle \mathbf {u} }andv{\displaystyle \mathbf {v} }are conjugate (with respect toA{\displaystyle \mathbf {A} }) if SinceA{\displaystyle \mathbf {A} }is symmetric and positive-definite, the left-hand side defines aninner product Two vectors are conjugate if and only if they are orthogonal with respect to this inner product. Being conjugate is a symmetric relation: ifu{\displaystyle \mathbf {u} }is conjugate tov{\displaystyle \mathbf {v} }, thenv{\displaystyle \mathbf {v} }is conjugate tou{\displaystyle \mathbf {u} }. Suppose that is a set ofn{\displaystyle n}mutually conjugate vectors with respect toA{\displaystyle \mathbf {A} }, i.e.piTApj=0{\displaystyle \mathbf {p} _{i}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{j}=0}for alli≠j{\displaystyle i\neq j}. ThenP{\displaystyle P}forms abasisforRn{\displaystyle \mathbb {R} ^{n}}, and we may express the solutionx∗{\displaystyle \mathbf {x} _{*}}ofAx=b{\displaystyle \mathbf {Ax} =\mathbf {b} }in this basis: Left-multiplying the problemAx=b{\displaystyle \mathbf {Ax} =\mathbf {b} }with the vectorpkT{\displaystyle \mathbf {p} _{k}^{\mathsf {T}}}yields and so This gives the following method[4]for solving the equationAx=b{\displaystyle \mathbf {Ax} =\mathbf {b} }: find a sequence ofn{\displaystyle n}conjugate directions, and then compute the coefficientsαk{\displaystyle \alpha _{k}}. 
If we choose the conjugate vectorspk{\displaystyle \mathbf {p} _{k}}carefully, then we may not need all of them to obtain a good approximation to the solutionx∗{\displaystyle \mathbf {x} _{*}}. So, we want to regard the conjugate gradient method as an iterative method. This also allows us to approximately solve systems wheren{\displaystyle n}is so large that the direct method would take too much time. We denote the initial guess forx∗{\displaystyle \mathbf {x} _{*}}byx0{\displaystyle \mathbf {x} _{0}}(we can assume without loss of generality thatx0=0{\displaystyle \mathbf {x} _{0}=\mathbf {0} }, otherwise consider the systemAz=b−Ax0{\displaystyle \mathbf {Az} =\mathbf {b} -\mathbf {Ax} _{0}}instead). Starting withx0{\displaystyle \mathbf {x} _{0}}we search for the solution and in each iteration we need a metric to tell us whether we are closer to the solutionx∗{\displaystyle \mathbf {x} _{*}}(that is unknown to us). This metric comes from the fact that the solutionx∗{\displaystyle \mathbf {x} _{*}}is also the unique minimizer of the followingquadratic function The existence of a unique minimizer is apparent as itsHessian matrixof second derivatives is symmetric positive-definite and that the minimizer (useDf(x)=0{\displaystyle Df(\mathbf {x} )=0}) solves the initial problem follows from its first derivative This suggests taking the first basis vectorp0{\displaystyle \mathbf {p} _{0}}to be the negative of the gradient off{\displaystyle f}atx=x0{\displaystyle \mathbf {x} =\mathbf {x} _{0}}. The gradient off{\displaystyle f}equalsAx−b{\displaystyle \mathbf {Ax} -\mathbf {b} }. Starting with an initial guessx0{\displaystyle \mathbf {x} _{0}}, this means we takep0=b−Ax0{\displaystyle \mathbf {p} _{0}=\mathbf {b} -\mathbf {Ax} _{0}}. The other vectors in the basis will be conjugate to the gradient, hence the nameconjugate gradient method. Note thatp0{\displaystyle \mathbf {p} _{0}}is also theresidualprovided by this initial step of the algorithm. Letrk{\displaystyle \mathbf {r} _{k}}be theresidualat thek{\displaystyle k}th step: As observed above,rk{\displaystyle \mathbf {r} _{k}}is the negative gradient off{\displaystyle f}atxk{\displaystyle \mathbf {x} _{k}}, so thegradient descentmethod would require to move in the directionrk. Here, however, we insist that the directionspk{\displaystyle \mathbf {p} _{k}}must be conjugate to each other. A practical way to enforce this is by requiring that the next search direction be built out of the current residual and all previous search directions. The conjugation constraint is an orthonormal-type constraint and hence the algorithm can be viewed as an example ofGram-Schmidt orthonormalization. This gives the following expression: (see the picture at the top of the article for the effect of the conjugacy constraint on convergence). Following this direction, the next optimal location is given by with where the last equality follows from the definition ofrk{\displaystyle \mathbf {r} _{k}}. The expression forαk{\displaystyle \alpha _{k}}can be derived if one substitutes the expression forxk+1intofand minimizing it with respect toαk{\displaystyle \alpha _{k}} The above algorithm gives the most straightforward explanation of the conjugate gradient method. Seemingly, the algorithm as stated requires storage of all previous searching directions and residue vectors, as well as many matrix–vector multiplications, and thus can be computationally expensive. 
However, a closer analysis of the algorithm shows thatri{\displaystyle \mathbf {r} _{i}}is orthogonal torj{\displaystyle \mathbf {r} _{j}}, i.e.riTrj=0{\displaystyle \mathbf {r} _{i}^{\mathsf {T}}\mathbf {r} _{j}=0}, fori≠j{\displaystyle i\neq j}. Andpi{\displaystyle \mathbf {p} _{i}}isA{\displaystyle \mathbf {A} }-orthogonal topj{\displaystyle \mathbf {p} _{j}}, i.e.piTApj=0{\displaystyle \mathbf {p} _{i}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{j}=0}, fori≠j{\displaystyle i\neq j}. This can be regarded that as the algorithm progresses,pi{\displaystyle \mathbf {p} _{i}}andri{\displaystyle \mathbf {r} _{i}}span the sameKrylov subspace, whereri{\displaystyle \mathbf {r} _{i}}form the orthogonal basis with respect to the standard inner product, andpi{\displaystyle \mathbf {p} _{i}}form the orthogonal basis with respect to the inner product induced byA{\displaystyle \mathbf {A} }. Therefore,xk{\displaystyle \mathbf {x} _{k}}can be regarded as the projection ofx{\displaystyle \mathbf {x} }on the Krylov subspace. That is, if the CG method starts withx0=0{\displaystyle \mathbf {x} _{0}=0}, then[6]xk=argminy∈Rn{(x−y)⊤A(x−y):y∈span⁡{b,Ab,…,Ak−1b}}{\displaystyle x_{k}=\mathrm {argmin} _{y\in \mathbb {R} ^{n}}{\left\{(x-y)^{\top }A(x-y):y\in \operatorname {span} \left\{b,Ab,\ldots ,A^{k-1}b\right\}\right\}}}The algorithm is detailed below for solvingAx=b{\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} }whereA{\displaystyle \mathbf {A} }is a real, symmetric, positive-definite matrix. The input vectorx0{\displaystyle \mathbf {x} _{0}}can be an approximate initial solution or0{\displaystyle \mathbf {0} }. It is a different formulation of the exact procedure described above. This is the most commonly used algorithm. The same formula forβk{\displaystyle \beta _{k}}is also used in the Fletcher–Reevesnonlinear conjugate gradient method. We note thatx1{\displaystyle \mathbf {x} _{1}}is computed by thegradient descentmethod applied tox0{\displaystyle \mathbf {x} _{0}}. Settingβk=0{\displaystyle \beta _{k}=0}would similarly makexk+1{\displaystyle \mathbf {x} _{k+1}}computed by thegradient descentmethod fromxk{\displaystyle \mathbf {x} _{k}}, i.e., can be used as a simple implementation of a restart of the conjugate gradient iterations.[4]Restarts could slow down convergence, but may improve stability if the conjugate gradient method misbehaves, e.g., due toround-off error. The formulasxk+1:=xk+αkpk{\displaystyle \mathbf {x} _{k+1}:=\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k}}andrk:=b−Axk{\displaystyle \mathbf {r} _{k}:=\mathbf {b} -\mathbf {Ax} _{k}}, which both hold in exact arithmetic, make the formulasrk+1:=rk−αkApk{\displaystyle \mathbf {r} _{k+1}:=\mathbf {r} _{k}-\alpha _{k}\mathbf {Ap} _{k}}andrk+1:=b−Axk+1{\displaystyle \mathbf {r} _{k+1}:=\mathbf {b} -\mathbf {Ax} _{k+1}}mathematically equivalent. The former is used in the algorithm to avoid an extra multiplication byA{\displaystyle \mathbf {A} }since the vectorApk{\displaystyle \mathbf {Ap} _{k}}is already computed to evaluateαk{\displaystyle \alpha _{k}}. The latter may be more accurate, substituting the explicit calculationrk+1:=b−Axk+1{\displaystyle \mathbf {r} _{k+1}:=\mathbf {b} -\mathbf {Ax} _{k+1}}for the implicit one by the recursion subject toround-off erroraccumulation, and is thus recommended for an occasional evaluation.[7] A norm of the residual is typically used for stopping criteria. 
The norm of the explicit residualrk+1:=b−Axk+1{\displaystyle \mathbf {r} _{k+1}:=\mathbf {b} -\mathbf {Ax} _{k+1}}provides a guaranteed level of accuracy both in exact arithmetic and in the presence of therounding errors, where convergence naturally stagnates. In contrast, the implicit residualrk+1:=rk−αkApk{\displaystyle \mathbf {r} _{k+1}:=\mathbf {r} _{k}-\alpha _{k}\mathbf {Ap} _{k}}is known to keep getting smaller in amplitude well below the level ofrounding errorsand thus cannot be used to determine the stagnation of convergence. In the algorithm,αk{\displaystyle \alpha _{k}}is chosen such thatrk+1{\displaystyle \mathbf {r} _{k+1}}is orthogonal tork{\displaystyle \mathbf {r} _{k}}. The denominator is simplified from sincerk+1=pk+1−βkpk{\displaystyle \mathbf {r} _{k+1}=\mathbf {p} _{k+1}-\mathbf {\beta } _{k}\mathbf {p} _{k}}. Theβk{\displaystyle \beta _{k}}is chosen such thatpk+1{\displaystyle \mathbf {p} _{k+1}}is conjugate topk{\displaystyle \mathbf {p} _{k}}. Initially,βk{\displaystyle \beta _{k}}is using and equivalently Apk=1αk(rk−rk+1),{\displaystyle \mathbf {A} \mathbf {p} _{k}={\frac {1}{\alpha _{k}}}(\mathbf {r} _{k}-\mathbf {r} _{k+1}),} the numerator ofβk{\displaystyle \beta _{k}}is rewritten as becauserk+1{\displaystyle \mathbf {r} _{k+1}}andrk{\displaystyle \mathbf {r} _{k}}are orthogonal by design. The denominator is rewritten as using that the search directionspk{\displaystyle \mathbf {p} _{k}}are conjugated and again that the residuals are orthogonal. This gives theβ{\displaystyle \beta }in the algorithm after cancellingαk{\displaystyle \alpha _{k}}. Consider the linear systemAx=bgiven by we will perform two steps of the conjugate gradient method beginning with the initial guess in order to find an approximate solution to the system. For reference, the exact solution is Our first step is to calculate the residual vectorr0associated withx0. This residual is computed from the formular0=b-Ax0, and in our case is equal to Since this is the first iteration, we will use the residual vectorr0as our initial search directionp0; the method of selectingpkwill change in further iterations. We now compute the scalarα0using the relationship We can now computex1using the formula This result completes the first iteration, the result being an "improved" approximate solution to the system,x1. We may now move on and compute the next residual vectorr1using the formula Our next step in the process is to compute the scalarβ0that will eventually be used to determine the next search directionp1. Now, using this scalarβ0, we can compute the next search directionp1using the relationship We now compute the scalarα1using our newly acquiredp1using the same method as that used forα0. Finally, we findx2using the same method as that used to findx1. The result,x2, is a "better" approximation to the system's solution thanx1andx0. If exact arithmetic were to be used in this example instead of limited-precision, then the exact solution would theoretically have been reached aftern= 2 iterations (nbeing the order of the system). Under exact arithmetic, the number of iterations required is no more than the order of the matrix. This behavior is known as thefinite termination propertyof the conjugate gradient method. It refers to the method's ability to reach the exact solution of a linear system in a finite number of steps—at most equal to the dimension of the system—when exact arithmetic is used. 
This property arises from the fact that, at each iteration, the method generates a residual vector that is orthogonal to all previous residuals. These residuals form a mutually orthogonal set. In an n-dimensional space, it is impossible to construct more than n linearly independent and mutually orthogonal vectors unless one of them is the zero vector. Therefore, once a zero residual appears, the method has reached the solution and must terminate. This ensures that the conjugate gradient method converges in at most n steps. To demonstrate this, consider the system: {\displaystyle A={\begin{bmatrix}3&-2\\-2&4\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}1\\1\end{bmatrix}}} We start from an initial guess {\displaystyle \mathbf {x} _{0}={\begin{bmatrix}1\\2\end{bmatrix}}}. Since A is symmetric positive-definite and the system is 2-dimensional, the conjugate gradient method should find the exact solution in no more than 2 steps. A short program demonstrates this behavior (see the sketch after this passage). Its output confirms that the method reaches the exact solution {\displaystyle \mathbf {x} _{*}={\begin{bmatrix}3/4\\5/8\end{bmatrix}}} after two iterations, consistent with the theoretical prediction. This example illustrates how the conjugate gradient method behaves as a direct method under idealized conditions. The finite termination property also has practical implications in solving large sparse systems, which frequently arise in scientific and engineering applications. For instance, discretizing the two-dimensional Laplace equation {\displaystyle \nabla ^{2}u=0} using finite differences on a uniform grid leads to a sparse linear system {\displaystyle A\mathbf {x} =\mathbf {b} }, where A is symmetric and positive definite. Using a 5×5 interior grid yields a 25×25 system, and the coefficient matrix A has a five-point stencil pattern. Each row of A contains at most five nonzero entries corresponding to the central point and its immediate neighbors. For example, the matrix generated from such a grid may look like: {\displaystyle A={\begin{bmatrix}4&-1&0&\cdots &-1&0&\cdots \\-1&4&-1&\cdots &0&0&\cdots \\0&-1&4&-1&0&0&\cdots \\\vdots &\vdots &\ddots &\ddots &\ddots &\vdots \\-1&0&\cdots &-1&4&-1&\cdots \\0&0&\cdots &0&-1&4&\cdots \\\vdots &\vdots &\cdots &\cdots &\cdots &\ddots \end{bmatrix}}} Although the system dimension is 25, the conjugate gradient method is theoretically guaranteed to terminate in at most 25 iterations under exact arithmetic. In practice, convergence often occurs in far fewer steps due to the matrix's spectral properties. This efficiency makes the conjugate gradient method particularly attractive for solving large-scale systems arising from partial differential equations, such as those found in heat conduction, fluid dynamics, and electrostatics. The conjugate gradient method can theoretically be viewed as a direct method, as in the absence of round-off error it produces the exact solution after a finite number of iterations, which is not larger than the size of the matrix. In practice, the exact solution is never obtained, since the conjugate gradient method is unstable with respect to even small perturbations; e.g., most directions are not in practice conjugate, owing to the degenerative nature of generating the Krylov subspaces.
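The sketch below (Python with NumPy rather than MATLAB; the function name conjugate_gradient is illustrative) implements the standard unpreconditioned recurrence described above and applies it to the 2 × 2 system just given, terminating after two iterations at x_* = (3/4, 5/8).

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-12, max_iter=None):
    """Standard (unpreconditioned) conjugate gradient for symmetric positive-definite A."""
    x = np.array(x0, dtype=float)
    r = b - A @ x                      # initial residual r_0
    p = r.copy()                       # initial search direction p_0 = r_0
    rs_old = r @ r
    for k in range(max_iter or len(b)):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length along p_k
        x += alpha * p
        r -= alpha * Ap                # implicit residual update
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # stop on a small residual norm
            return x, k + 1
        p = r + (rs_new / rs_old) * p  # beta_k = r_{k+1}^T r_{k+1} / r_k^T r_k (Fletcher–Reeves form)
        rs_old = rs_new
    return x, max_iter or len(b)

A = np.array([[3.0, -2.0], [-2.0, 4.0]])
b = np.array([1.0, 1.0])
x, iters = conjugate_gradient(A, b, x0=[1.0, 2.0])
print(x, iters)    # exact solution (0.75, 0.625), reached after 2 iterations
```

In floating point the loop exits when the residual norm falls below the tolerance rather than hitting exactly zero, in line with the remarks above about round-off error.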
As aniterative method, the conjugate gradient method monotonically (in the energy norm) improves approximationsxk{\displaystyle \mathbf {x} _{k}}to the exact solution and may reach the required tolerance after a relatively small (compared to the problem size) number of iterations. The improvement is typically linear and its speed is determined by thecondition numberκ(A){\displaystyle \kappa (A)}of the system matrixA{\displaystyle A}: the largerκ(A){\displaystyle \kappa (A)}is, the slower the improvement.[8] However, an interesting case appears when the eigenvalues are spaced logarithmically for a large symmetric matrix. For example, letA=QDQT{\displaystyle A=QDQ^{T}}whereQ{\displaystyle Q}is a random orthogonal matrix andD{\displaystyle D}is a diagonal matrix with eigenvalues ranging fromλn=1{\displaystyle \lambda _{n}=1}toλ1=106{\displaystyle \lambda _{1}=10^{6}}, spaced logarithmically. Despite the finite termination property of CGM, where the exact solution should theoretically be reached in at mostn{\displaystyle n}steps, the method may exhibit stagnation in convergence. In such a scenario, even after many more iterations—e.g., ten times the matrix size—the error may only decrease modestly (e.g., to10−5{\displaystyle 10^{-5}}). Moreover, the iterative error may oscillate significantly, making it unreliable as a stopping condition. This poor convergence is not explained by the condition number alone (e.g.,κ2(A)=106{\displaystyle \kappa _{2}(A)=10^{6}}), but rather by the eigenvalue distribution itself. When the eigenvalues are more evenly spaced or randomly distributed, such convergence issues are typically absent, highlighting that CGM performance depends not only onκ(A){\displaystyle \kappa (A)}but also on how the eigenvalues are distributed.[9] Ifκ(A){\displaystyle \kappa (A)}is large,preconditioningis commonly used to replace the original systemAx−b=0{\displaystyle \mathbf {Ax} -\mathbf {b} =0}withM−1(Ax−b)=0{\displaystyle \mathbf {M} ^{-1}(\mathbf {Ax} -\mathbf {b} )=0}such thatκ(M−1A){\displaystyle \kappa (\mathbf {M} ^{-1}\mathbf {A} )}is smaller thanκ(A){\displaystyle \kappa (\mathbf {A} )}, see below. Define a subset of polynomials as whereΠk{\displaystyle \Pi _{k}}is the set ofpolynomialsof maximal degreek{\displaystyle k}. Let(xk)k{\displaystyle \left(\mathbf {x} _{k}\right)_{k}}be the iterative approximations of the exact solutionx∗{\displaystyle \mathbf {x} _{*}}, and define the errors asek:=xk−x∗{\displaystyle \mathbf {e} _{k}:=\mathbf {x} _{k}-\mathbf {x} _{*}}. Now, the rate of convergence can be approximated as[4][10] whereσ(A){\displaystyle \sigma (\mathbf {A} )}denotes thespectrum, andκ(A){\displaystyle \kappa (\mathbf {A} )}denotes thecondition number. This showsk=12κ(A)log⁡(‖e0‖Aε−1){\displaystyle k={\tfrac {1}{2}}{\sqrt {\kappa (\mathbf {A} )}}\log \left(\left\|\mathbf {e} _{0}\right\|_{\mathbf {A} }\varepsilon ^{-1}\right)}iterations suffices to reduce the error to2ε{\displaystyle 2\varepsilon }for anyε>0{\displaystyle \varepsilon >0}. Note, the important limit whenκ(A){\displaystyle \kappa (\mathbf {A} )}tends to∞{\displaystyle \infty } This limit shows a faster convergence rate compared to the iterative methods ofJacobiorGauss–Seidelwhich scale as≈1−2κ(A){\displaystyle \approx 1-{\frac {2}{\kappa (\mathbf {A} )}}}. Noround-off erroris assumed in the convergence theorem, but the convergence bound is commonly valid in practice as theoretically explained[5]byAnne Greenbaum. 
If initialized randomly, the first stage of iterations is often the fastest, as the error is eliminated within the Krylov subspace that initially reflects a smaller effective condition number. The second stage of convergence is typically well defined by the theoretical convergence bound withκ(A){\textstyle {\sqrt {\kappa (\mathbf {A} )}}}, but may be super-linear, depending on a distribution of the spectrum of the matrixA{\displaystyle A}and the spectral distribution of the error.[5]In the last stage, the smallest attainable accuracy is reached and the convergence stalls or the method may even start diverging. In typical scientific computing applications indouble-precision floating-point formatfor matrices of large sizes, the conjugate gradient method uses a stopping criterion with a tolerance that terminates the iterations during the first or second stage. In most cases,preconditioningis necessary to ensure fast convergence of the conjugate gradient method. IfM−1{\displaystyle \mathbf {M} ^{-1}}is symmetric positive-definite andM−1A{\displaystyle \mathbf {M} ^{-1}\mathbf {A} }has a better condition number thanA,{\displaystyle \mathbf {A} ,}a preconditioned conjugate gradient method can be used. It takes the following form:[11] The above formulation is equivalent to applying the regular conjugate gradient method to the preconditioned system[12] where The Cholesky decomposition of the preconditioner must be used to keep the symmetry (and positive definiteness) of the system. However, this decomposition does not need to be computed, and it is sufficient to knowM−1{\displaystyle \mathbf {M} ^{-1}}. It can be shown thatE−1A(E−1)T{\displaystyle \mathbf {E} ^{-1}\mathbf {A} (\mathbf {E} ^{-1})^{\mathsf {T}}}has the same spectrum asM−1A{\displaystyle \mathbf {M} ^{-1}\mathbf {A} }. The preconditioner matrixMhas to be symmetric positive-definite and fixed, i.e., cannot change from iteration to iteration. If any of these assumptions on the preconditioner is violated, the behavior of the preconditioned conjugate gradient method may become unpredictable. An example of a commonly usedpreconditioneris theincomplete Cholesky factorization.[13] It is important to keep in mind that we don't want to invert the matrixM{\displaystyle \mathbf {M} }explicitly in order to getM−1{\displaystyle \mathbf {M} ^{-1}}for use it in the process, since invertingM{\displaystyle \mathbf {M} }would take more time/computational resources than solving the conjugate gradient algorithm itself. As an example, let's say that we are using a preconditioner coming from incomplete Cholesky factorization. The resulting matrix is the lower triangular matrixL{\displaystyle \mathbf {L} }, and the preconditioner matrix is: M=LLT{\displaystyle \mathbf {M} =\mathbf {LL} ^{\mathsf {T}}} Then we have to solve: Mz=r{\displaystyle \mathbf {Mz} =\mathbf {r} } z=M−1r{\displaystyle \mathbf {z} =\mathbf {M} ^{-1}\mathbf {r} } But: M−1=(L−1)TL−1{\displaystyle \mathbf {M} ^{-1}=(\mathbf {L} ^{-1})^{\mathsf {T}}\mathbf {L} ^{-1}} Then: z=(L−1)TL−1r{\displaystyle \mathbf {z} =(\mathbf {L} ^{-1})^{\mathsf {T}}\mathbf {L} ^{-1}\mathbf {r} } Let's take an intermediary vectora{\displaystyle \mathbf {a} }: a=L−1r{\displaystyle \mathbf {a} =\mathbf {L} ^{-1}\mathbf {r} } r=La{\displaystyle \mathbf {r} =\mathbf {L} \mathbf {a} } Sincer{\displaystyle \mathbf {r} }andL{\displaystyle \mathbf {L} }and known, andL{\displaystyle \mathbf {L} }is lower triangular, solving fora{\displaystyle \mathbf {a} }is easy and computationally cheap by usingforward substitution. 
Then, we substitutea{\displaystyle \mathbf {a} }in the original equation: z=(L−1)Ta{\displaystyle \mathbf {z} =(\mathbf {L} ^{-1})^{\mathsf {T}}\mathbf {a} } a=LTz{\displaystyle \mathbf {a} =\mathbf {L} ^{\mathsf {T}}\mathbf {z} } Sincea{\displaystyle \mathbf {a} }andLT{\displaystyle \mathbf {L} ^{\mathsf {T}}}are known, andLT{\displaystyle \mathbf {L} ^{\mathsf {T}}}is upper triangular, solving forz{\displaystyle \mathbf {z} }is easy and computationally cheap by usingbackward substitution. Using this method, there is no need to invertM{\displaystyle \mathbf {M} }orL{\displaystyle \mathbf {L} }explicitly at all, and we still obtainz{\displaystyle \mathbf {z} }. In numerically challenging applications, sophisticated preconditioners are used, which may lead to variable preconditioning, changing between iterations. Even if the preconditioner is symmetric positive-definite on every iteration, the fact that it may change makes the arguments above invalid, and in practical tests leads to a significant slow down of the convergence of the algorithm presented above. Using thePolak–Ribièreformula instead of theFletcher–Reevesformula may dramatically improve the convergence in this case.[14]This version of the preconditioned conjugate gradient method can be called[15]flexible, as it allows for variable preconditioning. The flexible version is also shown[16]to be robust even if the preconditioner is not symmetric positive definite (SPD). The implementation of the flexible version requires storing an extra vector. For a fixed SPD preconditioner,rk+1Tzk=0,{\displaystyle \mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {z} _{k}=0,}so both formulas forβkare equivalent in exact arithmetic, i.e., without theround-off error. The mathematical explanation of the better convergence behavior of the method with thePolak–Ribièreformula is that the method islocally optimalin this case, in particular, it does not converge slower than the locally optimal steepest descent method.[17] In both the original and the preconditioned conjugate gradient methods one only needs to setβk:=0{\displaystyle \beta _{k}:=0}in order to make them locally optimal, using theline search,steepest descentmethods. With this substitution, vectorspare always the same as vectorsz, so there is no need to store vectorsp. Thus, every iteration of thesesteepest descentmethods is a bit cheaper compared to that for the conjugate gradient methods. However, the latter converge faster, unless a (highly) variable and/or non-SPDpreconditioneris used, see above. The conjugate gradient method can also be derived usingoptimal control theory.[18]In this approach, the conjugate gradient method falls out as anoptimal feedback controller,u=k(x,v):=−γa∇f(x)−γbv{\displaystyle u=k(x,v):=-\gamma _{a}\nabla f(x)-\gamma _{b}v}for thedouble integrator system,x˙=v,v˙=u{\displaystyle {\dot {x}}=v,\quad {\dot {v}}=u}The quantitiesγa{\displaystyle \gamma _{a}}andγb{\displaystyle \gamma _{b}}are variable feedback gains.[18] The conjugate gradient method can be applied to an arbitraryn-by-mmatrix by applying it tonormal equationsATAand right-hand side vectorATb, sinceATAis a symmetricpositive-semidefinitematrix for anyA. The result isconjugate gradient on the normal equations(CGNorCGNR). As an iterative method, it is not necessary to formATAexplicitly in memory but only to perform the matrix–vector and transpose matrix–vector multiplications. Therefore, CGNR is particularly useful whenAis asparse matrixsince these operations are usually extremely efficient. 
However the downside of forming the normal equations is that thecondition numberκ(ATA) is equal to κ2(A) and so the rate of convergence of CGNR may be slow and the quality of the approximate solution may be sensitive to roundoff errors. Finding a goodpreconditioneris often an important part of using the CGNR method. Several algorithms have been proposed (e.g., CGLS, LSQR). TheLSQRalgorithm purportedly has the best numerical stability whenAis ill-conditioned, i.e.,Ahas a largecondition number. The conjugate gradient method with a trivial modification is extendable to solving, given complex-valued matrix A and vector b, the system of linear equationsAx=b{\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} }for the complex-valued vector x, where A isHermitian(i.e., A' = A) andpositive-definite matrix, and the symbol ' denotes theconjugate transpose. The trivial modification is simply substituting theconjugate transposefor the realtransposeeverywhere. The advantages and disadvantages of the conjugate gradient methods are summarized in the lecture notes by Nemirovsky and BenTal.[19]: Sec.7.3 This example is from[20]Lett∈(0,1){\textstyle t\in (0,1)}, and defineW=[ttt1+ttt1+ttt⋱⋱⋱tt1+t],b=[10⋮0]{\displaystyle W={\begin{bmatrix}t&{\sqrt {t}}&&&&\\{\sqrt {t}}&1+t&{\sqrt {t}}&&&\\&{\sqrt {t}}&1+t&{\sqrt {t}}&&\\&&{\sqrt {t}}&\ddots &\ddots &\\&&&\ddots &&\\&&&&&{\sqrt {t}}\\&&&&{\sqrt {t}}&1+t\end{bmatrix}},\quad b={\begin{bmatrix}1\\0\\\vdots \\0\end{bmatrix}}}SinceW{\displaystyle W}is invertible, there exists a unique solution toWx=b{\textstyle Wx=b}. Solving it by conjugate gradient descent gives us rather bad convergence:‖b−Wxk‖2=(1/t)k,‖b−Wxn‖2=0{\displaystyle \|b-Wx_{k}\|^{2}=(1/t)^{k},\quad \|b-Wx_{n}\|^{2}=0}In words, during the CG process, the error grows exponentially, until it suddenly becomes zero as the unique solution is found.
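Returning to the preconditioner application described above: the two triangular solves can be written directly, without ever forming M⁻¹. The sketch below uses SciPy's triangular solver; for simplicity the factor L here is a complete Cholesky factor of a small illustrative matrix, whereas in practice it would come from an incomplete factorization of the system matrix.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

# Illustrative SPD matrix standing in for the preconditioner, and a residual vector.
M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
r = np.array([1.0, 2.0, 3.0])

L = cholesky(M, lower=True)                      # M = L L^T

# Solve M z = r in two triangular solves, never forming M^{-1}:
a = solve_triangular(L, r, lower=True)           # forward substitution:  L a = r
z = solve_triangular(L.T, a, lower=False)        # backward substitution: L^T z = a

print(np.allclose(M @ z, r))                     # True
```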
https://en.wikipedia.org/wiki/Conjugate_gradient
Inoptimization,line searchis a basiciterativeapproach to find alocal minimumx∗{\displaystyle \mathbf {x} ^{*}}of anobjective functionf:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }. It first finds adescent directionalong which the objective functionf{\displaystyle f}will be reduced, and then computes a step size that determines how farx{\displaystyle \mathbf {x} }should move along that direction. The descent direction can be computed by various methods, such asgradient descentorquasi-Newton method. The step size can be determined either exactly or inexactly. Supposefis a one-dimensional function,f:R→R{\displaystyle f:\mathbb {R} \to \mathbb {R} }, and assume that it isunimodal, that is, contains exactly one local minimumx* in a given interval [a,z]. This means thatfis strictly decreasing in [a,x*] and strictly increasing in [x*,z]. There are several ways to find an (approximate) minimum point in this case.[1]: sec.5 Zero-order methods use only function evaluations (i.e., avalue oracle) - not derivatives:[1]: sec.5 Zero-order methods are very general - they do not assume differentiability or even continuity. First-order methods assume thatfis continuously differentiable, and that we can evaluate not onlyfbut also its derivative.[1]: sec.5 Curve-fitting methods try to attainsuperlinear convergenceby assuming thatfhas some analytic form, e.g. a polynomial of finite degree. At each iteration, there is a set of "working points" in which we know the value off(and possibly also its derivative). Based on these points, we can compute a polynomial that fits the known values, and find its minimum analytically. The minimum point becomes a new working point, and we proceed to the next iteration:[1]: sec.5 Curve-fitting methods have superlinear convergence when started close enough to the local minimum, but might diverge otherwise.Safeguarded curve-fitting methodssimultaneously execute a linear-convergence method in parallel to the curve-fitting method. They check in each iteration whether the point found by the curve-fitting method is close enough to the interval maintained by safeguard method; if it is not, then the safeguard method is used to compute the next iterate.[1]: 5.2.3.4 In general, we have a multi-dimensionalobjective functionf:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }. The line-search method first finds adescent directionalong which the objective functionf{\displaystyle f}will be reduced, and then computes a step size that determines how farx{\displaystyle \mathbf {x} }should move along that direction. The descent direction can be computed by various methods, such asgradient descentorquasi-Newton method. The step size can be determined either exactly or inexactly. Here is an example gradient method that uses a line search in step 5: At the line search step (2.3), the algorithm may minimizehexactly, by solvingh′(αk)=0{\displaystyle h'(\alpha _{k})=0}, orapproximately, by using one of the one-dimensional line-search methods mentioned above. It can also be solvedloosely, by asking for a sufficient decrease inhthat does not necessarily approximate the optimum. One example of the former isconjugate gradient method. The latter is called inexact line search and may be performed in a number of ways, such as abacktracking line searchor using theWolfe conditions. Like other optimization methods, line search may be combined withsimulated annealingto allow it to jump over somelocal minima.
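As one concrete instance of the inexact line searches mentioned above, the sketch below implements a backtracking (Armijo) line search. The constants rho = 0.5 and c = 1e-4 and the quadratic test function are the usual illustrative choices, not prescribed by the text.

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink the step until the Armijo sufficient-decrease condition holds:
    f(x + alpha p) <= f(x) + c * alpha * grad_f(x)^T p."""
    alpha = alpha0
    fx, gx = f(x), grad_f(x)
    while f(x + alpha * p) > fx + c * alpha * (gx @ p):
        alpha *= rho
    return alpha

# Example: one gradient-descent step on a simple quadratic.
f = lambda x: 0.5 * x @ x
grad_f = lambda x: x
x = np.array([3.0, -4.0])
p = -grad_f(x)                       # descent direction
alpha = backtracking_line_search(f, grad_f, x, p)
print(alpha, x + alpha * p)          # alpha = 1 already satisfies Armijo; the step reaches the minimum
```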
https://en.wikipedia.org/wiki/Line_search
Mathematical optimization(alternatively spelledoptimisation) ormathematical programmingis the selection of a best element, with regard to some criteria, from some set of available alternatives.[1][2]It is generally divided into two subfields:discrete optimizationandcontinuous optimization. Optimization problems arise in all quantitative disciplines fromcomputer scienceandengineering[3]tooperations researchandeconomics, and the development of solution methods has been of interest inmathematicsfor centuries.[4][5] In the more general approach, anoptimization problemconsists ofmaximizing or minimizingareal functionby systematically choosinginputvalues from within an allowed set and computing thevalueof the function. The generalization of optimization theory and techniques to other formulations constitutes a large area ofapplied mathematics.[6] Optimization problems can be divided into two categories, depending on whether thevariablesarecontinuousordiscrete: An optimization problem can be represented in the following way: Such a formulation is called anoptimization problemor amathematical programming problem(a term not directly related tocomputer programming, but still in use for example inlinear programming– seeHistorybelow). Many real-world and theoretical problems may be modeled in this general framework. Since the following is valid: it suffices to solve only minimization problems. However, the opposite perspective of considering only maximization problems would be valid, too. Problems formulated using this technique in the fields ofphysicsmay refer to the technique asenergyminimization,[7]speaking of the value of the functionfas representing the energy of thesystembeingmodeled. Inmachine learning, it is always necessary to continuously evaluate the quality of a data model by using acost functionwhere a minimum implies a set of possibly optimal parameters with an optimal (lowest) error. Typically,Ais somesubsetof theEuclidean spaceRn{\displaystyle \mathbb {R} ^{n}}, often specified by a set ofconstraints, equalities or inequalities that the members ofAhave to satisfy. ThedomainAoffis called thesearch spaceor thechoice set, while the elements ofAare calledcandidate solutionsorfeasible solutions. The functionfis variously called anobjective function,criterion function,loss function,cost function(minimization),[8]utility functionorfitness function(maximization), or, in certain fields, anenergy functionorenergyfunctional. A feasible solution that minimizes (or maximizes) the objective function is called anoptimal solution. In mathematics, conventional optimization problems are usually stated in terms of minimization. Alocal minimumx*is defined as an element for which there exists someδ> 0such that the expressionf(x*) ≤f(x)holds; that is to say, on some region aroundx*all of the function values are greater than or equal to the value at that element. Local maxima are defined similarly. While a local minimum is at least as good as any nearby elements, aglobal minimumis at least as good as every feasible element. Generally, unless the objective function isconvexin a minimization problem, there may be several local minima. In aconvex problem, if there is a local minimum that is interior (not on the edge of the set of feasible elements), it is also the global minimum, but a nonconvex problem may have more than one local minimum not all of which need be global minima. 
A large number of algorithms proposed for solving the nonconvex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem.Global optimizationis the branch ofapplied mathematicsandnumerical analysisthat is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem. Optimization problems are often expressed with special notation. Here are some examples: Consider the following notation: This denotes the minimumvalueof the objective functionx2+ 1, when choosingxfrom the set ofreal numbersR{\displaystyle \mathbb {R} }. The minimum value in this case is 1, occurring atx= 0. Similarly, the notation asks for the maximum value of the objective function2x, wherexmay be any real number. In this case, there is no such maximum as the objective function is unbounded, so the answer is "infinity" or "undefined". Consider the following notation: or equivalently This represents the value (or values) of theargumentxin theinterval(−∞,−1]that minimizes (or minimize) the objective functionx2+ 1(the actual minimum value of that function is not what the problem asks for). In this case, the answer isx= −1, sincex= 0is infeasible, that is, it does not belong to thefeasible set. Similarly, or equivalently represents the{x,y}pair (or pairs) that maximizes (or maximize) the value of the objective functionxcosy, with the added constraint thatxlie in the interval[−5,5](again, the actual maximum value of the expression does not matter). In this case, the solutions are the pairs of the form{5, 2kπ}and{−5, (2k+ 1)π}, wherekranges over allintegers. Operatorsarg minandarg maxare sometimes also written asargminandargmax, and stand forargument of the minimumandargument of the maximum. FermatandLagrangefound calculus-based formulae for identifying optima, whileNewtonandGaussproposed iterative methods for moving towards an optimum. The term "linear programming" for certain optimization cases was due toGeorge B. Dantzig, although much of the theory had been introduced byLeonid Kantorovichin 1939. (Programmingin this context does not refer tocomputer programming, but comes from the use ofprogramby theUnited Statesmilitary to refer to proposed training andlogisticsschedules, which were the problems Dantzig studied at that time.) Dantzig published theSimplex algorithmin 1947, and alsoJohn von Neumannand other researchers worked on the theoretical aspects of linear programming (like the theory ofduality) around the same time.[9] Other notable researchers in mathematical optimization include the following: In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time): Adding more than one objective to an optimization problem adds complexity. For example, to optimize a structural design, one would desire a design that is both light and rigid. When two objectives conflict, a trade-off must be created. There may be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and rigidity. The set of trade-off designs that improve upon one criterion at the expense of another is known as thePareto set. The curve created plotting weight against stiffness of the best designs is known as thePareto frontier. 
A design is judged to be "Pareto optimal" (equivalently, "Pareto efficient" or in the Pareto set) if it is not dominated by any other design: if it is worse than another design in some respects and no better in any respect, then it is dominated and is not Pareto optimal. The choice among "Pareto optimal" solutions to determine the "favorite solution" is delegated to the decision maker. In other words, defining the problem as multi-objective optimization signals that some information is missing: desirable objectives are given but combinations of them are not rated relative to each other. In some cases, the missing information can be derived by interactive sessions with the decision maker. Multi-objective optimization problems have been generalized further into vector optimization problems where the (partial) ordering is no longer given by the Pareto ordering. Optimization problems are often multi-modal; that is, they possess multiple good solutions. They could all be globally good (same cost function value) or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer. Classical optimization techniques, because of their iterative approach, do not perform satisfactorily when used to obtain multiple solutions, since it is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm. Common approaches to global optimization problems, where multiple local extrema may be present, include evolutionary algorithms, Bayesian optimization and simulated annealing. The satisfiability problem, also called the feasibility problem, is just the problem of finding any feasible solution at all without regard to objective value. This can be regarded as the special case of mathematical optimization where the objective value is the same for every solution, and thus any solution is optimal. Many optimization algorithms need to start from a feasible point. One way to obtain such a point is to relax the feasibility conditions using a slack variable; with enough slack, any starting point is feasible. The slack variable is then minimized until the slack is null or negative. The extreme value theorem of Karl Weierstrass states that a continuous real-valued function on a compact set attains its maximum and minimum value. More generally, a lower semi-continuous function on a compact set attains its minimum; an upper semi-continuous function on a compact set attains its maximum. One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where the first derivative or the gradient of the objective function is zero (see first derivative test). More generally, they may be found at critical points, where the first derivative or gradient of the objective function is zero or is undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at an interior optimum is called a 'first-order condition' or a set of first-order conditions. Optima of equality-constrained problems can be found by the Lagrange multiplier method. The optima of problems with equality and/or inequality constraints can be found using the 'Karush–Kuhn–Tucker conditions'. While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither.
When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints (called the bordered Hessian) in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test'). If a candidate solution satisfies the first-order conditions, then satisfaction of the second-order conditions as well is sufficient to establish at least local optimality. The envelope theorem describes how the value of an optimal solution changes when an underlying parameter changes. The process of computing this change is called comparative statics. The maximum theorem of Claude Berge (1963) describes the continuity of an optimal solution as a function of underlying parameters. For unconstrained problems with twice-differentiable functions, some critical points can be found by finding the points where the gradient of the objective function is zero (that is, the stationary points). More generally, a zero subgradient certifies that a local minimum has been found for minimization problems with convex functions and other locally Lipschitz functions, such as those that arise when minimizing the loss function of a neural network. Positive-negative momentum estimation has been proposed as a way to escape such local minima and converge toward the global minimum of the objective function.[10] Further, critical points can be classified using the definiteness of the Hessian matrix: if the Hessian is positive definite at a critical point, then the point is a local minimum; if the Hessian matrix is negative definite, then the point is a local maximum; finally, if indefinite, then the point is some kind of saddle point. Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers. Lagrangian relaxation can also provide approximate solutions to difficult constrained problems. When the objective function is a convex function, any local minimum will also be a global minimum. There exist efficient numerical techniques for minimizing convex functions, such as interior-point methods. More generally, if the objective function is not a quadratic function, then many optimization methods use other strategies to ensure that some subsequence of iterations converges to an optimal solution. The first and still popular method for ensuring convergence relies on line searches, which optimize a function along one dimension. A second and increasingly popular method for ensuring convergence uses trust regions. Both line searches and trust regions are used in modern methods of non-differentiable optimization. Usually, a global optimizer is much slower than advanced local optimizers (such as BFGS), so often an efficient global optimizer can be constructed by starting the local optimizer from different starting points. To solve problems, researchers may use algorithms that terminate in a finite number of steps, iterative methods that converge to a solution (on some specified class of problems), or heuristics that may provide approximate solutions to some problems (although their iterates need not converge). The iterative methods used to solve problems of nonlinear programming differ according to whether they evaluate Hessians, gradients, or only function values.
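The classification of critical points by the definiteness of the Hessian, described above, can be sketched directly; the Hessians below are those of three toy objectives at their common critical point (0, 0) and are included purely for illustration:

```python
import numpy as np

def classify_critical_point(hessian):
    """Classify a critical point from the eigenvalue signs of its (symmetric) Hessian."""
    eig = np.linalg.eigvalsh(hessian)
    if np.all(eig > 0):
        return "local minimum"
    if np.all(eig < 0):
        return "local maximum"
    if np.any(eig > 0) and np.any(eig < 0):
        return "saddle point"
    return "inconclusive (semidefinite case needs higher-order tests)"

H_min    = np.array([[2.0, 0.0], [0.0, 2.0]])    # Hessian of f = x^2 + y^2 at (0, 0)
H_max    = np.array([[-2.0, 0.0], [0.0, -2.0]])  # Hessian of f = -(x^2 + y^2)
H_saddle = np.array([[2.0, 0.0], [0.0, -2.0]])   # Hessian of f = x^2 - y^2

for name, H in [("x^2+y^2", H_min), ("-(x^2+y^2)", H_max), ("x^2-y^2", H_saddle)]:
    print(name, "->", classify_critical_point(H))
```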
While evaluating Hessians (H) and gradients (G) improves the rate of convergence, for functions for which these quantities exist and vary sufficiently smoothly, such evaluations increase thecomputational complexity(or computational cost) of each iteration. In some cases, the computational complexity may be excessively high. One major criterion for optimizers is just the number of required function evaluations as this often is already a large computational effort, usually much more effort than within the optimizer itself, which mainly has to operate over the N variables. The derivatives provide detailed information for such optimizers, but are even harder to calculate, e.g. approximating the gradient takes at least N+1 function evaluations. For approximations of the 2nd derivatives (collected in the Hessian matrix), the number of function evaluations is in the order of N². Newton's method requires the 2nd-order derivatives, so for each iteration, the number of function calls is in the order of N², but for a simpler pure gradient optimizer it is only N. However, gradient optimizers need usually more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself. Besides (finitely terminating)algorithmsand (convergent)iterative methods, there areheuristics. A heuristic is any algorithm which is not guaranteed (mathematically) to find the solution, but which is nevertheless useful in certain practical situations. List of some well-known heuristics: Problems inrigid body dynamics(in particular articulated rigid body dynamics) often require mathematical programming techniques, since you can view rigid body dynamics as attempting to solve anordinary differential equationon a constraint manifold;[11]the constraints are various nonlinear geometric constraints such as "these two points must always coincide", "this surface must not penetrate any other", or "this point must always lie somewhere on this curve". Also, the problem of computing contact forces can be done by solving alinear complementarity problem, which can also be viewed as a QP (quadratic programming) problem. Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is theengineering optimization, and another recent and growing subset of this field ismultidisciplinary design optimization, which, while useful in many problems, has in particular been applied toaerospace engineeringproblems. This approach may be applied in cosmology and astrophysics.[12] Economicsis closely enough linked to optimization ofagentsthat an influential definition relatedly describes economicsquascience as the "study of human behavior as a relationship between ends andscarcemeans" with alternative uses.[13]Modern optimization theory includes traditional optimization theory but also overlaps withgame theoryand the study of economicequilibria. TheJournal of Economic Literaturecodesclassify mathematical programming, optimization techniques, and related topics underJEL:C61-C63. In microeconomics, theutility maximization problemand itsdual problem, theexpenditure minimization problem, are economic optimization problems. Insofar as they behave consistently,consumersare assumed to maximize theirutility, whilefirmsare usually assumed to maximize theirprofit. 
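As a brief aside on the evaluation counts discussed above, a forward-difference gradient approximation costs N + 1 function evaluations, one base evaluation plus one per coordinate, which can be verified by counting calls; the test objective and dimension below are arbitrary:

```python
import numpy as np

calls = 0
def f(x):
    global calls
    calls += 1
    return np.sum((x - 1.0) ** 2) + np.sin(x[0])   # arbitrary smooth test objective

def forward_difference_gradient(func, x, h=1e-6):
    """Approximate the gradient with N + 1 evaluations: f(x) plus one per coordinate."""
    n = x.size
    f0 = func(x)
    grad = np.empty(n)
    for i in range(n):
        e = np.zeros(n); e[i] = h
        grad[i] = (func(x + e) - f0) / h
    return grad

x0 = np.zeros(5)                               # N = 5 variables
g = forward_difference_gradient(f, x0)
print("gradient estimate:", np.round(g, 4))
print("function evaluations used:", calls)     # N + 1 = 6
```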
Also, agents are often modeled as beingrisk-averse, thereby preferring to avoid risk.Asset pricesare also modeled using optimization theory, though the underlying mathematics relies on optimizingstochastic processesrather than on static optimization.International trade theoryalso uses optimization to explain trade patterns between nations. The optimization ofportfoliosis an example of multi-objective optimization in economics. Since the 1970s, economists have modeled dynamic decisions over time usingcontrol theory.[14]For example, dynamicsearch modelsare used to studylabor-market behavior.[15]A crucial distinction is between deterministic and stochastic models.[16]Macroeconomistsbuilddynamic stochastic general equilibrium (DSGE)models that describe the dynamics of the whole economy as the result of the interdependent optimizing decisions of workers, consumers, investors, and governments.[17][18] Some common applications of optimization techniques inelectrical engineeringincludeactive filterdesign,[19]stray field reduction in superconducting magnetic energy storage systems,space mappingdesign ofmicrowavestructures,[20]handset antennas,[21][22][23]electromagnetics-based design. Electromagnetically validated design optimization of microwave components and antennas has made extensive use of an appropriate physics-based or empiricalsurrogate modelandspace mappingmethodologies since the discovery ofspace mappingin 1993.[24][25]Optimization techniques are also used inpower-flow analysis.[26] Optimization has been widely used in civil engineering.Construction managementandtransportation engineeringare among the main branches of civil engineering that heavily rely on optimization. The most common civil engineering problems that are solved by optimization are cut and fill of roads, life-cycle analysis of structures and infrastructures,[27]resource leveling,[28][29]water resource allocation,trafficmanagement[30]and schedule optimization. Another field that uses optimization techniques extensively isoperations research.[31]Operations research also uses stochastic modeling and simulation to support improved decision-making. Increasingly, operations research usesstochastic programmingto model dynamic decisions that adapt to events; such problems can be solved with large-scale optimization andstochastic optimizationmethods. Mathematical optimization is used in much modern controller design. High-level controllers such asmodel predictive control(MPC) or real-time optimization (RTO) employ mathematical optimization. These algorithms run online and repeatedly determine values for decision variables, such as choke openings in a process plant, by iteratively solving a mathematical optimization problem including constraints and a model of the system to be controlled. Optimization techniques are regularly used ingeophysicalparameter estimation problems. Given a set of geophysical measurements, e.g.seismic recordings, it is common to solve for thephysical propertiesandgeometrical shapesof the underlying rocks and fluids. The majority of problems in geophysics are nonlinear with both deterministic and stochastic methods being widely used. Nonlinear optimization methods are widely used inconformational analysis. 
Optimization techniques are used in many facets of computational systems biology such as model building, optimal experimental design, metabolic engineering, and synthetic biology.[32]Linear programminghas been applied to calculate the maximal possible yields of fermentation products,[32]and to infer gene regulatory networks from multiple microarray datasets[33]as well as transcriptional regulatory networks from high-throughput data.[34]Nonlinear programminghas been used to analyze energy metabolism[35]and has been applied to metabolic engineering and parameter estimation in biochemical pathways.[36]
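As a hedged illustration of the linear-programming use mentioned above, the sketch below maximizes a toy product yield subject to made-up linear resource constraints; the coefficients are invented for the example, and SciPy's linprog routine is assumed to be available:

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: fluxes v1, v2 through two hypothetical pathways.
# Maximize yield 1.0*v1 + 0.6*v2, i.e. minimize the negated objective.
c = np.array([-1.0, -0.6])

# Made-up resource constraints (e.g. substrate uptake and a cofactor budget).
A_ub = np.array([[1.0, 1.0],      # v1 + v2        <= 10  (substrate)
                 [2.0, 0.5]])     # 2 v1 + 0.5 v2  <= 14  (cofactor)
b_ub = np.array([10.0, 14.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal fluxes:", res.x)
print("maximal yield:", -res.fun)
```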
https://en.wikipedia.org/wiki/Mathematical_optimization
In computer science, incremental learning is a method of machine learning in which input data is continuously used to extend the existing model's knowledge, i.e. to further train the model. It represents a dynamic technique of supervised learning and unsupervised learning that can be applied when training data becomes available gradually over time or when its size exceeds system memory limits. Algorithms that can facilitate incremental learning are known as incremental machine learning algorithms. Many traditional machine learning algorithms inherently support incremental learning. Other algorithms can be adapted to facilitate incremental learning. Examples of incremental algorithms include decision trees (IDE4,[1] ID5R[2] and gaenari), decision rules,[3] artificial neural networks (RBF networks,[4] Learn++,[5] Fuzzy ARTMAP,[6] TopoART,[7] and IGNG[8]) and the incremental SVM.[9] The aim of incremental learning is for the learning model to adapt to new data without forgetting its existing knowledge. Some incremental learners have a built-in parameter or assumption that controls the relevancy of old data, while others, called stable incremental machine learning algorithms, learn representations of the training data that are not even partially forgotten over time. Fuzzy ART[10] and TopoART[7] are two examples of this second approach. Incremental algorithms are frequently applied to data streams or big data, addressing issues of data availability and resource scarcity, respectively. Stock trend prediction and user profiling are some examples of data streams where new data becomes continuously available. Applying incremental learning to big data aims to produce faster classification or forecasting times.
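A minimal sketch of the idea, assuming scikit-learn is available: an SGD-based linear classifier is extended batch by batch with partial_fit rather than being retrained from scratch. The synthetic data stream below is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)

def next_batch(n=200):
    """Simulate a chunk of a data stream: two Gaussian blobs with labels 0/1."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=y[:, None] * 2.0, scale=1.0, size=(n, 2))
    return X, y

model = SGDClassifier(loss="hinge", random_state=0)
classes = np.array([0, 1])            # classes must be declared on the first call

for t in range(10):                   # ten batches arriving over time
    X, y = next_batch()
    model.partial_fit(X, y, classes=classes if t == 0 else None)

X_test, y_test = next_batch(1000)
print("held-out accuracy:", model.score(X_test, y_test))
```

Each call to partial_fit only updates the existing weights with the new batch, so the model's knowledge is extended without revisiting earlier data.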
https://en.wikipedia.org/wiki/Incremental_learning
(Not to be confused with the lazy learning regime; see Neural tangent kernel.) In machine learning, lazy learning is a learning method in which generalization of the training data is, in theory, delayed until a query is made to the system, as opposed to eager learning, where the system tries to generalize the training data before receiving queries.[1] The primary motivation for employing lazy learning, as in the K-nearest neighbors algorithm used by online recommendation systems ("people who viewed/purchased/listened to this movie/item/tune also ..."), is that the data set is continuously updated with new entries (e.g., new items for sale at Amazon, new movies to view at Netflix, new clips at YouTube, new music at Spotify or Pandora). Because of the continuous updates, the "training data" would be rendered obsolete in a relatively short time, especially in areas like books and movies, where new best-sellers or hit movies/music are published/released continuously. Therefore, one cannot really talk of a "training phase". Lazy classifiers are most useful for large, continuously changing datasets with few attributes that are commonly queried. Specifically, even if a large set of attributes exists - for example, books have a year of publication, author/s, publisher, title, edition, ISBN, selling price, etc. - recommendation queries rely on far fewer attributes - e.g., purchase or viewing co-occurrence data, and user ratings of items purchased/viewed.[2] The main advantage gained in employing a lazy learning method is that the target function is approximated locally, as in the k-nearest neighbor algorithm. Because the target function is approximated locally for each query to the system, lazy learning systems can simultaneously solve multiple problems and deal successfully with changes in the problem domain. At the same time, they can reuse a large body of theoretical and applied results from linear regression modelling (notably the PRESS statistic) and control.[3] The advantage is said to be realized when predictions from a single training set are needed for only a few objects.[4] This can be demonstrated in the case of the k-NN technique, which is instance-based and estimates the target function only locally.[5][6] Theoretical disadvantages of lazy learning include the storage needed to retain the entire training set and the computation deferred to query time, which makes answering queries slower. There are standard techniques to improve re-computation efficiency so that a particular answer is not recomputed unless the data that affect this answer have changed (e.g., new items, new purchases, new views). In other words, the stored answers are updated incrementally. This approach, used by large e-commerce or media sites, has long been employed in the Entrez portal of the National Center for Biotechnology Information (NCBI) to precompute similarities between the different items in its large datasets: biological sequences, 3-D protein structures, published-article abstracts, etc. Because "find similar" queries are asked so frequently, the NCBI uses highly parallel hardware to perform nightly recomputation. The recomputation is performed only for new entries in the datasets against each other and against existing entries: the similarity between two existing entries need not be recomputed.
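A minimal pure-NumPy sketch of the lazy approach: the "training" step only stores the data, and all computation (finding the k nearest neighbours and voting) is deferred to query time; the two-class data below are synthetic and used only for illustration.

```python
import numpy as np

class LazyKNN:
    """No training phase: fit() just stores the data; work happens at query time."""
    def __init__(self, k=5):
        self.k = k

    def fit(self, X, y):
        self.X, self.y = np.asarray(X), np.asarray(y)
        return self

    def predict(self, queries):
        preds = []
        for q in np.atleast_2d(queries):
            dist = np.linalg.norm(self.X - q, axis=1)   # computed only at query time
            nearest = np.argsort(dist)[: self.k]
            votes = np.bincount(self.y[nearest])
            preds.append(np.argmax(votes))
        return np.array(preds)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=500)
X = rng.normal(loc=y[:, None] * 2.0, size=(500, 2))     # two synthetic classes

model = LazyKNN(k=7).fit(X, y)                          # "training" is instantaneous
print(model.predict([[0.0, 0.0], [2.0, 2.0]]))          # e.g. [0 1]
```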
https://en.wikipedia.org/wiki/Lazy_learning
Inprobability theoryandmachine learning, themulti-armed bandit problem(sometimes called theK-[1]orN-armed bandit problem[2]) is a problem in which a decision maker iteratively selects one of multiple fixed choices (i.e., arms or actions) when the properties of each choice are only partially known at the time of allocation, and may become better understood as time passes. A fundamental aspect of bandit problems is that choosing an arm does not affect the properties of the arm or other arms.[3] Instances of the multi-armed bandit problem include the task of iteratively allocating a fixed, limited set of resources between competing (alternative) choices in a way that minimizes theregret.[4][5]A notable alternative setup for the multi-armed bandit problem includes the "best arm identification (BAI)" problem where the goal is instead to identify the best choice by the end of a finite number of rounds.[6] The multi-armed bandit problem is a classicreinforcement learningproblem that exemplifies theexploration–exploitation tradeoff dilemma. In contrast to general RL, the selected actions in bandit problems do not affect the reward distribution of the arms. The name comes from imagining agamblerat a row ofslot machines(sometimes known as "one-armed bandits"), who has to decide which machines to play, how many times to play each machine and in which order to play them, and whether to continue with the current machine or try a different machine.[7]The multi-armed bandit problem also falls into the broad category ofstochastic scheduling. In the problem, each machine provides a random reward from aprobability distributionspecific to that machine, that is not knowna priori. The objective of the gambler is to maximize the sum of rewards earned through a sequence of lever pulls.[4][5]The crucial tradeoff the gambler faces at each trial is between "exploitation" of the machine that has the highest expected payoff and "exploration" to get moreinformationabout the expected payoffs of the other machines. The trade-off between exploration and exploitation is also faced in machine learning. In practice, multi-armed bandits have been used to model problems such as managing research projects in a large organization, like a science foundation or apharmaceutical company.[4][5]In early versions of the problem, the gambler begins with no initial knowledge about the machines. Herbert Robbinsin 1952, realizing the importance of the problem, constructed convergent population selection strategies in "some aspects of the sequential design of experiments".[8]A theorem, theGittins index, first published byJohn C. Gittins, gives an optimal policy for maximizing the expected discounted reward.[9] The multi-armed bandit problem models an agent that simultaneously attempts to acquire new knowledge (called "exploration") and optimize their decisions based on existing knowledge (called "exploitation"). The agent attempts to balance these competing tasks in order to maximize their total value over the period of time considered. There are many practical applications of the bandit model, for example: In these practical examples, the problem requires balancing reward maximization based on the knowledge already acquired with attempting new actions to further increase knowledge. This is known as theexploitation vs. exploration tradeoffinmachine learning. 
The model has also been used to control dynamic allocation of resources to different projects, answering the question of which project to work on, given uncertainty about the difficulty and payoff of each possibility.[14] Originally considered by Allied scientists inWorld War II, it proved so intractable that, according toPeter Whittle, the problem was proposed to be dropped overGermanyso that German scientists could also waste their time on it.[15] The version of the problem now commonly analyzed was formulated byHerbert Robbinsin 1952. The multi-armed bandit (short:banditor MAB) can be seen as a set of realdistributionsB={R1,…,RK}{\displaystyle B=\{R_{1},\dots ,R_{K}\}}, each distribution being associated with the rewards delivered by one of theK∈N+{\displaystyle K\in \mathbb {N} ^{+}}levers. Letμ1,…,μK{\displaystyle \mu _{1},\dots ,\mu _{K}}be the mean values associated with these reward distributions. The gambler iteratively plays one lever per round and observes the associated reward. The objective is to maximize the sum of the collected rewards. The horizonH{\displaystyle H}is the number of rounds that remain to be played. The bandit problem is formally equivalent to a one-stateMarkov decision process. Theregretρ{\displaystyle \rho }afterT{\displaystyle T}rounds is defined as the expected difference between the reward sum associated with an optimal strategy and the sum of the collected rewards: ρ=Tμ∗−∑t=1Tr^t{\displaystyle \rho =T\mu ^{*}-\sum _{t=1}^{T}{\widehat {r}}_{t}}, whereμ∗{\displaystyle \mu ^{*}}is the maximal reward mean,μ∗=maxk{μk}{\displaystyle \mu ^{*}=\max _{k}\{\mu _{k}\}}, andr^t{\displaystyle {\widehat {r}}_{t}}is the reward in roundt. Azero-regret strategyis a strategy whose average regret per roundρ/T{\displaystyle \rho /T}tends to zero with probability 1 when the number of played rounds tends to infinity.[16]Intuitively, zero-regret strategies are guaranteed to converge to a (not necessarily unique) optimal strategy if enough rounds are played. A common formulation is theBinary multi-armed banditorBernoulli multi-armed bandit,which issues a reward of one with probabilityp{\displaystyle p}, and otherwise a reward of zero. Another formulation of the multi-armed bandit has each arm representing an independent Markov machine. Each time a particular arm is played, the state of that machine advances to a new one, chosen according to the Markov state evolution probabilities. There is a reward depending on the current state of the machine. In a generalization called the "restless bandit problem", the states of non-played arms can also evolve over time.[17]There has also been discussion of systems where the number of choices (about which arm to play) increases over time.[18] Computer science researchers have studied multi-armed bandits under worst-case assumptions, obtaining algorithms to minimize regret in both finite and infinite (asymptotic) time horizons for both stochastic[1]and non-stochastic[19]arm payoffs. An important variation of the classicalregret minimizationproblem in multi-armed bandits is the one of Best Arm Identification (BAI),[20]also known aspure exploration. This problem is crucial in various applications, including clinical trials, adaptive routing, recommendation systems, and A/B testing. In BAI, the objective is to identify the arm having the highest expected reward. 
An algorithm in this setting is characterized by asampling rule, adecision rule,and astopping rule, described as follows: There are two predominant settings in BAI: Fixed budget setting:Given a time horizonT≥1{\displaystyle T\geq 1}, the objective is to identify the arm with the highest expected rewarda⋆∈arg⁡maxkμk{\displaystyle a^{\star }\in \arg \max _{k}\mu _{k}}minimizing probability of errorδ{\displaystyle \delta }. Fixed confidence setting:Given a confidence levelδ∈(0,1){\displaystyle \delta \in (0,1)}, the objective is to identify the arm with the highest expected rewarda⋆∈arg⁡maxkμk{\displaystyle a^{\star }\in \arg \max _{k}\mu _{k}}with the least possible amount of trials and with probability of errorP(a^τ≠a⋆)≤δ{\displaystyle \mathbb {P} ({\hat {a}}_{\tau }\neq a^{\star })\leq \delta }. For example using adecision rule, we could usem1{\displaystyle m_{1}}wherem{\displaystyle m}is themachineno.1 (you can use a different variable respectively) and1{\displaystyle 1}is the amount for each time an attempt is made at pulling the lever, where∫∑m1,m2,(...)=M{\displaystyle \int \sum m_{1},m_{2},(...)=M}, identifyM{\displaystyle M}as the sum of each attemptsm1+m2{\displaystyle m_{1}+m_{2}}, (...) as needed, and from there you can get a ratio, sum or mean as quantitative probability and sample your formulation for each slots. You can also do∫∑k∝iN−(nj){\displaystyle \int \sum _{k\propto _{i}}^{N}-(n_{j})}wherem1+m2{\displaystyle m1+m2}equal to each a unique machine slot,x,y{\displaystyle x,y}is the amount each time the lever is triggered,N{\displaystyle N}is the sum of(m1x,y)+(m2x,y)(...){\displaystyle (m1_{x},_{y})+(m2_{x},_{y})(...)},k{\displaystyle k}would be the total available amount in your possession,k{\displaystyle k}is relative toN{\displaystyle N}whereN=n(na,b),(n1a,b),(n2a,b){\displaystyle N=n(n_{a},b),(n1_{a},b),(n2_{a},b)}reducednj{\displaystyle n_{j}}as the sum of each gain or loss froma,b{\displaystyle a,b}(let's say you have 100$ that is defined asn{\displaystyle n}anda{\displaystyle a}would be a gainb{\displaystyle b}is equal to a loss, from there you get your results either positive or negative to add forN{\displaystyle N}with your own specific rule) andi{\displaystyle i}as the maximum you are willing to spend. It is possible to express this construction using a combination of multiple algebraic formulation, as mentioned above where you can limit withT{\displaystyle T}for, or in Time and so on. A major breakthrough was the construction of optimal population selection strategies, or policies (that possess uniformly maximum convergence rate to the population with highest mean) in the work described below. In the paper "Asymptotically efficient adaptive allocation rules", Lai and Robbins[21](following papers of Robbins and his co-workers going back to Robbins in the year 1952) constructed convergent population selection policies that possess the fastest rate of convergence (to the population with highest mean) for the case that the population reward distributions are the one-parameter exponential family. Then, inKatehakisandRobbins[22]simplifications of the policy and the main proof were given for the case of normal populations with known variances. 
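As a hedged sketch of the fixed-budget best-arm-identification setting defined above, and not of any particular published algorithm, the simplest baseline spreads the budget T uniformly over the arms and recommends the arm with the highest empirical mean; the Bernoulli arm means below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.30, 0.50, 0.55, 0.40])   # Bernoulli arms (made up)
K, T = len(true_means), 2000                       # number of arms, fixed budget

# Uniform-allocation baseline: pull each arm T // K times, then recommend
# the arm with the highest empirical mean reward.
pulls = T // K
rewards = rng.binomial(1, true_means, size=(pulls, K))
empirical_means = rewards.mean(axis=0)
recommended = int(np.argmax(empirical_means))

print("empirical means:", np.round(empirical_means, 3))
print("recommended arm:", recommended, "| best arm:", int(np.argmax(true_means)))
```

Here the sampling rule is uniform allocation, the stopping rule is the exhausted budget, and the decision rule is the empirical best arm; practical BAI algorithms allocate the budget adaptively instead.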
The next notable progress was obtained by Burnetas andKatehakisin the paper "Optimal adaptive policies for sequential allocation problems",[23]where index based policies with uniformly maximum convergence rate were constructed, under more general conditions that include the case in which the distributions of outcomes from each population depend on a vector of unknown parameters. Burnetas and Katehakis (1996) also provided an explicit solution for the important case in which the distributions of outcomes follow arbitrary (i.e., non-parametric) discrete, univariate distributions. Later in "Optimal adaptive policies for Markov decision processes"[24]Burnetas and Katehakis studied the much larger model of Markov Decision Processes under partial information, where the transition law and/or the expected one period rewards may depend on unknown parameters. In this work, the authors constructed an explicit form for a class of adaptive policies with uniformly maximum convergence rate properties for the total expected finite horizon reward under sufficient assumptions of finite state-action spaces and irreducibility of the transition law. A main feature of these policies is that the choice of actions, at each state and time period, is based on indices that are inflations of the right-hand side of the estimated average reward optimality equations. These inflations have recently been called the optimistic approach in the work of Tewari and Bartlett,[25]Ortner[26]Filippi, Cappé, and Garivier,[27]and Honda and Takemura.[28] For Bernoulli multi-armed bandits, Pilarski et al.[29]studied computation methods of deriving fully optimal solutions (not just asymptotically) using dynamic programming in the paper "Optimal Policy for Bernoulli Bandits: Computation and Algorithm Gauge."[29]Via indexing schemes, lookup tables, and other techniques, this work provided practically applicable optimal solutions for Bernoulli bandits provided that time horizons and numbers of arms did not become excessively large. Pilarski et al.[30]later extended this work in "Delayed Reward Bernoulli Bandits: Optimal Policy and Predictive Meta-Algorithm PARDI"[30]to create a method of determining the optimal policy for Bernoulli bandits when rewards may not be immediately revealed following a decision and may be delayed. This method relies upon calculating expected values of reward outcomes which have not yet been revealed and updating posterior probabilities when rewards are revealed. When optimal solutions to multi-arm bandit tasks[31]are used to derive the value of animals' choices, the activity of neurons in the amygdala and ventral striatum encodes the values derived from these policies, and can be used to decode when the animals make exploratory versus exploitative choices. Moreover, optimal policies better predict animals' choice behavior than alternative strategies (described below). This suggests that the optimal solutions to multi-arm bandit problems are biologically plausible, despite being computationally demanding.[32] Many strategies exist which provide an approximate solution to the bandit problem, and can be put into the four broad categories detailed below. Semi-uniform strategies were the earliest (and simplest) strategies discovered to approximately solve the bandit problem. All those strategies have in common agreedybehavior where thebestlever (based on previous observations) is always pulled except when a (uniformly) random action is taken. 
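A minimal sketch of such a semi-uniform (ε-greedy) strategy on a Bernoulli bandit, which also reports the regret ρ = Tμ* − Σ r̂_t defined earlier; the arm means and the value of ε are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
true_means = np.array([0.2, 0.5, 0.65])     # Bernoulli arms (made up)
K, T, eps = len(true_means), 5000, 0.1

counts = np.zeros(K)            # number of pulls per arm
values = np.zeros(K)            # empirical mean reward per arm
total_reward = 0.0

for t in range(T):
    if rng.random() < eps:                      # explore uniformly at random
        arm = int(rng.integers(K))
    else:                                       # exploit the empirically best lever
        arm = int(np.argmax(values))
    reward = float(rng.random() < true_means[arm])
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean update
    total_reward += reward

regret = T * true_means.max() - total_reward    # rho = T*mu_star - sum of rewards
print("pulls per arm:", counts.astype(int))
print("regret after", T, "rounds:", round(regret, 1))
```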
Probability matching strategies reflect the idea that the number of pulls for a given lever shouldmatchits actual probability of being the optimal lever. Probability matching strategies are also known asThompson samplingor Bayesian Bandits,[37][38]and are surprisingly easy to implement if you can sample from the posterior for the mean value of each alternative. Probability matching strategies also admit solutions to so-called contextual bandit problems.[37] Pricing strategies establish apricefor each lever. For example, as illustrated with the POKER algorithm,[16]the price can be the sum of the expected reward plus an estimation of extra future rewards that will gain through the additional knowledge. The lever of highest price is always pulled. A useful generalization of the multi-armed bandit is the contextual multi-armed bandit. At each iteration an agent still has to choose between arms, but they also see a d-dimensional feature vector, the context vector they can use together with the rewards of the arms played in the past to make the choice of the arm to play. Over time, the learner's aim is to collect enough information about how the context vectors and rewards relate to each other, so that it can predict the next best arm to play by looking at the feature vectors.[39] Many strategies exist that provide an approximate solution to the contextual bandit problem, and can be put into two broad categories detailed below. In practice, there is usually a cost associated with the resource consumed by each action and the total cost is limited by a budget in many applications such as crowdsourcing and clinical trials. Constrained contextual bandit (CCB) is such a model that considers both the time and budget constraints in a multi-armed bandit setting. A. Badanidiyuru et al.[54]first studied contextual bandits with budget constraints, also referred to as Resourceful Contextual Bandits, and show that aO(T){\displaystyle O({\sqrt {T}})}regret is achievable. However, their work focuses on a finite set of policies, and the algorithm is computationally inefficient. A simple algorithm with logarithmic regret is proposed in:[55] Another variant of the multi-armed bandit problem is called the adversarial bandit, first introduced by Auer and Cesa-Bianchi (1998). In this variant, at each iteration, an agent chooses an arm and an adversary simultaneously chooses the payoff structure for each arm. This is one of the strongest generalizations of the bandit problem[56]as it removes all assumptions of the distribution and a solution to the adversarial bandit problem is a generalized solution to the more specific bandit problems. An example often considered for adversarial bandits is theiterated prisoner's dilemma. In this example, each adversary has two arms to pull. They can either Deny or Confess. Standard stochastic bandit algorithms don't work very well with these iterations. For example, if the opponent cooperates in the first 100 rounds, defects for the next 200, then cooperate in the following 300, etc. then algorithms such as UCB won't be able to react very quickly to these changes. This is because after a certain point sub-optimal arms are rarely pulled to limit exploration and focus on exploitation. When the environment changes the algorithm is unable to adapt or may not even detect the change. Source:[57] EXP3 is a popular algorithm for adversarial multiarmed bandits, suggested and analyzed in this setting by Auer et al. [2002b]. 
Recently there was an increased interest in the performance of this algorithm in the stochastic setting, due to its new applications to stochastic multi-armed bandits with side information [Seldin et al., 2011] and to multi-armed bandits in the mixed stochastic-adversarial setting [Bubeck and Slivkins, 2012]. The paper presented an empirical evaluation and improved analysis of the performance of the EXP3 algorithm in the stochastic setting, as well as a modification of the EXP3 algorithm capable of achieving "logarithmic" regret in stochastic environment. Exp3 chooses an arm at random with probability(1−γ){\displaystyle (1-\gamma )}it prefers arms with higher weights (exploit), it chooses with probabilityγ{\displaystyle \gamma }to uniformly randomly explore. After receiving the rewards the weights are updated. The exponential growth significantly increases the weight of good arms. The (external) regret of the Exp3 algorithm is at mostO(KTlog(K)){\displaystyle O({\sqrt {KTlog(K)}})} We follow the arm that we think has the best performance so far adding exponential noise to it to provide exploration.[58] In the original specification and in the above variants, the bandit problem is specified with a discrete and finite number of arms, often indicated by the variableK{\displaystyle K}. In the infinite armed case, introduced by Agrawal (1995),[59]the "arms" are a continuous variable inK{\displaystyle K}dimensions. This framework refers to the multi-armed bandit problem in anon-stationarysetting (i.e., in presence ofconcept drift). In the non-stationary setting, it is assumed that the expected reward for an armk{\displaystyle k}can change at every time stept∈T{\displaystyle t\in {\mathcal {T}}}:μt−1k≠μtk{\displaystyle \mu _{t-1}^{k}\neq \mu _{t}^{k}}. Thus,μtk{\displaystyle \mu _{t}^{k}}no longer represents the whole sequence of expected (stationary) rewards for armk{\displaystyle k}. Instead,μk{\displaystyle \mu ^{k}}denotes the sequence of expected rewards for armk{\displaystyle k}, defined asμk={μtk}t=1T{\displaystyle \mu ^{k}=\{\mu _{t}^{k}\}_{t=1}^{T}}.[60] Adynamic oraclerepresents the optimal policy to be compared with other policies in the non-stationary setting. The dynamic oracle optimises the expected reward at each stept∈T{\displaystyle t\in {\mathcal {T}}}by always selecting the best arm, with expected reward ofμt∗{\displaystyle \mu _{t}^{*}}. Thus, the cumulative expected rewardD(T){\displaystyle {\mathcal {D}}(T)}for the dynamic oracle at final time stepT{\displaystyle T}is defined as: D(T)=∑t=1Tμt∗.{\displaystyle {\mathcal {D}}(T)=\sum _{t=1}^{T}{\mu _{t}^{*}}.} Hence, theregretρπ(T){\displaystyle \rho ^{\pi }(T)}for policyπ{\displaystyle \pi }is computed as the difference betweenD(T){\displaystyle {\mathcal {D}}(T)}and the cumulative expected reward at stepT{\displaystyle T}for policyπ{\displaystyle \pi }: ρπ(T)=∑t=1Tμt∗−Eπμ[∑t=1Trt]=D(T)−Eπμ[∑t=1Trt].{\displaystyle \rho ^{\pi }(T)=\sum _{t=1}^{T}{\mu _{t}^{*}}-\mathbb {E} _{\pi }^{\mu }\left[\sum _{t=1}^{T}{r_{t}}\right]={\mathcal {D}}(T)-\mathbb {E} _{\pi }^{\mu }\left[\sum _{t=1}^{T}{r_{t}}\right].} Garivier and Moulines derive some of the first results with respect to bandit problems where the underlying model can change during play. A number of algorithms were presented to deal with this case, including Discounted UCB[61]and Sliding-Window UCB.[62]A similar approach based on Thompson Sampling algorithm is the f-Discounted-Sliding-Window Thompson Sampling (f-dsw TS)[63]proposed by Cavenaghi et al. 
The f-dsw TS algorithm exploits a discount factor on the reward history and an arm-related sliding window to contrast concept drift in non-stationary environments. Another work by Burtini et al. introduces a weighted least squares Thompson sampling approach (WLS-TS), which proves beneficial in both the known and unknown non-stationary cases.[64] Many variants of the problem have been proposed in recent years. The dueling bandit variant was introduced by Yue et al. (2012)[65]to model the exploration-versus-exploitation tradeoff for relative feedback. In this variant the gambler is allowed to pull two levers at the same time, but they only get a binary feedback telling which lever provided the best reward. The difficulty of this problem stems from the fact that the gambler has no way of directly observing the reward of their actions. The earliest algorithms for this problem were InterleaveFiltering[65]and Beat-The-Mean.[66]The relative feedback of dueling bandits can also lead tovoting paradoxes. A solution is to take theCondorcet winneras a reference.[67] More recently, researchers have generalized algorithms from traditional MAB to dueling bandits: Relative Upper Confidence Bounds (RUCB),[68]Relative EXponential weighing (REX3),[69]Copeland Confidence Bounds (CCB),[70]Relative Minimum Empirical Divergence (RMED),[71]and Double Thompson Sampling (DTS).[72] Approaches using multiple bandits that cooperate sharing knowledge in order to better optimize their performance started in 2013 with "A Gang of Bandits",[73]an algorithm relying on a similarity graph between the different bandit problems to share knowledge. The need of a similarity graph was removed in 2014 by the work on the CLUB algorithm.[74]Following this work, several other researchers created algorithms to learn multiple models at the same time under bandit feedback. For example, COFIBA was introduced by Li and Karatzoglou and Gentile (SIGIR 2016),[75]where the classical collaborative filtering, and content-based filtering methods try to learn a static recommendation model given training data. The Combinatorial Multiarmed Bandit (CMAB) problem[76][77][78]arises when instead of a single discrete variable to choose from, an agent needs to choose values for a set of variables. Assuming each variable is discrete, the number of possible choices per iteration is exponential in the number of variables. Several CMAB settings have been studied in the literature, from settings where the variables are binary[77]to more general setting where each variable can take an arbitrary set of values.[78]
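A minimal sketch of the EXP3 scheme described earlier, mixing exponentially weighted arm probabilities with uniform exploration and using importance-weighted reward estimates; the Bernoulli reward process and parameter values below are made up, and rewards are assumed to lie in [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(3)
true_means = np.array([0.3, 0.6, 0.45])        # stand-in reward process (made up)
K, T, gamma = len(true_means), 10000, 0.07

weights = np.ones(K)
total_reward = 0.0

for t in range(T):
    probs = (1 - gamma) * weights / weights.sum() + gamma / K   # exploit + uniform explore
    arm = rng.choice(K, p=probs)
    reward = float(rng.random() < true_means[arm])              # observed reward in [0, 1]
    total_reward += reward
    x_hat = reward / probs[arm]                                 # importance-weighted estimate
    weights[arm] *= np.exp(gamma * x_hat / K)                   # exponential weight update
    weights /= weights.max()                                    # rescale; leaves probs unchanged

print("final sampling probabilities:",
      np.round((1 - gamma) * weights / weights.sum() + gamma / K, 3))
print("average reward:", round(total_reward / T, 3))
```

Because it makes no distributional assumptions, the same loop can be run against an adversarially chosen reward sequence; the stochastic rewards here are only a convenient test harness.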
https://en.wikipedia.org/wiki/Multi-armed_bandit
In mathematical optimization, the proximal operator is an operator associated with a proper,[note 1] lower semi-continuous convex function f{\displaystyle f} from a Hilbert space X{\displaystyle {\mathcal {X}}} to [−∞,+∞]{\displaystyle [-\infty ,+\infty ]}, and is defined by:[1] prox_f(v) = argmin_{x∈X} ( f(x) + (1/2)‖x − v‖² ){\displaystyle \operatorname {prox} _{f}(v)=\arg \min _{x\in {\mathcal {X}}}\left(f(x)+{\tfrac {1}{2}}\|x-v\|_{\mathcal {X}}^{2}\right)}. For any function in this class, the minimizer of the right-hand side above is unique, hence making the proximal operator well-defined. The proximal operator is used in proximal gradient methods, which are frequently used in optimization algorithms associated with non-differentiable optimization problems such as total variation denoising. The prox{\displaystyle {\text{prox}}} of a proper, lower semi-continuous convex function f{\displaystyle f} enjoys several useful properties for optimization.
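A standard concrete case is f(x) = λ‖x‖₁, whose proximal operator is the componentwise soft-thresholding map; the sketch below (NumPy assumed, test values arbitrary) checks the closed form against a brute-force minimization of the defining objective in one dimension and then takes a single proximal-gradient (ISTA-style) step on a made-up least-squares-plus-ℓ₁ problem:

```python
import numpy as np

def prox_l1(v, lam):
    """prox of lam*||.||_1 at v: soft-thresholding, applied componentwise."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Brute-force check of prox_f(v) = argmin_x f(x) + 0.5*(x - v)^2 in one dimension.
lam, v = 0.7, 1.5
grid = np.linspace(-5, 5, 200001)
objective = lam * np.abs(grid) + 0.5 * (grid - v) ** 2
print("closed form:", prox_l1(np.array([v]), lam)[0])     # 0.8
print("brute force:", grid[np.argmin(objective)])         # ~0.8

# One proximal-gradient step for min 0.5*||Ax - b||^2 + lam*||x||_1 (toy data).
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
x, step = np.zeros(5), 0.01
x = prox_l1(x - step * A.T @ (A @ x - b), step * lam)
print("x after one ISTA step:", np.round(x, 3))
```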
https://en.wikipedia.org/wiki/Proximal_operator
Stochastic optimization (SO) refers to optimization methods that generate and use random variables. For stochastic optimization problems, the objective functions or constraints are random. Stochastic optimization also includes methods with random iterates. Some hybrid methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization.[1] Stochastic optimization methods generalize deterministic methods for deterministic problems. Partly random input data arise in such areas as real-time estimation and control, simulation-based optimization where Monte Carlo simulations are run as estimates of an actual system,[2][3] and problems where there is experimental (random) error in the measurements of the criterion. In such cases, knowledge that the function values are contaminated by random "noise" leads naturally to algorithms that use statistical inference tools to estimate the "true" values of the function and/or make statistically optimal decisions about the next steps. Methods of this class include: On the other hand, even when the data set consists of precise measurements, some methods introduce randomness into the search process to accelerate progress.[7] Such randomness can also make the method less sensitive to modeling errors. Another advantage is that randomness in the search process can be used to obtain interval estimates of the minimum of a function via extreme value statistics.[8][9] Further, the injected randomness may enable the method to escape a local optimum and eventually approach a global optimum. Indeed, this randomization principle is known to be a simple and effective way to obtain algorithms with almost certain good performance uniformly across many data sets, for many sorts of problems. Stochastic optimization methods of this kind include: In contrast, some authors have argued that randomization can only improve a deterministic algorithm if the deterministic algorithm was poorly designed in the first place.[21] Fred W. Glover[22] argues that reliance on random elements may prevent the development of more intelligent and better deterministic components. The way in which results of stochastic optimization algorithms are usually presented (e.g., presenting only the average, or even the best, out of N runs without any mention of the spread) may also result in a positive bias towards randomness.
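As a sketch of the second idea above, injecting randomness into the search itself, here is a bare-bones simulated-annealing loop on a deterministic but multimodal objective; the objective, proposal scale, and cooling schedule are arbitrary choices for illustration rather than a tuned implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Multimodal test objective with many local minima.
    return x**2 + 10 * np.sin(3 * x) + 10

x = 4.0                        # deliberately start near a non-global local minimum
best_x, best_f = x, f(x)
temperature = 5.0

for it in range(20000):
    proposal = x + rng.normal(scale=0.5)            # random perturbation of the iterate
    delta = f(proposal) - f(x)
    # Accept downhill moves always; uphill moves with probability exp(-delta/T).
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        x = proposal
    if f(x) < best_f:
        best_x, best_f = x, f(x)
    temperature *= 0.9995                           # geometric cooling schedule

print("best x found:", round(best_x, 3), "objective value:", round(best_f, 3))
```

The random uphill moves early on (when the temperature is high) are what allow the search to escape local minima before the cooling schedule gradually freezes it near a good solution.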
https://en.wikipedia.org/wiki/Stochastic_optimization
Stochastic approximationmethods are a family ofiterative methodstypically used forroot-findingproblems or foroptimizationproblems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximatingextreme valuesof functions which cannot be computed directly, but only estimated via noisy observations. In a nutshell, stochastic approximation algorithms deal with a function of the formf(θ)=Eξ⁡[F(θ,ξ)]{\textstyle f(\theta )=\operatorname {E} _{\xi }[F(\theta ,\xi )]}which is theexpected valueof a function depending on arandom variableξ{\textstyle \xi }. The goal is to recover properties of such a functionf{\textstyle f}without evaluating it directly. Instead, stochastic approximation algorithms use random samples ofF(θ,ξ){\textstyle F(\theta ,\xi )}to efficiently approximate properties off{\textstyle f}such as zeros or extrema. Recently, stochastic approximations have found extensive applications in the fields of statistics and machine learning, especially in settings withbig data. These applications range fromstochastic optimizationmethods and algorithms, to online forms of theEM algorithm, reinforcement learning viatemporal differences, anddeep learning, and others.[1]Stochastic approximation algorithms have also been used in the social sciences to describe collective dynamics: fictitious play in learning theory and consensus algorithms can be studied using their theory.[2] The earliest, and prototypical, algorithms of this kind are theRobbins–MonroandKiefer–Wolfowitzalgorithms introduced respectively in 1951 and 1952. The Robbins–Monro algorithm, introduced in 1951 byHerbert RobbinsandSutton Monro,[3]presented a methodology for solving a root finding problem, where the function is represented as an expected value. Assume that we have a functionM(θ){\textstyle M(\theta )}, and a constantα{\textstyle \alpha }, such that the equationM(θ)=α{\textstyle M(\theta )=\alpha }has a unique root atθ∗.{\textstyle \theta ^{*}.}It is assumed that while we cannot directly observe the functionM(θ),{\textstyle M(\theta ),}we can instead obtain measurements of the random variableN(θ){\textstyle N(\theta )}whereE⁡[N(θ)]=M(θ){\textstyle \operatorname {E} [N(\theta )]=M(\theta )}. The structure of the algorithm is to then generate iterates of the form: θn+1=θn−an(N(θn)−α){\displaystyle \theta _{n+1}=\theta _{n}-a_{n}(N(\theta _{n})-\alpha )} Here,a1,a2,…{\displaystyle a_{1},a_{2},\dots }is a sequence of positive step sizes.Robbinsand Monro proved[3], Theorem 2thatθn{\displaystyle \theta _{n}}convergesinL2{\displaystyle L^{2}}(and hence also in probability) toθ∗{\displaystyle \theta ^{*}}, and Blum[4]later proved the convergence is actually with probability one, provided that: ∑n=0∞an=∞and∑n=0∞an2<∞{\displaystyle \qquad \sum _{n=0}^{\infty }a_{n}=\infty \quad {\mbox{ and }}\quad \sum _{n=0}^{\infty }a_{n}^{2}<\infty \quad }A particular sequence of steps which satisfy these conditions, and was suggested by Robbins–Monro, have the form:an=a/n{\textstyle a_{n}=a/n}, fora>0{\textstyle a>0}. Other series, such asan=1nln⁡n,1nln⁡nln⁡ln⁡n,…{\displaystyle a_{n}={\frac {1}{n\ln n}},{\frac {1}{n\ln n\ln \ln n}},\dots }are possible but in order to average out the noise inN(θ){\textstyle N(\theta )}, the above condition must be met. 
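A minimal sketch of the Robbins–Monro recursion for root finding from noisy measurements: M(θ) below is a made-up increasing function with root θ* = 2, the observations are corrupted by Gaussian noise, and the step sizes follow the a_n = a/n schedule suggested above:

```python
import numpy as np

rng = np.random.default_rng(0)

theta_star = 2.0
M = lambda th: 3.0 * (th - theta_star)          # unknown regression function, root at theta*
alpha = 0.0                                     # we seek M(theta) = alpha

def noisy_measurement(th):
    return M(th) + rng.normal(scale=1.0)        # N(theta): unbiased, noisy observation of M

theta = -5.0                                    # arbitrary starting point
a = 1.0
for n in range(1, 50001):
    a_n = a / n                                 # satisfies sum a_n = inf, sum a_n^2 < inf
    theta = theta - a_n * (noisy_measurement(theta) - alpha)

print("estimate:", round(theta, 3), "true root:", theta_star)
```

Despite each individual measurement being heavily corrupted by noise, the decreasing step sizes average the noise out and the iterates settle close to θ*.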
Consider the problem of estimating the meanθ∗{\displaystyle \theta ^{*}}of a probability distribution from a stream of independent samplesX1,X2,…{\displaystyle X_{1},X_{2},\dots }. LetN(θ):=θ−X{\displaystyle N(\theta ):=\theta -X}, then the unique solution toE⁡[N(θ)]=0{\textstyle \operatorname {E} [N(\theta )]=0}is the desired meanθ∗{\displaystyle \theta ^{*}}. The RM algorithm gives usθn+1=θn−an(θn−Xn){\displaystyle \theta _{n+1}=\theta _{n}-a_{n}(\theta _{n}-X_{n})}This is equivalent tostochastic gradient descentwith loss functionL(θ)=12‖X−θ‖2{\displaystyle L(\theta )={\frac {1}{2}}\|X-\theta \|^{2}}. It is also equivalent to a weighted average:θn+1=(1−an)θn+anXn{\displaystyle \theta _{n+1}=(1-a_{n})\theta _{n}+a_{n}X_{n}}In general, if there exists some functionL{\displaystyle L}such that∇L(θ)=N(θ)−α{\displaystyle \nabla L(\theta )=N(\theta )-\alpha }, then the Robbins–Monro algorithm is equivalent to stochastic gradient descent with loss functionL(θ){\displaystyle L(\theta )}. However, the RM algorithm does not requireL{\displaystyle L}to exist in order to converge. While the Robbins–Monro algorithm is theoretically able to achieveO(1/n){\textstyle O(1/n)}under the assumption of twice continuous differentiability and strong convexity, it can perform quite poorly upon implementation. This is primarily due to the fact that the algorithm is very sensitive to the choice of the step size sequence, and the supposed asymptotically optimal step size policy can be quite harmful in the beginning.[6][8] Chung (1954)[9]and Fabian (1968)[10]showed that we would achieve optimal convergence rateO(1/n){\textstyle O(1/{\sqrt {n}})}withan=▽2f(θ∗)−1/n{\textstyle a_{n}=\bigtriangledown ^{2}f(\theta ^{*})^{-1}/n}(oran=1(nM′(θ∗)){\textstyle a_{n}={\frac {1}{(nM'(\theta ^{*}))}}}). Lai and Robbins[11][12]designed adaptive procedures to estimateM′(θ∗){\textstyle M'(\theta ^{*})}such thatθn{\textstyle \theta _{n}}has minimal asymptotic variance. However the application of such optimal methods requires much a priori information which is hard to obtain in most situations. To overcome this shortfall, Polyak (1991)[13]and Ruppert (1988)[14]independently developed a new optimal algorithm based on the idea of averaging the trajectories. Polyak and Juditsky[15]also presented a method of accelerating Robbins–Monro for linear and non-linear root-searching problems through the use of longer steps, and averaging of the iterates. The algorithm would have the following structure:θn+1−θn=an(α−N(θn)),θ¯n=1n∑i=0n−1θi{\displaystyle \theta _{n+1}-\theta _{n}=a_{n}(\alpha -N(\theta _{n})),\qquad {\bar {\theta }}_{n}={\frac {1}{n}}\sum _{i=0}^{n-1}\theta _{i}}The convergence ofθ¯n{\displaystyle {\bar {\theta }}_{n}}to the unique rootθ∗{\displaystyle \theta ^{*}}relies on the condition that the step sequence{an}{\displaystyle \{a_{n}\}}decreases sufficiently slowly. That is A1)an→0,an−an+1an=o(an){\displaystyle a_{n}\rightarrow 0,\qquad {\frac {a_{n}-a_{n+1}}{a_{n}}}=o(a_{n})} Therefore, the sequencean=n−α{\textstyle a_{n}=n^{-\alpha }}with0<α<1{\textstyle 0<\alpha <1}satisfies this restriction, butα=1{\textstyle \alpha =1}does not, hence the longer steps. 
Under the assumptions outlined in the Robbins–Monro algorithm, the resulting modification will result in the same asymptotically optimal convergence rateO(1/n){\textstyle O(1/{\sqrt {n}})}yet with a more robust step size policy.[15]Prior to this, the idea of using longer steps and averaging the iterates had already been proposed by Nemirovski and Yudin[16]for the cases of solving the stochastic optimization problem with continuous convex objectives and for convex-concave saddle point problems. These algorithms were observed to attain the nonasymptotic rateO(1/n){\textstyle O(1/{\sqrt {n}})}. A more general result is given in Chapter 11 of Kushner and Yin[17]by defining interpolated timetn=∑i=0n−1ai{\textstyle t_{n}=\sum _{i=0}^{n-1}a_{i}}, interpolated processθn(⋅){\textstyle \theta ^{n}(\cdot )}and interpolated normalized processUn(⋅){\textstyle U^{n}(\cdot )}as θn(t)=θn+i,Un(t)=(θn+i−θ∗)/an+ifort∈[tn+i−tn,tn+i+1−tn),i≥0{\displaystyle \theta ^{n}(t)=\theta _{n+i},\quad U^{n}(t)=(\theta _{n+i}-\theta ^{*})/{\sqrt {a_{n+i}}}\quad {\mbox{for}}\quad t\in [t_{n+i}-t_{n},t_{n+i+1}-t_{n}),i\geq 0}Let the iterate average beΘn=ant∑i=nn+t/an−1θi{\displaystyle \Theta _{n}={\frac {a_{n}}{t}}\sum _{i=n}^{n+t/a_{n}-1}\theta _{i}}and the associate normalized error to beU^n(t)=ant∑i=nn+t/an−1(θi−θ∗){\displaystyle {\hat {U}}^{n}(t)={\frac {\sqrt {a_{n}}}{t}}\sum _{i=n}^{n+t/a_{n}-1}(\theta _{i}-\theta ^{*})}. With assumptionA1)and the followingA2) A2)There is a Hurwitz matrixA{\textstyle A}and a symmetric and positive-definite matrixΣ{\textstyle \Sigma }such that{Un(⋅)}{\textstyle \{U^{n}(\cdot )\}}converges weakly toU(⋅){\textstyle U(\cdot )}, whereU(⋅){\textstyle U(\cdot )}is the statisolution todU=AUdt+Σ1/2dw{\displaystyle dU=AU\,dt+\Sigma ^{1/2}\,dw}wherew(⋅){\textstyle w(\cdot )}is a standard Wiener process. satisfied, and defineV¯=(A−1)′Σ(A′)−1{\textstyle {\bar {V}}=(A^{-1})'\Sigma (A')^{-1}}. Then for eacht{\textstyle t}, U^n(t)⟶DN(0,Vt),whereVt=V¯/t+O(1/t2).{\displaystyle {\hat {U}}^{n}(t){\stackrel {\mathcal {D}}{\longrightarrow }}{\mathcal {N}}(0,V_{t}),\quad {\text{where}}\quad V_{t}={\bar {V}}/t+O(1/t^{2}).} The success of the averaging idea is because of the time scale separation of the original sequence{θn}{\textstyle \{\theta _{n}\}}and the averaged sequence{Θn}{\textstyle \{\Theta _{n}\}}, with the time scale of the former one being faster. Suppose we want to solve the following stochastic optimization problemg(θ∗)=minθ∈ΘE⁡[Q(θ,X)],{\displaystyle g(\theta ^{*})=\min _{\theta \in \Theta }\operatorname {E} [Q(\theta ,X)],}whereg(θ)=E⁡[Q(θ,X)]{\textstyle g(\theta )=\operatorname {E} [Q(\theta ,X)]}is differentiable and convex, then this problem is equivalent to find the rootθ∗{\displaystyle \theta ^{*}}of∇g(θ)=0{\displaystyle \nabla g(\theta )=0}. HereQ(θ,X){\displaystyle Q(\theta ,X)}can be interpreted as some "observed" cost as a function of the chosenθ{\displaystyle \theta }and random effectsX{\displaystyle X}. 
In practice, it might be hard to get an analytical form of∇g(θ){\displaystyle \nabla g(\theta )}, Robbins–Monro method manages to generate a sequence(θn)n≥0{\displaystyle (\theta _{n})_{n\geq 0}}to approximateθ∗{\displaystyle \theta ^{*}}if one can generate(Xn)n≥0{\displaystyle (X_{n})_{n\geq 0}}, in which the conditional expectation ofXn{\displaystyle X_{n}}givenθn{\displaystyle \theta _{n}}is exactly∇g(θn){\displaystyle \nabla g(\theta _{n})}, i.e.Xn{\displaystyle X_{n}}is simulated from a conditional distribution defined by E⁡[H(θ,X)|θ=θn]=∇g(θn).{\displaystyle \operatorname {E} [H(\theta ,X)|\theta =\theta _{n}]=\nabla g(\theta _{n}).} HereH(θ,X){\displaystyle H(\theta ,X)}is an unbiased estimator of∇g(θ){\displaystyle \nabla g(\theta )}. IfX{\displaystyle X}depends onθ{\displaystyle \theta }, there is in general no natural way of generating a random outcomeH(θ,X){\displaystyle H(\theta ,X)}that is an unbiased estimator of the gradient. In some special cases when either IPA or likelihood ratio methods are applicable, then one is able to obtain an unbiased gradient estimatorH(θ,X){\displaystyle H(\theta ,X)}. IfX{\displaystyle X}is viewed as some "fundamental" underlying random process that is generatedindependentlyofθ{\displaystyle \theta }, and under some regularization conditions for derivative-integral interchange operations so thatE⁡[∂∂θQ(θ,X)]=∇g(θ){\displaystyle \operatorname {E} {\Big [}{\frac {\partial }{\partial \theta }}Q(\theta ,X){\Big ]}=\nabla g(\theta )}, thenH(θ,X)=∂∂θQ(θ,X){\displaystyle H(\theta ,X)={\frac {\partial }{\partial \theta }}Q(\theta ,X)}gives the fundamental gradient unbiased estimate. However, for some applications we have to use finite-difference methods in whichH(θ,X){\displaystyle H(\theta ,X)}has a conditional expectation close to∇g(θ){\displaystyle \nabla g(\theta )}but not exactly equal to it. We then define a recursion analogously toNewton's Methodin the deterministic algorithm: The following result gives sufficient conditions onθn{\displaystyle \theta _{n}}for the algorithm to converge:[18] C1)εn≥0,∀n≥0.{\displaystyle \varepsilon _{n}\geq 0,\forall \;n\geq 0.} C2)∑n=0∞εn=∞{\displaystyle \sum _{n=0}^{\infty }\varepsilon _{n}=\infty } C3)∑n=0∞εn2<∞{\displaystyle \sum _{n=0}^{\infty }\varepsilon _{n}^{2}<\infty } C4)|Xn|≤B,for a fixed boundB.{\displaystyle |X_{n}|\leq B,{\text{ for a fixed bound }}B.} C5)g(θ)is strictly convex, i.e.{\displaystyle g(\theta ){\text{ is strictly convex, i.e.}}} Thenθn{\displaystyle \theta _{n}}converges toθ∗{\displaystyle \theta ^{*}}almost surely. Here are some intuitive explanations about these conditions. SupposeH(θn,Xn+1){\displaystyle H(\theta _{n},X_{n+1})}is a uniformly bounded random variables. If C2) is not satisfied, i.e.∑n=0∞εn<∞{\displaystyle \sum _{n=0}^{\infty }\varepsilon _{n}<\infty }, thenθn−θ0=−∑i=0n−1εiH(θi,Xi+1){\displaystyle \theta _{n}-\theta _{0}=-\sum _{i=0}^{n-1}\varepsilon _{i}H(\theta _{i},X_{i+1})}is a bounded sequence, so the iteration cannot converge toθ∗{\displaystyle \theta ^{*}}if the initial guessθ0{\displaystyle \theta _{0}}is too far away fromθ∗{\displaystyle \theta ^{*}}. As for C3) note that ifθn{\displaystyle \theta _{n}}converges toθ∗{\displaystyle \theta ^{*}}then θn+1−θn=−εnH(θn,Xn+1)→0,asn→∞.{\displaystyle \theta _{n+1}-\theta _{n}=-\varepsilon _{n}H(\theta _{n},X_{n+1})\rightarrow 0,{\text{ as }}n\rightarrow \infty .}so we must haveεn↓0{\displaystyle \varepsilon _{n}\downarrow 0},and the condition C3) ensures it. 
A natural choice would beεn=1/n{\displaystyle \varepsilon _{n}=1/n}. Condition C5) is a fairly stringent condition on the shape ofg(θ){\displaystyle g(\theta )}; it gives the search direction of the algorithm. SupposeQ(θ,X)=f(θ)+θTX{\displaystyle Q(\theta ,X)=f(\theta )+\theta ^{T}X}, wheref{\displaystyle f}is differentiable andX∈Rp{\displaystyle X\in \mathbb {R} ^{p}}is a random variable independent ofθ{\displaystyle \theta }. Theng(θ)=E⁡[Q(θ,X)]=f(θ)+θTE⁡X{\displaystyle g(\theta )=\operatorname {E} [Q(\theta ,X)]=f(\theta )+\theta ^{T}\operatorname {E} X}depends on the mean ofX{\displaystyle X}, and the stochastic gradient method would be appropriate in this problem. We can chooseH(θ,X)=∂∂θQ(θ,X)=∂∂θf(θ)+X.{\displaystyle H(\theta ,X)={\frac {\partial }{\partial \theta }}Q(\theta ,X)={\frac {\partial }{\partial \theta }}f(\theta )+X.} The Kiefer–Wolfowitz algorithm was introduced in 1952 byJacob WolfowitzandJack Kiefer,[19]and was motivated by the publication of the Robbins–Monro algorithm. However, the algorithm was presented as a method which would stochastically estimate the maximum of a function. LetM(x){\displaystyle M(x)}be a function which has a maximum at the pointθ{\displaystyle \theta }. It is assumed thatM(x){\displaystyle M(x)}is unknown; however, certain observationsN(x){\displaystyle N(x)}, whereE⁡[N(x)]=M(x){\displaystyle \operatorname {E} [N(x)]=M(x)}, can be made at any pointx{\displaystyle x}. The structure of the algorithm follows a gradient-like method, with the iterates being generated as xn+1=xn+an(N(xn+cn)−N(xn−cn)2cn){\displaystyle x_{n+1}=x_{n}+a_{n}{\bigg (}{\frac {N(x_{n}+c_{n})-N(x_{n}-c_{n})}{2c_{n}}}{\bigg )}}whereN(xn+cn){\displaystyle N(x_{n}+c_{n})}andN(xn−cn){\displaystyle N(x_{n}-c_{n})}are independent. At every step, the gradient ofM(x){\displaystyle M(x)}is approximated akin to acentral difference methodwithh=2cn{\displaystyle h=2c_{n}}. So the sequence{cn}{\displaystyle \{c_{n}\}}specifies the sequence of finite difference widths used for the gradient approximation, while the sequence{an}{\displaystyle \{a_{n}\}}specifies a sequence of positive step sizes taken along that direction. Kiefer and Wolfowitz proved that, ifM(x){\displaystyle M(x)}satisfied certain regularity conditions, thenxn{\displaystyle x_{n}}will converge toθ{\displaystyle \theta }in probability asn→∞{\displaystyle n\to \infty }, and later Blum[4]in 1954 showedxn{\displaystyle x_{n}}converges toθ{\displaystyle \theta }almost surely, provided that: A suitable choice of sequences, as recommended by Kiefer and Wolfowitz, would bean=1/n{\displaystyle a_{n}=1/n}andcn=n−1/3{\displaystyle c_{n}=n^{-1/3}}. An extensive theoretical literature has grown up around these algorithms, concerning conditions for convergence, rates of convergence, multivariate and other generalizations, proper choice of step size, possible noise models, and so on.[21][22]These methods are also applied incontrol theory, in which case the unknown function which we wish to optimize or find the zero of may vary in time. In this case, the step sizean{\displaystyle a_{n}}should not converge to zero but should be chosen so as to track the function.[21] C. Johan MasreliezandR. Douglas Martinwere the first to apply stochastic approximation torobustestimation.[23] The main tool for analyzing stochastic approximation algorithms (including the Robbins–Monro and the Kiefer–Wolfowitz algorithms) is a theorem byAryeh Dvoretzkypublished in 1956.[24]
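As an illustration of the Kiefer–Wolfowitz recursion described above, the following Python sketch maximises a hypothetical objective M(x) = −(x − 3)² from noisy measurements, using the recommended sequences an = 1/n and cn = n−1/3; the objective, the noise level and the iteration count are arbitrary choices made only for this demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

def N(x, noise_std=0.5):
    """Noisy observation of M(x) = -(x - 3)^2, whose maximum is at theta = 3."""
    return -(x - 3.0) ** 2 + noise_std * rng.standard_normal()

x = 0.0
for n in range(1, 20_001):
    a_n = 1.0 / n                 # positive step sizes
    c_n = n ** (-1.0 / 3.0)       # finite-difference widths
    grad_est = (N(x + c_n) - N(x - c_n)) / (2.0 * c_n)   # central-difference gradient estimate
    x = x + a_n * grad_est                               # ascend toward the maximum

print("estimate of the maximiser:", x)   # typically close to 3
```

In a typical run the final iterate lands close to the true maximiser x = 3, even though only noisy values of M(x) are ever observed.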
https://en.wikipedia.org/wiki/Stochastic_approximation
Incosmology, theanthropic principle, also known as theobservation selection effect, is the proposition that the range of possible observations that could be made about theuniverseis limited by the fact that observations are only possible in the type of universe that is capable of developing observers in the first place. Proponents of the anthropic principle argue that it explains why the universe has theageand the fundamentalphysical constantsnecessary to accommodate intelligent life. If either had been significantly different, no one would have been around to make observations. Anthropic reasoning has been used to address the question as to why certain measured physical constants take the values that they do, rather than some other arbitrary values, and to explain a perception that the universe appears to befinely tunedfor the existence of life. There are many different formulations of the anthropic principle. PhilosopherNick Bostromcounts thirty, but the underlying principles can be divided into "weak" and "strong" forms, depending on the types of cosmological claims they entail.[1] The principle was formulated as a response to aseries of observationsthat the laws of nature and parameters of the universe have values that are consistent with conditions for life as it is known rather than values that would not be consistent with life onEarth. The anthropic principle states that this is ana posteriorinecessity, because if life were impossible, no living entity would be there to observe it, and thus it would not be known. That is, it must be possible to observesomeuniverse, and hence, the laws and constants of any such universe must accommodate that possibility. The termanthropicin "anthropic principle" has been argued[2]to be amisnomer.[note 1]While singling out the currently observable kind of carbon-based life, none of the finely tuned phenomena requirehumanlife or some kind ofcarbon chauvinism.[3][4]Any form of life or any form of heavy atom, stone, star, or galaxy would do; nothing specifically human or anthropic is involved.[5] The anthropic principle has given rise to some confusion and controversy, partly because the phrase has been applied to several distinct ideas. All versions of the principle have been accused of discouraging the search for a deeper physical understanding of the universe. Critics of the weak anthropic principle point out that its lack offalsifiabilityentails that it is non-scientific and therefore inherently not useful. Stronger variants of the anthropic principle which are not tautologies can still make claims considered controversial by some; these would be contingent upon empirical verification.[clarification needed] In 1961,Robert Dickenoted that theage of the universe, as seen by living observers, cannot be random.[6]Instead, biological factors constrain the universe to be more or less in a "golden age", neither too young nor too old.[7]If the universe was one tenth as old as its present age, there would not have been sufficient time to build up appreciable levels ofmetallicity(levels of elements besideshydrogenandhelium) especiallycarbon, bynucleosynthesis. Small rocky planets did not yet exist. If the universe were 10 times older than it actually is, most stars would be too old to remain on themain sequenceand would have turned intowhite dwarfs, aside from the dimmestred dwarfs, and stable planetary systems would have already come to an end. 
Thus, Dicke explained the coincidence between large dimensionless numbers constructed from the constants of physics and the age of the universe, a coincidence that inspiredDirac's varying-Gtheory. Dicke later reasoned that the density of matter in the universe must be almost exactly thecritical densityneeded to prevent theBig Crunch(the "Dicke coincidences" argument). The most recent measurements may suggest that the observed density ofbaryonicmatter, and some theoretical predictions of the amount ofdark matter, account for about 30% of this critical density, with the rest contributed by acosmological constant.Steven Weinberg[8]gave an anthropic explanation for this fact: he noted that the cosmological constant has a remarkably low value, some 120orders of magnitudesmaller than the valueparticle physicspredicts (this has been described as the "worst prediction in physics").[9]However, if the cosmological constant were only several orders of magnitude larger than its observed value, the universe would suffer catastrophicinflation, which would preclude the formation of stars, and hence life. The observed values of thedimensionless physical constants(such as thefine-structure constant) governing the fourfundamental interactionsare balanced as iffine-tunedto permit the formation of commonly found matter and subsequently the emergence of life.[10]A slight increase in thestrong interaction(up to 50% for some authors[11]) would bind thedineutronand thediprotonand convert all hydrogen in the early universe to helium;[12]likewise, an increase in theweak interactionalso would convert all hydrogen to helium. Water, as well as sufficiently long-lived stable stars, both essential for the emergence of life as it is known, would not exist.[13]More generally, small changes in the relative strengths of the four fundamental interactions can greatly affect the universe's age, structure, and capacity for life. The phrase "anthropic principle" first appeared inBrandon Carter's contribution to a 1973KrakówsymposiumhonouringCopernicus's500th birthday. Carter, a theoretical astrophysicist, articulated the Anthropic Principle in reaction to theCopernican Principle, which states that humans do not occupy a privileged position in theUniverse. Carter said: "Although our situation is not necessarilycentral, it is inevitably privileged to some extent."[14]Specifically, Carter disagreed with using the Copernican principle to justify thePerfect Cosmological Principle, which states that all large regionsand timesin the universe must be statistically identical. The latter principle underlies thesteady-state theory, which had recently been falsified by the 1965 discovery of thecosmic microwave background radiation. This discovery was unequivocal evidence that the universe has changed radically over time (for example, via theBig Bang).[citation needed] Carter defined two forms of the anthropic principle, a "weak" one which referred only to anthropic selection of privilegedspacetimelocations in the universe, and a more controversial "strong" form that addressed the values of the fundamental constants of physics. Roger Penroseexplained the weak form as follows: The argument can be used to explain why the conditions happen to be just right for the existence of (intelligent) life on the Earth at the present time. For if they were not just right, then we should not have found ourselves to be here now, but somewhere else, at some other appropriate time. 
This principle was used very effectively by Brandon Carter andRobert Dicketo resolve an issue that had puzzled physicists for a good many years. The issue concerned various striking numerical relations that are observed to hold between the physical constants (thegravitational constant, the mass of theproton, theage of the universe, etc.). A puzzling aspect of this was that some of the relations hold only at the present epoch in the Earth's history, so we appear, coincidentally, to be living at a very special time (give or take a few million years!). This was later explained, by Carter and Dicke, by the fact that this epoch coincided with the lifetime of what are calledmain-sequencestars, such as the Sun. At any other epoch, the argument ran, there would be no intelligent life around to measure the physical constants in question—so the coincidence had to hold, simply because there would beintelligent lifearound only at the particular time that the coincidence did hold! One reason this is plausible is that there are many other places and times in which humans could have evolved. But when applying the strong principle, there is only one universe, with one set of fundamental parameters, so what exactly is the point being made? Carter offers two possibilities: First, humans can use their own existence to make "predictions" about the parameters. But second, "as a last resort", humans can convert these predictions intoexplanationsby assuming that thereismore than one universe, in fact a large and possibly infinite collection of universes, something that is now called themultiverse("world ensemble" was Carter's term), in which the parameters (and perhaps the laws of physics) vary across universes. The strong principle then becomes an example of aselection effect, exactly analogous to the weak principle. Postulating a multiverse is certainly a radical step, but taking it could provide at least a partial answer to a question seemingly out of the reach of normal science: "Why do thefundamental laws of physicstake the particular form we observe and not another?" Since Carter's 1973 paper, the termanthropic principlehas been extended to cover a number of ideas that differ in important ways from his. Particular confusion was caused by the 1986 bookThe Anthropic Cosmological PrinciplebyJohn D. BarrowandFrank Tipler,[15]which distinguished between a "weak" and "strong" anthropic principle in a way very different from Carter's, as discussed in the next section. Carter was not the first to invoke some form of the anthropic principle. In fact, theevolutionary biologistAlfred Russel Wallaceanticipated the anthropic principle as long ago as 1904: "Such a vast and complex universe as that which we know exists around us, may have been absolutely required [...] in order to produce a world that should be precisely adapted in every detail for the orderly development of life culminating in man."[16]In 1957,Robert Dickewrote: "The age of the Universe 'now' is not random but conditioned by biological factors [...] [changes in the values of the fundamental constants of physics] would preclude the existence of man to consider the problem."[17] Ludwig Boltzmannmay have been one of the first in modern science to use anthropic reasoning. Prior to knowledge of theBig BangBoltzmann's thermodynamic concepts painted a picture of a universe that had inexplicably lowentropy. Boltzmann suggested several explanations, one of which relied on fluctuations that could produce pockets of low entropy or Boltzmann universes. 
While most of the universe is featureless in this model, to Boltzmann, it is unremarkable that humanity happens to inhabit a Boltzmann universe, as that is the only place where intelligent life could be.[18][19] Weak anthropic principle (WAP)(Carter): "... our location in the universe isnecessarilyprivileged to the extent of being compatible with our existence as observers."[14]For Carter, "location" refers to our location in time as well as space. Strong anthropic principle (SAP)(Carter): "[T]he universe (and hence thefundamental parameterson which it depends) must be such as to admit the creation of observers within it at some stage. To paraphraseDescartes,cogito ergo mundus talis est."The Latin tag ("I think, therefore the world is such [as it is]") makes it clear that "must" indicates adeductionfrom the fact of our existence; the statement is thus atruism. In their 1986 book,The anthropic cosmological principle,John BarrowandFrank Tiplerdepart from Carter and define the WAP and SAP as follows:[20][21] Weak anthropic principle (WAP)(Barrow and Tipler): "The observed values of all physical andcosmologicalquantities are not equally probable but they take on values restricted by the requirement that there exist sites wherecarbon-based lifecanevolveand by the requirements that the universe be old enough for it to have already done so."[22]Unlike Carter they restrict the principle to carbon-based life, rather than just "observers". A more important difference is that they apply the WAP to the fundamental physical constants, such as thefine-structure constant, thenumber of spacetime dimensions, and thecosmological constant—topics that fall under Carter's SAP. Strong anthropic principle (SAP)(Barrow and Tipler): "The Universe must have those properties which allow life to develop within it at some stage in its history."[23]This looks very similar to Carter's SAP, but unlike the case with Carter's SAP, the "must" is an imperative, as shown by the following three possible elaborations of the SAP, each proposed by Barrow and Tipler:[24] ThephilosophersJohn Leslie[25]andNick Bostrom[19]reject the Barrow and Tipler SAP as a fundamental misreading of Carter. For Bostrom, Carter's anthropic principle just warns us to make allowance foranthropic bias—that is, the bias created by anthropicselection effects(which Bostrom calls "observation" selection effects)—the necessity for observers to exist in order to get a result. He writes: Many 'anthropic principles' are simply confused. Some, especially those drawing inspiration from Brandon Carter's seminal papers, are sound, but... they are too weak to do any real scientific work. In particular, I argue that existing methodology does not permit any observational consequences to be derived from contemporary cosmological theories, though these theories quite plainly can be and are being tested empirically by astronomers. What is needed to bridge this methodological gap is a more adequate formulation of how observationselection effectsare to be taken into account. Strong self-sampling assumption (SSSA)(Bostrom): "Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class."Analysing an observer's experience into a sequence of "observer-moments" helps avoid certain paradoxes; but the main ambiguity is the selection of the appropriate "reference class": for Carter's WAP this might correspond to all real or potential observer-moments in our universe; for the SAP, to all in the multiverse. 
Bostrom's mathematical development shows that choosing either too broad or too narrow a reference class leads to counter-intuitive results, but he is not able to prescribe an ideal choice. According toJürgen Schmidhuber, the anthropic principle essentially just says that theconditional probabilityof finding yourself in a universe compatible with your existence is always 1. It does not allow for any additional nontrivial predictions such as "gravity won't change tomorrow". To gain more predictive power, additional assumptions on theprior distributionofalternative universesare necessary.[27][28] Playwright and novelistMichael Frayndescribes a form of the strong anthropic principle in his 2006 bookThe Human Touch, which explores what he characterises as "the central oddity of the Universe": It's this simple paradox. The Universe is very old and very large. Humankind, by comparison, is only a tiny disturbance in one small corner of it – and a very recent one. Yet the Universe is only very large and very old because we are here to say it is... And yet, of course, we all know perfectly well that it is what it is whether we are here or not.[29] Carter chose to focus on a tautological aspect of his ideas, which has resulted in much confusion. In fact, anthropic reasoning interests scientists because of something that is only implicit in the above formal definitions, namely that humans should give serious consideration to there being other universes with different values of the "fundamental parameters"—that is, thedimensionless physical constantsand initial conditions for theBig Bang. Carter and others have argued that life would not be possible in most such universes. In other words, the universe humans live in isfine tunedto permit life. Collins & Hawking (1973) characterized Carter's then-unpublished big idea as the postulate that "there is not one universe but a whole infinite ensemble of universes with all possible initial conditions".[30]If this is granted, the anthropic principle provides a plausible explanation for the fine tuning of our universe: the "typical" universe is not fine-tuned, but given enough universes, a small fraction will be capable of supporting intelligent life. Ours must be one of these, and so the observed fine tuning should be no cause for wonder. Although philosophers have discussed related concepts for centuries, in the early 1970s the only genuine physical theory yielding a multiverse of sorts was themany-worlds interpretationofquantum mechanics. This would allow variation in initial conditions, but not in the truly fundamental constants. Since that time a number of mechanisms for producing a multiverse have been suggested: see the review byMax Tegmark.[31]An important development in the 1980s was the combination ofinflation theorywith the hypothesis that some parameters are determined bysymmetry breakingin the early universe, which allows parameters previously thought of as "fundamental constants" to vary over very large distances, thus eroding the distinction between Carter's weak and strong principles. At the beginning of the 21st century, thestring landscapeemerged as a mechanism for varying essentially all the constants, including the number of spatial dimensions.[note 2] The anthropic idea that fundamental parameters are selected from a multitude of different possibilities (each actual in some universe or other) contrasts with the traditional hope of physicists for atheory of everythinghaving no free parameters. 
AsAlbert Einsteinsaid: "What really interests me is whether God had any choice in the creation of the world." In 2002, some proponents of the leading candidate for a "theory of everything",string theory, proclaimed "the end of the anthropic principle"[32]since there would be no free parameters to select. In 2003, however,Leonard Susskindstated: "... it seems plausible that the landscape is unimaginably large and diverse. This is the behavior that gives credence to the anthropic principle."[33] The modern form of adesign argumentis put forth byintelligent design. Proponents of intelligent design often cite thefine-tuningobservations that (in part) preceded the formulation of the anthropic principle by Carter as a proof of an intelligent designer. Opponents of intelligent design are not limited to those who hypothesize that other universes exist; they may also argue, anti-anthropically, that the universe is less fine-tuned than often claimed, or that accepting fine tuning as a brute fact is less astonishing than the idea of an intelligent creator. Furthermore, even accepting fine tuning,Sober(2005)[34]and Ikeda andJefferys[35][36]argue that the anthropic principle as conventionally stated actually undermines intelligent design. Paul Davies's bookThe Goldilocks Enigma(2006) reviews the current state of the fine-tuning debate in detail, and concludes by enumerating the following responses to that debate:[7]: 261–267 Omitted here isLee Smolin's model ofcosmological natural selection, also known asfecund universes, which proposes that universes have "offspring" that are more plentiful if they resemble our universe. Also see Gardner (2005).[37] Clearly, each of these hypotheses resolves some aspects of the puzzle, while leaving others unanswered. Followers of Carter would admit only option 3 as an anthropic explanation, whereas 3 through 6 are covered by different versions of Barrow and Tipler's SAP (which would also include 7 if it is considered a variant of 4, as in Tipler 1994). The anthropic principle, at least as Carter conceived it, can be applied on scales much smaller than the whole universe. For example, Carter (1983)[38]inverted the usual line of reasoning and pointed out that when interpreting the evolutionary record, one must take into accountcosmologicalandastrophysicalconsiderations. With this in mind, Carter concluded that given the best estimates of theage of the universe, the evolutionary chain culminating inHomo sapiensprobably admits only one or two low probability links. No possible observational evidence bears on Carter's WAP, as it is merely advice to the scientist and asserts nothing debatable. The obvious test of Barrow's SAP, which says that the universe is "required" to support life, is to find evidence of life in universes other than ours. Any other universe is, by most definitions, unobservable (otherwise it would be included inourportion ofthisuniverse). Thus, in principle Barrow's SAP cannot be falsified by observing a universe in which an observer cannot exist. PhilosopherJohn Leslie[39]states that the Carter SAP (withmultiverse) predicts the following: Hogan[40]has emphasised that it would be very strange if all fundamental constants were strictly determined, since this would leave us with no ready explanation for apparent fine tuning. In fact, humans might have to resort to something akin to Barrow and Tipler's SAP: there would be no option for such a universenotto support life.
Probabilistic predictions of parameter values can be made given: The probability of observing valueXis then proportional toN(X)P(X). A generic feature of an analysis of this nature is that the expected values of the fundamental physical constants should not be "over-tuned", i.e. if there is some perfectly tuned predicted value (e.g. zero), the observed value need be no closer to that predicted value than what is required to make life possible. The small but finite value of thecosmological constantcan be regarded as a successful prediction in this sense. One thing that wouldnotcount as evidence for the anthropic principle is evidence that the Earth or theSolar Systemoccupied a privileged position in the universe, in violation of theCopernican principle(for possible counterevidence to this principle, seeCopernican principle), unless there was some reason to think that that position was anecessary conditionfor our existence as observers. Fred Hoylemay have invoked anthropic reasoning to predict an astrophysical phenomenon. He is said to have reasoned, from the prevalence on Earth of life forms whose chemistry was based oncarbon-12nuclei, that there must be an undiscoveredresonancein the carbon-12 nucleus facilitating its synthesis in stellar interiors via thetriple-alpha process. He then calculated the energy of this undiscovered resonance to be 7.6 millionelectronvolts.[41][42]Willie Fowler's research group soon found this resonance, and its measured energy was close to Hoyle's prediction. However, in 2010Helge Kraghargued that Hoyle did not use anthropic reasoning in making his prediction, since he made his prediction in 1953 and anthropic reasoning did not come into prominence until 1980. He called this an "anthropic myth", saying that Hoyle and others made an after-the-fact connection between carbon and life decades after the discovery of the resonance. An investigation of the historical circumstances of the prediction and its subsequent experimental confirmation shows that Hoyle and his contemporaries did not associate the level in the carbon nucleus with life at all.[43] Don Pagecriticized the entire theory ofcosmic inflationas follows.[44]He emphasized that initial conditions that made possible a thermodynamicarrow of timein a universe with aBig Bangorigin, must include the assumption that at the initial singularity, theentropyof the universe was low and therefore extremely improbable.Paul Daviesrebutted this criticism by invoking an inflationary version of the anthropic principle.[45]While Davies accepted the premise that the initial state of the visible universe (which filled a microscopic amount of space before inflating) had to possess a very low entropy value—due to random quantum fluctuations—to account for the observed thermodynamic arrow of time, he deemed this fact an advantage for the theory. That the tiny patch of space from which our observable universe grew had to be extremely orderly, to allow the post-inflation universe to have an arrow of time, makes it unnecessary to adopt any "ad hoc" hypotheses about the initial entropy state, hypotheses other Big Bang theories require. String theorypredicts a large number of possible universes, called the "backgrounds" or "vacua". 
The set of these vacua is often called the "multiverse" or "anthropic landscape" or "string landscape".Leonard Susskindhas argued that the existence of a large number of vacua puts anthropic reasoning on firm ground: only universes whose properties are such as to allow observers to exist are observed, while a possibly much larger set of universes lacking such properties go unnoticed.[33] Steven Weinberg[46]believes the anthropic principle may be appropriated bycosmologistscommitted tonontheism, and refers to that principle as a "turning point" in modern science because applying it to the string landscape "may explain how the constants of nature that we observe can take values suitable for life without being fine-tuned by a benevolent creator". Others—most notablyDavid Grossbut alsoLuboš Motl,Peter Woit, andLee Smolin—argue that this is not predictive.Max Tegmark,[47]Mario Livio, andMartin Rees[48]argue that only some aspects of a physical theory need be observable and/or testable for the theory to be accepted, and that many well-accepted theories are far from completely testable at present. Jürgen Schmidhuber(2000–2002) points out thatRay Solomonoff'stheory of universal inductive inferenceand its extensions already provide a framework for maximizing our confidence in any theory, given a limited sequence of physical observations, and someprior distributionon the set of possible explanations of the universe. Zhi-Wei WangandSamuel L. Braunsteinproved that life's existence in the universe depends on various fundamental constants. It suggests that without a complete understanding of these constants, one might incorrectly perceive the universe as being intelligently designed for life. This perspective challenges the view that our universe is unique in its ability to support life.[49] There are two kinds of dimensions:spatial(bidirectional) andtemporal(unidirectional).[51]Let the number of spatial dimensions beNand the number of temporal dimensions beT. ThatN= 3andT= 1, setting aside the compactified dimensions invoked bystring theoryand undetectable to date, can be explained by appealing to the physical consequences of lettingNdiffer from 3 andTdiffer from 1. The argument is often of an anthropic character and possibly the first of its kind, albeit before the complete concept came into vogue. The implicit notion that the dimensionality of the universe is special is first attributed toGottfried Wilhelm Leibniz, who in theDiscourse on Metaphysicssuggested that the world is "the one which is at the same time the simplest in hypothesis and the richest in phenomena".[52]Immanuel Kantargued that 3-dimensional space was a consequence of the inverse squarelaw of universal gravitation. While Kant's argument is historically important,John D. Barrowsaid that it "gets the punch-line back to front: it is the three-dimensionality of space that explains why we see inverse-square force laws in Nature, not vice-versa" (Barrow 2002:204).[note 3] In 1920,Paul Ehrenfestshowed that if there is only a single time dimension and more than three spatial dimensions, theorbitof aplanetabout its Sun cannot remain stable. The same is true of a star's orbit around the center of itsgalaxy.[53]Ehrenfest also showed that if there are an even number of spatial dimensions, then the different parts of awaveimpulse will travel at different speeds. If there are5+2k{\displaystyle 5+2k}spatial dimensions, wherekis a positive whole number, then wave impulses become distorted. 
In 1922,Hermann Weylclaimed thatMaxwell's theory ofelectromagnetismcan be expressed in terms of an action only for a four-dimensional manifold.[54]Finally, Tangherlini showed in 1963 that when there are more than three spatial dimensions, electronorbitalsaround nuclei cannot be stable; electrons would either fall into thenucleusor disperse.[55] Max Tegmarkexpands on the preceding argument in the following anthropic manner.[56]IfTdiffers from 1, the behavior of physical systems could not be predicted reliably from knowledge of the relevantpartial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, ifT> 1, Tegmark maintains thatprotonsandelectronswould be unstable and could decay into particles having greater mass than themselves. (This is not a problem if the particles have a sufficiently low temperature.)[56]Lastly, ifN< 3, gravitation of any kind becomes problematic, and the universe would probably be too simple to contain observers. For example, whenN< 3,nervescannot cross without intersecting.[56]Hence anthropic and other arguments rule out all cases exceptN= 3andT= 1, which describes the world around us. On the other hand, in view of creatingblack holesfrom an idealmonatomic gasunder its self-gravity, Wei-Xiang Feng showed that(3 + 1)-dimensional spacetime is the marginal dimensionality. Moreover, it is the uniquedimensionalitythat can afford a "stable" gas sphere with a "positive"cosmological constant. However, a self-gravitating gas cannot be stably bound if the mass sphere is larger than ~1021solar masses, due to the small positivity of the cosmological constant observed.[57] In 2019, James Scargill argued that complex life may be possible with two spatial dimensions. According to Scargill, a purely scalar theory of gravity may enable a local gravitational force, and 2D networks may be sufficient for complex neural networks.[58][59] Some of the metaphysical disputes and speculations include, for example, attempts to backPierre Teilhard de Chardin's earlier interpretation of the universe as being Christ centered (compareOmega Point), expressing acreatio evolutivainstead the elder notion ofcreatio continua.[60]From a strictly secular, humanist perspective, it allows as well to put human beings back in the center, an anthropogenic shift in cosmology.[60]Karl W. Giberson[61]has laconically stated that What emerges is the suggestion that cosmology may at last be in possession of some raw material for apostmoderncreation myth. William Sims Bainbridge disagreed with de Chardin's optimism about a future Omega point at the end of history, arguing that logically, humans are trapped at the Omicron point, in the middle of the Greek alphabet rather than advancing to the end, because the universe does not need to have any characteristics that would support our further technical progress, if the anthropic principle merely requires it to be suitable for our evolution to this point.[62] A thorough extant study of the anthropic principle is the bookThe Anthropic Cosmological PrinciplebyJohn D. Barrow, acosmologist, andFrank J. Tipler, a cosmologist andmathematical physicist. This book sets out in detail the many known anthropic coincidences and constraints, including many found by its authors. While the book is primarily a work of theoreticalastrophysics, it also touches onquantum physics,chemistry, andearth science. An entire chapter argues thatHomo sapiensis, with high probability, the onlyintelligent speciesin theMilky Way. 
The book begins with an extensive review of many topics in thehistory of ideasthe authors deem relevant to the anthropic principle, because the authors believe that principle has important antecedents in the notions ofteleologyandintelligent design. They discuss the writings ofFichte,Hegel,Bergson, andAlfred North Whitehead, and theOmega Pointcosmology ofTeilhard de Chardin. Barrow and Tipler carefully distinguishteleologicalreasoning fromeutaxiologicalreasoning; the former asserts that order must have a consequent purpose; the latter asserts more modestly that order must have a planned cause. They attribute this important but nearly always overlooked distinction to an obscure 1883 book by L. E. Hicks.[63] Seeing little sense in a principle requiring intelligent life to emerge while remaining indifferent to the possibility of its eventual extinction, Barrow and Tipler propose thefinal anthropic principle(FAP): Intelligent information-processing must come into existence in the universe, and, once it comes into existence, it will never die out.[64] Barrow and Tipler submit that the FAP is both a valid physical statement and "closely connected with moral values". FAP places strong constraints on the structure of theuniverse, constraints developed further in Tipler'sThe Physics of Immortality.[65]One such constraint is that the universe must end in aBig Crunch, which seems unlikely in view of the tentative conclusions drawn since 1998 aboutdark energy, based on observations of very distantsupernovas. In his review[66]of Barrow and Tipler,Martin Gardnerridiculed the FAP by quoting the last two sentences of their book as defining a completely ridiculous anthropic principle (CRAP): At the instant theOmega Pointis reached, life will have gained control ofallmatter and forces not only in a single universe, but in all universes whose existence is logically possible; life will have spread intoallspatial regions in all universes which could logically exist, and will have stored an infinite amount of information, includingallbits of knowledge that it is logically possible to know. And this is the end.[67] Carter has frequently expressed regret for his own choice of the word "anthropic", because it conveys the misleading impression that the principle involveshumans in particular, to the exclusion ofnon-human intelligencemore broadly.[68]Others[69]have criticised the word "principle" as being too grandiose to describe straightforward applications ofselection effects. A common criticism of Carter's SAP is that it is an easydeus ex machinathat discourages searches for physical explanations. To quote Penrose again: "It tends to be invoked by theorists whenever they do not have a good enough theory to explain the observed facts."[70] Carter's SAP and Barrow and Tipler's WAP have been dismissed astruismsor trivialtautologies—that is, statements true solely by virtue of theirlogical formand not because a substantive claim is made and supported by observation of reality. As such, they are criticized as an elaborate way of saying, "If things were different, they would be different",[citation needed]which is a valid statement, but does not make a claim of some factual alternative over another. Critics of the Barrow and Tipler SAP claim that it is neither testable norfalsifiable, and thus is not ascientific statementbut rather a philosophical one. The same criticism has been leveled against the hypothesis of amultiverse, although some argue[71]that it does make falsifiable predictions. 
A modified version of this criticism is that humanity understands so little about the emergence of life, especially intelligent life, that it is effectively impossible to calculate the number of observers in each universe. Also, the prior distribution of universes as a function of the fundamental constants is easily modified to get any desired result.[72] Many criticisms focus on versions of the strong anthropic principle, such as Barrow and Tipler'santhropic cosmological principle, which areteleologicalnotions that tend to describe the existence of life as anecessary prerequisitefor the observable constants of physics. Similarly,Stephen Jay Gould,[73][74]Michael Shermer,[75]and others claim that the stronger versions of the anthropic principle seem to reverse known causes and effects. Gould compared the claim that the universe is fine-tuned for the benefit of our kind of life to saying that sausages were made long and narrow so that they could fit into modern hotdog buns, or saying that ships had been invented to housebarnacles.[citation needed]These critics cite the vast physical, fossil, genetic, and other biological evidence consistent with life having beenfine-tunedthroughnatural selectionto adapt to the physical and geophysical environment in which life exists. Life appears to have adapted to the universe, and not vice versa. Some applications of the anthropic principle have been criticized as anargument by lack of imagination, for tacitly assuming that carbon compounds and water are the only possible chemistry of life (sometimes called "carbon chauvinism"; see alsoalternative biochemistry).[76]The range offundamental physical constantsconsistent with the evolution of carbon-based life may also be wider than those who advocate afine-tuned universehave argued.[77]For instance, Harnik et al.[78]propose aWeakless Universein which theweak nuclear forceis eliminated. They show that this has no significant effect on the otherfundamental interactions, provided some adjustments are made in how those interactions work. However, if some of the fine-tuned details of our universe were violated, that would rule out complex structures of any kind—stars,planets,galaxies, etc. Lee Smolinhas offered a theory designed to improve on the lack of imagination that has been ascribed to anthropic principles. He puts forth hisfecund universestheory, which assumes universes have "offspring" through the creation ofblack holeswhose offspring universes have values of physical constants that depend on those of the mother universe.[79] The philosophers of cosmologyJohn Earman,[80]Ernan McMullin,[81]andJesús Mosteríncontend that "in its weak version, the anthropic principle is a mere tautology, which does not allow us to explain anything or to predict anything that we did not already know. In its strong version, it is a gratuitous speculation".[82]A further criticism by Mosterín concerns the flawed "anthropic" inference from the assumption of an infinity of worlds to the existence of one like ours: The suggestion that an infinity of objects characterized by certain numbers or properties implies the existence among them of objects with any combination of those numbers or characteristics [...] is mistaken. An infinity does not imply at all that any arrangement is present or repeated. [...] 
The assumption that all possible worlds are realized in an infinite universe is equivalent to the assertion that any infinite set of numbers contains all numbers (or at least all Gödel numbers of the [defining] sequences), which is obviously false.
https://en.wikipedia.org/wiki/Anthropic_principle
Connectionismis an approach to the study of human mental processes and cognition that utilizes mathematical models known as connectionist networks or artificial neural networks.[1] Connectionism has had many "waves" since its beginnings. The first wave appeared in 1943 withWarren Sturgis McCullochandWalter Pitts, both focusing on comprehending neural circuitry through a formal and mathematical approach,[2]andFrank Rosenblatt, who published the 1958 paper "The Perceptron: A Probabilistic Model For Information Storage and Organization in the Brain" inPsychological Review, while working at the Cornell Aeronautical Laboratory.[3]The first wave ended with the 1969 book about the limitations of the original perceptron idea, written byMarvin MinskyandSeymour Papert, which contributed to discouraging major funding agencies in the US from investing in connectionist research.[4]With a few noteworthy deviations, most connectionist research entered a period of inactivity until the mid-1980s. The termconnectionist modelwas reintroduced in a 1982 paper in the journalCognitive Scienceby Jerome Feldman and Dana Ballard. The second wave blossomed in the late 1980s, following a 1987 book about Parallel Distributed Processing byJames L. McClelland,David E. Rumelhartet al., which introduced a couple of improvements to the simple perceptron idea, such as intermediate processors (now known as "hidden layers") alongside input and output units, and used asigmoidactivation functioninstead of the old "all-or-nothing" function. Their work built upon that ofJohn Hopfield, who was a key figure investigating the mathematical characteristics of sigmoid activation functions.[3]From the late 1980s to the mid-1990s, connectionism took on an almost revolutionary tone when Schneider,[5]Terence Horganand Tienson posed the question of whether connectionism represented afundamental shiftin psychology and so-called "good old-fashioned AI," orGOFAI.[3]Some advantages of the second wave connectionist approach included its applicability to a broad array of functions, structural approximation to biological neurons, low requirements for innate structure, and capacity forgraceful degradation.[6]Its disadvantages included the difficulty in deciphering how ANNs process information or account for the compositionality of mental representations, and a resultant difficulty explaining phenomena at a higher level.[7] The current (third) wave has been marked by advances indeep learning, which have made possible the creation oflarge language models.[3]The success of deep-learning networks in the past decade has greatly increased the popularity of this approach, but the complexity and scale of such networks have brought with them increasedinterpretability problems.[8] The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units. The form of the connections and the units can vary from model to model. For example, units in the network could representneuronsand the connections could representsynapses, as in thehuman brain. This principle has been seen as an alternative to GOFAI and the classicaltheories of mindbased on symbolic computation, but the extent to which the two approaches are compatible has been the subject of much debate since their inception.[8] Internal states of any network change over time due to neurons sending a signal to a succeeding layer of neurons in the case of a feedforward network, or to a previous layer in the case of a recurrent network.
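The kind of network this principle describes can be made concrete with a minimal sketch: a tiny feedforward network with one hidden layer of sigmoid units, trained by error backpropagation on the XOR task. The architecture, learning rate, task and random seed are illustrative assumptions chosen for this example, not a reconstruction of any particular model from the connectionist literature.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: a classic task that a single-layer perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Input units -> a hidden layer of 8 sigmoid units -> one sigmoid output unit.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
lr = 2.0

for _ in range(10_000):
    # Forward pass: activation flows from the input units through the hidden layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back along the connections.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2).ravel())   # typically approaches [0, 1, 1, 0]
```

After training, the four outputs typically approach the XOR targets 0, 1, 1, 0, illustrating how simple interconnected units can jointly compute a function that no single unit computes on its own.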
Discovery of non-linear activation functions has enabled the second wave of connectionism. Neural networks follow two basic principles: Most of the variety among the models comes from: Connectionist work in general does not need to be biologically realistic.[10][11][12][13][14][15][16]One area where connectionist models are thought to be biologically implausible is with respect to error-propagation networks that are needed to support learning,[17][18]but error propagation can explain some of the biologically-generated electrical activity seen at the scalp inevent-related potentialssuch as theN400andP600,[19]and this provides some biological support for one of the key assumptions of connectionist learning procedures. Many recurrent connectionist models also incorporatedynamical systems theory. Many researchers, such as the connectionistPaul Smolensky, have argued that connectionist models will evolve toward fullycontinuous, high-dimensional,non-linear,dynamic systemsapproaches. Precursors of the connectionist principles can be traced to early work inpsychology, such as that ofWilliam James.[20]Psychological theories based on knowledge about the human brain were fashionable in the late 19th century. As early as 1869, the neurologistJohn Hughlings Jacksonargued for multi-level, distributed systems. Following from this lead,Herbert Spencer'sPrinciples of Psychology, 3rd edition (1872), andSigmund Freud'sProject for a Scientific Psychology(composed 1895) propounded connectionist or proto-connectionist theories. These tended to be speculative theories. But by the early 20th century,Edward Thorndikewas writing abouthuman learningin terms that posited a connectionist-type network.[21] Hopfield networks had precursors in theIsing modeldue toWilhelm Lenz(1920) andErnst Ising(1925), though the Ising model conceived by them did not involve time.Monte Carlosimulations of the Ising model required the advent of computers in the 1950s.[22] The first wave began in 1943 withWarren Sturgis McCullochandWalter Pitts, both focusing on comprehending neural circuitry through a formal and mathematical approach. McCulloch and Pitts showed how neural systems could implementfirst-order logic; their classic paper "A Logical Calculus of Ideas Immanent in Nervous Activity" (1943) is important in this development. They were influenced by the work ofNicolas Rashevskyin the 1930s and symbolic logic in the style ofPrincipia Mathematica.[23][3] Hebbcontributed greatly to speculations about neural functioning, and proposed a learning principle,Hebbian learning.Lashleyargued for distributed representations as a result of his failure to find anything like a localizedengramin years oflesionexperiments.Friedrich Hayekindependently conceived the model, first in a brief unpublished manuscript in 1920,[24][25]then expanded into a book in 1952.[26] The Perceptron machines were proposed and built byFrank Rosenblatt, who published the 1958 paper “The Perceptron: A Probabilistic Model For Information Storage and Organization in the Brain” inPsychological Review, while working at the Cornell Aeronautical Laboratory. He cited Hebb, Hayek, Uttley, andAshbyas main influences. Another form of connectionist model was therelational networkframework developed by thelinguistSydney Lambin the 1960s.
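The first-wave idea that networks of simple threshold units can implement logical operations can be illustrated with a brief sketch; the particular weights and thresholds below are illustrative choices in the spirit of McCulloch–Pitts units, not values taken from the 1943 paper.

```python
def mcp_unit(inputs, weights, threshold):
    """A McCulloch-Pitts style unit: it fires (returns 1) if the weighted sum of
    its binary inputs reaches the threshold, and otherwise stays silent (0)."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(a, b):
    return mcp_unit([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return mcp_unit([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    return mcp_unit([a], weights=[-1], threshold=0)

def XOR(a, b):
    # Chaining units in layers yields a function no single threshold unit can compute.
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```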
The research group led by Widrow empirically searched for methods to train two-layeredADALINEnetworks (MADALINE), with limited success.[27][28] A method to train multilayered perceptrons with arbitrary levels of trainable weights was published byAlexey Grigorevich Ivakhnenkoand Valentin Lapa in 1965, called theGroup Method of Data Handling. This method employs incremental layer-by-layer training based onregression analysis, where useless units in hidden layers are pruned with the help of a validation set.[29][30][31] The first multilayered perceptron trained bystochastic gradient descent[32]was published in 1967 byShun'ichi Amari.[33]In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned usefulinternal representationsto classify non-linearly separable pattern classes.[30] In 1972,Shun'ichi Amariproduced an early example of aself-organizing network.[34] There was some conflict among artificial intelligence researchers as to what neural networks are useful for. Around the late 1960s, there was a widespread lull in research and publications on neural networks, "the neural network winter", which lasted through the 1970s, during which the field of artificial intelligence turned towards symbolic methods. The publication ofPerceptrons(1969) is typically regarded as a catalyst of this event.[35][36] The second wave began in the early 1980s. Some key publications included (John Hopfield, 1982)[37]which popularizedHopfield networks, the 1986 paper that popularized backpropagation,[38]and the 1987 two-volume book onParallel Distributed Processing(PDP) byJames L. McClelland,David E. Rumelhartet al., which introduced a couple of improvements to the simple perceptron idea, such as intermediate processors (now known as "hidden layers") alongside input and output units, and the use of asigmoidactivation functioninstead of the old 'all-or-nothing' function. Hopfield approached the field from the perspective of statistical mechanics, providing some early forms of mathematical rigor that increased the perceived respectability of the field.[3]Another important series of publications proved that neural networks areuniversal function approximators, which also provided some mathematical respectability.[39] Some early popular demonstration projects appeared during this time.NETtalk(1987) learned to pronounce written English. It achieved popular success, appearing on theTodayshow.[40]TD-Gammon(1992) reached top human level inbackgammon.[41] As connectionism became increasingly popular in the late 1980s, some researchers (includingJerry Fodor,Steven Pinkerand others) reacted against it. They argued that connectionism, as then developing, threatened to obliterate what they saw as the progress being made in the fields of cognitive science and psychology by the classical approach ofcomputationalism. Computationalism is a specific form of cognitivism that argues that mental activity iscomputational, that is, that the mind operates by performing purely formal operations on symbols, like aTuring machine. Some researchers argued that the trend in connectionism represented a reversion towardassociationismand the abandonment of the idea of alanguage of thought, something they saw as mistaken. In contrast, those very tendencies made connectionism attractive for other researchers. Connectionism and computationalism need not be at odds, but the debate in the late 1980s and early 1990s led to opposition between the two approaches.
Throughout the debate, some researchers have argued that connectionism and computationalism are fully compatible, though full consensus on this issue has not been reached. Differences between the two approaches include the following: Despite these differences, some theorists have proposed that the connectionist architecture is simply the manner in which organic brains happen to implement the symbol-manipulation system. This is logically possible, as it is well known that connectionist models can implement symbol-manipulation systems of the kind used in computationalist models,[42]as indeed they must be able to do if they are to explain the human ability to perform symbol-manipulation tasks. Several cognitive models combining both symbol-manipulative and connectionist architectures have been proposed. Among them arePaul Smolensky's Integrated Connectionist/Symbolic Cognitive Architecture (ICS)[8][43]andRon Sun'sCLARION (cognitive architecture). But the debate rests on whether this symbol manipulation forms the foundation of cognition in general, so this is not a potential vindication of computationalism. Nonetheless, computational descriptions may be helpful high-level descriptions of, for example, the cognition of logic. The debate was largely centred on logical arguments about whether connectionist networks could produce the syntactic structure observed in this sort of reasoning. This was later achieved, although by using fast-variable binding abilities outside of those standardly assumed in connectionist models.[42][44] Part of the appeal of computational descriptions is that they are relatively easy to interpret, and thus may be seen as contributing to our understanding of particular mental processes, whereas connectionist models are in general more opaque, to the extent that they may be describable only in very general terms (such as specifying the learning algorithm, the number of units, etc.), or in unhelpfully low-level terms. In this sense, connectionist models may instantiate, and thereby provide evidence for, a broad theory of cognition (i.e., connectionism), without representing a helpful theory of the particular process that is being modelled. In this sense, the debate might be considered as to some extent reflecting a mere difference in the level of analysis in which particular theories are framed. Some researchers suggest that the analysis gap is the consequence of connectionist mechanisms giving rise toemergent phenomenathat may be describable in computational terms.[45] In the 2000s, the popularity ofdynamical systemsinphilosophy of mindhas added a new perspective on the debate;[46][47]some authors[which?]now argue that any split between connectionism and computationalism is more conclusively characterized as a split between computationalism anddynamical systems. In 2014,Alex Gravesand others fromDeepMindpublished a series of papers describing a novel Deep Neural Network structure called theNeural Turing Machine[48]that is able to read symbols on a tape and store symbols in memory. Relational Networks, another Deep Network module published by DeepMind, are able to create object-like representations and manipulate them to answer complex questions. Relational Networks and Neural Turing Machines are further evidence that connectionism and computationalism need not be at odds. Smolensky's Subsymbolic Paradigm[49][50]has to meet the Fodor-Pylyshyn challenge[51][52][53][54]formulated by classical symbol theory for a convincing theory of cognition in modern connectionism.
In order to be an adequate alternative theory of cognition, Smolensky's Subsymbolic Paradigm would have to explain the existence of systematicity or systematic relations in language cognition without the assumption that cognitive processes are causally sensitive to the classical constituent structure of mental representations. The subsymbolic paradigm, or connectionism in general, would thus have to explain the existence of systematicity and compositionality without relying on the mere implementation of a classical cognitive architecture. This challenge implies a dilemma: If the Subsymbolic Paradigm could contribute nothing to the systematicity and compositionality of mental representations, it would be insufficient as a basis for an alternative theory of cognition. However, if the Subsymbolic Paradigm's contribution to systematicity requires mental processes grounded in the classical constituent structure of mental representations, the theory of cognition it develops would be, at best, an implementation architecture of the classical model of symbol theory and thus not a genuine alternative (connectionist) theory of cognition.[55]The classical model of symbolism is characterized by (1) a combinatorial syntax and semantics of mental representations and (2) mental operations as structure-sensitive processes, based on the fundamental principle of syntactic and semantic constituent structure of mental representations as used in Fodor's "Language of Thought (LOT)".[56][57]This can be used to explain the following closely related properties of human cognition, namely its (1) productivity, (2) systematicity, (3) compositionality, and (4) inferential coherence.[58] This challenge has been met in modern connectionism, for example, not only by Smolensky's "Integrated Connectionist/Symbolic (ICS) Cognitive Architecture",[59][60]but also by Werning and Maye's "Oscillatory Networks".[61][62][63]An overview of this is given for example by Bechtel & Abrahamsen,[64]Marcus[65]and Maurer.[66] Recently, Heng Zhang and his colleagues have demonstrated that mainstream knowledge representation formalisms are, in fact, recursively isomorphic, provided they possess equivalent expressive power.[67]This finding implies that there is no fundamental distinction between using symbolic or connectionist knowledge representation formalisms for the realization ofartificial general intelligence(AGI). Moreover, the existence of recursive isomorphisms suggests that different technical approaches can draw insights from one another.
https://en.wikipedia.org/wiki/Connectionism
Inpsychology, a trait (orphenotype) is calledemergenicif it is the result of a specific combination of several interactinggenes(rather than of a simple sum of several independent genes). Emergenic traits will not run in families, butidentical twinswill share them. Traits such as "leadership", "genius" or certainmental illnessesmay be emergenic. Although one may expectepigeneticsto play a significant role in the phenotypic manifestation of twins reared apart, the concordance displayed between them can be attributed to emergenesis.[1]
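A small simulation can make the idea concrete. The model below is purely hypothetical (haploid genotypes, five invented loci, an arbitrary additive threshold): a configural trait that requires one specific combination of alleles is fully concordant in identical twins, yet far less concordant in ordinary siblings than a comparable additive trait, which is the signature of emergenesis described above.

```python
import random

random.seed(1)
LOCI = 5  # the configural trait requires the "1" allele at every one of these loci

def genotype():
    return tuple(random.randint(0, 1) for _ in range(LOCI))

def child(mother, father):
    # simplified inheritance: each locus copied from one parent at random
    return tuple(random.choice((m, f)) for m, f in zip(mother, father))

def emergenic(g):   # configural: the exact combination must be present
    return all(g)

def additive(g):    # contrast case: a simple sum of independent effects
    return sum(g) >= 3

def concordance(pairs, trait):
    both = sum(trait(a) and trait(b) for a, b in pairs)
    either = sum(trait(a) or trait(b) for a, b in pairs)
    return both / either if either else 0.0

mz_twins, siblings = [], []
for _ in range(100_000):
    mom, dad = genotype(), genotype()
    kid = child(mom, dad)
    mz_twins.append((kid, kid))              # identical genotypes
    siblings.append((kid, child(mom, dad)))  # same parents, independent inheritance

print("emergenic trait:", concordance(mz_twins, emergenic), concordance(siblings, emergenic))
print("additive trait: ", concordance(mz_twins, additive), concordance(siblings, additive))
```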
https://en.wikipedia.org/wiki/Emergenesis
An emergent algorithm is an algorithm that exhibits emergent behavior. In essence, an emergent algorithm implements a set of simple building-block behaviors that, when combined, exhibit more complex behaviors. One example of this is the implementation of fuzzy motion controllers used to adapt robot movement in response to environmental obstacles.[1] Other examples of emergent algorithms and models include cellular automata,[2] artificial neural networks and swarm intelligence systems (ant colony optimization, the bees algorithm, etc.).
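An elementary cellular automaton is perhaps the smallest runnable example of an emergent algorithm. The sketch below (a generic Wolfram-style rule, here rule 90, chosen only for illustration) applies one local three-cell update everywhere; the global triangular pattern that appears is not coded anywhere in the rule itself.

```python
def step(cells, rule=90):
    """Apply an elementary cellular-automaton rule to one row of 0/1 cells."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right   # neighbourhood as a 3-bit number
        out.append((rule >> index) & 1)               # look up the corresponding rule bit
    return out

width, steps = 63, 32
row = [0] * width
row[width // 2] = 1                                   # a single seed cell
for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = step(row)                                   # only local interactions happen here
```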
https://en.wikipedia.org/wiki/Emergent_algorithm
Emergent evolutionis thehypothesisthat, in the course ofevolution, some entirely new properties, such asmindandconsciousness, appear at certain critical points, usually because of an unpredictable rearrangement of the already existing entities. The term was originated by the psychologistC. Lloyd Morganin 1922 in hisGifford Lecturesat St. Andrews, which would later be published as the 1923 bookEmergent Evolution.[1][2] The hypothesis has been widely criticized for providing no mechanism to how entirely new properties emerge, and for its historical roots inteleology.[2][3][4]Historically, emergent evolution has been described as an alternative tomaterialismandvitalism.[5]Interest in emergent evolution was revived by biologist Robert G. B. Reid in 1985.[6][7][8] Emergent evolution is distinct from the hypothesis of Emergent Evolutionary Potential (EEP) which was introduced in 2019 by Gene Levinson. In EEP, the scientific mechanism of Darwinian natural selection tends to preserve new, more complex entities that arise from interactions between previously existing entities, when those interactions prove useful, by trial-and error, in the struggle for existence. Biological organization arising via EEP is complementary to organization arising via gradual accumulation of incremental variation.[9] The termemergentwas first used to describe the concept byGeorge Lewesin volume two of his 1875 bookProblems of Life and Mind(p. 412).Henri Bergsoncovered similar themes in his popular 1907 bookCreative Evolutionon theÉlan vital. Emergence was further developed bySamuel Alexanderin hisGifford LecturesatGlasgowduring 1916–18 and published asSpace, Time, and Deity(1920). The related termemergent evolutionwas coined byC. Lloyd Morganin his own Gifford lectures of 1921–22 atSt. Andrewsand published asEmergent Evolution(1923). In an appendix to a lecture in his book, Morgan acknowledged the contributions ofRoy Wood Sellars'sEvolutionary Naturalism(1922). Charles DarwinandAlfred Russel Wallace's presentation ofnatural selection, coupled to the idea of evolution in Western thought, had gained acceptance due to the wealth of observational data provided and the seeming replacement of divine law with natural law in the affairs of men.[10]However, the mechanism ofnatural selectiondescribed at the time only explained how organisms adapted to variation. The cause of genetic variation was unknown at the time. Darwin knew that nature had to produce variations before natural selection could act …The problem had been caught by other evolutionists almost as soon asThe Origin of Specieswas first published.Sir Charles Lyellsaw it clearly in 1860 before he even became an evolutionist…(Reid, p.3)[10] St. George Jackson Mivart'sOn the Genesis of Species(1872) andEdward Cope'sOrigin of the Fittest(1887) raised the need to address the origin of variation between members of a species.William Batesonin 1884 distinguished between the origin of novel variations and the action of natural selection (Materials for the Study of Variation Treated with Especial Regard to Discontinuity in the Origin of Species).[10] Wallace throughout his life continued to support and extend the scope of Darwin's theory of evolution via the mechanism of natural selection. One of his works,Darwinism, was often cited in support of Darwin's theory. He also worked to elaborate and extend Darwin and his ideas on natural selection. 
However, Wallace also realized that, as Darwin himself had admitted, the scope and claim of the theory was limited: the most prominent feature is that I enter into popular yet critical examination of those underlying fundamental problems which Darwin purposely excluded from his works as being beyond the scope of his enquiry. Such are the nature and cause of Life itself, and more especially of its most fundamental and mysterious powers - growth and reproduction ... Darwin always ... adduced the "laws of Growth with Reproduction," and of "Inheritance with Variability," as being fundamental facts of nature, without which Natural Selection would be powerless or even non-existent ... ... even if it were proved to be an exact representation of the facts, it would not be an explanation... because it would not account for the forces, the directive agency, and the organising power which are essential features of growth …[11] In examining this aspect, excludedab initioby Darwin, Wallace came to the conclusion that Life in its essence cannot be understood except through "an organising and directive Life-Principle." These necessarily involve a "Creative Power" possessed of a "directive Mind" working toward "an ultimate Purpose" (the development of Man). It supports the view ofJohn Hunterthat "life is the cause, not the consequence" of the organisation of matter. Thus, life precedes matter and infuses it to form living matter (protoplasm). A very well-founded doctrine, and one which was often advocated by John Hunter, that life is the cause and not the consequence of organisation ... if so, life must be antecedent to organisation, and can only be conceived as indissolubly connected with spirit and with thought, and with the cause of the directive energy everywhere manifested in the growth of living things ... endowed with the mysterious organising power we term life ...[11] Wallace then refers to the operation of another power called "mind" that utilizes the power of life and is connected with a higher realm than life or matter: evidence of a foreseeing mind which...so directed and organised that life, in all its myriad forms, as, in the far-off future, to provide all that was most essential for the growth and development of man's spiritual nature ...[11] Proceeding from Hunter's view that Life is the directive power above and behind living matter, Wallace argues that logically, Mind is the cause ofconsciousness, which exists in different degrees and kinds in living matter. If, as John Hunter, T.H. Huxley, and other eminent thinkers have declared, "life is the cause, not the consequence, of organisation," so we may believe that mind is the cause, not the consequence, of brain development. ... So there are undoubtedly different degrees and probably also different kinds of mind in various grades of animal life ... And ... so the mind-giver ... enables each class or order of animals to obtain the amount of mind requisite for its place in nature ...[11] The issue of how order emerged from primordial chaos, by chance or necessity, can be found in classical Greek thought.Aristotleasserted that a whole can be greater than the sum of its parts because of emergent properties. The second-century anatomist and physiologistGalenalso distinguished between the resultant and emergent qualities of wholes. (Reid, p. 
72)[10] Hegelspoke of the revolutionary progression of life from non-living to conscious and then to the spiritual and Kant perceived that simple parts of an organism interact to produce a progressively complex series of emergences of functional forms, a distinction that carried over toJohn Stuart Mill(1843), who stated that even chemical compounds have novel features that cannot be predicted from their elements. [Reid, p. 72][10] The idea of an entirely novel emergent quality was further taken up byGeorge Henry Lewes(1874–1875), who reiterated Galen's distinction between evolutionary "emergent" qualities and adaptive, additive "resultants."Henry DrummondinThe Descent of Man(1894) stated that emergence can be seen in the fact that the laws of nature are different for the organic or vital compared to inert inorganic matter. When we pass from the inorganic to the organic we come upon a new set of laws - but the reason why the lower set do not seem to operate in the higher sphere is not that they are annhilated, but that they are overruled. (Drummond 1883, p. 405, quoted in Reid)[10] As Reid points out, Drummond also realized that greater complexity brought greater adaptability. (Reid. p. 73)[10] Samuel Alexandertook up the idea that emergences had properties that overruled the demands of the lower levels of organization. And more recently, this theme is taken up by John Holland (1998): If we turn reductionism on its head we add levels. More carefully, we add new laws that satisfy the constraints imposed by laws already in place. Moreover these new laws apply to complex phenomena that are consequences of the original laws; they are at a new level.[12] Another major scientist to question natural selection as the motive force of evolution wasC. Lloyd Morgan, a zoologist and student ofT.H. Huxley, who had a strong influence on Samuel Alexander. HisEmergent Evolution(1923) established the central idea that an emergence might have the appearance ofsaltationbut was best regarded as "a qualitative change of direction or critical turning point."(quoted in Reid, p. 73-74)[10]Morgan, due to his work in animal psychology, had earlier (1894) questioned the continuity view of mental evolution, and held that there were various discontinuities in cross-species mental abilities. To offset any attempt to readanthropomorphisminto his view, he created the famous, but often misunderstood methodological canon: In no case may we interpret an action as the outcome of the exercise of a higher psychical faculty, if it can be interpreted as the outcome of the exercise of one which stands lower in the psychological scale. However, Morgan realizing that this was being misused to advocate reductionism (rather than as a general methodological caution), introduced a qualification into the second edition of hisAn Introduction to Comparative Psychology(1903): To this, however, it should be added, lest the range of the principle be misunderstood, that the canon by no means excludes the interpretation of a particular activity in terms of the higher processes, if we already have independent evidence of the occurrence of these higher processes in the animal under observation. 
As Reid observes, While the so-called historiographical "rehabilitation of the canon" has been underway for some time now, Morgan's emergent evolutionist position (which was the highest expression of his attempt to place the study of mind back into such a "wider" natural history) is seldom mentioned in more than passing terms even within contemporary history of psychology textbooks.[10] Morgan also fought against thebehaviorist schooland clarified even more his emergent views on evolution: An influential school of 'behaviorists' roundly deny that mental relations, if such there be, are in any sense or in any manner effective... My message is that one may speak of mental relations as effective no less 'scientifically' than... physical relations... HisAnimal Conduct(1930) explicitly distinguishes between three "grades" or "levels of mentality" which he labeled: 'percipient, perceptive, and reflective.' (p. 42) Morgan's idea of a polaric relationship between lower and higher, was taken up by Samuel Alexander, who argued that the mental process is not reducible to the neural processes on which it depends at the physical-material level. Instead, they are two poles of a unity of function. Further, the neural process that expressed mental process itself possesses a quality (mind) that the other neural processes don’t. At the same time, the mental process, because it is functionally identical to this particular neural process, is also a vital one.[13] And mental process is also "something new, "a fresh creation", which precludes a psycho-physiological parallelism. Reductionism is also contrary to empirical fact. At the same time Alexander stated that his view was not one of animism or vitalism, where the mind is an independent entity action on the brain, or conversely, acted upon by the brain. Mental activity is an emergent, new "thing" not reducible to its initial neural parts. All the available evidence of fact leads to the conclusion that the mental element is essential to the neural process which it is said to accompany...and is not accidental to it, nor is it in turn indifferent to the mental feature. Epiphenomenalism is a mere fallacy of observation.[13] For Alexander, the world unfolds in space-time, which has the inherent quality of motion. This motion through space-time results in new “complexities of motion” in the form of a new quality or emergent. The emergent retains the qualities of the prior “complexities of motion” but also has something new that was not there before. This something new comes with its own laws of behavior. Time is the quality that creates motion through Space, and matter is simply motion expressed in forms in Space, or as Alexander says a little later, “complexes of motion.” Matter arises out of the basic ground of Space-Time continuity and has an element of “body” (lower order) and an element of “mind” (higher order), or “the conception that a secondary quality is the mind of its primary substrate.” Mind is an emergent from life and life itself is an emergent from matter. Each level contains and is interconnected with the level and qualities below it, and to the extent that it contains lower levels, these aspects are subject to the laws of that level. All mental functions are living, but not all living functions are mental; all living functions are physico-chemical, but not all physico-chemical processes are living - just as we could say that all people living in Ohio are Americans, but not all Americans live in Ohio. 
Thus, there are levels of existence, or natural jurisdictions, within a given higher level such that the higher level contains elements of each of the previous levels of existence. The physical level contains the pure dimensionality of Space-Time in addition to the emergent of physico-chemical processes; the next emergent level, life, also contains Space-Time as well as the physico-chemical in addition to the quality of life; the level of mind contains all of the previous three levels, plus consciousness. As a result of this nesting and inter-action of emergents, like fluid Russian dolls, higher emergents cannot be reduced to lower ones, and different laws and methods of inquiry are required for each level. Life is not an epiphenomenon of matter but an emergent from it ... The new character or quality which the vital physico-chemical complex possesses stands to it as soul or mind to the neural basis.[13] For Alexander, the "directing agency" or entelechy is found "in the principle or plan". a given stage of material complexity is characterised by such and such special features…By accepting this we at any rate confine ourselves to noting the facts…and do not invent entities for which there seems to be no other justification than that something is done in life which is not done in matter.[13] While an emergent is a higher complexity, it also results in a new simplicity as it brings a higher order into what was previously less ordered (a new simplex out of a complex). This new simplicity does not carry any of the qualities or aspects of that emergent level prior to it, but as noted, does still carry within it such lower levels so can be understood to that extent through the science of such levels, yet not itself be understood except by a science that is able to reveal the new laws and principles applicable to it. Ascent takes place, it would seem, through complexity.[increasing order] But at each change of quality the complexity as it were gathers itself together and is expressed in a new simplicity. Within a given level of emergence, there are degrees of development. ... There are on one level degrees of perfection or development; and at the same time there is affinity by descent between the existents belonging to the level. This difference of perfection is not the same thing as difference of order or rank such as subsists between matter and life or life and mind ...[13] The concept or idea of mind, the highest emergent known to us, being at our level, extends all the way down to pure dimensionality or Space-Time. In other words, time is the “mind” of motion, materialising is the “mind” of matter, living the “mind” of life. Motion through pure time (or life astronomical, mind ideational) emerges as matter “materialising” (geological time, life geological, mind existential), and this emerges as life “living” (biological time, life biological, mind experiential), which in turn give us mind “minding” (historical time, life historical, mind cognitional). But there is also an extension possible upwards of mind to what we call Deity. let us describe the empirical quality of any kind of finite which performs to it the office of consciousness or mind as its 'mind.' Yet at the same time let us remember that the 'mind' of a living thing is not conscious mind but is life, and has not the empirical character of consciousness at all, and that life is not merely a lower degree of mind or consciousness, but something different. 
We are using 'mind' metaphorically by transference from real minds and applying it to the finites on each level in virtue of their distinctive quality; down to Space-Time itself whose existent complexes of bare space-time have for their mind bare time in its empirical variations.[13] Alexander goes back to the Greek idea of knowledge being “out there” in the object being contemplated. In that sense, there is not mental object (concept) “distinct” (that is, different in state of being) from the physical object, but only an apparent split between the two, which can then be brought together by proper compresence or participation of the consciousness in the object itself. There is no consciousness lodged, as I have supposed, in the organism as a quality of the neural response; consciousness belongs to the totality of objects, of what are commonly called the objects of consciousness or the field of consciousness ... Consciousness is therefore "out there" where the objects are, by a new version of Berkleyanism ... Obviously for this doctrine as for mine there is no mental object as distinct from a physical object: the image of a tree is a tree in an appropriate form...[13] Because of the interconnectedness of the universe by virtue of Space-Time, and because the mind apprehends space, time and motion through a unity of sense and mind experience, there is a form of knowing that is intuitive (participative) - sense and reason are outgrowths from it. In being conscious of its own space and time, the mind is conscious of the space and time of external things and vice versa. This is a direct consequence of the continuity of Space-Time in virtue of which any point-instant is connected sooner or later, directly or indirectly, with every other... The mind therefore does not apprehend the space of its objects, that is their shape, size and locality, by sensation, for it depends for its character on mere spatio-temporal conditions, though it is not to be had as consciousness in the absence of sensation (or else of course ideation). It is clear without repeating these considerations that the same proposition is true of Time; and of motion ... I shall call this mode of apprehension in its distinction from sensation, intuition. ... Intuition is different from reason, but reason and sense alike are outgrowths from it, empirical determinations of it...[13] In a sense, the universe is a participative one and open to participation by mind as well so that mind can intuitively know an object, contrary to what Kant asserted. Participation (togetherness) is something that is “enjoyed” (experienced) not contemplated, though in the higher level of consciousness, it would be contemplated. The universe for Alexander is essentially in process, with Time as its ongoing aspect, and the ongoing process consists in the formation of changing complexes of motions. These complexes become ordered in repeatable ways displaying what he calls "qualities." There is a hierarchy of kinds of organized patterns of motions, in which each level depends on the subvening level, but also displays qualities not shown at the subvening level nor predictable from it… On this there sometimes supervenes a further level with the quality called "life"; and certain subtle syntheses which carry life are the foundation for a further level with a new quality. "mind." This is the highest level known to us, but not necessarily the highest possible level. 
The universe has a forward thrust, called its "nisus" (broadly to be identified with the Time aspect) in virtue of which further levels are to be expected...[14] Emergent evolution was revived by Robert G. B. Reid (March 20, 1939 - May 28, 2016), a biology professor at theUniversity of Victoria(in British Columbia, Canada). In his bookEvolutionary Theory: The Unfinished Synthesis(1985), he stated that themodern evolutionary synthesiswith its emphasis onnatural selectionis an incomplete picture of evolution, and emergent evolution can explain the origin of genetic variation.[6][7][8]BiologistErnst Mayrheavily criticized the book claiming it was a misinformed attack on natural selection. Mayr commented that Reid was working from an "obsolete conceptual framework", provided no solid evidence and that he was arguing for ateleological process of evolution.[15]In 2004, biologist Samuel Scheiner stated that Reid's "presentation is both a caricature of evolutionary theory and severely out of date."[16] Reid later published the bookBiological Emergences(2007) with a theory on how emergent novelties are generated in evolution.[17][18]According toMassimo Pigliucci"Biological Emergences by Robert Reid is an interesting contribution to the ongoing debate on the status of evolutionary theory, but it is hard to separate the good stuff from the more dubious claims." Pigliucci noted a dubious claim in the book is that natural selection has no role in evolution.[19]It was positively reviewed by biologist Alexander Badyaev who commented that "the book succeeds in drawing attention to an under appreciated aspect of the evolutionary process".[20]Others have criticized Reid's unorthodox views on emergence and evolution.
https://en.wikipedia.org/wiki/Emergent_evolution
Emergent gameplayrefers to complex situations invideo games,board games, orrole-playing gamesthatemergefrom the interaction of relatively simplegame mechanics.[1] Designers have attempted to encourage emergent play by providing tools to players such as placingweb browserswithin the game engine (such as inEve Online,The Matrix Online), providingXMLintegration tools andprogramming languages(Second Life), fixing exchange rates (Entropia Universe), and allowing a player tospawnany object they desire to solve a puzzle (Scribblenauts).[citation needed] Intentional emergence occurs when some creative uses of the game are intended by the game designers. Since the 1970s and 1980s board games and role playing games such asCosmic EncounterorDungeons & Dragonshave featured intentional emergence as a primary game function by supplying players with relatively simple rules or frameworks for play that intentionally encouraged them to explore creative strategies or interactions and exploit them toward victory or goal achievement.[citation needed] Immersive sims, such asDeus ExandSystem Shock, are games built around emergent gameplay. These games give the player-character a range of abilities and tools, and a consistent game world established by rules, but do not enforce any specific solution onto the player, though the player may be guided into suggested solutions. To move past a guard blocking a door, the player could opt to directly attack the guard, sneak up and knock the guard unconscious, distract the guard to move away from their post, or use parkour to reach an alternative opening well out of sight, among other solutions. In such games, it may be possible to complete in-game problems using solutions that the game designers did not foresee; for example inDeus Ex, designers were surprised to find players using wall-mounted mines aspitonsfor climbing walls.[2][3][4]A similar concept exists forroguelikegames, where emergent gameplay is considered a high-value factor by the 2008Berlin Interpretationfor roguelikes.[5] Such emergence may also occur in games throughopen-ended gameplayand sheer weight of simulated content, like inMinecraft,Dwarf FortressorSpace Station 13. These games do not have any endgame criteria though they do, similarly to immersive sims, present a consistent and rule-based world. These games will often present the player with tutorials of what they could do within the game. From this, players may follow the intended way to play the game, or can veer in completely different directions, such as extravagant simulated machines withinMinecraft.[6] Certain classes of open-ended puzzle games can also support emergent gameplay. The line of games produced byZachtronics, such asSpacechemandInfinifactory, are broadly considered programming puzzles, where the player must assemble pieces of a mechanism to produce a specific product from various inputs. The games otherwise have no limits in how many components can be used and how long the process needs to complete, though through in-game leaderboards, players are encouraged to make more efficient solutions than their online friends. While each puzzle is crafted to assure at least one possible solution exists, players frequently find emergent solutions that may be more elegant, use components in unexpected fashions, or otherwise diverge greatly from the envisioned route.[7] Some games do not use a pre-planned story structure, even non-linear. InThe Sims, a story may emerge from the actions of the player. 
But the player is given so much control that they are more creating a story than interacting with a story.[8]Emergent narrative would only partially be created by the player.Warren Spector, the designer ofDeus Ex, has argued that emergent narrative lacks the emotional impact of linear storytelling.[9] Left 4 Deadfeatures a dynamic system for game dramatics, pacing, and difficulty called the Director. The way the Director works is called "Procedural narrative": instead of having a difficulty which increases to a constant level, theA.I.analyzes how the players fared in the game so far, and tries to add subsequent events that would give them a sense of narrative.[10][11] MinecraftandDwarf Fortressalso have emergent narrative features due to the abstraction of how elements are represented in game, allowing system-wide features to apply across multiple objects without the need to develop specialized assets for each different state; this can create more realistic behavior for non-player controlled entities that aid in the emergent narrative.[12]For example, inDwarf Fortress, any of the living creatures in the game could gain the state of being intoxicated from alcohol, creating random behavior in their movement from the intoxication but not requiring them to display anything uniquely different, in contrast to a more representational game that would need new assets and models for a drunk creature. Because these are abstract and interacting systems, this can then create emergent behavior the developers had never anticipated.[13] Unintentional emergence occurs when creative uses of the video game were not intended by the game designers. Emergent gameplay can arise from agame's AIperforming actions or creating effects unexpected by even the software developers. This may be by either a softwareglitch, the game working normally but producing unexpected results when played in an abnormal way or software that allows for AI development; for example the unplanned genetic diseases that can occur in theCreaturesseries.[14] In several games, especiallyfirst-person shooters, game glitches or physics quirks can become viable strategies, or even spawn their own game types. Inid Software'sQuakeseries,rocket jumpingandstrafe-jumpingare two such examples. In the gameHalo 2, pressing the melee attack button (B) quickly followed by the reload button (X) and the primary fire button (R trigger) would result in the player not having to wait for the gun to be back in position to shoot after a melee attack. Doing this has become known as "BXR-ing". Another example includesGunZ: The Duel, where animation cancels and weapon switching would eventually develop into a means of traveling along the wall known as "butterfly" in the community. Starsiege: Tribeshad a glitch in the physics engine which allowed players to "ski" up and down steep slopes by rapidly pressing the jump key, gaining substantial speed in the process. 
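The pacing logic behind a director of the kind described above can be sketched in a few lines. The loop below is a hypothetical toy, not Valve's implementation: it maintains a rough estimate of how stressed the players are and alternates between build-up and relax phases, which is the general idea of procedural pacing rather than a fixed difficulty curve.

```python
import random

random.seed(7)
intensity = 0.0          # rough estimate of recent player stress
relaxing = False         # current pacing phase

for tick in range(30):
    # events that raise intensity: damage taken, nearby enemies, etc. (toy numbers)
    intensity += random.uniform(0.05, 0.25) if not relaxing else 0.0
    intensity *= 0.9     # stress decays over time

    if not relaxing and intensity > 0.8:
        relaxing = True              # peak reached: stop spawning, let players breathe
    elif relaxing and intensity < 0.2:
        relaxing = False             # rested: start building pressure again

    action = "hold back" if relaxing else "spawn a small threat"
    print(f"tick {tick:2d}  intensity={intensity:.2f}  director: {action}")
```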
The exploitation of this glitch became central to the gameplay, supplanting the vehicles that had been originally envisioned by the designers as the primary means of traversing large maps.[15] Thanks to a programming oversight byCapcom, thecombo(or2-1 combo) notion was introduced with the fighting gameStreet Fighter II, when skilled players learned that they could combine several attacks that left no time for their opponents to recover, as long as they were timed correctly.[16] The PlayStation 1 version of the 2004 edition of the FIFA series featured a selection of new attacking skills like off the ball running and touch sensitive passing, all of which were designed for analog controller use. Particularly skilled players had been artificially manipulating these features into the game series since at least the 1999 edition by deft and rapid manipulation of the original non-analogue stick PS controllers. The game was relatively easy to beat on the hardest level on single player, using a series of tricks from the instruction manual which the AI could not replicate consistently or defend against, but long-term players found that in trying to make the game attain to a more realistic football simulation by playing without using these tricks, the simplistic in-game AI would seem to respond by learning osmotically from the player with greater creativity from the opposition teams and an apparent learning intelligence in selecting off the ball players and shot direction, things that were supposed to be impossible with the non-analog controller.[citation needed] Inonlinecarracing games, particularlyProject Gotham Racing, players came up with an alternative objective known as "Cat and Mouse". The racers play on teams of at least two cars. Each team picks one very slow car as the mouse, and their goal is to have their slow car cross the finish line first. Thus the team members in faster cars aim to push their slow car into the lead and ram their opposing teams' slow cars off the road. Completing games without getting certain items or by skipping seemingly required portions of gameplay results insequence breaking, a technique that has developed its own dedicated community. Often, speed of completion and/or minimalist use of items are respectable achievements. This technique has long been used in theMetroidgame series and has developed into a community devoted tospeedruns.NetHackhas over time codified many such challenges as "conduct" and acknowledges players who manage to finish characters with unbroken pacifist or vegetarian disciplines, for example. A comparable form of restricted gameplay has been implemented withinWorld of Warcraft, known as "Iron Man" leveling.[17] A change in gameplay can be used to create ade factominigame, such as the "Green Demon Challenge" inSuper Mario 64, where the object is to avoid collecting a1-upwhich chases the player, even passing through terrain, while the player attempts to collect all red coins on a level.[18]Other challenges have been built around reaching normally unreachable areas or items, sometimes using glitches orgameplaying tools, or by completing a level without using an important game control, such as the 'jump' button orjoystick.[19][20] Machinima, the use ofcomputer animationfrom video game engines to create films, began in 1996. 
The practice of recordingdeathmatchesinid Software's 1996 computer gameQuakewas extended by adding a narrative, thus changing the objective from winning to creating a film.[21][22]Later, game developers provided increased support for creating machinima; for example,Lionhead Studios' 2005 gameThe Movies, is tailored for it.[23] Traders in MMOs with economic systems play purely to acquire virtual game objects or avatars which they then sell for real-world money onauctionwebsites or game currency exchange sites. This results in the trader's play objective to make real money regardless of the original game designer's objectives. Many games prohibit currency trading in theEULA,[24][25][26][27]but it is still a common practice.[original research?] Some players provide real world services (like website design, web hosting) paid for with in-game currency. This can influence the economy of the game, as players gain wealth/power in the game unrelated to game events. For example, this strategy is used inBlizzard Entertainment'sWorld of Warcraft.[citation needed]
https://en.wikipedia.org/wiki/Emergent_gameplay
Entropic gravity, also known asemergent gravity, is a theory in modern physics that describesgravityas anentropic force—a force with macro-scale homogeneity but which is subject toquantum-leveldisorder—and not afundamental interaction. The theory, based onstring theory,black holephysics, andquantum information theory, describes gravity as anemergentphenomenon that springs from thequantum entanglementof small bits ofspacetimeinformation. As such, entropic gravity is said to abide by thesecond law of thermodynamicsunder which theentropyof a physical system tends to increase over time. The theory has been controversial within the physics community but has sparked research and experiments to test its validity. At its simplest, the theory holds that when gravity becomes vanishingly weak—levels seen only at interstellar distances—it diverges from its classically understood nature and its strength begins to decaylinearly with distancefrom a mass. Entropic gravity provides an underlying framework to explainModified Newtonian Dynamics, or MOND, which holds that at agravitational accelerationthreshold of approximately1.2×10−10m/s2, gravitational strength begins to vary inverselylinearlywith distance from a mass rather than the normalinverse-square lawof the distance. This is an exceedingly low threshold, measuring only 12 trillionthsgravity's strength at Earth's surface; an object dropped from a height of one meter would fall for 36 hours were Earth's gravity this weak. It is also 3,000 times less than the remnant of Earth's gravitational field that exists at the point whereVoyager 1crossed the solar system'sheliopauseand entered interstellar space. The theory claims to be consistent with both the macro-level observations ofNewtonian gravityas well as Einstein'stheory of general relativityand its gravitational distortion of spacetime. Importantly, the theory also explains (without invoking the existence of dark matter and tweaking of its newfree parameters) whygalactic rotation curvesdiffer from the profile expected with visible matter. The theory of entropic gravity posits that what has been interpreted as unobserved dark matter is the product of quantum effects that can be regarded as a form ofpositivedark energythat lifts thevacuum energyof space from its ground state value. A central tenet of the theory is that the positive dark energy leads to a thermal-volume law contribution to entropy that overtakes the area law ofanti-de Sitter spaceprecisely at thecosmological horizon. Thus this theory provides an alternative explanation for what mainstream physics currently attributes todark matter. Since dark matter is believed to compose the vast majority of the universe's mass, a theory in which it is absent has huge implications forcosmology. In addition to continuing theoretical work in various directions, there are many experiments planned or in progress to actually detect or better determine the properties of dark matter (beyond its gravitational attraction), all of which would be undermined by an alternative explanation for the gravitational effects currently attributed to this elusive entity. The thermodynamic description of gravity has a history that goes back at least to research onblack hole thermodynamicsbyJacob BekensteinandStephen Hawkingin the mid-1970s. These studies suggest a deep connection betweengravityand thermodynamics, which describes the behavior of heat. 
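The figures quoted above follow from constant-acceleration kinematics and are easy to reproduce. The short calculation below uses only the 1.2×10⁻¹⁰ m/s² threshold cited in the text and standard surface gravity; it recovers both the roughly 12-parts-per-trillion ratio to Earth's surface gravity and the roughly 36-hour fall time from a height of one metre.

```python
import math

a0 = 1.2e-10      # MOND acceleration threshold quoted in the text, m/s^2
g = 9.81          # gravitational acceleration at Earth's surface, m/s^2
h = 1.0           # drop height, m

ratio = a0 / g                 # how weak the threshold is relative to surface gravity
t = math.sqrt(2 * h / a0)      # fall time from rest under constant acceleration a0

print(f"a0/g      = {ratio:.1e}  (~12 parts per trillion)")
print(f"fall time = {t / 3600:.1f} hours  (~36 hours)")
```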
In 1995, Theodore Jacobson demonstrated that the Einstein field equations describing relativistic gravitation can be derived by combining general thermodynamic considerations with the equivalence principle.[1] Subsequently, other physicists, most notably Thanu Padmanabhan and Ginestra Bianconi, began to explore links between gravity and entropy.[2][3][4] In 2009, Erik Verlinde proposed a conceptual model that describes gravity as an entropic force.[5] He argues (similar to Jacobson's result) that gravity is a consequence of the "information associated with the positions of material bodies".[6] This model combines the thermodynamic approach to gravity with Gerard 't Hooft's holographic principle. It implies that gravity is not a fundamental interaction, but an emergent phenomenon which arises from the statistical behavior of microscopic degrees of freedom encoded on a holographic screen. The paper drew a variety of responses from the scientific community. Andrew Strominger, a string theorist at Harvard, said "Some people have said it can't be right, others that it's right and we already knew it – that it's right and profound, right and trivial."[7] In July 2011, Verlinde presented the further development of his ideas in a contribution to the Strings 2011 conference, including an explanation for the origin of dark matter.[8] Verlinde's article also attracted a large amount of media exposure,[9][10] and led to immediate follow-up work in cosmology,[11][12] the dark energy hypothesis,[13] cosmological acceleration,[14][15] cosmological inflation,[16] and loop quantum gravity.[17] Also, a specific microscopic model has been proposed that indeed leads to entropic gravity emerging at large scales.[18] Entropic gravity can emerge from quantum entanglement of local Rindler horizons.[19] The law of gravitation is derived from classical statistical mechanics applied to the holographic principle, which states that the description of a volume of space can be thought of as N bits of binary information, encoded on a boundary of that region, a closed surface of area A. The information is evenly distributed on the surface, with each bit requiring an area equal to the so-called Planck area, from which N can thus be computed:
{\displaystyle N={\frac {A}{\ell _{\text{P}}^{2}}}}
where ℓ_P is the Planck length. The Planck length is defined as
{\displaystyle \ell _{\text{P}}={\sqrt {\frac {\hbar G}{c^{3}}}}}
where G is the universal gravitational constant, c is the speed of light, and ℏ is the reduced Planck constant. When substituted in the equation for N we find
{\displaystyle N={\frac {Ac^{3}}{\hbar G}}}
The statistical equipartition theorem defines the temperature T of a system with N degrees of freedom in terms of its energy E such that
{\displaystyle E={\frac {1}{2}}Nk_{\text{B}}T}
where k_B is the Boltzmann constant. [Note, though, that according to the same equipartition theorem this only applies to the quadratic degrees of freedom, that is, to those degrees of freedom Q whose contribution to the total internal energy is of the form Q²; this means that one is assuming a model of matter as formed by a collection of independent harmonic oscillators.]
This is the equivalent energy for a mass M according to
{\displaystyle E=Mc^{2}.}
The effective temperature experienced due to a uniform acceleration in a vacuum field, according to the Unruh effect, is
{\displaystyle T={\frac {\hbar a}{2\pi ck_{\text{B}}}},}
where a is that acceleration, which for a mass m would be attributed to a force F according to Newton's second law of motion, F = ma. Taking the holographic screen to be a sphere of radius r, the surface area is
{\displaystyle A=4\pi r^{2}.}
Algebraic substitution of these into the above relations yields Newton's law of universal gravitation:
{\displaystyle F=m{\frac {2\pi ck_{\text{B}}T}{\hbar }}=m{\frac {4\pi c}{\hbar }}{\frac {E}{N}}=m{\frac {4\pi c^{3}}{\hbar }}{\frac {M}{N}}=m4\pi {\frac {GM}{A}}=G{\frac {mM}{r^{2}}}.}
Note that this derivation assumes that the number of binary bits of information equals the number of degrees of freedom:
{\displaystyle {\frac {A}{\ell _{\text{P}}^{2}}}=N={\frac {2E}{k_{\text{B}}T}}}
Elaborating on Erik Verlinde's entropic gravity theory, which links gravity to information changes on holographic screens and increasing entropy, a paper by Melvin Vopson presents a distinct framework based on computational optimization and the mass–energy–information equivalence principle.[20] In this view, gravitational attraction emerges as an entropic information force driven by the second law of infodynamics, which demands that systems evolve toward states of lower information entropy. Matter moves to reduce its informational imprint on a discretized space, making gravity a natural result of minimizing computational complexity. The paper analytically derives Newton's gravitational law from this framework, presenting gravity as an emergent phenomenon rather than a fundamental force[21] and, in the authors' view, reinforcing the plausibility of a computational or simulated universe. In Vopson's derivation, gravitational attraction arises as an entropic force governed by the second law of infodynamics, which postulates that systems evolve to minimize information entropy; space is treated as a discrete informational structure, with each Planck-scale cell storing one bit of information. The entropic force acting on a particle of mass m approaching a larger mass M is expressed in terms of an effective temperature and the entropy change per displacement. Assuming that the change in position Δr corresponds to the particle's reduced Compton wavelength, and approximating the entropy change associated with each movement, Vopson connects the mass M to information through the M/E/I equivalence principle, solves for the temperature T, and substitutes the result back into the force equation. Using the estimate
{\displaystyle N\approx {\frac {R^{2}}{\ell _{\text{P}}^{2}}}}
for the number of Planck-scale cells, together with the definition of the Planck length
{\displaystyle \ell _{\text{P}}={\sqrt {\frac {G\hbar }{c^{3}}}},}
Vopson arrives at a result that is mathematically identical to Newton's law of gravitation. In this framework, gravity is interpreted not as a fundamental force, but as a computational optimization effect in which matter coalesces to minimize informational and energetic cost.
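The chain of substitutions in the Verlinde-style derivation above can be checked numerically. The sketch below plugs rounded physical constants and arbitrary test masses into the entropic expression F = m·2πck_BT/ħ, with T fixed by equipartition over the holographic bits, and confirms that it coincides with GmM/r².

```python
import math

# rounded physical constants; any consistent values work, since the cancellation is algebraic
G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
M, m, r = 5.0e24, 70.0, 6.4e6          # arbitrary central mass, probe mass (kg) and screen radius (m)

A = 4 * math.pi * r**2                 # area of the spherical holographic screen
N = A * c**3 / (hbar * G)              # number of bits stored on the screen
E = M * c**2                           # mass-energy enclosed by the screen
T = 2 * E / (N * k_B)                  # equipartition: E = (1/2) N k_B T
F_entropic = m * 2 * math.pi * c * k_B * T / hbar   # Unruh temperature combined with F = ma
F_newton = G * m * M / r**2

print(F_entropic, F_newton)                          # identical up to rounding
print(abs(F_entropic - F_newton) / F_newton)         # relative difference ~1e-16
```

Because the cancellation is algebraic, the agreement holds for any choice of masses and radius; the numerical check only confirms that no factor has been dropped along the way.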
Entropic gravity, as proposed by Verlinde in his original article, reproduces theEinstein field equationsand, in a Newtonian approximation, a1r{\displaystyle \ {\tfrac {\ 1\ }{r}}\ }potential for gravitational forces. Since its results do not differ from Newtonian gravity except in regions of extremely small gravitational fields, testing the theory with Earth-based laboratory experiments does not appear feasible. Spacecraft-based experiments performed atLagrangian pointswithin theSolar Systemwould be expensive and challenging. Even so, entropic gravity in its current form has been severely challenged on formal grounds.Matt Visserhas shown[22]that the attempt to model conservative forces in the general Newtonian case (i.e. for arbitrary potentials and an unlimited number of discrete masses) leads to unphysical requirements for the required entropy and involves an unnatural number of temperature baths of differing temperatures. Visser concludes: There is no reasonable doubt concerning the physical reality of entropic forces, and no reasonable doubt that classical (and semi-classical) general relativity is closely related to thermodynamics [52–55]. Based on the work of Jacobson [1–6],Thanu Padmanabhan[7–12], and others, there are also good reasons to suspect a thermodynamic interpretation of the fully relativistic Einstein equations might be possible. Whether the specific proposals of Verlinde [26] are anywhere near as fundamental is yet to be seen – the rather baroque construction needed to accurately reproducen-body Newtonian gravity in a Verlinde-like setting certainly gives one pause. For the derivation of Einstein's equations from an entropic gravity perspective, Tower Wang shows[23]that the inclusion of energy-momentum conservation and cosmological homogeneity and isotropy requirements severely restricts a wide class of potential modifications of entropic gravity, some of which have been used to generalize entropic gravity beyond the singular case of an entropic model of Einstein's equations. Wang asserts that: As indicated by our results, the modified entropic gravity models of form (2), if not killed, should live in a very narrow room to assure the energy-momentum conservation and to accommodate a homogeneous isotropic universe. Cosmological observations using available technology can be used to test the theory. On the basis of lensing by the galaxy cluster Abell 1689, Nieuwenhuizen concludes that EG is strongly ruled out unless additional (dark) matter-like eV neutrinos is added.[24]A team fromLeiden Observatorystatistically observing thelensing effect of gravitational fieldsat large distances from the centers of more than 33,000 galaxies found that those gravitational fields were consistent with Verlinde's theory.[25][26][27]Using conventional gravitational theory, the fields implied by these observations (as well as from measuredgalaxy rotation curves) could only be ascribed to a particular distribution ofdark matter. In June 2017, a study byPrinceton Universityresearcher Kris Pardo asserted that Verlinde's theory is inconsistent with the observed rotation velocities ofdwarf galaxies.[28][a][29]Another theory of entropy based on geometric considerations (Quantitative Geometrical Thermodynamics, QGT[30]) provides an entropic basis for the holographic principle[31]and also offers another explanation for galaxy rotation curves as being due to the entropic influence[30]of the central supermassive blackhole found in the center of a spiral galaxy. In 2018, Zhi-Wei Wang andSamuel L. 
Braunsteinshowed that, while spacetime surfaces near black holes (called stretched horizons) do obey an analog of the first law of thermodynamics, ordinary spacetime surfaces — including holographic screens — generally do not, thus undermining the key thermodynamic assumption of the emergent gravity program.[32] In his 1964 lecture on the Relation of Mathematics and Physics,Richard Feynmandescribes a related theory for gravity where the gravitational force is explained due to an entropic force due to unspecified microscopic degrees of freedom.[33]However, he immediately points out that the resulting theory cannot be correct as thefluctuation-dissipation theoremwould also lead to friction which would slow down the motion of the planets which contradicts observations. Another criticism of entropic gravity is that entropic processes should, as critics argue, breakquantum coherence. There is no theoretical framework quantitatively describing the strength of such decoherence effects, though. The temperature of the gravitational field in earth gravity well is very small (on the order of 10−19K). Experiments with ultra-cold neutrons in the gravitational field of Earth are claimed to show that neutrons lie on discrete levels exactly as predicted by theSchrödinger equationconsidering the gravitation to be a conservative potential field without any decoherent factors. Archil Kobakhidze argues that this result disproves entropic gravity,[34]while Chaichianet al. suggest a potential loophole in the argument in weak gravitational fields such as those affecting Earth-bound experiments.[35]
https://en.wikipedia.org/wiki/Entropic_gravity
An emergent organization (alternatively emergent organisation) is an organization that spontaneously emerges from and exists in a complex dynamic environment or market place, rather than being a construct or copy of something that already exists. The term first appeared in the late 1990s and was the topic of the Seventh Annual Washington Evolutionary Systems Conference at the University of Ghent, Belgium, in May 1999. Emergent organizations and their dynamics pose interesting questions; for example, how does such an organization achieve closure and stability? Alternatively, as suggested by James R. Taylor and Elizabeth J. Van Every in their 2000 seminal text, The Emergent Organization, all organizations emerge from communication, especially from the interplay of conversation and text.[1] This idea concerns human organizations, but is consistent with Leibniz or Gabriel Tarde's monadology, or with Alfred North Whitehead's process philosophy, which explains the macro—both in human and non-human "societies"—from the processes taking place between its constituent parts.
https://en.wikipedia.org/wiki/Emergent_organization
Emergentismis thephilosophical theorythat higher-level properties or phenomenaemergefrom more basic components, and that these emergent properties are not fully reducible to or predictable from those lower-level parts. A property of asystemis said to be emergent if it is a new outcome of some other properties of the system and their interaction, while it is itself different from them.[1]Within thephilosophy of science, emergentism is analyzed both as it contrasts with and parallelsreductionism.[1][2]This philosophical theory suggests that higher-level properties and phenomena arise from the interactions and organization of lower-level entities yet are not reducible to these simpler components. It emphasizes the idea that the whole is more than the sum of its parts. The concept of emergence can be traced back to ancient philosophical traditions.Aristotle, in particular, suggested that the whole could possess properties that its individual parts did not, laying an early foundation for emergentist thought. This idea persisted through the ages, influencing various schools of thought.[3] The term "emergence" was formally introduced in the 19th century by the philosopher George Henry Lewes. He distinguished between "resultant" and "emergent" properties, where resultant properties could be predicted from the properties of the parts, whereas emergent properties could not. This distinction was crucial in differentiating emergent phenomena from simple aggregative effects.[4] In the early 20th century, emergentism gained further traction through the works of British emergentists like C.D. Broad and Samuel Alexander. C.D. Broad, in his 1925 bookThe Mind and Its Place in Nature, argued thatmental stateswere emergent properties of brain processes.[5]Samuel Alexander, in his workSpace, Time, and Deity, suggested that emergent qualities likeconsciousnessandlifecould not be fully explained by the underlying physical processes alone.[6] These philosophers were reacting against the reductionist view that all phenomena could be fully explained by their constituent parts. They argued that emergent properties such as consciousness have their own causal powers and cannot be reduced to or predicted from their base components. This period also saw the influence ofGestalt psychology, which emphasized that psychological phenomena cannot be understood solely by analyzing their component parts, further supporting emergentist ideas.[3] During the mid-20th century, emergentism was somewhat overshadowed by the rise ofbehaviorismand later thecognitive sciences, which often leaned towards more reductionist explanations. However, the concept ofemergencefound renewed interest towards the late 20th century with the advent ofcomplex systemstheory andnon-linear dynamics.[4] In this period, scientists and philosophers began to explore how complex behaviors and properties could arise from relatively simple interactions in systems as diverse as ant colonies, economic markets, andneural networks. This interdisciplinary approach highlighted the ubiquity and importance of emergent phenomena across different domains, fromphysicstobiologytosocial sciences.[3] In recent years, emergentism has continued to evolve, integrating insights from various scientific fields. 
For example, in physics, the study of phenomena such assuperconductivityand the behavior of complexquantum systemshas provided empirical examples of emergent properties.[7]In biology, the study of complexbiological networksand the dynamics ofecosystemshas further illustrated how emergent properties play a crucial role in natural systems.[8] The resurgence of interest inartificial intelligenceandmachine learninghas also contributed to contemporary discussions on emergentism. Researchers in these fields are particularly interested in how intelligent behavior and consciousness might emerge from artificial systems, providing new perspectives and challenges for emergentist theories.[9] Emergentism can be compatible withphysicalism,[10]the theory that the universe is composed exclusively of physical entities, and in particular with the evidence relating changes in the brain with changes in mental functioning. Some varieties of emergentism are not specifically concerned with themind–body problembut constitute a theory of the nature of the universe comparable topantheism.[11]They suggest ahierarchicalor layered view of the whole of nature, with the layers arranged in terms of increasingcomplexitywith each requiring its ownspecial science. Emergentism is underpinned by several core principles that define its theoretical framework and distinguish it from other philosophical doctrines such asreductionismandholism. Emergence refers to the arising of novel andcoherent structures,patterns, and properties during the process ofself-organizationin complex systems. These emergent properties are not predictable from the properties of the individual components alone. Emergent properties are seen as a result of the interactions and relationships between the components of a system, which produce new behaviors and characteristics that are not present in the isolated parts. This concept is crucial in understanding why certain phenomena cannot be fully explained by analyzing their parts independently.[3] Emergentism distinguishes between two main types of emergence: weak and strong. Emergent properties are characterized by several key features that distinguish them from simple aggregative properties: The theoretical foundations of emergentism are deeply intertwined with various philosophical theories and debates, particularly those concerning the nature ofreality, the relationship between parts and wholes, and the nature ofcausality. Emergentism contrasts sharply withreductionism, which attempts to explain complex phenomena entirely in terms of their simpler components, andholism, which emphasizes the whole without necessarily addressing the emergence of properties.[3] Emergentism stands in contrast to reductionism, which holds that all phenomena can be fully explained by their constituent parts. Reductionists argue that understanding the basic building blocks of a system provides a complete understanding of the system itself. However, emergentists contend that this approach overlooks the novel properties that arise from complex interactions within a system. For example, while the properties of water can be traced back tohydrogenandoxygenatoms, the wetness ofwatercannot be fully explained by examining theseatomsin isolation.[4] Holism, on the other hand, emphasizes the significance of the whole system, suggesting that the properties of the whole are more important than the properties of the parts. 
Emergentism agrees with holism to some extent but differs in that it specifically focuses on how new properties emerge from the interactions within the system. Holism often overlooks the dynamic processes that lead to the emergence of new properties, which are central to emergentism.[3] Emmecheet al.(1998) state that "there is a very important difference between the vitalists and the emergentists: the vitalist's creative forces were relevant only in organic substances, not in inorganic matter. Emergence hence is creation of new properties regardless of the substance involved." "The assumption of an extra-physical vitalis (vital force,entelechy,élan vital, etc.), as formulated in most forms (old or new) of vitalism, is usually without any genuine explanatory power. It has served altogether too often as anintellectual tranquilizer or verbal sedative—stifling scientific inquiry rather than encouraging it to proceed in new directions."[13] Emergentism can be divided into ontological and epistemological categories, each addressing different aspects of emergent properties. A crucial aspect of emergentism is its treatment ofcausality, particularly the concept ofdownward causation. Downward causation refers to the influence that higher-level properties can exert on the behavior of lower-level entities within a system. This idea challenges the traditional view that causation only works from the bottom up, from simpler to more complex levels.[4] Emergentism finds its scientific support and application across various disciplines, illustrating how complex behaviors and properties arise from simpler interactions. These scientific perspectives demonstrate the practical significance of emergentist theories. Inphysics, emergence is observed in phenomena where macroscopic properties arise from the interactions of microscopic components. A classic example issuperconductivity, where the collective behavior ofelectronsin certain materials leads to the phenomenon of zeroelectrical resistance. This emergent property cannot be fully explained by the properties of individual electrons alone, but rather by their interactions within the lattice structure of the material.[7] Another significant example isquantum entanglement, where particles become interconnected in such a way that the state of one particle instantly influences the state of another, regardless of the distance between them. This non-local property emerges from the quantum interactions and cannot be predicted merely by understanding the individual particles separately. Such emergent properties challenge classical notions of locality and causality, showcasing the profound implications of emergentism in modern physics.[3] Inthermodynamics, emergent behaviors are observed innon-equilibrium systemswhere patterns and structures spontaneously form. For instance,Bénard cells— a phenomenon where heated fluid formshexagonalconvectioncells — arise fromthermal gradientsandfluid dynamics. Thisself-organizationis an emergent property of the system, highlighting how macro-level order can emerge from micro-level interactions.[4] Emergent phenomena are prevalent inbiology, particularly in the study of life and evolutionary processes. One of the most fundamental examples is the emergence oflifefrom non-living chemical compounds. This process, often studied through the lens ofabiogenesis, involves complexchemical reactionsthat lead toself-replicatingmolecules and eventuallyliving organisms. 
The properties of life — such asmetabolism,growth, andreproduction— emerge from these molecular interactions and cannot be fully understood by examining individual molecules in isolation.[15] In evolutionary biology, the diversity of life forms arises fromgenetic mutations,natural selection, and environmental interactions. Complex traits such as the eye or the brain emerge over time through evolutionary processes. These traits exhibit novel properties that are not predictable from the genetic components alone but result from the dynamic interplay between genes and the environment.[3] Systems biology further illustrates emergent properties in biological networks. For example, metabolic networks whereenzymesand substrates interact exhibit emergent behaviors likerobustnessandadaptability. These properties are crucial for the survival of organisms in changing environments and arise from the complex interconnections within the network.[4] Incognitive science, emergentism plays a crucial role in understandingconsciousnessandcognitive processes. Consciousness is often cited as a paradigmatic example of an emergent property. While neural processes in thebraininvolve electrochemical interactions amongneurons, the subjective experience of consciousness arises from these processes in a way that is not directly reducible to them. This emergence of conscious experience from neural substrates is a central topic in thephilosophy of mindand cognitive science.[16] Artificial intelligence(AI) andmachine learningprovide contemporary examples of emergent behavior in artificial systems. Complex algorithms andneural networkscan learn, adapt, and exhibit intelligent behavior that is not explicitly programmed. For instance,deep learningmodels can recognize patterns and make decisions based on vast amounts of data, demonstrating emergentintelligencefrom simpler computational rules. This emergent behavior in AI systems reflects the principles of emergentism, where higher-level functions arise from the interaction of lower-level components.[9] Emergentism andlanguageare intricately connected through the concept that linguistic properties and structures arise from simpler interactions among cognitive, communicative and social processes. This perspective provides a dynamic view oflanguage development,structure, andevolution, emphasizing the role of interaction andadaptationover innate or static principles. This connection can be explored from several angles: Literary emergentism is a trend in literary theory. It arises as a reaction against traditional interpretive approaches –hermeneutics,structuralism,semiotics, etc., accusing them of analyticalreductionismand lack of hierarchy. 
Literary emergentism claims to describe the emergence of a text as a contemplative logic consisting of seven degrees, similar to the epistemological doctrine of Rudolf Steiner in his Philosophy of Freedom.[17] There are also references to Terrence Deacon, author of the theory of incomplete nature, according to whom the emergent perspective is metaphysical, whereas human consciousness emerges as an incessant creation of something from nothing.[18] According to Dimitar Kalev, in all modern literary-theoretical discourses there is an epistemological "gap" between the sensory-imagery phenomena of reading and their proto-phenomena in the text.[19] Therefore, in any attempt at literary reconstruction a certain "destruction" is reached, which, from an epistemological point of view, designates the existing transcendence as an "interruption" of the divine "top-down". The emergentist approach does not interpret the text but rather reconstructs its becoming, identifying itself with the contemplative logic of the writer and claiming that this logic possesses a being of ideal objectivity and universal accessibility.
Emergentism, like any philosophical theory, has been subject to various criticisms and debates. These discussions revolve around the validity of emergent properties, the explanatory power of emergentism, and its implications for other areas of philosophy and science. Such criticisms highlight the dynamic and evolving nature of emergentism, and by addressing these challenges, proponents of emergentism continue to refine and strengthen their theoretical framework.
Emergentism finds applications across various scientific and philosophical domains, illustrating how complex behaviors and properties can arise from simpler interactions. These applications underscore the practical relevance of emergentist theories and their interdisciplinary impact on the understanding of complex systems.
Emergentism has been significantly shaped and debated by numerous philosophers and scientists over the years. Notable figures who have contributed to the development and discourse of emergentism, providing a rich tapestry of ideas and empirical evidence that support the theory's application across various domains, include the following:
Aristotle – Contribution: One of the earliest thinkers to suggest that the whole could possess properties that its individual parts did not. This idea laid the foundational groundwork for emergentist thought by emphasizing that certain phenomena cannot be fully explained by their individual components alone. Major Work: Metaphysics[22]
George Henry Lewes – Contribution: Formally introduced the term "emergence" in the 19th century. He distinguished between "resultant" and "emergent" properties, where emergent properties could not be predicted from the properties of the parts, a critical distinction in emergentist theory. Major Work: Problems of Life and Mind[23]
John Stuart Mill – Contribution: Early proponent of emergentism in social and political contexts. Mill's work emphasized the importance of understanding social phenomena as more than the sum of individual actions, highlighting the emergent properties of societal systems. Major Work: A System of Logic[24]
C. D. Broad – Contribution: In his 1925 book The Mind and Its Place in Nature, Broad argued that mental states were emergent properties of brain processes. He developed a comprehensive philosophical framework for emergentism, advocating for the irreducibility of higher-level properties. Major Work: The Mind and Its Place in Nature[5]
Samuel Alexander – Contribution: In his work Space, Time, and Deity, Alexander suggested that emergent qualities like consciousness and life could not be fully explained by underlying physical processes alone, emphasizing the novelty and unpredictability of emergent properties. Major Work: Space, Time, and Deity[6]
Jaegwon Kim – Contribution: A prominent critic and commentator on emergentism. Kim extensively analyzed the limits and scope of emergent properties, particularly in the context of mental causation and the philosophy of mind, questioning the coherence and causal efficacy of emergent properties. Major Work: Mind in a Physical World[14]
Michael Polanyi – Contribution: Advanced the idea that emergent properties are irreducible and possess their own causal powers. Polanyi's work in chemistry and the philosophy of science provided empirical and theoretical support for emergentist concepts, especially in complex systems and hierarchical structures. Major Work: Personal Knowledge[25]
Philip W. Anderson – Contribution: Nobel laureate in physics whose work on condensed matter physics and the theory of superconductivity provided significant empirical examples of emergent phenomena. His famous essay "More is Different" argued for the necessity of emergentist explanations in physics. Major Work: More is Different[26]
Stuart Kauffman – Contribution: A theoretical biologist whose work on complex systems and self-organization highlighted the role of emergence in biological evolution and the origin of life. Kauffman emphasized the unpredictability and novelty of emergent biological properties. Major Work: The Origins of Order[8]
Roger Sperry – Contribution: Neuropsychologist and Nobel laureate whose split-brain research contributed to the understanding of consciousness as an emergent property of brain processes. He argued that emergent mental properties have causal efficacy, influencing lower-level neural processes. Major Work: Science and Moral Priority[27]
Terrence Deacon – Contribution: Anthropologist and neuroscientist whose work on the evolution of language and human cognition explored how emergent properties arise from neural and social interactions. His book Incomplete Nature delves into the emergentist explanation of life and mind. Major Work: Incomplete Nature: How Mind Emerged from Matter[28]
Steven Johnson – Contribution: An author and theorist whose popular science books, such as Emergence: The Connected Lives of Ants, Brains, Cities, and Software, have brought the concept of emergentism to a broader audience. Johnson illustrates how complex systems in nature and society exhibit emergent properties. Major Work: Emergence: The Connected Lives of Ants, Brains, Cities, and Software[9]
Emergentism offers a valuable framework for understanding complex systems and phenomena that cannot be fully explained by their constituent parts. Its interdisciplinary nature and broad applicability make it a significant area of study in both philosophy and science. Future research will continue to explore the implications and potential of emergent properties, contributing to our understanding of the natural world.
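The recurring claim that coherent higher-level structures can arise from simple local interactions can be made concrete with a toy model. The sketch below is an illustration added here rather than part of the article; it uses Conway's Game of Life, a standard example in the complex-systems literature, to show a "glider" – a stable, moving five-cell pattern – arising from purely local birth-and-survival rules that say nothing about motion or shape.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y) cells."""
    # Count, for every cell on the grid, how many of its eight neighbours are alive.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or if it is alive now and has exactly 2 live neighbours.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# A "glider": the rules above are purely local and say nothing about movement,
# yet this five-cell pattern reappears shifted by (1, 1) every four generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

cells = glider
for _ in range(4):
    cells = step(cells)

assert cells == {(x + 1, y + 1) for (x, y) in glider}
print("glider translated by (1, 1) after 4 steps")
```

The "glider" is not written into the rules anywhere; it is a pattern that an observer can only describe at the level of the whole configuration, which is the sense of "emergent" used throughout the article.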
https://en.wikipedia.org/wiki/Emergentism
Ineconomics, anexternalityis anindirect cost(external cost) or benefit (external benefit) to an uninvolved third party that arises as an effect of another party's (or parties') activity. Externalities can be considered as unpriced components that are involved in either consumer or producer consumption.Air pollutionfrommotor vehiclesis one example. Thecost of air pollution to societyis not paid by either the producers or users of motorized transport. Water pollution from mills and factories are another example. All (water) consumers are made worse off by pollution but are not compensated by the market for this damage. The concept of externality was first developed byAlfred Marshallin the 1890s[1]and achieved broader attention in the works of economistArthur Pigouin the 1920s.[2]The prototypical example of a negative externality is environmental pollution. Pigou argued that a tax, equal to the marginal damage or marginal external cost, (later called a "Pigouvian tax") on negative externalities could be used to reduce their incidence to an efficient level.[2]Subsequent thinkers have debated whether it is preferable to tax or to regulate negative externalities,[3]the optimally efficient level of the Pigouvian taxation,[4]and what factors cause or exacerbate negative externalities, such as providing investors in corporations with limited liability for harms committed by the corporation.[5][6][7] Externalities often occur when the production or consumption of a product or service's private priceequilibriumcannot reflect the true costs or benefits of that product or service for society as a whole.[8][9]This causes the externality competitive equilibrium to not adhere to the condition ofPareto optimality. Thus, since resources can be better allocated, externalities are an example ofmarket failure.[10] Externalities can be either positive or negative. Governments and institutions often take actions to internalize externalities, thus market-priced transactions can incorporate all the benefits and costs associated with transactions between economic agents.[11][12]The most common way this is done is by imposing taxes on the producers of this externality. This is usually done similar to a quote where there is no tax imposed and then once the externality reaches a certain point there is a very high tax imposed. However, since regulators do not always have all the information on the externality it can be difficult to impose the right tax. Once the externality is internalized through imposing a tax the competitive equilibrium is now Pareto optimal. The term "externality" was first coined by the British economistAlfred Marshallin his seminal work, "Principles of Economics," published in 1890. Marshall introduced the concept to elucidate the effects of production and consumption activities that extend beyond the immediate parties involved in a transaction. Marshall's formulation of externalities laid the groundwork for subsequent scholarly inquiry into the broader societal impacts of economic actions. While Marshall provided the initial conceptual framework for externalities, it was Arthur Pigou, a British economist, who further developed the concept in his influential work, "The Economics of Welfare," published in 1920. Pigou expanded upon Marshall's ideas and introduced the concept of "Pigovian taxes" or corrective taxes aimed at internalizing externalities by aligning private costs with social costs. 
His work emphasized the role of government intervention in addressing market failures resulting from externalities.[1] Additionally, the American economistFrank Knightcontributed to the understanding of externalities through his writings on social costs and benefits in the 1920s and 1930s. Knight's work highlighted the inherent challenges in quantifying and mitigating externalities within market systems, underscoring the complexities involved in achieving optimal resource allocation.[13]Throughout the 20th century, the concept of externalities continued to evolve with advancements in economic theory and empirical research. Scholars such asRonald CoaseandHarold Hotellingmade significant contributions to the understanding of externalities and their implications for market efficiency and welfare. The recognition of externalities as a pervasive phenomenon with wide-ranging implications has led to its incorporation into various fields beyond economics, including environmental science, public health, and urban planning. Contemporary debates surrounding issues such asclimate change, pollution, and resource depletion underscore the enduring relevance of the concept of externalities in addressing pressing societal challenges. A negative externality is any difference between the private cost of an action or decision to an economic agent and the social cost. In simple terms, a negative externality is anything that causes anindirect costto individuals. An example is the toxic gases that are released from industries or mines, these gases cause harm to individuals within the surrounding area and have to bear a cost (indirect cost) to get rid of that harm. Conversely, a positive externality is any difference between the private benefit of an action or decision to an economic agent and the social benefit. A positive externality is anything that causes an indirect benefit to individuals and for which the producer of that positive externality is not compensated. For example, planting trees makes individuals' property look nicer and it also cleans the surrounding areas. In microeconomic theory, externalities are factored into competitive equilibrium analysis as the social effect, as opposed to the private market which only factors direct economic effects. The social effect of economic activity is the sum of the indirect (the externalities) and direct factors. The Pareto optimum, therefore, is at the levels in which the social marginal benefit equals the social marginal cost.[citation needed] Externalities are the residual effects of economic activity on persons not directly participating in the transaction. The consequences of producer or consumer behaviors that result in external costs or advantages imposed on others are not taken into account by market pricing and can have both positive and negative effects. To further elaborate on this, when expenses associated with the production or use of an item or service are incurred by others but are not accounted for in the market price, this is known as a negative externality. The health and well-being of local populations may be negatively impacted by environmental deterioration resulting from the extraction of natural resources. Comparably, the tranquility of surrounding inhabitants might be disturbed by noise pollution from industry or transit, which lowers their quality of life. On the other hand, positive externalities occur when the activities of producers or consumers benefit other parties in ways that are not accounted for in market exchanges. 
A prime example of a positive externality is education, as those who invest in it gain knowledge and production for society as a whole in addition to personal profit.[14] Government involvement is frequently necessary to address externalities. This can be done by enacting laws, Pigovian taxes, or other measures that encourage positive externalities or internalize external costs. Through the integration of externalities into economic research and policy formulation, society may endeavor to get results that optimize aggregate well-being and foster sustainable growth.[14] A voluntary exchange may reduce societal welfare if external costs exist. The person who is affected by the negative externalities in the case ofair pollutionwill see it as loweredutility: either subjective displeasure or potentially explicit costs, such as higher medical expenses. The externality may even be seen as atrespasson their health or violating their property rights (by reduced valuation). Thus, an external cost may pose anethicalorpoliticalproblem. Negative externalities arePareto inefficient, and since Pareto efficiency underpins the justification for private property, they undermine the whole idea of a market economy. For these reasons, negative externalities are more problematic than positive externalities.[15] Although positive externalities may appear to be beneficial, while Pareto efficient, they still represent a failure in the market as it results in the production of the good falling under what is optimal for the market. By allowing producers to recognise and attempt to control their externalities production would increase as they would have motivation to do so.[16]With this comes the free rider problem. Thefree rider problemarises when people overuse a shared resource without doing their part to produce or pay for it. It represents a failure in the market where goods and services are not able to be distributed efficiently, allowing people to take more than what is fair. For example, if a farmer has honeybees a positive externality of owning these bees is that they will also pollinate the surrounding plants. This farmer has a next door neighbour who also benefits from this externality even though he does not have any bees himself. From the perspective of the neighbour he has no incentive to purchase bees himself as he is already benefiting from them at zero cost. But for the farmer, he is missing out on the full benefits of his own bees which he paid for, because they are also being used by his neighbour.[17] There are a number of theoretical means of improving overall social utility when negative externalities are involved. The market-driven approach to correcting externalities is tointernalizethird party costs and benefits, for example, by requiring a polluter to repair any damage caused. But in many cases, internalizing costs or benefits is not feasible, especially if the true monetary values cannot be determined. Laissez-faireeconomists such asFriedrich HayekandMilton Friedmansometimes refer to externalities as "neighborhood effects" or "spillovers", although externalities are not necessarily minor or localized. Similarly,Ludwig von Misesargues that externalities arise from lack of "clear personal property definition." Externalities may arise between producers, between consumers or between consumers and producers. Externalities can be negative when the action of one party imposes costs on another, or positive when the action of one party benefits another. 
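The condition stated above, that the Pareto optimum lies where social marginal benefit equals social marginal cost, can be written out explicitly. The notation below is introduced here for illustration and is not taken from the article: MPC and MPB denote marginal private cost and benefit, while MEC and MEB denote the marginal external cost and benefit borne by third parties.

```latex
% MSC/MSB: marginal social cost and benefit; MPC/MPB: marginal private cost
% and benefit; MEC/MEB: marginal external cost and benefit (illustrative notation)
\[
\begin{aligned}
MSC(Q) &= MPC(Q) + MEC(Q) \\
MSB(Q) &= MPB(Q) + MEB(Q) \\
\text{market equilibrium:}\quad MPB(Q_p) &= MPC(Q_p) \\
\text{Pareto optimum:}\quad MSB(Q_s) &= MSC(Q_s)
\end{aligned}
\]
```

Under the usual assumptions of downward-sloping demand and upward-sloping supply, a negative production externality (MEC > 0, MEB = 0) gives Q_p > Q_s, so the market overproduces; a positive consumption externality (MEB > 0, MEC = 0) gives Q_p < Q_s, so the market underproduces. These are the two cases worked through in the paragraphs that follow.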
A negative externality (also called "external cost" or "external diseconomy") is an economic activity that imposes on an unrelated third party a negative effect that is not captured by the market price. It can arise either during the production or the consumption of a good or service.[18][better source needed] Pollution is termed an externality because it imposes costs on people who are "external" to the producer and consumer of the polluting product.[19] Barry Commoner commented on the costs of externalities: "Clearly, we have compiled a record of serious failures in recent technological encounters with the environment. In each case, the new technology was brought into use before the ultimate hazards were known. We have been quick to reap the benefits and slow to comprehend the costs."[20] Many negative externalities are related to the environmental consequences of production and use. The article on environmental economics also addresses externalities and how they may be addressed in the context of environmental issues. "The corporation is an externalizing machine (moving its operating costs and risks to external organizations and people), in the same way that a shark is a killing machine." - Robert Monks (2003), Republican candidate for Senate from Maine and corporate governance adviser, in the film "The Corporation". Negative externalities can arise on the production side (negative production externalities) or on the consumption side (negative consumption externalities). A positive externality (also called "external benefit" or "external economy" or "beneficial externality") is the positive effect an activity imposes on an unrelated third party.[33] Similar to a negative externality, it can arise either on the production side or on the consumption side.[18] A positive production externality occurs when a firm's production increases the well-being of others but the firm is uncompensated by those others, while a positive consumption externality occurs when an individual's consumption benefits others but the individual is uncompensated by those others.[34] Positive externalities are likewise classified into positive production externalities and positive consumption externalities. Collective solutions or public policies are implemented to regulate activities with positive or negative externalities. The sociological basis of positional externalities is rooted in the theories of conspicuous consumption and positional goods.[44] Conspicuous consumption (originally articulated by Veblen, 1899) refers to the consumption of goods or services primarily for the purpose of displaying social status or wealth. In simpler terms, individuals engage in conspicuous consumption to signal their economic standing or to gain social recognition.[45] Positional goods (introduced by Hirsch, 1977) are goods whose value is heavily contingent upon how they compare to similar goods owned by others. Their desirability, or derived utility, is intrinsically tied to their relative scarcity or exclusivity within a particular social context.[46] The economic concept of positional externalities originates from Duesenberry's Relative Income Hypothesis. This hypothesis challenges the conventional microeconomic model, as outlined by the Common Pool Resource (CPR) mechanism, which typically assumes that an individual's utility derived from consuming a particular good or service remains unaffected by others' consumption choices.
Instead, Duesenberry posits that individuals gauge the utility of their consumption by comparison with others' consumption bundles, thus introducing the notion of relative income into economic analysis. Consequently, positional goods become highly sought after, as their consumption directly affects one's perceived status relative to others in one's social circle.[47] For example, consider a scenario in which individuals within a social group vie for the latest luxury cars. As one member acquires a top-of-the-line vehicle, others may feel compelled to upgrade their own cars to preserve their status within the group. This cycle of competitive consumption can result in an inefficient allocation of resources and exacerbate income inequality within society. The consumption of positional goods engenders negative externalities, wherein the acquisition of such goods by one individual diminishes the utility or value of similar goods held by others within the same reference group. This positional externality can lead to a cascade of overconsumption, as individuals strive to maintain or improve their relative position through excessive spending. Positional externalities are related to, but not the same as, pecuniary externalities. Pecuniary externalities are those which affect a third party's profit but not their ability to produce or consume. Positional externalities, by contrast, "occur when new purchases alter the relevant context within which an existing positional good is evaluated."[48] Robert H. Frank gives an example of this kind of externality and notes that treating positional externalities like other externalities might lead to "intrusive economic and social regulation."[48] He argues, however, that less intrusive and more efficient means of "limiting the costs of expenditure cascades" (i.e., the hypothesized increase in spending of middle-income families beyond their means "because of indirect effects associated with increased spending by top earners") exist; one such method is the personal income tax.[48] The effect that rising demand has on prices in highly competitive markets is a typical illustration of pecuniary externalities. Prices rise in response to shifts in consumer preferences or income levels, which raise demand for a product and benefit suppliers by increasing sales and profits. But other customers who now have to pay more for identical goods may also suffer from this price hike. As a result, consumers who were not involved in the initial transaction experience a monetary externality in the form of diminished buying power, while producers profit from the higher prices. Furthermore, markets with economies of scale or network effects may exhibit pecuniary externalities. For network products, such as social media platforms or communication networks, the more people use or engage with the technology, the more valuable the product becomes. Consequently, early adopters can gain financially from positive pecuniary externalities such as enhanced network effects or greater resale prices of related products or services. In sum, pecuniary externalities draw attention to the intricate relationships between market participants and to the distributional effects of market transactions.
Comprehending pecuniary externalities is essential for assessing market results and formulating policies that advance economic efficiency and equality, even if they might not have the same direct impact on welfare or resource allocation as traditional externalities.[14] The concept of inframarginal externalities was introduced by James Buchanan and Craig Stubblebine in 1962.[49]Inframarginal externalities differ from other externalities in that there is no benefit or loss to the marginal consumer. At the relevant margin to the market, the externality does not affect the consumer and does not cause a market inefficiency. The externality only affects at the inframarginal range outside where the market clears. These types of externalities do not cause inefficient allocation of resources and do not require policy action. Technological externalities directly affect a firm's production and therefore, indirectly influence an individual's consumption; and the overall impact of society; for exampleOpen-source softwareorfree softwaredevelopment by corporations. These externalities occur when technology spillovers from the acts of one economic agent impact the production or consumption potential of another agency. Depending on their nature, these spillovers may produce positive or negative externalities. The creation of new technologies that help people in ways that go beyond the original inventor is one instance of positive technical externalities. Let us examine the instance of research and development (R&D) inside the pharmaceutical sector. In addition to possible financial gain, a pharmaceutical company's R&D investment in the creation of a new medicine helps society in other ways. Better health outcomes, higher productivity, and lower healthcare expenses for both people and society at large might result from the new medication. Furthermore, the information created via research and development frequently spreads to other businesses and sectors, promoting additional innovation and economic expansion. For example, biotechnology advances could have uses in agriculture, environmental cleanup, or renewable energy, not just in the pharmaceutical industry. However, technical externalities can also take the form of detrimental spillovers that cost society money. Pollution from industrial manufacturing processes is a prime example. Businesses might not be entirely responsible for the expenses of environmental deterioration if they release toxins into the air or rivers as a result of their production processes. Rather, these expenses are shifted to society in the form of decreased quality of life for impacted populations, harm to the environment, and health risks. In addition, workers in some industries may experience job displacement and unemployment as a result of disruptive developments in labor markets brought about by technological improvements. For instance, individuals with outdated skills may lose their jobs as a result of the automation of manufacturing processes through robots and artificial intelligence, causing social and economic unrest in the affected areas.[8] The usual economic analysis of externalities can be illustrated using a standardsupply and demanddiagram if the externality can be valued in terms ofmoney. An extra supply or demand curve is added, as in the diagrams below. One of the curves is theprivate costthat consumers pay as individuals for additional quantities of the good, which in competitive markets, is the marginal private cost. 
The other curve is the true cost that society as a whole pays for the production and consumption of additional quantities of the good, or the marginal social cost. Similarly, there might be two curves for the demand or benefit of the good. The social demand curve would reflect the benefit to society as a whole, while the normal demand curve reflects the benefit to consumers as individuals and is reflected as effective demand in the market. Which curve is added depends on the type of externality that is described, but not on whether it is positive or negative. Whenever an externality arises on the production side, there will be two supply curves (private and social cost). However, if the externality arises on the consumption side, there will be two demand curves instead (private and social benefit). This distinction is essential when it comes to resolving inefficiencies that are caused by externalities. The graph shows the effects of a negative externality. For example, the steel industry is assumed to be selling in a competitive market – before pollution-control laws were imposed and enforced (e.g. under laissez-faire). The marginal private cost is less than the marginal social or public cost by the amount of the external cost, i.e., the cost of air pollution and water pollution. This is represented by the vertical distance between the two supply curves. It is assumed that there are no external benefits, so that social benefit equals individual benefit. If consumers only take into account their own private cost, they will end up at price Pp and quantity Qp, instead of the more efficient price Ps and quantity Qs. The latter reflect the idea that the marginal social benefit should equal the marginal social cost, that is, that production should be increased only as long as the marginal social benefit exceeds the marginal social cost. The result is that a free market is inefficient, since at the quantity Qp the social benefit is less than the social cost, so society as a whole would be better off if the goods between Qp and Qs had not been produced. The problem is that people are buying and consuming too much steel. This discussion implies that negative externalities (such as pollution) are more than merely an ethical problem. The problem is one of the disjuncture between marginal private and social costs that is not solved by the free market. It is a problem of societal communication and coordination to balance costs and benefits. This also implies that pollution is not something solved by competitive markets. Some collective solution is needed, such as a court system to allow parties affected by the pollution to be compensated, government intervention banning or discouraging pollution, or economic incentives such as green taxes. The graph shows the effects of a positive or beneficial externality. For example, the industry supplying smallpox vaccinations is assumed to be selling in a competitive market. The marginal private benefit of getting the vaccination is less than the marginal social or public benefit by the amount of the external benefit (for example, society as a whole is increasingly protected from smallpox by each vaccination, including those who refuse to participate). This marginal external benefit of getting a smallpox shot is represented by the vertical distance between the two demand curves. Assume there are no external costs, so that social cost equals individual cost.
If consumers only take into account their own private benefits from getting vaccinations, the market will end up at pricePpand quantityQpas before, instead of the more efficient pricePsand quantityQs. This latter again reflect the idea that the marginal social benefit should equal the marginal social cost, i.e., that production should be increased as long as the marginal social benefit exceeds the marginal social cost. The result in anunfettered marketisinefficientsince at the quantityQp, the social benefit is greater than the societal cost, so society as a whole would be better off if more goods had been produced. The problem is that people are buyingtoo fewvaccinations. The issue of external benefits is related to that ofpublic goods, which are goods where it is difficult if not impossible to exclude people from benefits. The production of a public good has beneficial externalities for all, or almost all, of the public. As with external costs, there is a problem here of societal communication and coordination to balance benefits and costs. This also implies that vaccination is not something solved by competitive markets. The government may have to step in with a collective solution, such as subsidizing or legally requiring vaccine use. If the government does this, the good is called amerit good. Examples include policies to accelerate the introduction ofelectric vehicles[50]or promotecycling,[51]both of which benefitpublic health. Externalities often arise from poorly definedproperty rights. While property rights to some things, such as objects, land, and money can be easily defined and protected, air, water, and wild animals often flow freely across personal and political borders, making it much more difficult to assign ownership. This incentivizes agents to consume them without paying the full cost, leading to negative externalities. Positive externalities similarly accrue from poorly defined property rights. For example, a person who gets a flu vaccination cannot own part of theherd immunitythis confers on society, so they may choose not to be vaccinated. When resources are managed poorly or there are no well-defined property rights, externalities frequently result, especially when it comes to common pool resources. Due to their rivalrous usage and non-excludability, common pool resources including fisheries, forests, and grazing areas are vulnerable to abuse and deterioration when access is unrestrained. Without clearly defined property rights or efficient management structures, people or organizations may misuse common pool resources without thinking through the long-term effects, which might have detrimental externalities on other users and society at large. This phenomenon—famously referred to by Garrett Hardin as the "tragedy of the commons"—highlights people's propensity to put their immediate self-interests ahead of the sustainability of shared resources.[52] Imagine, for instance, that there are no rules or limits in place and that several fishers have access to a single fishing area. In order to maintain their way of life and earn income, fishers are motivated to maximize their catches, which eventually causes overfishing and the depletion of fish populations. Fish populations decrease, and as a result, ecosystems are irritated, and the fishing industry experiences financial losses. These consequences have an adverse effect on subsequent generations and other people who depend on the resource. 
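The overfishing logic just described can be illustrated with a standard open-access model. The functional forms and numbers below are hypothetical, chosen only to make the gap between individually rational and socially optimal effort concrete; they are not taken from the article.

```python
# Illustrative open-access fishery (a common-pool resource); all numbers are
# hypothetical and chosen only to make the mechanism visible.
# Total catch value from total fishing effort E:  V(E) = a*E - b*E**2
# Each unit of effort costs c.

a, b, c = 10.0, 0.1, 2.0

def rent(effort):
    """Net value created by the fishery at a given total effort."""
    return (a * effort - b * effort**2) - c * effort

# Social optimum: expand effort until its *marginal* value equals its cost,
#   a - 2*b*E = c.
effort_social = (a - c) / (2 * b)

# Open access: nobody owns the stock, so boats keep entering until the *average*
# return per unit of effort equals its cost, V(E)/E = a - b*E = c. Each entrant
# ignores the external cost it imposes on everyone else's catch.
effort_open = (a - c) / b

print(f"socially optimal effort: {effort_social:.0f}, rent = {rent(effort_social):.0f}")
print(f"open-access effort:      {effort_open:.0f}, rent = {rent(effort_open):.0f}")
# With these numbers open access doubles the effort and dissipates the rent to zero.
```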
Nevertheless, the reduction of externalities linked to resources in common pools frequently necessitates the adoption of collaborative management approaches, like community-based management frameworks, tradable permits, and quotas. Communities can lessen the tragedy of the commons and encourage sustainable resource use and conservation for the benefit of current and future generations by establishing property rights or controlling access to shared resources.[53] Another common cause of externalities is the presence oftransaction costs.[54]Transaction costs are the cost of making an economic trade. These costs prevent economic agents from making exchanges they should be making. The costs of the transaction outweigh the benefit to the agent. When not all mutually beneficial exchanges occur in a market, that market is inefficient. Without transaction costs, agents could freely negotiate and internalize all externalities. In order to further understand transactional costs, it is crucial to discuss Ronald Coase's methodologies. The standard theory of externalities, which holds that internalizing external costs or benefits requires government action through measures like Pigovian taxes or regulations, has been challenged by Coase. He presents the idea of transaction costs, which include the expenses related to reaching, upholding, and keeping an eye on agreements between parties. In the existence of externalities, transaction costs may hinder the effectiveness of private bargaining and result in worse-than-ideal results, according to Coase. He does, however, contend that private parties can establish mutually advantageous arrangements to internalize externalities without the involvement of the government, provided that there are minimal transaction costs and clearly defined property rights. Nevertheless, Coase uses the example of the distribution of property rights between a farmer and a rancher to support his claims. Assume there is a negative externality because the farmer's crops are harmed by the rancher's livestock. In a society where property rights are well-defined and transaction costs are minimal, the farmer and rancher can work out a voluntary agreement to settle the dispute. For example, the farmer may invest in preventive measures to lessen the impact, or the rancher could pay the farmer back for the harm the cattle caused. Coase's approach emphasizes how crucial it is to take property rights and transaction costs into account when managing externalities. He highlights that voluntary transactions between private parties can allow private parties to internalise externalities and that property rights distribution and transaction cost reduction can help make this possible.[55] There are several general types of solutions to the problem of externalities, including both public- and private-sector resolutions: APigovian tax(also called Pigouvian tax, after economist Arthur C. Pigou) is a tax imposed that is equal in value to the negative externality. In order to fully correct the negative externality, the per unit tax should equal the marginal external cost.[57]The result is that the market outcome would be reduced to the efficient amount. A side effect is that revenue is raised for the government, reducing the amount ofdistortionarytaxes that the government must impose elsewhere. 
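A small numerical sketch, using made-up linear curves rather than anything from the article, shows how a per-unit tax equal to the marginal external cost moves the market from the private equilibrium quantity to the efficient one.

```python
# Illustrative linear market with a constant marginal external cost (MEC).
# All numbers are hypothetical.
#   marginal private benefit (inverse demand): MPB(Q) = a - b*Q
#   marginal private cost   (inverse supply):  MPC(Q) = c + d*Q
#   marginal social cost:                      MSC(Q) = MPC(Q) + MEC

a, b = 100.0, 1.0   # demand intercept and slope
c, d = 10.0, 1.0    # supply intercept and slope
mec = 20.0          # constant external cost per unit (e.g. pollution damage)

# Unregulated market: participants equate MPB and MPC.
q_private = (a - c) / (b + d)

# Efficient outcome: MPB = MSC, i.e. the external cost is counted as well.
q_social = (a - c - mec) / (b + d)

# A Pigouvian tax equal to the MEC shifts the private supply curve up by `mec`,
# so the taxed market equilibrium coincides with the efficient quantity.
q_taxed = (a - (c + mec)) / (b + d)

# Welfare loss of ignoring the externality: the triangle between MSC and MPB
# over the overproduced units (exact here because both curves are linear).
deadweight_loss = 0.5 * mec * (q_private - q_social)

print(f"Q_p (market)    = {q_private:.1f}")
print(f"Q_s (efficient) = {q_social:.1f}")
print(f"Q with tax=MEC  = {q_taxed:.1f}")   # equals Q_s
print(f"deadweight loss of the unregulated market = {deadweight_loss:.1f}")
```

With these numbers the unregulated market produces 45 units, the efficient quantity is 35, and a tax of 20 per unit (the assumed marginal external cost) brings output down to 35, which is the sense in which the tax "bridges the gap" described in the surrounding text.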
Governments justify the use of Pigovian taxes saying that these taxes help the market reach an efficient outcome because this tax bridges the gap between marginal social costs and marginal private costs.[58] Some arguments against Pigovian taxes say that the tax does not account for all the transfers and regulations involved with an externality. In other words, the tax only considers the amount of externality produced.[59]Another argument against the tax is that it does not take private property into consideration. Under the Pigovian system, one firm, for example, can be taxed more than another firm, even though the other firm is actually producing greater amounts of the negative externality.[60] Further arguments against Pigou disagree with his assumption every externality has someone at fault or responsible for the damages.[61]Coase argues that externalities are reciprocal in nature. Both parties must be present for an externality to exist. He uses the example of two neighbors. One neighbor possesses a fireplace, and often lights fires in his house without issue. Then one day, the other neighbor builds a wall that prevents the smoke from escaping and sends it back into the fire-building neighbor’s home. This illustrates the reciprocal nature of externalities. Without the wall, the smoke would not be a problem, but without the fire, the smoke would not exist to cause problems in the first place. Coase also takes issue with Pigou’s assumption of a “benevolent despot” government. Pigou assumes the government’s role is to see the external costs or benefits of a transaction and assign an appropriate tax or subsidy. Coase argues that the government faces costs and benefits just like any other economic agent, so other factors play into its decision-making. However, the most common type of solution is a tacit agreement through the political process. Governments are elected to represent citizens and to strike political compromises between various interests. Normally governments pass laws and regulations to address pollution and other types of environmental harm. These laws and regulations can take the form of "command and control" regulation (such as enforcing standards and limitingprocess variables), orenvironmental pricing reform(such asecotaxesor other Pigovian taxes,tradable pollution permitsor the creation of markets for ecological services). The second type of resolution is a purely private agreement between the parties involved. Government intervention might not always be needed. Traditional ways of life may have evolved as ways to deal with external costs and benefits. Alternatively, democratically run communities can agree to deal with these costs and benefits in an amicable way. Externalities can sometimes be resolved by agreement between the parties involved. This resolution may even come about because of the threat of government action. The use of taxes and subsidies in solving the problem of externalities Correction tax, respectively subsidy, means essentially any mechanism that increases, respectively decreases, the costs (and thus price) associated with the activities of an individual or company.[62] The private-sector may sometimes be able to drive society to the socially optimal resolution.Ronald Coaseargued that an efficient outcome can sometimes be reached without government intervention. 
Some take this argument further, and make the political argument that government should restrict its role to facilitating bargaining among the affected groups or individuals and to enforcing any contracts that result. This result, often known as theCoase theorem, requires that If all of these conditions apply, the private parties can bargain to solve the problem of externalities. The second part of theCoase theoremasserts that, when these conditions hold, whoever holds the property rights, aPareto efficientoutcome will be reached through bargaining. This theorem would not apply to the steel industry case discussed above. For example, with a steel factory that trespasses on the lungs of a large number of individuals with pollution, it is difficult if not impossible for any one person to negotiate with the producer, and there are large transaction costs. Hence the most common approach may be to regulate the firm (by imposing limits on the amount of pollution considered "acceptable") while paying for the regulation and enforcement withtaxes. The case of the vaccinations would also not satisfy the requirements of the Coase theorem. Since the potential external beneficiaries of vaccination are the people themselves, the people would have to self-organize to pay each other to be vaccinated. But such an organization that involves the entire populace would be indistinguishable from government action. In some cases, the Coase theorem is relevant. For example, if aloggeris planning to clear-cut aforestin a way that has a negative impact on a nearbyresort, the resort-owner and the logger could, in theory, get together to agree to a deal. For example, the resort-owner could pay the logger not to clear-cut – or could buy the forest. The most problematic situation, from Coase's perspective, occurs when the forest literally does not belong to anyone, or in any example in which there are not well-defined and enforceable property rights; the question of "who" owns the forest is not important, as any specific owner will have an interest in coming to an agreement with the resort owner (if such an agreement is mutually beneficial). However, the Coase theorem is difficult to implement because Coase does not offer a negotiation method.[63]Moreover, Coasian solutions are unlikely to be reached due to the possibility of running into theassignment problem, theholdout problem, thefree-rider problem, ortransaction costs. Additionally, firms could potentially bribe each other since there is little to no government interaction under the Coase theorem.[64]For example, if one oil firm has a high pollution rate and its neighboring firm is bothered by the pollution, then the latter firm may move depending on incentives. Thus, if the oil firm were to bribe the second firm, the first oil firm would suffer no negative consequences because the government would not know about the bribing. In a dynamic setup, Rosenkranz and Schmitz (2007) have shown that the impossibility to rule out Coasean bargaining tomorrow may actually justify Pigouvian intervention today.[65]To see this, note that unrestrained bargaining in the future may lead to an underinvestment problem (the so-calledhold-up problem). 
Specifically, when investments are relationship-specific and non-contractible, then insufficient investments will be made when it is anticipated that parts of the investments’ returns will go to the trading partner in future negotiations (see Hart and Moore, 1988).[66]Hence, Pigouvian taxation can be welfare-improving precisely because Coasean bargaining will take place in the future. Antràs and Staiger (2012) make a related point in the context of international trade.[67] Kenneth Arrow suggests another private solution to the externality problem.[68]He believes setting up a market for the externality is the answer. For example, suppose a firm produces pollution that harms another firm. A competitive market for the right to pollute may allow for an efficient outcome. Firms could bid the price they are willing to pay for the amount they want to pollute, and then have the right to pollute that amount without penalty. This would allow firms to pollute at the amount where the marginal cost of polluting equals the marginal benefit of another unit of pollution, thus leading to efficiency. Frank Knight also argued against government intervention as the solution to externalities.[69]He proposed that externalities could be internalized with privatization of the relevant markets. He uses the example of road congestion to make his point. Congestion could be solved through the taxation of public roads. Knight shows that government intervention is unnecessary if roads were privately owned instead. If roads were privately owned, their owners could set tolls that would reduce traffic and thus congestion to an efficient level. This argument forms the basis of the traffic equilibrium. This argument supposes that two points are connected by two different highways. One highway is in poor condition, but is wide enough to fit all traffic that desires to use it. The other is a much better road, but has limited capacity. Knight argues that, if a large number of vehicles operate between the two destinations and have freedom to choose between the routes, they will distribute themselves in proportions such that the cost per unit of transportation will be the same for every truck on both highways. This is true because as more trucks use the narrow road, congestion develops and as congestion increases it becomes equally profitable to use the poorer highway. This solves the externality issue without requiring any government tax or regulations. The negative effect of carbon emissions and othergreenhouse gasesproduced in production exacerbate the numerous environmental and human impacts of anthropogenic climate change. These negative effects are not reflected in the cost of producing, nor in the market price of the final goods. There are many public and private solutions proposed to combat this externality An emissions fee, orcarbon tax, is a tax levied on each unit of pollution produced in the production of a good or service. The tax incentivised producers to either lower their production levels or to undertake abatement activities that reduce emissions by switching to cleaner technology or inputs.[70] The cap-and-trade system enables the efficient level of pollution (determined by the government) to be achieved by setting a total quantity of emissions and issuing tradable permits to polluting firms, allowing them to pollute a certain share of the permissible level. 
Permits will be traded from firms that have low abatement costs to firms with higher abatement costs, and therefore the system is cost-effective. The cap-and-trade system has some practical advantages over an emissions fee: (1) it reduces uncertainty about the ultimate pollution level; (2) if firms are profit-maximizing, they will use cost-minimizing technology to attain the standard, which is efficient for individual firms and provides incentives for the research and development market to innovate; and (3) the market price of pollution rights keeps pace with the price level while the economy experiences inflation. The emissions fee and the cap-and-trade system are both incentive-based approaches to solving a negative externality problem. Command-and-control regulations act as an alternative to the incentive-based approach. They require a set quantity of pollution reduction and can take the form of either a technology standard or a performance standard. A technology standard requires pollution-producing firms to use specified technology. While it may reduce the pollution, it is not cost-effective and stifles innovation, since firms have little incentive to research and develop technology that would work better than the mandated one. Performance standards set emissions goals for each polluting firm. The firm's freedom to determine how to reach the desired emissions level makes this option slightly more efficient than the technology standard; however, it is not as cost-effective as the cap-and-trade system, since the burden of emissions reduction cannot be shifted to firms with lower abatement costs.[71] A 2020 scientific analysis of the external climate costs of foods indicates that external greenhouse gas costs are typically highest for animal-based products – conventional and organic to about the same extent within that ecosystem subdomain – followed by conventional dairy products, and lowest for organic plant-based foods. It concludes that contemporary monetary evaluations are "inadequate" and that policy measures leading to reductions of these costs are possible, appropriate and urgent.[73][74][72] Ecological economics criticizes the concept of externality on the grounds that it involves too little systems thinking and integration of different sciences. Ecological economics is founded upon the view that the neoclassical economics (NCE) assumption that environmental and community costs and benefits are mutually cancelling "externalities" is not warranted. Joan Martinez Alier,[75] for instance, shows that the bulk of consumers are automatically excluded from having an impact upon the prices of commodities, as these consumers are future generations who have not been born yet.
The assumptions behind future discounting, which assume that future goods will be cheaper than present goods, has been criticized byFred Pearce[76]and by theStern Report(although the Stern report itself does employ discounting and has been criticized for this and other reasons by ecological economists such asClive Spash).[77] Concerning these externalities, some, like the eco-businessmanPaul Hawken, argue an orthodox economic line that the only reason why goods produced unsustainably are usually cheaper than goods produced sustainably is due to a hidden subsidy, paid by the non-monetized human environment, community or future generations.[78]These arguments are developed further by Hawken,AmoryandHunter Lovinsto promote their vision of an environmental capitalist utopia inNatural Capitalism: Creating the Next Industrial Revolution.[79] In contrast, ecological economists, like Joan Martinez-Alier, appeal to a different line of reasoning.[80]Rather than assuming some (new) form of capitalism is the best way forward, an older ecological economic critique questions the very idea of internalizing externalities as providing some corrective to the current system. The work byKarl William Kapp[81]argues that the concept of "externality" is a misnomer.[82]In fact the modern business enterprise operates on the basis of shifting costs onto others as normal practice to make profits.[83]Charles Eisensteinhas argued that this method of privatising profits while socialising the costs through externalities, passing the costs to the community, to the natural environment or to future generations is inherently destructive.[84]Social ecological economist Clive Spash argues that externality theory fallaciously assumes environmental and social problems are minor aberrations in an otherwise perfectly functioning efficient economic system.[85]Internalizing the odd externality does nothing to address the structural systemic problem and fails to recognize the all pervasive nature of these supposed 'externalities'. This is precisely why heterodox economists argue for a heterodox theory of social costs to effectively prevent the problem through the precautionary principle.[86]
https://en.wikipedia.org/wiki/Externality
Free willis the capacity or ability tochoosebetween different possible courses ofaction.[1]There are different theories as to its nature. Free will is closely linked to the concepts ofmoral responsibility,praise,culpability, and other judgements which apply only to actions that are freely chosen. It is also connected with the concepts ofadvice,persuasion,deliberation, andprohibition. Traditionally, only actions that are freelywilledare seen as deserving credit or blame. Whether free will exists and the implications of whether it exists or not constitute some of the longest running debates of philosophy. Some conceive of free will as the ability to act beyond the limits of external influences or wishes. Some conceive free will to be the capacity to make choices undetermined by past events.Determinismsuggests that only one course of events is possible, which is inconsistent with a libertarian model of free will.[2]Ancient Greek philosophyidentified this issue,[3]which remains a major focus of philosophical debate. The view that posits free will as incompatible with determinism is calledincompatibilismand encompasses bothmetaphysical libertarianism(the claim that determinism is false and thus free will is at least possible) andhard determinism(the claim that determinism is true and thus free will is not possible). Another incompatibilist position ishard incompatibilism, which holds not only determinism but alsoindeterminismto be incompatible with free will and thus free will to be impossible whatever the case may be regarding determinism. In contrast,compatibilistshold that free williscompatible with determinism. Some compatibilists even hold that determinism isnecessaryfor free will, arguing that choice involves preference for one course of action over another, requiring a sense ofhowchoices will turn out.[4][5]Compatibilists thus consider the debate between libertarians and hard determinists over free will vs. determinism afalse dilemma.[6]Different compatibilists offer very different definitions of what "free will" means and consequently find different types of constraints to be relevant to the issue. Classical compatibilists considered free will nothing more than freedom of action, considering one free of will simply if,hadone counterfactually wanted to do otherwise, onecouldhave done otherwise without physical impediment. Many contemporary compatibilists instead identify free will as a psychological capacity, such as to direct one's behavior in a way responsive to reason, and there are still further different conceptions of free will, each with their own concerns, sharing only the common feature of not finding the possibility of determinism a threat to the possibility of free will.[7] The problem of free will has been identified inancient Greek philosophicalliterature. The notion of compatibilist free will has been attributed to bothAristotle(4th century BCE) andEpictetus(1st century CE): "it was the fact that nothing hindered us from doing or choosing something that made us have control over them".[3][8]According toSusanne Bobzien, the notion of incompatibilist free will is perhaps first identified in the works ofAlexander of Aphrodisias(3rd century CE): "what makes us have control over things is the fact that we are causally undetermined in our decision and thus can freely decide between doing/choosing or not doing/choosing them". The term "free will" (liberum arbitrium) was introduced by Christian philosophy (4th century CE). 
It has traditionally meant (untilthe Enlightenmentproposed its own meanings) lack of necessity in human will,[9]so that "the will is free" meant "the will does not have to be such as it is". This requirement was universally embraced by both incompatibilists and compatibilists.[10] The underlying questions are whether we have control over our actions, and if so, what sort of control, and to what extent. These questions predate the early Greekstoics(for example,Chrysippus), and some modern philosophers lament the lack of progress over all these centuries.[11][12] On one hand, humans have a strong sense of freedom, which leads them to believe that they have free will.[13][14]On the other hand, an intuitive feeling of free will could be mistaken.[15][16] It is difficult to reconcile the intuitive evidence that conscious decisions are causally effective with the view that the physical world can be explained entirely byphysical law.[17]The conflict between intuitively felt freedom and natural law arises when eithercausal closureorphysical determinism(nomological determinism) is asserted. With causal closure, no physical event has a cause outside the physical domain, and with physical determinism, the future is determined entirely by preceding events (cause and effect). The puzzle of reconciling 'free will' with a deterministic universe is known as theproblem of free willor sometimes referred to as thedilemma of determinism.[18]This dilemma leads to amoraldilemma as well: the question of how to assignresponsibilityfor actions if they are caused entirely by past events.[19][20] Compatibilists maintain that mental reality is not of itself causally effective.[21][22]Classicalcompatibilistshave addressed the dilemma of free will by arguing that free will holds as long as humans are not externally constrained or coerced.[23]Modern compatibilists make a distinction between freedom of will and freedom ofaction, that is, separatingfreedom of choicefrom the freedom to enact it.[24]Given that humans all experience a sense of free will, some modern compatibilists think it is necessary to accommodate this intuition.[25][26]Compatibilists often associate freedom of will with theabilityto make rational decisions. A different approach to the dilemma is that ofincompatibilists, namely, that if the world is deterministic, then our feeling that we are free to choose an action is simply anillusion.Metaphysical libertarianismis the form of incompatibilism which posits thatdeterminismis false and free will is possible (at least some people have free will).[27]This view is associated withnon-materialistconstructions,[15]including both traditionaldualism, as well as models supporting more minimal criteria; such as the ability to consciously veto an action or competing desire.[28][29]Yet even withphysical indeterminism, arguments have been made against libertarianism in that it is difficult to assignOrigination(responsibility for "free" indeterministic choices). Free will here is predominantly treated with respect tophysical determinismin the strict sense ofnomological determinism, although other forms of determinism are also relevant to free will.[30]For example, logical andtheological determinismchallenge metaphysical libertarianism with ideas ofdestinyandfate, andbiological,culturalandpsychologicaldeterminism feed the development of compatibilist models. 
Separate classes of compatibilism and incompatibilism may even be formed to represent these.[31] Below are the classic arguments bearing upon the dilemma and its underpinnings. Incompatibilism is the position that free will and determinism are logically incompatible, and that the major question regarding whether or not people have free will is thus whether or not their actions are determined. "Hard determinists", such asd'Holbach, are those incompatibilists who accept determinism and reject free will. In contrast, "metaphysical libertarians", such asThomas Reid,Peter van Inwagen, andRobert Kane, are those incompatibilists who accept free will and deny determinism, holding the view that some form of indeterminism is true.[32]Another view is that of hard incompatibilists, which state that free will is incompatible with bothdeterminismandindeterminism.[33] Traditional arguments for incompatibilism are based on an "intuition pump": if a person is like other mechanical things that are determined in their behavior such as a wind-up toy, a billiard ball, a puppet, or a robot, then people must not have free will.[32][34]This argument has been rejected by compatibilists such asDaniel Dennetton the grounds that, even if humans have something in common with these things, it remains possible and plausible that we are different from such objects in important ways.[35] Another argument for incompatibilism is that of the "causal chain". Incompatibilism is key to the idealist theory of free will. Most incompatibilists reject the idea that freedom of action consists simply in "voluntary" behavior. They insist, rather, that free will means that someone must be the "ultimate" or "originating" cause of his actions. They must becausa sui, in the traditional phrase. Being responsible for one's choices is the first cause of those choices, where first cause means that there is no antecedent cause of that cause. The argument, then, is that if a person has free will, then they are the ultimate cause of their actions. If determinism is true, then all of a person's choices are caused by events and facts outside their control. So, if everything someone does is caused by events and facts outside their control, then they cannot be the ultimate cause of their actions. Therefore, they cannot have free will.[36][37][38]This argument has also been challenged by various compatibilist philosophers.[39][40] A third argument for incompatibilism was formulated byCarl Ginetin the 1960s and has received much attention in the modern literature. The simplified argument runs along these lines: if determinism is true, then we have no control over the events of the past that determined our present state and no control over the laws of nature. Since we can have no control over these matters, we also can have no control over theconsequencesof them. Since our present choices and acts, under determinism, are the necessary consequences of the past and the laws of nature, then we have no control over them and, hence, no free will. This is called theconsequence argument.[41][42]Peter van Inwagenremarks that C.D. Broad had a version of the consequence argument as early as the 1930s.[43] The difficulty of this argument for some compatibilists lies in the fact that it entails the impossibility that one could have chosen other than one has. For example, if Jane is a compatibilist and she has just sat down on the sofa, then she is committed to the claim that she could have remained standing, if she had so desired. 
But it follows from the consequence argument that, if Jane had remained standing, she would have either generated a contradiction, violated the laws of nature or changed the past. Hence, compatibilists are committed to the existence of "incredible abilities", according to Ginet and van Inwagen. One response to this argument is that it equivocates on the notions of abilities and necessities, or that the free will evoked to make any given choice is really an illusion and the choice had been made all along, oblivious to its "decider".[42] David Lewis suggests that compatibilists are only committed to the ability to do something otherwise if different circumstances had actually obtained in the past.[44]

Using T and F for "true" and "false" and ? for undecided, there are exactly nine positions regarding determinism and free will, one for each way of assigning one of these three possibilities to each of the two theses.[45] Incompatibilism may occupy any of the nine positions except (5), (8) or (3), which last corresponds to soft determinism. Position (1) is hard determinism, and position (2) is libertarianism. The position (1) of hard determinism adds to the table the contention that D implies FW is untrue, and the position (2) of libertarianism adds the contention that FW implies D is untrue. Position (9) may be called hard incompatibilism if one interprets ? as meaning both concepts are of dubious value. Compatibilism itself may occupy any of the nine positions; that is, there is no logical contradiction between determinism and free will, and either or both may be true or false in principle. However, the most common meaning attached to compatibilism is that some form of determinism is true and yet we have some form of free will, position (3).[46]

Alex Rosenberg makes an extrapolation of physical determinism, as inferred on the macroscopic scale by the behaviour of a set of dominoes, to neural activity in the brain: "If the brain is nothing but a complex physical object whose states are as much governed by physical laws as any other physical object, then what goes on in our heads is as fixed and determined by prior events as what goes on when one domino topples another in a long row of them."[47] Physical determinism is currently disputed by prominent interpretations of quantum mechanics, and while not necessarily representative of intrinsic indeterminism in nature, fundamental limits of precision in measurement are inherent in the uncertainty principle.[48] The relevance of such prospective indeterminate activity to free will is, however, contested,[49] even when chaos theory is introduced to magnify the effects of such microscopic events.[29][50] Below, these positions are examined in more detail.[45]

Determinism can be divided into causal, logical and theological determinism.[51] Corresponding to each of these different meanings, there arises a different problem for free will.[52] Hard determinism is the claim that determinism is true, and that it is incompatible with free will, so free will does not exist.
Although hard determinism generally refers tonomological determinism(see causal determinism below), it can include all forms of determinism that necessitate the future in its entirety.[53]Relevant forms of determinism include: Other forms of determinism are more relevant to compatibilism, such asbiological determinism, the idea that all behaviors, beliefs, and desires are fixed by our genetic endowment and our biochemical makeup, the latter of which is affected by both genes and environment,cultural determinismandpsychological determinism.[52]Combinations and syntheses of determinist theses, such as bio-environmental determinism, are even more common. Suggestions have been made that hard determinism need not maintain strict determinism, where something near to, like that informally known asadequate determinism, is perhaps more relevant.[30]Despite this, hard determinism has grown less popular in present times, given scientific suggestions that determinism is false – yet the intention of their position is sustained by hard incompatibilism.[27] One kind of incompatibilism, metaphysical libertarianism holds onto a concept of free will that requires that theagentbe able to take more than one possible course of action under a given set of circumstances.[62] Accounts of libertarianism subdivide into non-physical theories and physical or naturalistic theories. Non-physical theories hold that the events in the brain that lead to the performance of actions do not have an entirely physical explanation, which requires that the world is not closed under physics. This includesinteractionist dualism, which claims that some non-physicalmind, will, orsouloverrides physicalcausality. Physical determinism implies there is only one possible future and is therefore not compatible with libertarian free will. As consequent of incompatibilism, metaphysical libertarian explanations that do not involve dispensing withphysicalismrequire physical indeterminism, such as probabilistic subatomic particle behavior – theory unknown to many of the early writers on free will. Incompatibilist theories can be categorised based on the type of indeterminism they require; uncaused events, non-deterministically caused events, and agent/substance-caused events.[59] Non-causal accounts of incompatibilist free will do not require a free action to be caused by either an agent or a physical event. They either rely upon a world that is not causally closed, or physical indeterminism. Non-causal accounts often claim that each intentional action requires a choice or volition – a willing, trying, or endeavoring on behalf of the agent (such as the cognitive component of lifting one's arm).[63][64]Such intentional actions are interpreted as free actions. It has been suggested, however, that such acting cannot be said to exercise control over anything in particular. According to non-causal accounts, the causation by the agent cannot be analysed in terms of causation by mental states or events, including desire, belief, intention of something in particular, but rather is considered a matter of spontaneity and creativity. The exercise of intent in such intentional actions is not that which determines their freedom – intentional actions are rather self-generating. 
The "actish feel" of some intentional actions do not "constitute that event's activeness, or the agent's exercise of active control", rather they "might be brought about by direct stimulation of someone's brain, in the absence of any relevant desire or intention on the part of that person".[59]Another question raised by such non-causal theory, is how an agent acts upon reason, if the said intentional actions are spontaneous. Some non-causal explanations involve invokingpanpsychism, the theory that a quality ofmindis associated with all particles, and pervades the entire universe, in both animate and inanimate entities. Event-causal accounts of incompatibilist free will typically rely upon physicalist models of mind (like those of the compatibilist), yet they presuppose physical indeterminism, in which certain indeterministic events are said to be caused by the agent. A number of event-causal accounts of free will have been created, referenced here asdeliberative indeterminism,centred accounts, andefforts of will theory.[59]The first two accounts do not require free will to be a fundamental constituent of the universe. Ordinary randomness is appealed to as supplying the "elbow room" that libertarians believe necessary. A first common objection to event-causal accounts is that the indeterminism could be destructive and could therefore diminish control by the agent rather than provide it (related to the problem of origination). A second common objection to these models is that it is questionable whether such indeterminism could add any value to deliberation over that which is already present in a deterministic world. Deliberative indeterminismasserts that the indeterminism is confined to an earlier stage in the decision process.[65][66]This is intended to provide an indeterminate set of possibilities to choose from, while not risking the introduction ofluck(random decision making). The selection process is deterministic, although it may be based on earlier preferences established by the same process. Deliberative indeterminism has been referenced byDaniel Dennett[67]andJohn Martin Fischer.[68]An obvious objection to such a view is that an agent cannot be assigned ownership over their decisions (or preferences used to make those decisions) to any greater degree than that of a compatibilist model. Centred accountspropose that for any given decision between two possibilities, the strength of reason will be considered for each option, yet there is still a probability the weaker candidate will be chosen.[60][69][70][71][72][73][74]An obvious objection to such a view is that decisions are explicitly left up to chance, and origination or responsibility cannot be assigned for any given decision. Efforts of will theoryis related to the role of will power in decision making. It suggests that the indeterminacy of agent volition processes could map to the indeterminacy of certain physical events – and the outcomes of these events could therefore be considered caused by the agent. Models ofvolitionhave been constructed in which it is seen as a particular kind of complex, high-level process with an element of physical indeterminism. 
An example of this approach is that ofRobert Kane, where he hypothesizes that "in each case, the indeterminism is functioning as a hindrance or obstacle to her realizing one of her purposes – a hindrance or obstacle in the form of resistance within her will which must be overcome by effort."[29]According to Robert Kane such "ultimate responsibility" is a required condition for free will.[75]An important factor in such a theory is that the agent cannot be reduced to physical neuronal events, but rather mental processes are said to provide an equally valid account of the determination of outcome as their physical processes (seenon-reductive physicalism). Although at the timequantum mechanics(and physicalindeterminism) was only in the initial stages of acceptance, in his bookMiracles: A preliminary studyC.S. Lewis stated the logical possibility that if the physical world were proved indeterministic this would provide an entry point to describe an action of a non-physical entity on physical reality.[76]Indeterministicphysical models (particularly those involvingquantum indeterminacy) introduce random occurrences at an atomic or subatomic level. These events might affect brain activity, and could seemingly allowincompatibilistfree will if the apparent indeterminacy of some mental processes (for instance, subjective perceptions of control in consciousvolition) map to the underlying indeterminacy of the physical construct. This relationship, however, requires a causative role over probabilities that is questionable,[77]and it is far from established that brain activity responsible for human action can be affected by such events. Secondarily, these incompatibilist models are dependent upon the relationship between action and conscious volition, as studied in theneuroscience of free will. It is evident that observation may disturb the outcome of the observation itself, rendering limited our ability to identify causality.[48]Niels Bohr, one of the main architects of quantum theory, suggested, however, that no connection could be made between indeterminism of nature and freedom of will.[49] Agent/substance-causal accounts of incompatibilist free will rely upon substance dualism in their description of mind. The agent is assumed power to intervene in the physical world.[78][79][80][81][82][83][84][85]Agent (substance)-causal accounts have been suggested by bothGeorge Berkeley[86]andThomas Reid.[87]It is required that what the agent causes is not causally determined by prior events. It is also required that the agent's causing of that event is not causally determined by prior events. A number of problems have been identified with this view. Firstly, it is difficult to establish the reason for any given choice by the agent, which suggests they may be random or determined byluck(without an underlying basis for the free will decision). Secondly, it has been questioned whether physical events can be caused by an external substance or mind – a common problem associated withinteractionalist dualism. Hard incompatibilism is the idea that free will cannot exist, whether the world is deterministic or not.Derk Pereboomhas defended hard incompatibilism, identifying a variety of positions where free will is irrelevant to indeterminism/determinism, among them the following: Pereboom calls positions 3 and 4soft determinism, position 1 a form ofhard determinism, position 6 a form ofclassical libertarianism, and any position that includes having F ascompatibilism. 
John Lockedenied that the phrase "free will" made any sense (compare withtheological noncognitivism, a similar stance on theexistence of God). He also took the view that the truth of determinism was irrelevant. He believed that the defining feature of voluntary behavior was that individuals have the ability topostponea decision long enough to reflect or deliberate upon the consequences of a choice: "...the will in truth, signifies nothing but a power, or ability, to prefer or choose".[88] The contemporary philosopherGalen Strawsonagrees with Locke that the truth or falsity of determinism is irrelevant to the problem.[89]He argues that the notion of free will leads to an infinite regress and is therefore senseless. According to Strawson, if one is responsible for what one does in a given situation, then one must be responsible for the way one is in certain mental respects. But it is impossible for one to be responsible for the way one is in any respect. This is because to be responsible in some situationS, one must have been responsible for the way one was atS−1. To be responsible for the way one was atS−1, one must have been responsible for the way one was atS−2, and so on. At some point in the chain, there must have been an act of origination of a new causal chain. But this is impossible. Man cannot create himself or his mental statesex nihilo. This argument entails that free will itself is absurd, but not that it is incompatible with determinism. Strawson calls his own view "pessimism" but it can be classified ashard incompatibilism.[89] Causal determinism is the concept thateventswithin a givenparadigmare bound bycausalityin such a way that any state (of an object or event) is completely determined by prior states. Causal determinism proposes that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. Causal determinists believe that there is nothing uncaused orself-caused. The most common form of causal determinism is nomological determinism (or scientific determinism), the notion that the past and the present dictate the future entirely and necessarily by rigid natural laws, that every occurrence results inevitably from prior events.Quantum mechanicsposes a serious challenge to this view. Fundamental debate continues over whether the physical universe is likely to bedeterministic. Although the scientific method cannot be used to rule outindeterminismwith respect to violations ofcausal closure, it can be used to identify indeterminism in natural law.Interpretations of quantum mechanicsat present are bothdeterministicandindeterministic, and are being constrained by ongoing experimentation.[90] Destiny or fate is a predetermined course of events. It may be conceived as a predetermined future, whether in general or of an individual. It is a concept based on the belief that there is a fixed natural order to the cosmos. Although often used interchangeably, the words "fate" and "destiny" have distinct connotations. Fategenerally implies there is a set course that cannot be deviated from, and over which one has no control. Fate is related todeterminism, but makes no specific claim of physical determinism. Even with physical indeterminism an event could still be fated externally (see for instancetheological determinism). Destiny likewise is related to determinism, but makes no specific claim of physical determinism. Even with physical indeterminism an event could still be destined to occur. 
Destinyimplies there is a set course that cannot be deviated from, but does not of itself make any claim with respect to the setting of that course (i.e., it does not necessarily conflict withincompatibilistfree will). Free will if existent could be the mechanism by which that destined outcome is chosen (determined to represent destiny).[91] Discussion regarding destiny does not necessitate the existence of supernatural powers. Logicaldeterminismor determinateness is the notion that all propositions, whether about the past, present, or future, are either true or false. This creates a unique problem for free will given that propositions about the future already have a truth value in the present (that is it is already determined as either true or false), and is referred to as theproblem of future contingents. Omniscienceis the capacity to know everything that there is to know (included in which are all future events), and is a property often attributed to a creator deity. Omniscience implies the existence of destiny. Some authors have claimed that free will cannot coexist with omniscience. One argument asserts that an omniscient creator not only implies destiny but a form of high levelpredeterminismsuch as hardtheological determinismorpredestination– that they have independently fixed all events and outcomes in the universe in advance. In such a case, even if an individual could have influence over their lower level physical system, their choices in regard to this cannot be their own, as is the case with libertarian free will. Omniscience features as anincompatible-properties argumentfor the existence ofGod, known as theargument from free will, and is closely related to other such arguments, for example the incompatibility ofomnipotencewith a good creator deity (i.e. if a deity knew what they were going to choose, then they are responsible for letting them choose it). Predeterminismis the idea that all events are determined in advance.[92][93]Predeterminism is thephilosophythat all events ofhistory, past, present and future, have been decided or are known (byGod,fate, or some other force), including human actions. Predeterminism is frequently taken to mean that human actions cannot interfere with (or have no bearing on) the outcomes of a pre-determined course of events, and that one's destiny was established externally (for example, exclusively by a creator deity). The concept of predeterminism is often argued by invokingcausal determinism, implying that there is an unbrokenchain of prior occurrencesstretching back to the origin of the universe. In the case of predeterminism, this chain of events has been pre-established, and human actions cannot interfere with the outcomes of this pre-established chain. Predeterminism can be used to mean such pre-established causal determinism, in which case it is categorised as a specific type ofdeterminism.[92][94]It can also be used interchangeably with causal determinism – in the context of its capacity to determine future events.[92][95]Despite this, predeterminism is often considered as independent of causal determinism.[96][97]The term predeterminism is also frequently used in the context of biology and heredity, in which case it represents a form ofbiological determinism.[98] The term predeterminism suggests not just a determining of all events, but the prior and deliberately conscious determining of all events (therefore done, presumably, by a conscious being). 
While determinism usually refers to a naturalistically explainable causality of events, predeterminism seems by definition to suggest a person or a "someone" who is controlling or planning the causality of events before they occur and who then perhaps resides beyond the natural, causal universe.Predestinationasserts that a supremely powerful being has indeed fixed all events and outcomes in the universe in advance, and is a famous doctrine of theCalvinistsinChristian theology. Predestination is often considered a form of hardtheological determinism. Predeterminism has therefore been compared tofatalism.[99]Fatalism is the idea that everything is fated to happen, so that humans have no control over their future. Theological determinismis a form ofdeterminismstating that all events that happen are pre-ordained, orpredestinedto happen, by amonotheisticdeity, or that they are destined to occur given itsomniscience. Two forms of theological determinism exist, here referenced as strong and weak theological determinism.[100] There exist slight variations on the above categorisation. Some claim that theological determinism requirespredestinationof all events and outcomes by the divinity (that is, they do not classify the weaker version as 'theological determinism' unless libertarian free will is assumed to be denied as a consequence), or that the weaker version does not constitute 'theological determinism' at all.[53]Theological determinism can also be seen as a form ofcausal determinism, in which the antecedent conditions are the nature and will of God.[54]With respect to free will and the classification of theological compatibilism/incompatibilism below, "theological determinism is the thesis that God exists and has infallible knowledge of all true propositions including propositions about our future actions," more minimal criteria designed to encapsulate all forms of theological determinism.[30] There are various implications formetaphysical libertarianfree will as consequent of theological determinism and its philosophical interpretation. The basic argument for theological fatalism in the case of weak theological determinism is as follows: This argument is very often accepted as a basis for theological incompatibilism: denying either libertarian free will or divine foreknowledge (omniscience) and therefore theological determinism. On the other hand, theological compatibilism must attempt to find problems with it. The formal version of the argument rests on a number of premises, many of which have received some degree of contention. Theological compatibilist responses have included: In the definition ofcompatibilismandincompatibilism, the literature often fails to distinguish between physical determinism and higher level forms of determinism (predeterminism, theological determinism, etc.) As such, hard determinism with respect to theological determinism (or "Hard Theological Determinism" above) might be classified as hard incompatibilism with respect to physical determinism (if no claim was made regarding the internal causality or determinism of the universe), or even compatibilism (if freedom from the constraint of determinism was not considered necessary for free will), if not hard determinism itself. 
By the same principle, metaphysical libertarianism (a form of incompatibilism with respect to physical determinism) might be classified as compatibilism with respect to theological determinism (if it was assumed such free will events were pre-ordained and therefore were destined to occur, but whose outcomes were not "predestined" or determined by God). If hard theological determinism is accepted (if it was assumed instead that such outcomes were predestined by God), then metaphysical libertarianism is not, however, possible, and would require reclassification (as hard incompatibilism for example, given that the universe is still assumed to be indeterministic – although the classification of hard determinism is technically valid also).[53]

The idea of free will is one aspect of the mind–body problem, that is, consideration of the relation between mind (for example, consciousness, memory, and judgment) and body (for example, the human brain and nervous system). Philosophical models of mind are divided into physical and non-physical expositions. Cartesian dualism holds that the mind is a nonphysical substance, the seat of consciousness and intelligence, and is not identical with physical states of the brain or body. It is suggested that although the two worlds do interact, each retains some measure of autonomy. Under Cartesian dualism, the external mind is responsible for bodily action, although unconscious brain activity is often caused by external events (for example, the instantaneous reaction to being burned).[107] Cartesian dualism implies that the physical world is not deterministic and that the external mind controls (at least some) physical events, providing an interpretation of incompatibilist free will. Stemming from Cartesian dualism, a formulation sometimes called interactionalist dualism suggests a two-way interaction: that some physical events cause some mental acts and some mental acts cause some physical events. One modern vision of the possible separation of mind and body is the "three-world" formulation of Popper.[108] Cartesian dualism and Popper's three worlds are two forms of what is called epistemological pluralism, that is, the notion that different epistemological methodologies are necessary to attain a full description of the world. Other forms of epistemological pluralist dualism include psychophysical parallelism and epiphenomenalism. Epistemological pluralism is one view in which the mind–body problem is not reducible to the concepts of the natural sciences.

A contrasting approach is called physicalism. Physicalism is a philosophical theory holding that everything that exists is no more extensive than its physical properties; that is, that there are no non-physical substances (for example, physically independent minds). Physicalism can be reductive or non-reductive. Reductive physicalism is grounded in the idea that everything in the world can actually be reduced analytically to its fundamental physical, or material, basis. Alternatively, non-reductive physicalism asserts that mental properties form a separate ontological class to physical properties: that mental states (such as qualia) are not ontologically reducible to physical states. Although one might suppose that mental states and neurological states are different in kind, that does not rule out the possibility that mental states are correlated with neurological states.
In one such construction,anomalous monism, mental eventssuperveneon physical events, describing theemergenceof mental properties correlated with physical properties – implying causal reducibility. Non-reductive physicalism is therefore often categorised asproperty dualismrather thanmonism, yet other types of property dualism do not adhere to the causal reducibility of mental states (see epiphenomenalism). Incompatibilismrequires a distinction between the mental and the physical, being a commentary on the incompatibility of (determined) physical reality and one's presumably distinct experience of will. Secondarily,metaphysical libertarianfree will must assert influence on physical reality, and where mind is responsible for such influence (as opposed to ordinary system randomness), it must be distinct from body to accomplish this. Both substance and property dualism offer such a distinction, and those particular models thereof that are not causally inert with respect to the physical world provide a basis for illustrating incompatibilist free will (i.e. interactionalist dualism and non-reductive physicalism). It has been noted that thelaws of physicshave yet to resolve thehard problem of consciousness:[109]"Solving the hard problem of consciousness involves determining how physiological processes such as ions flowing across the nerve membranecauseus to have experiences."[110]According to some, "Intricately related to the hard problem of consciousness, the hard problem of free will representsthecore problem of conscious free will: Does conscious volition impact the material world?"[15]Others however argue that "consciousnessplays a far smaller role in human life than Western culture has tended to believe."[111] Compatibilists maintain that determinism is compatible with free will. They believe freedom can be present or absent in a situation for reasons that have nothing to do with metaphysics. For instance,courts of lawmake judgments about whether individuals are acting under their own free will under certain circumstances without bringing in metaphysics. Similarly,political libertyis a non-metaphysical concept.[112]Likewise, some compatibilists define free will as freedom to act according to one's determined motives without hindrance from other individuals. So for example Aristotle in hisNicomachean Ethics,[113]and the Stoic Chrysippus.[114]In contrast, theincompatibilistpositions are concerned with a sort of "metaphysically free will", which compatibilists claim has never been coherently defined. Compatibilists argue that determinism does not matter; though they disagree among themselves about what, in turn,doesmatter. To be a compatibilist, one need not endorse any particular conception of free will, but only deny that determinism is at odds with free will.[115] Although there are various impediments to exercising one's choices, free will does not imply freedom of action. Freedom of choice (freedom to select one's will) is logically separate from freedom toimplementthat choice (freedom to enact one's will), although not all writers observe this distinction.[24]Nonetheless, some philosophers have defined free will as the absence of various impediments. Some "modern compatibilists", such asHarry FrankfurtandDaniel Dennett, argue free will is simply freely choosing to do what constraints allow one to do. 
In other words, a coerced agent's choices can still be free if such coercion coincides with the agent's personal intentions and desires.[35][116] Most "classical compatibilists", such as Thomas Hobbes, claim that a person is acting on the person's own will only when it is the desire of that person to do the act, and also possible for the person to do otherwise, if the person had so decided. Hobbes sometimes attributes such compatibilist freedom to each individual and not to some abstract notion of will, asserting, for example, that "no liberty can be inferred to the will, desire, or inclination, but the liberty of the man; which consisteth in this, that he finds no stop, in doing what he has the will, desire, or inclination to doe [sic]."[117] In articulating this crucial proviso, David Hume writes, "this hypothetical liberty is universally allowed to belong to every one who is not a prisoner and in chains."[118] Similarly, Voltaire, in his Dictionnaire philosophique, claimed that "Liberty then is only and can be only the power to do what one will." He asked, "would you have everything at the pleasure of a million blind caprices?" For him, free will or liberty is "only the power of acting, what is this power? It is the effect of the constitution and present state of our organs."

Compatibilism often regards the agent as free in virtue of their reason. Some explanations of free will focus on the internal causality of the mind with respect to higher-order brain processing – the interaction between conscious and unconscious brain activity.[119] Likewise, some modern compatibilists in psychology have tried to revive traditionally accepted struggles of free will with the formation of character.[120] Compatibilist free will has also been attributed to our natural sense of agency, where one must believe they are an agent in order to function and develop a theory of mind.[121][122]

The notion of levels of decision is presented in a different manner by Frankfurt.[116] Frankfurt argues for a version of compatibilism called the "hierarchical mesh". The idea is that an individual can have conflicting desires at a first-order level and also have a desire about the various first-order desires (a second-order desire) to the effect that one of the desires prevails over the others. A person's will is identified with their effective first-order desire, that is, the one they act on, and this will is free if it was the desire the person wanted to act upon, that is, the person's second-order desire was effective. So, for example, there are "wanton addicts", "unwilling addicts" and "willing addicts". All three groups may have the conflicting first-order desires to want to take the drug they are addicted to and to not want to take it. The first group, "wanton addicts", have no second-order desire not to take the drug. The second group, "unwilling addicts", have a second-order desire not to take the drug, while the third group, "willing addicts", have a second-order desire to take it. According to Frankfurt, the members of the first group are devoid of will and therefore are no longer persons. The members of the second group freely desire not to take the drug, but their will is overcome by the addiction. Finally, the members of the third group willingly take the drug they are addicted to. Frankfurt's theory can ramify to any number of levels.
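The structure of the three addict cases can be mocked up in a short sketch. This is only a toy rendering for illustration, not part of Frankfurt's own account; the Agent class, its field names, and the classify function are invented here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Agent:
    """Toy rendering of Frankfurt's hierarchy of desires (names invented)."""
    effective_first_order: str          # the desire the agent actually acts on
    second_order: Optional[str] = None  # which first-order desire the agent
                                        # wants to be effective, if any

def classify(agent: Agent) -> str:
    # Wanton: no second-order desire about which first-order desire should win.
    if agent.second_order is None:
        return "wanton (no second-order desire)"
    # Free will, on this rendering: the effective desire is the one endorsed
    # at the second order.
    if agent.second_order == agent.effective_first_order:
        return "willing (acts on the desire they want to act on)"
    return "unwilling (acts against their second-order desire)"

# The three addicts from the text: all act on the desire to take the drug.
print(classify(Agent("take drug")))                            # wanton addict
print(classify(Agent("take drug", second_order="take drug")))  # willing addict
print(classify(Agent("take drug", second_order="refrain")))    # unwilling addict
```

On this toy rendering, only the willing addict's will counts as free, because the desire that is effective is also the one endorsed at the second order; the unwilling addict acts against their second-order desire, and the wanton has no second-order desire at all.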
Critics of the theory point out that there is no certainty that conflicts will not arise even at the higher-order levels of desire and preference.[123]Others argue that Frankfurt offers no adequate explanation of how the various levels in the hierarchy mesh together.[124] InElbow Room, Dennett presents an argument for a compatibilist theory of free will, which he further elaborated in the bookFreedom Evolves.[125]The basic reasoning is that, if one excludes God, an infinitely powerfuldemon, and other such possibilities, then because ofchaosand epistemic limits on the precision of our knowledge of the current state of the world, the future is ill-defined for all finite beings. The only well-defined things are "expectations". The ability to do "otherwise" only makes sense when dealing with these expectations, and not with some unknown and unknowable future. According to Dennett, because individuals have the ability to act differently from what anyone expects, free will can exist.[125]Incompatibilists claim the problem with this idea is that we may be mere "automata responding in predictable ways to stimuli in our environment". Therefore, all of our actions are controlled by forces outside ourselves, or by random chance.[126]More sophisticated analyses of compatibilist free will have been offered, as have other critiques.[115] In the philosophy ofdecision theory, a fundamental question is: From the standpoint of statistical outcomes, to what extent do the choices of a conscious being have the ability to influence the future?Newcomb's paradoxand other philosophical problems pose questions about free will and predictable outcomes of choices. Compatibilistmodels of free will often consider deterministic relationships as discoverable in the physical world (including the brain). Cognitivenaturalism[127]is aphysicalistapproach to studying humancognitionandconsciousnessin which the mind is simply part of nature, perhaps merely a feature of many very complex self-programming feedback systems (for example,neural networksandcognitive robots), and so must be studied by the methods of empirical science, such as thebehavioralandcognitive sciences(i.e.neuroscienceandcognitive psychology).[107][128]Cognitive naturalism stresses the role of neurological sciences. Overall brain health,substance dependence,depression, and variouspersonality disordersclearly influence mental activity, and their impact uponvolitionis also important.[119]For example, anaddictmay experience a conscious desire to escape addiction, but be unable to do so. The "will" is disconnected from the freedom to act. This situation is related to an abnormal production and distribution ofdopaminein the brain.[129]The neuroscience of free will places restrictions on both compatibilist and incompatibilist free will conceptions. Compatibilist models adhere to models of mind in which mental activity (such as deliberation) can be reduced to physical activity without any change in physical outcome. Although compatibilism is generally aligned to (or is at least compatible with) physicalism, some compatibilist models describe the natural occurrences of deterministic deliberation in the brain in terms of the first person perspective of the conscious agent performing the deliberation.[15]Such an approach has been considered a form of identity dualism. 
A description of "how conscious experience might affect brains" has been provided in which "the experience of conscious free will is the first-person perspective of the neural correlates of choosing."[15] Recently,[when?]Claudio Costadeveloped a neocompatibilist theory based on the causal theory of action that is complementary to classical compatibilism. According to him, physical, psychological and rational restrictions can interfere at different levels of the causal chain that would naturally lead to action. Correspondingly, there can be physical restrictions to the body, psychological restrictions to the decision, and rational restrictions to the formation of reasons (desires plus beliefs) that should lead to what we would call a reasonable action. The last two are usually called "restrictions of free will". The restriction at the level of reasons is particularly important since it can be motivated by external reasons that are insufficiently conscious to the agent. One example was the collective suicide led byJim Jones. The suicidal agents were not conscious that their free will have been manipulated by external, even if ungrounded, reasons.[130] Alternatives to strictlynaturalistphysics, such asmind–body dualismpositing a mind or soul existing apart from one's body while perceiving, thinking, choosing freely, and as a result acting independently on the body, include both traditional religious metaphysics and less common newer compatibilist concepts.[131]Also consistent with both autonomy andDarwinism,[132]they allow for free personal agency based on practical reasons within the laws of physics.[133]While less popular among 21st-century philosophers, non-naturalist compatibilism is present in most if not almost all religions.[134] Some philosophers' views are difficult to categorize as either compatibilist or incompatibilist, hard determinist or libertarian. For example,Nietzschecriticized common conceptions of free will- arguing that effects of destiny are inescapable- while at the same time criticizing determinism and compatibilism. He wrote of "amor fati" - loving ones fate.Ted Honderichholds the view that "determinism is true, compatibilism and incompatibilism are both false" and the real problem lies elsewhere. Honderich maintains that determinism is true because quantum phenomena are not events or things that can be located in space and time, but areabstractentities. Further, even if they were micro-level events, they do not seem to have any relevance to how the world is at the macroscopic level. He maintains that incompatibilism is false because, even if indeterminism is true, incompatibilists have not provided, and cannot provide, an adequate account of origination. He rejects compatibilism because it, like incompatibilism, assumes a single, fundamental notion of freedom. There are really two notions of freedom: voluntary action and origination. Both notions are required to explain freedom of will and responsibility. Both determinism and indeterminism are threats to such freedom. To abandon these notions of freedom would be to abandon moral responsibility. On the one side, we have our intuitions; on the other, the scientific facts. The "new" problem is how to resolve this conflict.[135] David Humediscussed the possibility that the entire debate about free will is nothing more than a merely "verbal" issue. He suggested that it might be accounted for by "a false sensation or seeming experience" (avelleity), which is associated with many of our actions when we perform them. 
On reflection, we realize that they were necessary and determined all along.[137] According toArthur Schopenhauer, the actions of humans, asphenomena, are subject to theprinciple of sufficient reasonand thus liable to necessity. Thus, he argues, humans do not possess free will as conventionally understood. However, thewill[urging, craving, striving, wanting, and desiring], as thenoumenonunderlying the phenomenal world, is in itself groundless: that is, not subject to time, space, and causality (the forms that governs the world of appearance). Thus, the will, in itself and outside of appearance, is free. Schopenhauer discussed the puzzle of free will and moral responsibility inThe World as Will and Representation, Book 2, Sec. 23: But the fact is overlooked that the individual, the person, is not will asthing-in-itself, but isphenomenonof the will, is as such determined, and has entered the form of the phenomenon, the principle of sufficient reason. Hence we get the strange fact that everyone considers himself to bea prioriquite free, even in his individual actions, and imagines he can at any moment enter upon a different way of life... Buta posteriorithrough experience, he finds to his astonishment that he is not free, but liable to necessity; that notwithstanding all his resolutions and reflections he does not change his conduct, and that from the beginning to the end of his life he must bear the same character that he himself condemns, and, as it were, must play to the end the part he has taken upon himself.[138] Schopenhauer elaborated on the topic in Book IV of the same work and in even greater depth in his later essayOn the Freedom of the Will.In this work, he stated, "You can do what you will, but in any given moment of your life you canwillonly one definite thing and absolutely nothing other than that one thing."[139] Rudolf Steiner, who collaborated in a complete edition of Arthur Schopenhauer's work,[140]wroteThe Philosophy of Freedom, which focuses on the problem of free will. Steiner (1861–1925) initially divides this into the two aspects of freedom:freedom of thoughtandfreedom of action. The controllable and uncontrollable aspects of decision making thereby are made logically separable, as pointed out in the introduction. This separation ofwillfromactionhas a very long history, going back at least as far asStoicismand the teachings ofChrysippus(279–206 BCE), who separated externalantecedentcauses from the internal disposition receiving this cause.[141] Steiner then argues that inner freedom is achieved when we integrate our sensory impressions, which reflect the outer appearance of the world, with our thoughts, which lend coherence to these impressions and thereby disclose to us an understandable world. Acknowledging the many influences on our choices, he nevertheless points out that they do not preclude freedom unless we fail to recognise them. Steiner argues that outer freedom is attained by permeating our deeds withmoral imagination."Moral" in this case refers to action that is willed, while "imagination" refers to the mental capacity to envision conditions that do not already hold. Both of these functions are necessarily conditions for freedom. Steiner aims to show that these two aspects of inner and outer freedom are integral to one another, and that true freedom is only achieved when they are united.[142] William James' views were ambivalent. 
While he believed in free will on "ethical grounds", he did not believe that there was evidence for it on scientific grounds, nor did his own introspections support it.[143]Ultimately he believed that the problem of free will was a metaphysical issue and, therefore, could not be settled by science. Moreover, he did not accept incompatibilism as formulated below; he did not believe that the indeterminism of human actions was a prerequisite of moral responsibility. In his workPragmatism, he wrote that "instinct and utility between them can safely be trusted to carry on the social business of punishment and praise" regardless of metaphysical theories.[144]He did believe that indeterminism is important as a "doctrine of relief" – it allows for the view that, although the world may be in many respects a bad place, it may, through individuals' actions, become a better one. Determinism, he argued, underminesmeliorism– the idea that progress is a real concept leading to improvement in the world.[144] In 1739,David Humein hisA Treatise of Human Natureapproached free will via the notion of causality. It was his position that causality was a mental construct used to explain the repeated association of events, and that one must examine more closely the relation between thingsregularly succeedingone another (descriptions of regularity in nature) and things thatresultin other things (things that cause or necessitate other things).[145]According to Hume, 'causation' is on weak grounds: "Once we realise that 'A must bring about B' is tantamount merely to 'Due to their constant conjunction, we are psychologically certain that B will follow A,' then we are left with a very weak notion of necessity."[146] This empiricist view was often denied by trying to prove the so-calledapriorityof causal law (i.e. that it precedes all experience and is rooted in the construction of the perceivable world): In the 1780sImmanuel Kantsuggested at a minimum our decision processes with moral implications lie outside the reach of everyday causality, and lie outside the rules governing material objects.[149]"There is a sharp difference between moral judgments and judgments of fact... Moral judgments... must bea priorijudgments."[150] Freeman introduces what he calls "circular causality" to "allow for the contribution of self-organizing dynamics", the "formation of macroscopic population dynamics that shapes the patterns of activity of the contributing individuals", applicable to "interactions between neurons and neural masses... and between the behaving animal and its environment".[151]In this view, mind and neurological functions are tightly coupled in a situation where feedback between collective actions (mind) and individual subsystems (for example,neuronsand theirsynapses) jointly decide upon the behaviour of both. Thirteenth century philosopherThomas Aquinasviewed humans as pre-programmed (by virtue of being human) to seek certain goals, but able to choose between routes to achieve these goals (our Aristoteliantelos). His view has been associated with both compatibilism and libertarianism.[152][153] In facing choices, he argued that humans are governed byintellect,will, andpassions. The will is "the primary mover of all the powers of the soul... 
and it is also the efficient cause of motion in the body."[154]Choice falls into five stages: (i) intellectual consideration of whether an objective is desirable, (ii) intellectual consideration of means of attaining the objective, (iii) will arrives at an intent to pursue the objective, (iv) will and intellect jointly decide upon choice of means (v) will elects execution.[155]Free will enters as follows: Free will is an "appetitive power", that is, not a cognitive power of intellect (the term "appetite" from Aquinas's definition "includes all forms of internal inclination").[156]He states that judgment "concludes and terminates counsel. Now counsel is terminated, first, by the judgment of reason; secondly, by the acceptation of the appetite [that is, the free-will]."[157] A compatibilist interpretation of Aquinas's view is defended thus: "Free-will is the cause of its own movement, because by his free-will man moves himself to act. But it does not of necessity belong to liberty that what is free should be the first cause of itself, as neither for one thing to be cause of another need it be the first cause. God, therefore, is the first cause, Who moves causes both natural and voluntary. And just as by moving natural causes He does not prevent their acts being natural, so by moving voluntary causes He does not deprive their actions of being voluntary: but rather is He the cause of this very thing in them; for He operates in each thing according to its own nature."[158][159] Historically, most of the philosophical effort invested in resolving the dilemma has taken the form of close examination of definitions and ambiguities in the concepts designated by "free", "freedom", "will", "choice" and so forth. Defining 'free will' often revolves around the meaning of phrases like "ability to do otherwise" or "alternative possibilities". This emphasis upon words has led some philosophers to claim the problem is merely verbal and thus a pseudo-problem.[160]In response, others point out the complexity of decision making and the importance of nuances in the terminology.[citation needed] Buddhismaccepts both freedom and determinism (or something similar to it), but despite its focus on humanagency, it rejects the western concept of a total agent from external sources.[161]According tothe Buddha, "There is free action, there is retribution, but I see no agent that passes out from one set of momentary elements into another one, except the [connection] of those elements."[161]Buddhists believe in neither absolute free will, nor determinism. It preaches a middle doctrine, namedpratītyasamutpādainSanskrit, often translated as "dependent origination", "dependent arising" or "conditioned genesis". It teaches that every volition is a conditioned action as a result of ignorance. In part, it states that free will is inherently conditioned and not "free" to begin with. It is also part of the theory ofkarma in Buddhism. The concept of karma in Buddhism is different from the notion ofkarmain Hinduism. In Buddhism, the idea of karma is much less deterministic. The Buddhist notion of karma is primarily focused on the cause and effect of moral actions in this life, while in Hinduism the concept of karma is more often connected with determining one'sdestinyinfuture lives. In Buddhism it is taught that the idea of absolute freedom of choice (that is that any human being could be completely free to make any choice) is unwise, because it denies the reality of one's physical needs and circumstances. 
Equally incorrect is the idea that humans have no choice in life or that their lives are pre-determined. To deny freedom would be to deny the efforts of Buddhists to make moral progress (through our capacity to freely choose compassionate action).Pubbekatahetuvada, the belief that all happiness and suffering arise from previous actions, is considered a wrong view according to Buddhist doctrines. Because Buddhists also reject agenthood, the traditional compatibilist strategies are closed to them as well. Instead, the Buddhist philosophical strategy is to examine the metaphysics of causality. Ancient India had many heated arguments about the nature of causality withJains,Nyayists,Samkhyists,Cārvākans, and Buddhists all taking slightly different lines. In many ways, the Buddhist position is closer to a theory of "conditionality" (idappaccayatā) than a theory of "causality", especially as it is expounded byNagarjunain theMūlamadhyamakakārikā.[161] The six orthodox (astika) schools of thought inHindu philosophydo not agree with each other entirely on the question of free will. For theSamkhya, for instance, matter is without any freedom, and soul lacks any ability to control the unfolding of matter. The only real freedom (kaivalya) consists in realizing the ultimate separateness of matter and self.[162]For theYogaschool, onlyIshvarais truly free, and its freedom is also distinct from all feelings, thoughts, actions, or wills, and is thus not at all a freedom of will. The metaphysics of theNyayaandVaisheshikaschools strongly suggest a belief in determinism, but do not seem to make explicit claims about determinism or free will.[163] Quotations fromSwami Vivekananda, aVedantist, offer a perspective on free will in the Hindu tradition: "The will is not free, it is a phenomenon bound by cause and effect, but there is something behind the will which is free."[164] "To acquire freedom we have to get beyond the limitations of this universe; it cannot be found here."[165] Within Vedanta, Madhvacharya argues that souls do not have any free will as Lord Vishnu prescribes all their actions.[166] Science has contributed to the free will problem in at least three ways. First, physics has addressed the question of whether nature is deterministic, which is viewed as crucial by incompatibilists (compatibilists, however, view it as irrelevant). Second, although free will can be defined in various ways, all of them involve aspects of the way people make decisions and initiate actions, which have been studied extensively by neuroscientists. Third, psychologists have studied the beliefs that the majority of ordinary people hold about free will and its role in assigning moral responsibility. From an anthropological perspective, free will can be regarded as an explanation for human behavior that justifies a socially sanctioned system of rewards and punishments. Under this definition, free will may be described as a political ideology. Early scientific thought often portrayed the universe as deterministic – for example in the thought ofDemocritusor theCārvākans– and some thinkers claimed that the simple process of gathering sufficient information would allow them to predict future events with perfect accuracy. 
Modern science, on the other hand, is a mixture of deterministic andstochastictheories.[167]Quantum mechanicspredicts events only in terms of probabilities, casting doubt on whether the universe is deterministic at all, although evolution of the universal state vector[further explanation needed]is completely deterministic. Current physical theories cannot resolve the question of whether determinism is true of the world, being very far from a potentialtheory of everything, and open to many differentinterpretations.[168][169] Assuming that an indeterministic interpretation of quantum mechanics is correct, one may still object that such indeterminism is for all practical purposes confined to microscopic phenomena.[170]This is not always the case: many macroscopic phenomena are based on quantum effects. For instance, somehardware random number generatorswork by amplifying quantum effects into practically usable signals. A more significant question is whether the indeterminism of quantum mechanics allows for the traditional idea of free will (based on a perception of free will). If a person's action is, however, only a result of complete quantum randomness, mental processes as experienced have no influence on the probabilistic outcomes (such as volition).[29]According to many interpretations, indeterminism enables free will to exist,[171]while others assert the opposite (because the action was not controllable by the physical being who claims to possess the free will).[172] Like physicists,biologistshave frequently addressed questions related to free will. One of the most heated debates in biology is that of "nature versus nurture", concerning the relative importance of genetics and biology as compared to culture and environment in human behavior.[173]The view of many researchers is that many human behaviors can be explained in terms of humans' brains, genes, and evolutionary histories.[174][175][176]This point of view raises the fear that such attribution makes it impossible to hold others responsible for their actions.Steven Pinker's view is that fear of determinism in the context of "genetics" and "evolution" is a mistake, that it is "a confusion ofexplanationwithexculpation". Responsibility does not require that behavior be uncaused, as long as behavior responds to praise and blame.[177]Moreover, it is not certain that environmental determination is any less threatening to free will than genetic determination.[178] It has become possible to study the livingbrain, and researchers can now watch the brain's decision-making process at work. A seminal experiment in this field was conducted byBenjamin Libetin the 1980s, in which he asked each subject to choose a random moment to flick their wrist while he measured the associated activity in their brain; in particular, the build-up of electrical signal called thereadiness potential(after GermanBereitschaftspotential, which was discovered byKornhuber&Deeckein 1965.[179]). Although it was well known that the readiness potential reliably preceded the physical action, Libet asked whether it could be recorded before the conscious intention to move. To determine when subjects felt the intention to move, he asked them to watch the second hand of a clock. 
After making a movement, the volunteer reported the time on the clock when they first felt the conscious intention to move; this became known as Libet's W time.[180] Libet found that theunconsciousbrain activity of the readiness potential leading up to subjects' movements began approximately half a second before the subject was aware of a conscious intention to move.[180][181] These studies of the timing between actions and the conscious decision bear upon the role of the brain in understanding free will. A subject's declaration of intention to move a finger appearsafterthe brain has begun to implement the action, suggesting to some that unconsciously the brain has made the decisionbeforethe conscious mental act to do so. Some believe the implication is that free will was not involved in the decision and is an illusion. The first of these experiments reported the brain registered activity related to the move about 0.2 s before movement onset.[182]However, these authors also found that awareness of action wasanticipatoryto activity in the muscle underlying the movement; the entire process resulting in action involves more steps than just theonsetof brain activity. The bearing of these results upon notions of free will appears complex.[183][184] Some argue that placing the question of free will in the context of motor control is too narrow. The objection is that the time scales involved in motor control are very short, and motor control involves a great deal of unconscious action, with much physical movement entirely unconscious. On that basis "...free will cannot be squeezed into time frames of 150–350ms; free will is a longer term phenomenon" and free will is a higher level activity that "cannot be captured in a description of neural activity or of muscle activation..."[185]The bearing of timing experiments upon free will is still under discussion, and more studies have since been conducted. Benjamin Libet's results are quoted[186]in favor of epiphenomenalism, but he believes subjects still have a "conscious veto", since the readiness potential does not invariably lead to an action. InFreedom Evolves,Daniel Dennettargues that a no-free-will conclusion is based on dubious assumptions about the location of consciousness, as well as questioning the accuracy and interpretation of Libet's results. Kornhuber and Deecke underlined that absence of conscious will during the early Bereitschaftspotential (termed BP1) is not a proof of the non-existence of free will, as unconscious agendas may also be free and non-deterministic. According to their suggestion, man has relative freedom, i.e. freedom in degrees, that can be increased or decreased through deliberate choices that involve both conscious and unconscious (panencephalic) processes.[187] Others have argued that data such as the Bereitschaftspotential undermine epiphenomenalism for the same reason, that such experiments rely on a subject reporting the point in time at which a conscious experience occurs, thus relying on the subject to be able to consciously perform an action. That ability would seem to be at odds with early epiphenomenalism, which according to Huxley is the broad claim that consciousness is "completely without any power... as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery".[188] Adrian G. 
Guggisberg and Annaïs Mottaz have also challenged those findings.[189] A study by Aaron Schurger and colleagues published in the Proceedings of the National Academy of Sciences[190]challenged assumptions about the causal nature of the readiness potential itself (and the "pre-movement buildup" of neural activity in general), casting doubt on conclusions drawn from studies such as Libet's[180]and Fried's.[191] A study that compared deliberate and arbitrary decisions found that the early signs of decision are absent for the deliberate ones.[192] It has been shown that in several brain-related conditions, individuals cannot entirely control their own actions, though the existence of such conditions does not directly refute the existence of free will. Neuroscientific studies are valuable tools in developing models of how humans experience free will. For example, people withTourette syndromeand relatedtic disordersmake involuntary movements and utterances (calledtics) despite the fact that they would prefer not to do so when it is socially inappropriate. Tics are described as semi-voluntary orunvoluntary,[193]because they are not strictlyinvoluntary: they may be experienced as avoluntaryresponse to an unwanted, premonitory urge. Tics are experienced as irresistible and must eventually be expressed.[193]People with Tourette syndrome are sometimes able to suppress their tics for limited periods, but doing so often results in an explosion of tics afterward. The control exerted (from seconds to hours at a time) may merely postpone and exacerbate the ultimate expression of the tic.[194] Inalien hand syndrome, the affected individual's limb will produce unintentional movements without the will of the person. The affected limb effectively demonstrates 'a will of its own.' Thesense of agencydoes not emerge in conjunction with the overt appearance of the purposeful act even though the sense of ownership in relationship to the body part is maintained. This phenomenon corresponds with an impairment in the premotor mechanism manifested temporally by the appearance of the readiness potential recordable on the scalp several hundred milliseconds before the overt appearance of a spontaneous willed movement. Usingfunctional magnetic resonance imagingwith specialized multivariate analyses to study the temporal dimension in the activation of the cortical network associated with voluntary movement in human subjects, an anterior-to-posterior sequential activation process beginning in the supplementary motor area on the medial surface of the frontal lobe and progressing to the primary motor cortex and then to parietal cortex has been observed.[195]The sense of agency thus appears to normally emerge in conjunction with this orderly sequential network activation incorporating premotor association cortices together with primary motor cortex. In particular, the supplementary motor complex on the medial surface of the frontal lobe appears to activate prior to primary motor cortex, presumably in association with a preparatory pre-movement process. 
In a recent study using functional magnetic resonance imaging, alien movements were characterized by a relatively isolated activation of the primary motor cortex contralateral to the alien hand, while voluntary movements of the same body part included the natural activation of motor association cortex associated with the premotor process.[196]The clinical definition requires "feeling that one limb is foreign or has awill of its own,together with observable involuntary motor activity" (emphasis in original).[197]This syndrome is often a result of damage to thecorpus callosum, either when it is severed to treat intractableepilepsyor due to astroke. The standard neurological explanation is that the felt will reported by the speaking left hemisphere does not correspond with the actions performed by the non-speaking right hemisphere, thus suggesting that the two hemispheres may have independent senses of will.[198][199] In addition, one of the most important ("first rank") diagnostic symptoms ofschizophreniais the patient's delusion of being controlled by an external force.[200]People with schizophrenia will sometimes report that, although they are acting in the world, they do not recall initiating the particular actions they performed. This is sometimes likened to being a robot controlled by someone else. Although the neural mechanisms of schizophrenia are not yet clear, one influential hypothesis is that there is a breakdown in brain systems that compare motor commands with the feedback received from the body (known asproprioception), leading to attendanthallucinationsand delusions of control.[201] Experimental psychology's contributions to the free will debate have come primarily through social psychologistDaniel Wegner's work on conscious will. In his book,The Illusion of Conscious Will,[202]Wegner summarizes what he believes isempirical evidencesupporting the view that human perception of conscious control is an illusion. Wegner summarizes some empirical evidence that may suggest that the perception of conscious control is open to modification (or even manipulation). Wegner observes that one event is inferred to have caused a second event when two requirements are met: the first event must precede the second, and the first event must be of a kind consistent with having caused the second. For example, if a person hears an explosion and sees a tree fall down, that person is likely to infer that the explosion caused the tree to fall over. However, if the explosion occurs after the tree falls down (that is, the first requirement is not met), or rather than an explosion, the person hears the ring of a telephone (that is, the second requirement is not met), then that person is not likely to infer that either noise caused the tree to fall down. Wegner has applied this principle to the inferences people make about their own conscious will. People typically experience a thought that is consistent with a behavior, and then they observe themselves performing this behavior. As a result, people infer that their thoughts must have caused the observed behavior. However, Wegner has been able to manipulate people's thoughts and behaviors so as to conform to or violate the two requirements for causal inference.[202][203]Through such work, Wegner has been able to show that people often experience conscious will over behaviors that they have not, in fact, caused – and conversely, that people can be led to experience a lack of will over behaviors they did cause. 
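The two-requirement rule can be sketched in a few lines of code. This is a toy illustration by the editor, not a model Wegner himself provides; the event names and the table of plausible causes are hypothetical, and real judgments of "consistency" are of course far richer than a lookup table.

# A minimal sketch (not Wegner's own model) of the two requirements described above:
# a cause is inferred only when it precedes the effect (priority) and is the kind of
# event consistent with producing it (consistency). Event names are hypothetical.
consistent_causes = {"tree_falls": {"explosion", "strong_wind"}}  # a phone ring is deliberately absent

def infers_causation(candidate, effect, cause_time, effect_time):
    priority = cause_time < effect_time                              # requirement 1
    consistency = candidate in consistent_causes.get(effect, set())  # requirement 2
    return priority and consistency

print(infers_causation("explosion", "tree_falls", 0.0, 1.5))   # True: both requirements met
print(infers_causation("explosion", "tree_falls", 2.0, 1.5))   # False: the explosion came after the fall
print(infers_causation("phone_ring", "tree_falls", 0.0, 1.5))  # False: not a consistent cause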
For instance,primingsubjects with information about an effect increases the probability that a person falsely believes they are the cause.[204]The implication of such work is that the perception of conscious will (which he says might be more accurately labelled as 'the emotion of authorship') is not tethered to the execution of actual behaviors, but is inferred from various cues through an intricate mental process,authorship processing. Although many interpret this work as a blow against the argument for free will, both psychologists[205][206]and philosophers[207][208]have criticized Wegner's theories. Emily Proninhas argued that the subjective experience of free will is supported by theintrospection illusion. This is the tendency for people to trust the reliability of their own introspections while distrusting the introspections of other people. The theory implies that people will more readily attribute free will to themselves rather than others. This prediction has been confirmed by three of Pronin and Kugler's experiments. When college students were asked about personal decisions in their own and their roommate's lives, they regarded their own choices as less predictable. Staff at a restaurant described their co-workers' lives as more determined (having fewer future possibilities) than their own lives. When weighing up the influence of different factors on behavior, students gave desires and intentions the strongest weight for their own behavior, but rated personality traits as most predictive of other people.[209] Caveats have, however, been identified in studying a subject's awareness of mental events, in that the process of introspection itself may alter the experience.[210] Regardless of the validity of belief in free will, it may be beneficial to understand where the idea comes from. One contribution is randomness.[211]While it is established that randomness is not the only factor in the perception of free will, it has been shown that randomness can be mistaken for free will due to its indeterminacy. This misconception applies both when considering oneself and others. Another contribution is choice.[212]It has been demonstrated that people's belief in free will increases if presented with a simple level of choice. The specificity of the amount of choice is important, as too little or too great a degree of choice may negatively influence belief. It is also likely that the associative relationship between level of choice and perception of free will is bidirectional. It is also possible that one's desire for control, or other basic motivational patterns, act as a third variable. Since at least 1959,[213]free will belief in individuals has been analysed with respect to traits in social behaviour. In general, the concept of free will researched to date in this context has been that of the incompatibilist, or more specifically, the libertarian, that is, freedom from determinism. Whether people naturally adhere to an incompatibilist model of free will has been questioned in the research. 
Eddy Nahmias has found that incompatibilism is not intuitive – it was not adhered to, in that determinism does not negate belief in moral responsibility (based on an empirical study of people's responses to moral dilemmas under a deterministic model of reality).[214]Edward Cokely has found that incompatibilism is intuitive – it was naturally adhered to, in that determinism does indeed negate belief in moral responsibility in general.[215]Joshua Knobe and Shaun Nichols have proposed that incompatibilism may or may not be intuitive, and that it is dependent to a large degree upon the circumstances: whether or not the crime incites an emotional response – for example if it involves harming another human being.[216]They found that belief in free will is a cultural universal, and that the majority of participants said that (a) our universe is indeterministic and (b) moral responsibility is not compatible with determinism.[217] Studies indicate that people's belief in free will is inconsistent. Emily Pronin and Matthew Kugler found that people believe they have more free will than others.[218] Studies also reveal a correlation between the likelihood of accepting a deterministic model of mind and personality type. For example, Adam Feltz and Edward Cokely found that people of an extrovert personality type are more likely to dissociate belief in determinism from belief in moral responsibility.[219] Roy Baumeisterand colleagues reviewed literature on the psychological effects of a belief (or disbelief) in free will and found that most people tend to believe in a sort of "naive compatibilistic free will".[220][221] The researchers also found that people consider acts more "free" when they involve a person opposing external forces, planning, or making random actions.[222]Notably, the last behaviour, "random" actions, may not be possible; when participants attempt to perform tasks in a random manner (such as generating random numbers), their behaviour betrays many patterns.[223][224] A 2020 survey showed that compatibilism is quite a popular stance among those who specialize in philosophy (59.2%). Belief in libertarianism amounted to 18.8%, while a lack of belief in free will equaled 11.2%.[225] According to a survey conducted in 2007, 79 percent of evolutionary biologists said that they believe in free will, 14 percent chose no free will, and 7 percent did not answer the question.[226] Baumeister and colleagues found that provoking disbelief in free will seems to cause various negative effects. 
The authors concluded, in their paper, that it is belief in determinism that causes those negative effects.[220]Kathleen Vohs has found that those whose belief in free will had been eroded were more likely to cheat.[227]In a study conducted by Roy Baumeister, after participants read an article arguing against free will, they were more likely to lie about their performance on a test where they would be rewarded with cash.[228]Provoking a rejection of free will has also been associated with increased aggression and less helpful behaviour.[228]However, although these initial studies suggested that believing in free will is associated with more morally praiseworthy behavior, more recent studies (including direct, multi-site replications) with substantially larger sample sizes have reported contradictory findings (typically, no association between belief in free will and moral behavior), casting doubt over the original findings.[229][230][231][232][233] An alternative explanation builds on the idea that subjects tend to confuse determinism with fatalism... What happens then when agents' self-efficacy is undermined? It is not that their basic desires and drives are defeated. It is rather, I suggest, that they become skeptical that they can control those desires; and in the face of that skepticism, they fail to apply the effort that is needed even to try. If they were tempted to behave badly, then coming to believe in fatalism makes them less likely to resist that temptation. Moreover, whether or not these experimental findings are a result of actual manipulations in belief in free will is a matter of debate.[234]First of all, free will can at least refer to eitherlibertarian (indeterministic) free willorcompatibilistic (deterministic) free will. Having participants read articles that simply "disprove free will" is unlikely to increase their understanding of determinism, or the compatibilistic free will that it still permits.[234]In other words, experimental manipulations purporting to "provoke disbelief in free will" may instead cause a belief infatalism, which may provide an alternative explanation for previous experimental findings.[234][235]To test the effects of belief in determinism, it has been argued that future studies would need to provide articles that do not simply "attack free will", but instead focus on explaining determinism and compatibilism.[234][236] Baumeister and colleagues also note that volunteers disbelieving in free will are less capable ofcounterfactual thinking.[220]This is worrying because counterfactual thinking ("If I had done something different...") is an important part of learning from one's choices, including those that harmed others.[237]Again, this cannot be taken to mean that belief in determinism is to blame; these are the results we would expect from increasing people's belief in fatalism.[234] Along similar lines, Tyler Stillman has found that belief in free will predicts better job performance.[238] The notions of free will and predestination are heavily debated among Christians. Free will in the Christian sense is the ability to choose between good and evil. Among Catholics, there are those holding toThomism, adopted from whatThomas Aquinasput forth in theSumma Theologica.There are also some holding toMolinism, which was put forth by Jesuit priestLuis de Molina. 
Among Protestants there isArminianism, held primarily by theMethodist Churches, and formulated by Dutch theologianJacobus Arminius; and there is alsoCalvinism, held by most in theReformed tradition, which was formulated by the French Reformed theologianJohn Calvin. John Calvin was heavily influenced by Augustine of Hippo's views on predestination put forth in his workOn the Predestination of the Saints.Martin Lutherseems to have held views on predestination similar to Calvinism in hisOn the Bondage of the Will,thus rejecting free will. In condemnation of Calvin's and Luther's views, the Roman CatholicCouncil of Trentdeclared that "the free will of man, moved and excited by God, can by its consent co-operate with God, Who excites and invites its action; and that it can thereby dispose and prepare itself to obtain the grace of justification. The will can resist grace if it chooses. It is not like a lifeless thing, which remains purely passive. Weakened and diminished by Adam's fall, free will is yet not destroyed in the race (Sess. VI, cap. i and v)."John Wesley, the father of the Methodist tradition, taught that humans, enabled byprevenient grace, have free will through which they can choose God and do good works, with the goal ofChristian perfection.[239]Upholdingsynergism(the belief that God and man cooperate in salvation), Methodism teaches that "Our Lord Jesus Christ did so die for all men as to make salvation attainable by every man that cometh into the world. If men are not saved that fault is entirely their own, lying solely in their own unwillingness to obtain the salvation offered to them. (John 1:9; I Thess. 5:9; Titus 2:11-12)."[240] Paul the Apostlediscusses Predestination in some of his Epistles. "For whom He foreknew, He also predestined to become conformed to the image of His Son, that He might be the first-born among many brethren; and whom He predestined, these He also called; and whom He called, these He also justified; and whom He justified, these He also glorified." —Romans8:29–30 "He predestined us to adoption as sons through Jesus Christ to Himself, according to the kind intention of His will." —Ephesians1:5 There are also mentions of moral freedom in what are now termed 'Deuterocanonical' works which the Orthodox and Catholic Churches use. In Sirach 15 the text states: "Do not say: "It was God's doing that I fell away," for what he hates he does not do. Do not say: "He himself has led me astray," for he has no need of the wicked. Abominable wickedness the Lord hates and he does not let it happen to those who fear him. God in the beginning created human beings and made them subject to their own free choice. If you choose, you can keep the commandments; loyalty is doing the will of God. Set before you are fire and water; to whatever you choose, stretch out your hand. Before everyone are life and death, whichever they choose will be given them. Immense is the wisdom of the Lord; mighty in power, he sees all things. The eyes of God behold his works, and he understands every human deed. He never commands anyone to sin, nor shows leniency toward deceivers." - Ben Sira 15:11-20 NABRE The exact meaning of these verses has been debated by Christian theologians throughout history. InJewish thoughtthe concept of "Free will" (Hebrew:בחירה חפשית,romanized:bechirah chofshit;בחירה,bechirah) is foundational. 
The most succinct statement is byMaimonides, in a two-part treatment, where human free will is specified as part of the universe'sGodly design. InIslamthe theological issue is not usually how to reconcile free will with God's foreknowledge, but with God'sjabr, or divine commanding power.al-Ash'arideveloped an "acquisition" or "dual-agency" form of compatibilism, in which human free will and divinejabrwere both asserted, and which became a cornerstone of the dominantAsh'ariposition.[243][244]InShiaIslam, the Ash'ari understanding of a higher balance towardpredestinationis challenged by most theologians.[245]Free will, according to Islamic doctrine, is the main factor for man's accountability in his/her actions throughout life. Actions taken by people exercising free will are counted on theDay of Judgementbecause they are their own; however, free will happens with the permission of God.[246] In contrast, theMu'tazila, known as the rationalist school of Islam, has a position that is opposite to the Ash'arite and other Islamic theology in its view of free will and divine justice,[247]because the Mu'tazila have a doctrine that emphasizes God's justice ('Adl).[248][249]The Mu'tazila believe that humans themselves create their will and actions, so human actions and movements are not a destiny solely driven by God and do not necessarily require God's permission. For the Mu'tazila, humans themselves create their actions and behavior consciously through free will which is formulated and carried out by thebrainandnervous system.[250][251]Thus, this condition guarantees God's justice when judging every human being in the Day of Judgement.[252] The philosopherSøren Kierkegaardclaimed that divine omnipotence cannot be separated from divine goodness.[253]As a truly omnipotent and good being, God could create beings with true freedom over God. Furthermore, God would voluntarily do so because "the greatest good... which can be done for a being, greater than anything else that one can do for it, is to be truly free."[254]Alvin Plantinga's free-will defenseis a contemporary expansion of this theme, adding how God, free will, andevilare consistent.[255] Some philosophers followWilliam of Ockhamin holding that necessity and possibility are defined with respect to a given point in time and a given matrix of empirical circumstances, and so something that is merely possible from the perspective of one observer may be necessary from the perspective of an omniscient.[256]Some philosophers followPhilo of Alexandria, a philosopher known for hisanthropocentrism, in holding that free will is a feature of a human'ssoul, and thus that non-humananimalslack free will.[257] This article incorporates material from theCitizendiumarticle "Free will", which is licensed under theCreative Commons Attribution-ShareAlike 3.0 Unported Licensebut not under theGFDL.
https://en.wikipedia.org/wiki/Free_will
Generative scienceis an area of research that explores the naturalworldand its complex behaviours. It explores ways "to generate apparently unanticipated and infinite behaviour based ondeterministicandfiniterules and parameters reproducing or resembling the behavior of natural and social phenomena".[1]By modelling such interactions, it can suggest that properties exist in the system that had not been noticed in the real-world situation.[2]An example field of study is howunintended consequencesarise in social processes. Generative sciences often explore natural phenomena at several levels of organization.[3][4]Self-organizingnatural systems are a central subject, studied both theoretically and by simulation experiments. The study of complex systems in general has been grouped under the heading of "general systems theory", particularly byLudwig von Bertalanffy,Anatol Rapoport,Ralph Gerard, andKenneth Boulding. The development of computers andautomata theorylaid a technical foundation for the growth of the generative sciences. One of the most influential advances in the generative sciences as related tocognitive sciencecame fromNoam Chomsky's (1957) development ofgenerative grammar, which separated language generation from semantic content, and thereby revealed important questions about human language. It was also in the early 1950s that psychologists at MIT, includingKurt Lewin,Jacob Levy MorenoandFritz Heider, laid the foundations forgroup dynamicsresearch, which later developed intosocial networkanalysis.
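The idea quoted above, that deterministic and finite rules can produce behaviour that looks unanticipated, can be illustrated with an elementary cellular automaton. The choice of Wolfram's rule 30 and the Python rendering below are the editor's illustration under that assumption, not an example taken from the article.

# A minimal sketch: Wolfram's elementary cellular automaton rule 30, a finite,
# deterministic update rule whose output looks unanticipated even though every
# step is fully determined by the previous row of cells.
RULE = 30  # 8-bit lookup table: bit i gives the next state for neighbourhood pattern i

def step(cells):
    n = len(cells)
    return [(RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                      # start from a single "on" cell
for _ in range(16):                # print a few generations
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)

Each cell's next state depends only on itself and its two neighbours, yet the resulting triangle of patterns is hard to anticipate without simply running the rule, which is the point of the quoted definition.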
https://en.wikipedia.org/wiki/Generative_science
Irreducible complexity(IC) is the argument that certainbiological systemswith multiple interacting parts would not function if one of the parts were removed, so supposedly could not haveevolvedby successive small modifications from earlier less complex systems throughnatural selection, which would need all intermediate precursor systems to have been fully functional.[1]This negative argument is then complemented by the claim that the only alternative explanation is a "purposeful arrangement of parts" inferring design by an intelligent agent.[2]Irreducible complexity has become central to thecreationistconcept ofintelligent design(ID), but the concept of irreducible complexity has been rejected by thescientific community,[3]which regards intelligent design aspseudoscience.[4]Irreducible complexity and specified complexity are the two main arguments used by intelligent-design proponents to support their version of the theologicalargument from design.[2][5] The central concept, that complex biological systems which require all their parts to function could not evolve by the incremental changes of natural selection so must have been produced by an intelligence, was already featured increation science.[6][7]The 1989 school textbookOf Pandas and Peopleintroduced the alternative terminology ofintelligent design; a revised section in the 1993 edition of the textbook argued that a blood-clotting system demonstrated this concept.[8][9] This section was written byMichael Behe, a professor of biochemistry atLehigh University. He subsequently introduced the expressionirreducible complexity, along with a full account of his arguments, in his 1996 bookDarwin's Black Box, and said it made evolution through natural selection of random mutations impossible, or extremely improbable.[2][1]This was based on the mistaken assumption that evolution relies on improvement of existing functions, ignoring how complex adaptations originate from changes in function, and disregarding published research.[2]Evolutionary biologistshave published rebuttals showing how systems discussed by Behe can evolve.[10][11] In the 2005Kitzmiller v. Dover Area School Districttrial, Behe gave testimony on the subject of irreducible complexity. The court found that "Professor Behe's claim for irreducible complexity has been refuted in peer-reviewed research papers and has been rejected by the scientific community at large."[3] Michael Behedefined irreducible complexity in natural selection in terms of well-matched parts in his 1996 bookDarwin's Black Box: ... a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning.[1] A second definition given by Behe in 2000 (his "evolutionary definition") states: An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.[12] Intelligent-design advocateWilliam A. Dembskiassumed an "original function" in his 2002 definition: A system performing a given basic function is irreducibly complex if it includes a set of well-matched, mutually interacting, nonarbitrarily individuated parts such that each part in the set is indispensable to maintaining the system's basic, and therefore original, function. 
The set of these indispensable parts is known as the irreducible core of the system.[13] The argument from irreducible complexity is a descendant of theteleological argumentfor God (the argument from design or from complexity). This states that complex functionality in the natural world which looks designed is evidence of an intelligent creator.William Paleyfamously argued, in his 1802watchmaker analogy, that complexity in nature implies a God for the same reason that the existence of a watch implies the existence of a watchmaker.[14]This argument has a long history, and one can trace it back at least as far asCicero'sDe Natura Deorumii.34,[15][16]written in 45 BC. Galen(1st and 2nd centuries AD) wrote about the large number of parts of the body and their relationships, which observation was cited as evidence for creation.[17]The idea that the interdependence between parts would have implications for the origins of living things was raised by writers starting withPierre Gassendiin the mid-17th century[18]and byJohn Wilkins(1614–1672), who wrote (citing Galen), "Now to imagine, that all these things, according to their several kinds, could be brought into this regular frame and order, to which such an infinite number of Intentions are required, without the contrivance of some wise Agent, must needs be irrational in the highest degree."[19][20]In the late 17th-century,Thomas Burnetreferred to "a multitude of pieces aptly joyn'd" to argue against theeternityof life.[21]In the early 18th century,Nicolas Malebranche[22]wrote "An organized body contains an infinity of parts that mutually depend upon one another in relation to particular ends, all of which must be actually formed in order to work as a whole", arguing in favor ofpreformation, rather thanepigenesis, of the individual;[23]and a similar argument about the origins of the individual was made by other 18th-century students of natural history.[24]In his 1790 book,The Critique of Judgment,Kantis said by Guyer to argue that "we cannot conceive how a whole that comes into being only gradually from its parts can nevertheless be the cause of the properties of those parts".[25][26] Chapter XV of Paley'sNatural Theologydiscusses at length what he called "relations" of parts of living things as an indication of their design.[14] Georges Cuvierapplied his principle of thecorrelation of partsto describe an animal from fragmentary remains. For Cuvier, this related to another principle of his, theconditions of existence, which excluded the possibility oftransmutation of species.[27] While he did not originate the term,Charles Darwinidentified the argument as a possible way to falsify a prediction of the theory of evolution at the outset. InThe Origin of Species(1859), he wrote, "If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case."[28]Darwin's theory of evolution challenges the teleological argument by postulating an alternative explanation to that of an intelligent designer—namely, evolution by natural selection. By showing how simple unintelligent forces can ratchet up designs of extraordinary complexity without invoking outside design, Darwin showed that an intelligent designer was not the necessary conclusion to draw from complexity in nature. 
The argument from irreducible complexity attempts to demonstrate that certain biological features cannot be purely the product of Darwinian evolution.[29] In the late 19th century, in a dispute between supporters of the adequacy ofnatural selectionand those who held forinheritance of acquired characteristics, one of the arguments made repeatedly byHerbert Spencer, and followed by others, depended on what Spencer referred to asco-adaptationofco-operativeparts, as in: "We come now to ProfessorWeismann's endeavour to disprove my second thesis—that it is impossible to explain by natural selection alone the co-adaptation of co-operative parts. It is thirty years since this was set forth in 'The Principles of Biology.' In § 166, I instanced the enormous horns of the extinctIrish elk, and contended that in this and in kindred cases, where for the efficient use of some one enlarged part many other parts have to be simultaneously enlarged, it is out of the question to suppose that they can have all spontaneously varied in the required proportions."[30][31] Darwin responded to Spencer's objections in chapter XXV ofThe Variation of Animals and Plants Under Domestication(1868).[32]The history of this concept in the dispute has been characterized: "An older and more religious tradition of idealist thinkers were committed to the explanation of complex adaptive contrivances by intelligent design. ... Another line of thinkers, unified by the recurrent publications of Herbert Spencer, also sawco-adaptationas a composed, irreducible whole, but sought to explain it by the inheritance of acquired characteristics."[33] St. George Jackson Mivartraised the objection to natural selection that "Complex and simultaneous co-ordinations ... until so far developed as to effect the requisite junctions, are useless".[34]In the 2012 bookEvolution and Belief, Confessions of a Religious Paleontologist, Robert J. Asher said this "amounts to the concept of 'irreducible complexity' as defined by ... Michael Behe".[35] Hermann Muller, in the early 20th century, discussed a concept similar to irreducible complexity. However, far from seeing this as a problem for evolution, he described the "interlocking" of biological features as a consequence to be expected of evolution, which would lead to irreversibility of some evolutionary changes.[36]He wrote, "Being thus finally woven, as it were, into the most intimate fabric of the organism, the once novel character can no longer be withdrawn with impunity, and may have become vitally necessary."[37] In 1975Thomas H. Frazzettapublished a book-length study of a concept similar to irreducible complexity, explained by gradual, step-wise, non-teleological evolution. Frazzetta wrote: "A complex adaptation is one constructed ofseveralcomponents that must blend together operationally to make the adaptation 'work'. It is analogous to a machine whose performance depends upon careful cooperation among its parts. In the case of the machine, no single part can greatly be altered without changing the performance of the entire machine." The machine that he chose as an analog is thePeaucellier–Lipkin linkage, and one biological system given extended description was the jaw apparatus of a python. 
The conclusion of this investigation was not that the evolution of a complex adaptation was impossible, but rather to be "awed by the adaptations of living things, to be stunned by their complexity and suitability", and "to accept the inescapable but not humiliating fact that much of mankind can be seen in a tree or a lizard."[38] In 1985Cairns-Smithwrote of "interlocking": "How can a complex collaboration between components evolve in small steps?" and used the analogy of the scaffolding calledcentering—used tobuild an archthen removed afterwards: "Surely there was 'scaffolding'. Before the multitudinous components of present biochemistry could come to lean togetherthey had to lean on something else."[39][40]However, neither Muller nor Cairns-Smith claimed their ideas as evidence of something supernatural.[41] An early concept of irreducibly complex systems comes fromLudwig von Bertalanffy(1901–1972), an Austrian biologist.[42]He believed that complex systems must be examined as complete,irreduciblesystems in order to fully understand how they work. He extended his work on biological complexity into a general theory of systems in a book titledGeneral Systems Theory. AfterJames WatsonandFrancis Crickpublished the structure ofDNAin the early 1950s, General Systems Theory lost many of its adherents in the physical and biological sciences.[43]However,systems theoryremained popular in the social sciences long after its demise in the physical and biological sciences. Versions of the irreducible complexity argument have been common inyoung Earth creationist(YEC)creation sciencejournals. For example, in the July 1965 issue ofCreation Research SocietyQuarterly,Harold W. Clarkdescribed the complex interaction in whichyucca mothshave an "inherited action pattern" or instinct to fertilize plants: "Before the pattern can be inherited, it must be formed. But how could yucca plants mature seeds while waiting for the moths to learn the process and set the pattern? The whole procedure points so strongly to intelligent design that it is difficult to escape the conclusion that the hand of a wise and beneficent Creator has been involved." Similarly, honeybees pollinate apple blossom: "Again we may well ask how such an arrangement could have come about by accident, or how either the flowers or the bees could have survived alone. Intelligent design is again evident."[2][44] In 1974 the YECHenry M. Morrisintroduced an irreducible complexity concept in his creation science bookScientific Creationism, in which he wrote: "The creationist maintains that the degree of complexity and order which science has discovered in the universe could never be generated by chance or accident."[45]He continued: "This issue can actually be attacked quantitatively, using simple principles of mathematical probability. The problem is simply whether a complex system, in which many components function unitedly together, and in which each component is uniquely necessary to the efficient functioning of the whole, could ever arise by random processes."[46][47]In 1975Duane Gishwrote inThe Amazing Story of Creation from Science and the Bible: "The creationist maintains that the degree of complexity and order which science has discovered in the universe could never be generated by chance or accident."[45] A 1980 article in the creation science magazineCreationby the YECAriel A. 
Rothsaid "Creation and various other views can be supported by the scientific data that reveal that the spontaneous origin of thecomplex integrated biochemical systemsof even the simplest organisms is, at best, a most improbable event".[46]In 1981, defending the creation science position in the trialMcLean v. Arkansas, Roth said of "complex integrated structures": "This system would not be functional until all the parts were there ... How did these parts survive during evolution ...?"[48] In 1985, countering the creationist claims that all the changes would be needed at once,Cairns-Smithwrote of "interlocking": "How can a complex collaboration between components evolve in small steps?" and used the analogy of the scaffolding calledcentering—used tobuild an archthen removed afterwards: "Surely there was 'scaffolding'. Before the multitudinous components of present biochemistry could come to lean togetherthey had to lean on something else."[39][49]Neither Muller or Cairns-Smith said their ideas were evidence of anything supernatural.[41] Thebacterial flagellumfeatured in creation science literature. Morris later claimed that one of theirInstitute for Creation Research"scientists (the late Dr. Dick Bliss) was using this example in his talks on creation a generation ago". In December 1992 the creation science magazineCreationcalled bacterial flagella "rotary engines", and dismissed the possibility that these "incredibly complicated arrangements of matter" could have "evolved by selection of chance mutations. The alternative explanation, that they were created, is much more reasonable."[2][50]An article in theCreation Research SocietyMagazine for June 1994 called a flagellum a "bacterial nanomachine", forming the "bacterial rotor-flagellar complex" where "it is clear from the details of their operation that nothing about them works unless every one of their complexly fashioned and integrated components are in place", hard to explain by natural selection. The abstract said that in "terms of biophysical complexity, the bacterial rotor-flagellum is without precedent in the living world. ... To evolutionists, the system presents an enigma; to creationists, if offers clear and compelling evidence of purposeful intelligent design."[7] The biology supplementary textbook for schoolsOf Pandas and Peoplewas drafted presentingcreation sciencearguments, but shortly after theEdwards v. Aguillardruling, that it was unconstitutional to teach creationism in public school science classes, the authors changed the wording to "intelligent design", introducing the new meaning of this term when the book was published in 1989.[51]In a separate response to the same ruling, law professorPhillip E. JohnsonwroteDarwin on Trial, published in 1991, and at a conference in March 1992 brought together key figures in what he later called the 'wedge movement', including biochemistry professorMichael Behe. According to Johnson, around 1992 Behe developed his ideas of what he later called his "irreducible complexity" concept, and first presented these ideas in June 1993 when the "Johnson-Behe cadre of scholars" met at Pajaro Dunes in California.[52] The second edition ofOf Pandas and People, published in 1993, had extensive revisions to Chapter 6Biochemical Similaritieswith new sections on the complex mechanism of blood clotting and on the origin of proteins, written by Behe though he was not initially acknowledged as their author. 
He argued that "all of the proteins had to be present simultaneously for the blood clotting system to function", so it could not have evolved. In later publications, he named the argument "irreducibly complexity", but changed his definition of this specific system.[53][9]InDoubts About Darwin: A History of Intelligent Design(2003), historian Thomas Woodward wrote that "Michael Behe assisted in the rewriting of a chapter on biochemistry in a revised edition of Pandas. The book stands as one of the milestones in the infancy of Design."[54][55] OnAccess Research Network, Behe posted (on 3 February 1999) "Molecular Machines: Experimental Support for the Design Inference" with a note that "This paper was originally presented in the Summer of 1994 at the meeting of theC. S. LewisSociety, Cambridge University." An "Irreducible Complexity" section quoted Darwin, then discussed "the humble mousetrap", and "Molecular Machines", going into detail aboutciliabefore saying "Other examples of irreducible complexity abound, including aspects of protein transport, blood clotting, closed circular DNA, electron transport, the bacterial flagellum, telomeres, photosynthesis, transcription regulation, and much more. Examples of irreducible complexity can be found on virtually every page of a biochemistry textbook." Suggesting "these things cannot be explained by Darwinian evolution," he said they had been neglected by the scientific community.[56][57] Behe first published the term "irreducible complexity" in his 1996 bookDarwin's Black Box, where he set out his ideas about theoretical properties of some complex biochemicalcellularsystems, now including the bacterial flagellum. He posits that evolutionary mechanisms cannot explain the development of such "irreducibly complex" systems. Notably, Behe credits philosopherWilliam Paleyfor the original concept (alone among the predecessors). Intelligent design advocates argue that irreducibly complex systems must have been deliberately engineered by some form ofintelligence. In 2001, Behe wrote: "[T]here is an asymmetry between my current definition of irreducible complexity and the task facing natural selection. I hope to repair this defect in future work." Behe specifically explained that the "current definition puts the focus on removing a part from an already functioning system", but the "difficult task facing Darwinian evolution, however, would not be to remove parts from sophisticated pre-existing systems; it would be to bring together components to make a new system in the first place".[58]In the 2005Kitzmiller v. Dover Area School Districttrial, Behe testified under oath that he "did not judge [the asymmetry] serious enough to [have revised the book] yet."[59] Behe additionally testified that the presence of irreducible complexity in organisms would not rule out the involvement of evolutionary mechanisms in the development of organic life. He further testified that he knew of no earlier "peer reviewed articles in scientific journals discussing the intelligent design of the blood clotting cascade," but that there were "probably a large number of peer reviewed articles in science journals that demonstrate that the blood clotting system is indeed a purposeful arrangement of parts of great complexity and sophistication."[60](The judge ruled that "intelligent design is not science and is essentially religious in nature".)[61] Thescientific theoryofevolutionincorporates evidence that genetic variations occur, but makes no assumptions ofpurposeful designor intent. 
The environment "selects" the variants which have the highest fitness for conditions at the time, and these heritable variations are then passed on to the next generation of organisms. Change occurs by the gradual operation of natural forces over time, perhaps slowly, perhaps more quickly (seepunctuated equilibrium). This process is able toadaptcomplex structures from simpler beginnings, or convert complex structures from one function to another (seespandrel). Most intelligent design advocates accept that evolution occurs through mutation and natural selection at the "micro level", such as changing the relative frequency of various beak lengths in finches, but assert that it cannot account for irreducible complexity, because none of the parts of an irreducible system would be functional or advantageous until the entire system is in place. Behe uses the mousetrap as an illustrative example of this concept. A mousetrap consists of five interacting pieces: the base, the catch, the spring, the hammer, and the hold-down bar. All of these must be in place for the mousetrap to work, as the removal of any one piece destroys the function of the mousetrap. Likewise, he asserts that biological systems require multiple parts working together in order to function. Intelligent design advocates claim that natural selection could not create from scratch those systems for which science is currently unable to find a viable evolutionary pathway of successive, slight modifications, because the selectable function is only present when all parts are assembled. In his 2008 bookOnly A Theory, biologistKenneth R. Millerchallenges Behe's claim that the mousetrap is irreducibly complex.[63]Miller observes that various subsets of the five components can be devised to form cooperative units, ones that have different functions from the mousetrap and so, in biological terms, could form functionalspandrelsbefore being adapted to the new function of catching mice. In an example taken from his high school experience, Miller recalls that one of his classmates ...struck upon the brilliant idea of using an old, broken mousetrap as a spitball catapult, and it worked brilliantly.... It had worked perfectly as something other than a mousetrap.... my rowdy friend had pulled a couple of parts—probably the hold-down bar and catch—off the trap to make it easier to conceal and more effective as a catapult... [leaving] the base, the spring, and the hammer. Not much of a mousetrap, but a helluva spitball launcher.... I realized why [Behe's] mousetrap analogy had bothered me. It was wrong. The mousetrap is not irreducibly complex after all.[63] Other systems identified by Miller that include mousetrap components include the following:[63] The point of the reduction is that—in biology—most or all of the components were already at hand, by the time it became necessary to build a mousetrap. As such, it required far fewer steps to develop a mousetrap than to design all the components from scratch. Thus, the development of the mousetrap, said to consist of five different parts which had no function on their own, has been reduced to one step: the assembly from parts that are already present, performing other functions. Supporters of intelligent design argue that anything less than the complete form of such a system or organ would not work at all, or would in fact be adetrimentto the organism, and would therefore never survive the process of natural selection. 
Although they accept that some complex systems and organscanbe explained by evolution, they claim that organs and biological features which areirreducibly complexcannot be explained by current models, and that an intelligent designer must have created life or guided its evolution. Accordingly, the debate on irreducible complexity concerns two questions: whether irreducible complexity can be found in nature, and what significance it would have if it did exist in nature.[64] Behe's original examples of irreducibly complex mechanisms included the bacterialflagellumofE. coli,the blood clotting cascade,cilia, and theadaptive immune system. Behe argues that organs and biological features which are irreducibly complex cannot be wholly explained by current models ofevolution. In explicating his definition of "irreducible complexity" he notes that: An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional. Irreducible complexity is not an argument that evolution does not occur, but rather an argument that it is "incomplete". In the last chapter ofDarwin's Black Box, Behe goes on to explain his view that irreducible complexity is evidence forintelligent design. Mainstream critics, however, argue that irreducible complexity, as defined by Behe, can be generated by known evolutionary mechanisms. Behe's claim that no scientific literature adequately modeled the origins of biochemical systems through evolutionary mechanisms has been challenged byTalkOrigins.[65][66]The judge in theDovertrial wrote "By defining irreducible complexity in the way that he has, Professor Behe attempts to exclude the phenomenon ofexaptationby definitional fiat, ignoring as he does so abundant evidence which refutes his argument. Notably, theNAShas rejected Professor Behe's claim for irreducible complexity..."[67] Behe and others have suggested a number of biological features that they believed to be irreducibly complex. The process of blood clotting orcoagulationcascade in vertebrates is a complex biological pathway which is given as an example of apparent irreducible complexity.[68] The irreducible complexity argument assumes that the necessary parts of a system have always been necessary, and therefore could not have been added sequentially. However, in evolution, something which is at first merely advantageous can later become necessary.[46]Natural selectioncan lead to complex biochemical systems being built up from simpler systems, or to existing functional systems being recombined as a new system with a different function.[67]For example, one of the clotting factors that Behe listed as a part of the clotting cascade (Factor XII, also called Hageman factor) was later found to be absent in whales, demonstrating that it is not essential for a clotting system.[69]Many purportedly irreducible structures can be found in other organisms as much simpler systems that utilize fewer parts. These systems, in turn, may have had even simpler precursors that are now extinct. Behe has responded to critics of his clotting cascade arguments by suggesting thathomologyis evidence for evolution, but not for natural selection.[70] The "improbability argument" also misrepresents natural selection. 
It is correct to say that a set of simultaneous mutations that form a complex protein structure is so unlikely as to be unfeasible, but that is not what Darwin advocated. His explanation is based on small accumulated changes that take place without a final goal. Each step must be advantageous in its own right, although biologists may not yet understand the reason behind all of them—for example,jawless fishaccomplish blood clotting with just six proteins instead of the full ten.[71] Theeyeis frequently cited by intelligent design and creationism advocates as a purported example of irreducible complexity. Behe used the "development of the eye problem" as evidence for intelligent design inDarwin's Black Box. Although Behe acknowledged that the evolution of the larger anatomical features of the eye have been well-explained, he pointed out that the complexity of the minute biochemical reactions required at a molecular level for light sensitivity still defies explanation. CreationistJonathan Sarfatihas described the eye as evolutionary biologists' "greatest challenge as an example of superb 'irreducible complexity' in God's creation", specifically pointing to the supposed "vast complexity" required for transparency.[72][failed verification][non-primary source needed] In an often misquoted[73]passage fromOn the Origin of Species,Charles Darwinappears to acknowledge the eye's development as a difficulty for his theory. However, the quote in context shows that Darwin actually had a very good understanding of the evolution of the eye (seefallacy of quoting out of context). He notes that "to suppose that the eye ... could have been formed by natural selection, seems, I freely confess, absurd in the highest possible degree". Yet this observation was merely arhetorical devicefor Darwin. He goes on to explain that if gradual evolution of the eye could be shown to be possible, "the difficulty of believing that a perfect and complex eye could be formed by natural selection ... can hardly be considered real". He then proceeded to roughly map out a likely course for evolution using examples of gradually more complex eyes of various species.[74] Since Darwin's day, the eye's ancestry has become much better understood. Although learning about the construction of ancient eyes through fossil evidence is problematic due to the soft tissues leaving no imprint or remains, genetic and comparative anatomical evidence has increasingly supported the idea of a common ancestry for all eyes.[75][76][77] Current evidence does suggest possible evolutionary lineages for the origins of the anatomical features of the eye. One likely chain of development is that the eyes originated as simple patches ofphotoreceptor cellsthat could detect the presence or absence of light, but not its direction. When, via random mutation across the population, the photosensitive cells happened to have developed on a small depression, it endowed the organism with a better sense of the light's source. This small change gave the organism an advantage over those without the mutation. This genetic trait would then be "selected for" as those with the trait would have an increased chance of survival, and therefore progeny, over those without the trait. Individuals with deeper depressions would be able to discern changes in light over a wider field than those individuals with shallower depressions. 
As ever deeper depressions were advantageous to the organism, gradually, this depression would become a pit into which light would strike certain cells depending on its angle. The organism slowly gained increasingly precise visual information. And again, this gradual process continued as individuals having a slightly shrunkenapertureof the eye had an advantage over those without the mutation as an aperture increases howcollimatedthe light is at any one specific group of photoreceptors. As this trait developed, the eye became effectively apinhole camerawhich allowed the organism to dimly make out shapes—thenautilusis a modern example of an animal with such an eye. Finally, via this same selection process, a protective layer of transparent cells over the aperture was differentiated into a crudelens, and the interior of the eye was filled with humours to assist in focusing images.[78][79][80]In this way, eyes are recognized by modern biologists as actually a relatively unambiguous and simple structure to evolve, and many of the major developments of the eye's evolution are believed to have taken place over only a few million years, during theCambrian explosion.[81]Behe asserts that this is only an explanation of the gross anatomical steps, however, and not an explanation of the changes in discrete biochemical systems that would have needed to take place.[82] Behe maintains that the complexity of light sensitivity at the molecular level and the minute biochemical reactions required for those first "simple patches of photoreceptor[s]" still defies explanation, and that the proposed series of infinitesimal steps to get from patches of photoreceptors to a fully functional eye would actually be considered great, complex leaps in evolution if viewed on the molecular scale. Other intelligent design proponents claim that the evolution of the entire visual system would be difficult rather than the eye alone.[83] Theflagellaof certain bacteria constitute amolecular motorrequiring the interaction of about 40 different protein parts. The flagellum (or cilium) developed from the pre-existing components of the eukaryotic cytoskeleton.[84][85]In bacterial flagella, strong evidence points to an evolutionary pathway from a Type III secretory system, a simpler bacterial secretion system.[86]Despite this, Behe presents this as a prime example of an irreducibly complex structure defined as "a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning", and argues that since "an irreducibly complex system that is missing a part is by definition nonfunctional", it could not have evolved gradually throughnatural selection.[87]However, each of the three types of flagella—eukaryotic, bacterial, and archaeal—has been shown to have evolutionary pathways. For archaeal flagella, there is a molecular homology with bacterial Type IV pili, pointing to an evolutionary link.[88]In all these cases, intermediary, simpler forms of the structures are possible and provide partial functionality. Reducible complexity. 
In contrast to Behe's claims, many proteins can be deleted or mutated and the flagellum still works, even though sometimes at reduced efficiency.[89]In fact, the composition of flagella is surprisingly diverse across bacteria with many proteins only found in some species but not others.[90]Hence the flagellar apparatus is clearly very flexible in evolutionary terms and perfectly able to lose or gain protein components. Further studies have shown that, contrary to claims of "irreducible complexity", flagella and thetype-III secretion systemshare several components which provides strong evidence of a shared evolutionary history (see below). In fact, this example shows how a complex system can evolve from simpler components.[91][92]Multiple processes were involved in the evolution of the flagellum, includinghorizontal gene transfer.[93] Evolution from type three secretion systems. The basal body of the flagella has been found to be similar to theType III secretion system(TTSS), a needle-like structure that pathogenic germs such asSalmonellaandYersinia pestisuse to injecttoxinsinto livingeukaryotecells.[87][94]The needle's base has ten elements in common with the flagellum, but it is missing forty of the proteins that make a flagellum work.[95]The TTSS system negates Behe's claim that taking away any one of the flagellum's parts would prevent the system from functioning. On this basis,Kenneth Millernotes that, "The parts of this supposedly irreducibly complex system actually have functions of their own."[96][97]Studies have also shown that similar parts of the flagellum in different bacterial species can have different functions despite showing evidence of common descent, and that certain parts of the flagellum can be removed without eliminating its functionality.[98]Behe responded to Miller by asking "why doesn't he just take an appropriate bacterial species, knock out the genes for its flagellum, place the bacterium under selective pressure (for mobility, say), and experimentally produce a flagellum—or any equally complex system—in the laboratory?"[99]However a laboratory experiment has been performed where "immotile strains of the bacterium Pseudomonas fluorescens that lack flagella [...] regained flagella within 96 hours via a two-step evolutionary pathway", concluding that "natural selection can rapidly rewire regulatory networks in very few, repeatable mutational steps".[100] Dembski has argued that phylogenetically, the TTSS is found in a narrow range of bacteria which makes it seem to him to be a late innovation, whereas flagella are widespread throughout many bacterial groups, and he argues that it was an early innovation.[101][102]Against Dembski's argument, different flagella use completely different mechanisms, and publications show a plausible path in which bacterial flagella could have evolved from a secretion system.[103] Theciliumconstruction ofaxonememicrotubules movement by the sliding ofdyneinprotein was cited by Behe as an example of irreducible complexity.[104]He further said that the advances in knowledge in the subsequent 10 years had shown that the complexity ofintraflagellar transportfor two hundred components cilium and many other cellular structures is substantially greater than was known earlier.[105] Like intelligent design, the concept it seeks to support, irreducible complexity has failed to gain any notable acceptance within thescientific community. 
Researchers have proposed potentially viable evolutionary pathways for allegedly irreducibly complex systems such as blood clotting, the immune system[106]and the flagellum[107][108]—the three examples Behe proposed. John H. McDonald even showed his example of a mousetrap to be reducible.[62]If irreducible complexity is an insurmountable obstacle to evolution, it should not be possible to conceive of such pathways.[109] Niall Shanks and Karl H. Joplin, both ofEast Tennessee State University, have shown that systems satisfying Behe's characterization of irreducible biochemical complexity can arise naturally and spontaneously as the result of self-organizing chemical processes.[11]They also assert that what evolved biochemical and molecular systems actually exhibit is "redundant complexity"—a kind of complexity that is the product of an evolved biochemical process. They claim that Behe overestimated the significance of irreducible complexity because of his simple, linear view of biochemical reactions, resulting in his taking snapshots of selective features of biological systems, structures, and processes, while ignoring the redundant complexity of the context in which those features are naturally embedded. They also criticized his over-reliance on overly simplistic metaphors, such as his mousetrap. A computer model of the co-evolution of proteins binding to DNA in the peer-reviewed journalNucleic Acids Researchconsisted of several parts (DNA binders and DNA binding sites) which contribute to the basic function; removal of either one leads immediately to the death of the organism. This model fits the definition of irreducible complexity exactly, yet it evolves.[110](The program can be run fromEv program.) One can compare a mousetrap with a cat in this context. Both normally function so as to control the mouse population. The cat has many parts that can be removed leaving it still functional; for example, its tail can be bobbed, or it can lose an ear in a fight. Comparing the cat and the mousetrap, then, one sees that the mousetrap (which is not alive) offers better evidence, in terms of irreducible complexity, for intelligent design than the cat. Even looking at the mousetrap analogy, several critics have described ways in which the parts of the mousetrap could have independent uses or could develop in stages, demonstrating that it is not irreducibly complex.[62][63] Moreover, even cases where removing a certain component in an organic system will cause the system to fail do not demonstrate that the system could not have been formed in a step-by-step, evolutionary process. By analogy, stone arches are irreducibly complex—if you remove any stone the arch will collapse—yet humansbuild themeasily enough, one stone at a time, by building overcenteringthat is removed afterward. Similarly,naturally occurring archesof stone form by the weathering away of bits of stone from a large concretion that has formed previously. Evolution can act to simplify as well as to complicate. This raises the possibility that seemingly irreducibly complex biological features may have been achieved with a period of increasing complexity, followed by a period of simplification. A team led byJoseph Thornton, assistant professor of biology at theUniversity of Oregon's Center for Ecology and Evolutionary Biology, using techniques for resurrecting ancient genes, reconstructed the evolution of an apparently irreducibly complex molecular system. 
The April 7, 2006 issue of Science published this research.[10][111] Irreducible complexity may not actually exist in nature, and the examples given by Behe and others may not in fact represent irreducible complexity, but can be explained in terms of simpler precursors. The theory of facilitated variation challenges irreducible complexity. Marc W. Kirschner, professor and chair of the Department of Systems Biology at Harvard Medical School, and John C. Gerhart, a professor in Molecular and Cell Biology at the University of California, Berkeley, presented this theory in 2005. They describe how certain mutations and changes can cause apparent irreducible complexity. Thus, seemingly irreducibly complex structures are merely "very complex", or they are simply misunderstood or misrepresented. The precursors of complex systems, when they are not useful in themselves, may be useful to perform other, unrelated functions. Evolutionary biologists argue that evolution often works in this kind of blind, haphazard manner in which the function of an early form is not necessarily the same as the function of the later form. The term used for this process is exaptation. The mammalian middle ear (derived from a jawbone) and the panda's thumb (derived from a wrist bone spur) provide classic examples. A 2006 article in Nature demonstrates intermediate states leading toward the development of the ear in a Devonian fish (about 360 million years ago).[112] Furthermore, recent research shows that viruses play a heretofore unexpected role in evolution by mixing and matching genes from various hosts.[113] Arguments for irreducibility often assume that things started out the same way they ended up—as we see them now. However, that may not necessarily be the case. In the Dover trial an expert witness for the plaintiffs, Ken Miller, demonstrated this possibility using Behe's mousetrap analogy. By removing several parts, Miller made the object unusable as a mousetrap, but he pointed out that it was now a perfectly functional, if unstylish, tie clip.[63][114] Irreducible complexity can be seen as equivalent to an "uncrossable valley" in a fitness landscape.[115] A number of mathematical models of evolution have explored the circumstances under which such valleys can, nevertheless, be crossed.[116][117][115][118] An example of a structure that is claimed in Dembski's book No Free Lunch to be irreducibly complex, but evidently has evolved, is the protein T-urf13,[119] which is responsible for the cytoplasmic male sterility of waxy corn and is due to a completely new gene.[120] It arose from the fusion of several non-protein-coding fragments of mitochondrial DNA and the occurrence of several mutations, all of which were necessary. Behe's book Darwin Devolves claims that things like this would take billions of years and could not arise from random tinkering, but the corn was bred during the 20th century.
When presented with T-urf13 as an example for the evolvability of irreducibly complex systems, the Discovery Institute resorted to its flawed probability argument based on false premises, akin to theTexas sharpshooter fallacy.[121] Some critics, such asJerry Coyne(professor ofevolutionary biologyat theUniversity of Chicago) andEugenie Scott(aphysical anthropologistand former executive director of theNational Center for Science Education) have argued that the concept of irreducible complexity and, more generally,intelligent designis notfalsifiableand, therefore, notscientific.[citation needed] Behe argues that the theory that irreducibly complex systems could not have evolved can be falsified by an experiment where such systems are evolved. For example, he posits taking bacteria with noflagellumand imposing a selective pressure for mobility. If, after a few thousand generations, the bacteria evolved the bacterial flagellum, then Behe believes that this would refute his theory.[122][non-primary source needed]This has been done: a laboratory experiment has been performed where "immotile strains of the bacteriumPseudomonas fluorescensthat lack flagella [...] regained flagella within 96 hours via a two-step evolutionary pathway", concluding that "natural selection can rapidly rewire regulatory networks in very few, repeatable mutational steps".[100][needs update] Other critics take a different approach, pointing to experimental evidence that they consider falsification of the argument for intelligent design from irreducible complexity. For example,Kenneth Millerdescribes the lab work of Barry G. Hall onE. colias showing that "Behe is wrong".[123] Other evidence that irreducible complexity is not a problem for evolution comes from the field ofcomputer science, which routinely uses computer analogues of the processes of evolution in order to automatically design complex solutions to problems. The results of suchgenetic algorithmsare frequently irreducibly complex since the process, like evolution, both removes non-essential components over time as well as adding new components. The removal of unused components with no essential function, like the natural process where rock underneath anatural archis removed, can produce irreducibly complex structures without requiring the intervention of a designer. 
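The following toy evolutionary simulation is a minimal sketch of that idea, not the Ev model or any published experiment; the parts, fitness values, and parameters are invented for illustration. It follows the "scaffolding" route described above: components are first favoured for standalone benefits, a strong synergy then rewards the assembled complex, and finally the standalone benefits disappear, leaving a system that fails Behe's removal test even though it arose stepwise.

import random

# Illustrative sketch only: a made-up fitness scheme, not any published simulation.
PARTS = list(range(6))      # pool of possible components an organism can carry
COMPLEX = {0, 1, 2}         # components that together form the hypothetical "machine"

def fitness(genome, solo_benefit):
    # Each part may help (or cost) on its own; the assembled complex adds a large bonus.
    score = 1.0 + solo_benefit * len(genome)
    if COMPLEX <= genome:   # all parts of the complex are present
        score += 10.0
    return score

def mutate(genome):
    # A mutation either gains or loses one randomly chosen part.
    g = set(genome)
    part = random.choice(PARTS)
    if part in g:
        g.discard(part)
    else:
        g.add(part)
    return g

def evolve(generations=2000, pop_size=200):
    population = [set() for _ in range(pop_size)]
    solo_benefit = 0.5                        # early on, each part is individually useful
    for gen in range(generations):
        if gen == generations // 2:
            solo_benefit = -0.1               # later, lone parts become a slight burden
        ranked = sorted(population, key=lambda g: fitness(g, solo_benefit), reverse=True)
        parents = ranked[: pop_size // 2]     # truncation selection on the fitter half
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return max(population, key=lambda g: fitness(g, solo_benefit))

best = evolve()
print("evolved parts:", sorted(best))
print("fitness intact:", fitness(best, -0.1))
print("fitness with one part removed:", fitness(best - {0}, -0.1))

In a typical run the surviving genomes carry the full three-part complex, and deleting any one of those parts collapses the fitness score: the end product passes the removal test for "irreducible complexity" even though every step along the way was individually advantageous.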
Researchers applying these algorithms automatically produce human-competitive designs—but no human designer is required.[124] Intelligent design proponents attribute to an intelligent designer those biological structures they believe are irreducibly complex and therefore they say a natural explanation is insufficient to account for them.[125]However, critics view irreducible complexity as a special case of the "complexity indicates design" claim, and thus see it as anargument from ignoranceand as aGod-of-the-gapsargument.[126] Eugenie ScottandGlenn Branchof theNational Center for Science Educationnote that intelligent design arguments from irreducible complexity rest on the false assumption that a lack of knowledge of a natural explanation allows intelligent design proponents to assume an intelligent cause, when the proper response of scientists would be to say that we do not know, and further investigation is needed.[127]Other critics describe Behe as saying that evolutionary explanations are not detailed enough to meet his standards, while at the same time presenting intelligent design as exempt from having to provide any positive evidence at all.[128][129] Irreducible complexity is at its core an argument against evolution. If truly irreducible systems are found, the argument goes, thenintelligent designmust be the correct explanation for their existence. However, this conclusion is based on the assumption that currentevolutionarytheory and intelligent design are the only two valid models to explain life, afalse dilemma.[130][131] At the 2005Kitzmiller v. Dover Area School Districttrial, expert witness testimony defending ID and IC was given by Behe and Scott Minnich, who had been one of the "Johnson-Behe cadre of scholars" at Pajaro Dunes in 1993, was prominent in ID,[132]and was now a tenured associate professor in microbiology at theUniversity of Idaho.[133]Behe conceded that there are no peer-reviewed papers supporting his claims that complexmolecularsystems, like the bacterial flagellum, the blood-clotting cascade, and the immune system, were intelligently designed nor are there any peer-reviewed articles supporting his argument that certain complex molecular structures are "irreducibly complex."[134]There was extensive discussion of IC arguments about the bacterial flagellum, first published in Behe's1996 book, and when Minnich was asked if similar claims in a 1994Creation Research Societyarticle presented the same argument, Minnich said he did not have any problem with that statement.[7][135] In the final ruling ofKitzmiller v. Dover Area School District, Judge Jones specifically singled out irreducible complexity:[134]
https://en.wikipedia.org/wiki/Irreducible_complexity
The Omega Point is a theorized future event in which the entirety of the universe spirals toward a final point of unification. The term was invented by the French Jesuit Catholic priest Pierre Teilhard de Chardin (1881–1955).[1] Teilhard argued that the Omega Point resembles the Christian Logos, namely Christ, who draws all things into himself, who in the words of the Nicene Creed, is "God from God", "Light from Light", "True God from True God", and "through him all things were made".[2] In the Book of Revelation, Christ describes himself three times as "the Alpha and the Omega, the beginning and the end". Several decades after Teilhard's death, the idea of the Omega Point was expanded upon in the writings of John David Garcia (1971), Paolo Soleri (1981), Frank Tipler (1994), and David Deutsch (1997).[3][4][5] Teilhard de Chardin was a paleontologist and Roman Catholic priest in the Jesuit order. In France in the 1920s, he began incorporating his theories of the universe into lectures that placed Catholicism and evolution in the same conversation. Because of these lectures, he was suspected by the Holy Office of denying the doctrine of original sin. This caused Teilhard to be exiled to China and banned from publication by Church authorities.[6] It was not until one year after his death in 1955 that his writings were published for the world to read. His works were also supported by the writings of a group of Catholic thinkers, which includes Pope Benedict XVI.[6] His book The Phenomenon of Man has been dissected by astrophysicists and cosmologists, and is now viewed as a work positing a theological or philosophical theory that cannot be scientifically proven. Teilhard, who was not a cosmologist, opens the book with the statement: ... if this book is to be properly understood, it must be read not as a work on metaphysics, still less as a sort of theological essay, but purely and simply as a scientific treatise.[7] According to Teilhard, evolution does not end with mankind, and Earth's biosphere evolved before humans existed. He described evolution as a progression from inanimate matter to a future state of Divine consciousness through Earth's "hominization".[8] He also maintained that one-celled organisms develop into metazoans, or animals, but some members of this classification develop organisms with complex nervous systems. This group has the capability to acquire intelligence. When Homo sapiens inhabited Earth through evolution, a noosphere, the cognitive layer of existence, was created. As evolution continues, the noosphere gains coherence. Teilhard explained that this noosphere can be moved toward or constructed to be the Omega Point, the final evolutionary stage, with the help of science.[9] Teilhard refers to this process as "planetization." Eventually, the noosphere gains total dominance over the biosphere and reaches a point of complete independence from tangential energy, forming a metaphysical being, called the Omega Point.[10] Energy exists in two basic modes: tangential energy and radial energy. Teilhard defines Radial Energy as becoming more concentrated and available as it is a critical element in man's evolution. The theory applies to all forms of matter, concluding that everything with existence has some sort of life. In regard to Teilhard's The Phenomenon of Man, Peter Medawar wrote, "Teilhard's radial, spiritual, or psychic energy may be equated to 'information' or 'information content' in the sense that has been made reasonably precise by communication engineers."[11] Teilhard's theory is based on four "properties": ...
what would have become of humanity, if, by some remote chance, it had been free to spread indefinitely on an unlimited surface, that is to say, left only to the devices of its internal affinities? Something unimaginable. ... Perhaps even nothing at all, when we think of the extreme importance of the role played in its development by the forces of compression.[12] Teilhard calls the contributing universal energy that generates the Omega Point "forces of compression". Unlike the scientific definition, which incorporatesgravityandmass, Teilhard's forces of compression are sourced from communication and contact between human beings. This value is limitless and directly correlated with entropy. It suggests that as humans continue to interact,consciousnessevolves and grows. For the theory to occur, humans must also be bound to the finite earth. The creation of this boundary forces the world's convergence upon itself which he theorizes to result in time ending in communion with the Omega Point-God. This portion of Teilhard's thinking shows his lack of expectation for humans to engage in space travel and transcend the bounds of Earth.[10] Mathematical physicistFrank Tiplergeneralized[13]Teilhard's termOmega Pointto describe what he alleges is theultimate fate of the universeas required by thelaws of physics: roughly, Tipler argues that quantum mechanics is inconsistent unless the future of every point in spacetime contains an intelligent observer to collapse the wavefunction and that the only way for this to happen is if the Universe is closed (that is, it will collapse to a single point) and yet contains observers with a "God-like" ability to perform an unbounded series of observations in finite time.[14]Tipler's conception of the Omega Point is regarded as pseudoscience by some scientists.[15][16][better source needed] The originator of quantum computing, Oxford'sDavid Deutsch, wrote about how a universal quantum computer could bring about Tipler's salvation in his 1997 book,The Fabric of Reality. Pierre Teilhard de Chardin's life (1881–1955) was bracketed by theFirst Vatican Council(1869) and theSecond Vatican Council(1965). He was born 20 years after the publication ofCharles Darwin'sOn the Origin of Species; soon after, the claims of scientific theories and those of traditional theological teachings became of great interest to the Vatican.[17] In 1946,Pope Pius XIIstated his concern about thetheory of evolution, albeit without condemning it: If such a doctrine were to be spread, what will become of the unchangeable Catholic dogmas, what of the unity and the stability of the Creed?[18] Teilhard's theory was a personal attempt in creating a new Christianity in which science and theology coexist[citation needed]. The outcome was that his theory of the Omega Point was not perfectly scientific as examined by physicists, and not perfectly Christian either. By 1962,The Society of Jesushad strayed from Spanish Jesuit PriestFrancisco Suarez's philosophies on Man in favor of "Teilhardian evolutionary cosmogenesis." Teilhard's Christ is the "Cosmic Christ" or the "Omega" of revelation. He is an emanation of God which is made of matter and experienced the nature of evolution by being born into this world and dying. His resurrection from the dead was not to heaven, but to the noosphere, the area of convergence of all spirituality and spiritual beings, where Christ will be waiting at the end of time. 
When the earth reaches its Omega Point, everything that exists will become one with divinity.[19] Teilhard reaffirmed the role of the Church in the following letter to Auguste Valensin. It is important to note that he defines evolution as a scientific phenomenon set in motion by God – that science and the divine are interconnected and acting through one another: I believe in the Church,mediatrixbetween God and the world[.] ... The Church, the reflectively christified portion of the world, the Church, the principal focus of inter-human affinities through super-charity, the Church, the central axis of universal convergence and the precise point of contact between the universe and Omega Point. ... The Catholic Church, however, must not simply seek to affirm its primacy and authority but quite simply to present the world with the Universal Christ, Christ in human-cosmic dimension, as the animator of evolution.[20] In 1998, a value measured from observations ofType Ia supernovaeseemed to indicate that what was once assumed to be temporary cosmological expansion was actually accelerating.[21]The apparent acceleration has caused further dismissal of the validity of Tipler's Omega Point, since the necessity of a final big crunch singularity is key to the Omega Point's workability. However, Tipler believes that the Omega Point is still workable, arguing that a big crunch/ final singularity is still required under many current universal models.[22][23] Thetechnological singularityis the hypothetical advent ofartificial general intelligencebecoming capable of recursive self-improvement, resulting in an irreversible machineintelligence explosion, with unknown impact on humanity.[24]Eric Steinhart, a proponent of "Christian transhumanism," argues there is a significant overlap of ideas between the secular singularity andTeilhard's religious Omega Point.[3]Steinhart quotesRay Kurzweil, who stated that "evolution moves inexorably toward our conception of God, albeit never reaching this ideal."[3][25]Like Kurzweil,Teilhardpredicted a period of rapid technological change that results in a merger of humanity and technology. He believes that this marks the birth of thenoosphereand the emergence of the "spirit of the Earth," but theTeilhardian Singularitycomes later. Unlike Kurzweil,Teilhard's singularity is marked by the evolution of human intelligence reaching a critical point in which humans ascend from "transhuman" to "posthuman." He identifies this with the Christian"parousia."[3] The Spanish painterSalvador Dalíwas familiar with Teilhard de Chardin's Omega Point theory. His 1959 paintingThe Ecumenical Councilis said to represent the "interconnectedness" of the Omega Point.[26]Point OmegabyDon DeLillotakes its name from the theory and involves a character who is studying Teilhard de Chardin.[27]Flannery O'Connor's acclaimed collection of short stories refers to the Omega Point theory in its title,Everything That Rises Must Converge, and science fiction writerFrederik Pohlreferences Frank Tipler and the Omega Point in his 1998 short story "The Siege of Eternity".[28]Scottish writer / counterculture figureGrant Morrisonhas used the Omega Point as a plot line in several of his Justice League of America and Batman stories.[29][30][31] Dan Simmons references Teilhard and the Omega Point throughout theHyperion Cantos, with extended discussions about the feasibility of the concept driving much of the plot. 
Julian May's Galactic Milieu Series includes multiple references to Chardin, the Omega Point and the Noosphere. Part of the driving force for the Milieu of the title is to promote an increase in the population of various intelligent species, including humans, in order to enable them to reach a point of psychic Unity. Arthur C. Clarke and Stephen Baxter's The Light of Other Days references Teilhard de Chardin and includes a brief explanation of the Omega Point.[32] Italian writer Valerio Evangelisti has used the Omega Point as the main theme of his novel Il Fantasma di Eymerich.[33] In William Peter Blatty's novel The Exorcist, the character of Father Merrin references the Omega Point. In 2021, Dutch symphonic metal band Epica released their eighth studio album, Omega, which features concepts related to the Omega Point theory. Epica's guitarist and vocalist, Mark Jansen, specifically referenced Teilhard's theory when describing the album's concept.[34] Charles Sheffield's 1997 novel Tomorrow and Tomorrow also uses the concept in the concluding act of the novel.
https://en.wikipedia.org/wiki/Law_of_Complexity-Consciousness
Libertarianismis one of the mainphilosophicalpositions related to the problems offree willanddeterminismwhich are part of the larger domain ofmetaphysics.[1]In particular, libertarianism is anincompatibilistposition[2][3]which argues that free will is logically incompatible with a deterministic universe. Libertarianism states that since agents have free will, determinism must be false.[4] One of the first clear formulations of libertarianism is found inJohn Duns Scotus. In a theological context, metaphysical libertarianism was notablydefendedby Jesuit authors likeLuis de MolinaandFrancisco Suárezagainst the rathercompatibilistThomistBañecianism. Other important metaphysical libertarians in theearly modern periodwereRené Descartes,George Berkeley,Immanuel KantandThomas Reid.[5] Roderick Chisholmwas a prominent defender of libertarianism in the 20th century[6]and contemporary libertarians includeRobert Kane,Geert Keil,Peter van InwagenandRobert Nozick. The first recorded use of the termlibertarianismwas in 1789 byWilliam Belshamin a discussion of free will and in opposition tonecessitarianordeterministviews.[7][8] Metaphysical libertarianism is one philosophical viewpoint under that of incompatibilism. Libertarianism holds onto a concept of free will that requires theagentto be able to take more than one possible course of action under a given set of circumstances. Accounts of libertarianism subdivide into non-physical theories and physical or naturalistic theories. Non-physical theories hold that the events in the brain that lead to the performance of actions do not have an entirely physical explanation, and consequently the world is not closed under physics. Suchinteractionist dualistsbelieve that some non-physicalmind, will, orsouloverrides physicalcausality. Explanations of libertarianism that do not involve dispensing withphysicalismrequire physical indeterminism, such as probabilistic subatomic particle behavior—a theory unknown to many of the early writers on free will. Physical determinism, under the assumption of physicalism, implies there is only one possible future and is therefore not compatible with libertarian free will. Some libertarian explanations involve invokingpanpsychism, the theory that a quality ofmindis associated with all particles, and pervades the entire universe, in both animate and inanimate entities. Other approaches do not require free will to be a fundamental constituent of the universe; ordinary randomness is appealed to as supplying the "elbow room" believed to be necessary by libertarians. Freevolitionis regarded as a particular kind of complex, high-level process with an element of indeterminism. An example of this kind of approach has been developed byRobert Kane,[9]where he hypothesizes that, In each case, the indeterminism is functioning as a hindrance or obstacle to her realizing one of her purposes—a hindrance or obstacle in the form of resistance within her will which has to be overcome by effort. Although at the timequantum mechanics(and physicalindeterminism) was only in the initial stages of acceptance, in his bookMiracles: A preliminary studyC. S. Lewis stated the logical possibility that if the physical world were proved indeterministic this would provide an entry point to describe an action of a non-physical entity on physical reality.[10]Indeterministicphysical models (particularly those involvingquantum indeterminacy) introduce random occurrences at an atomic or subatomic level. 
These events might affect brain activity, and could seemingly allowincompatibilistfree will if the apparent indeterminacy of some mental processes (for instance, subjective perceptions of control in consciousvolition) maps to the underlying indeterminacy of the physical construct. This relationship, however, requires a causative role over probabilities that is questionable,[11]and it is far from established that brain activity responsible for human action can be affected by such events. Secondarily, these incompatibilist models are dependent upon the relationship between action and conscious volition, as studied in theneuroscience of free will. It is evident that observation may disturb the outcome of the observation itself, rendering limited our ability to identify causality.[12]Niels Bohr, one of the main architects of quantum theory, suggested, however, that no connection could be made between indeterminism of nature and freedom of will.[13] In non-physical theories of free will, agents are assumed to have power to intervene in the physical world, a view known asagent causation.[14][15][16][17][18][19][20][21]Proponents of agent causation includeGeorge Berkeley,[22]Thomas Reid,[23]andRoderick Chisholm.[24] Most events can be explained as the effects of prior events. When a tree falls, it does so because of the force of the wind, its own structural weakness, and so on. However, when a person performs a free act, agent causation theorists say that the action was not caused by any other events or states of affairs, but rather was caused by the agent. Agent causation isontologicallyseparate from event causation. The action was not uncaused, because the agent caused it. But the agent's causing it was not determined by the agent's character, desires, or past, since that would just be event causation.[25]As Chisholm explains it, humans have "a prerogative which some would attribute only to God: each of us, when we act, is aprime moverunmoved. In doing what we do, we cause certain events to happen, and nothing—or no one—causes us to cause those events to happen."[26] This theory involves a difficulty which has long been associated with the idea of an unmoved mover. If a free action was not caused by any event, such as a change in the agent or an act of the will, then what is the difference between saying that an agent caused the event and simply saying that the event happened on its own? AsWilliam Jamesput it, "If a 'free' act be a sheer novelty, that comes not from me, the previous me, but ex nihilo, and simply tacks itself on to me, how can I, the previous I, be responsible? How can I have any permanent character that will stand still long enough for praise or blame to be awarded?"[27]Agent causation advocates respond that agent causation is actually more intuitive than event causation. They point toDavid Hume's argument that when we see two events happen in succession, our belief that one event caused the other cannot be justified rationally (known as theproblem of induction). If that is so, where does our belief in causality come from? According to Thomas Reid, "the conception of an efficient cause may very probably be derived from the experience we have had ... 
of our own power to produce certain effects."[28]Our everyday experiences of agent causation provide the basis for the idea of event causation.[29] Event-causal accounts of incompatibilist free will typically rely upon physicalist models of mind (like those of the compatibilist), yet they presuppose physical indeterminism, in which certain indeterministic events are said to be caused by the agent. A number of event-causal accounts of free will have been created, referenced here asdeliberative indeterminism,centred accounts, andefforts of will theory.[30]The first two accounts do not require free will to be a fundamental constituent of the universe. Ordinary randomness is appealed to as supplying the "elbow room" that libertarians believe necessary. A first common objection to event-causal accounts is that the indeterminism could be destructive and could therefore diminish control by the agent rather than provide it (related to the problem of origination). A second common objection to these models is that it is questionable whether such indeterminism could add any value to deliberation over that which is already present in a deterministic world. Deliberative indeterminismasserts that the indeterminism is confined to an earlier stage in the decision process.[31][32]This is intended to provide an indeterminate set of possibilities to choose from, while not risking the introduction ofluck(random decision making). The selection process is deterministic, although it may be based on earlier preferences established by the same process. Deliberative indeterminism has been referenced byDaniel Dennett[33]andJohn Martin Fischer.[34]An obvious objection to such a view is that an agent cannot be assigned ownership over their decisions (or preferences used to make those decisions) to any greater degree than that of a compatibilist model. Centred accountspropose that for any given decision between two possibilities, the strength of reason will be considered for each option, yet there is still a probability the weaker candidate will be chosen.[35][36][37][38][39][40][41]An obvious objection to such a view is that decisions are explicitly left up to chance, and origination or responsibility cannot be assigned for any given decision. Efforts of will theoryis related to the role of will power in decision making. It suggests that the indeterminacy of agent volition processes could map to the indeterminacy of certain physical events—and the outcomes of these events could therefore be considered caused by the agent. Models ofvolitionhave been constructed in which it is seen as a particular kind of complex, high-level process with an element of physical indeterminism. An example of this approach is that ofRobert Kane, where he hypothesizes that "in each case, the indeterminism is functioning as a hindrance or obstacle to her realizing one of her purposes—a hindrance or obstacle in the form of resistance within her will which must be overcome by effort."[9]According to Robert Kane such "ultimate responsibility" is a required condition for free will.[42]An important factor in such a theory is that the agent cannot be reduced to physical neuronal events, but rather mental processes are said to provide an equally valid account of the determination of outcome as their physical processes (seenon-reductive physicalism). 
Epicurus, an ancientHellenistic philosopher, argued that as atoms moved through the void, there were occasions when they would "swerve" (clinamen) from their otherwise determined paths, thus initiating new causal chains. Epicurus argued that these swerves would allow us to be more responsible for our actions, something impossible if every action was deterministically caused. Epicurus did not say the swerve was directly involved in decisions. But followingAristotle, Epicurus thought human agents have the autonomous ability to transcend necessity and chance (both of which destroy responsibility), so that praise and blame are appropriate. Epicurus finds atertium quid, beyond necessity and beyond chance. Histertium quidis agent autonomy, what is "up to us." [S]ome things happen of necessity (ἀνάγκη), others by chance (τύχη), others through our own agency (παρ' ἡμᾶς). [...]. [N]ecessity destroys responsibility and chance is inconstant; whereas our own actions are autonomous, and it is to them that praise and blame naturally attach.[43] TheEpicureanphilosopherLucretius(1st century BC) saw the randomness as enabling free will, even if he could not explain exactly how, beyond the fact that random swerves would break the causal chain of determinism. Again, if all motion is always one long chain, and new motion arises out of the old in order invariable, and if the first-beginnings do not make by swerving a beginning of motion such as to break the decrees of fate, that cause may not follow cause from infinity, whence comes this freedom (libera) in living creatures all over the earth, whence I say is this will (voluntas) wrested from the fates by which we proceed whither pleasure leads each, swerving also our motions not at fixed times and fixed places, but just where our mind has taken us? For undoubtedly it is his own will in each that begins these things, and from the will movements go rippling through the limbs. However, the interpretation of these ancient philosophers is controversial. Tim O'Keefe has argued that Epicurus and Lucretius were not libertarians at all, but compatibilists.[44] Robert Nozickput forward an indeterministic theory of free will inPhilosophical Explanations(1981).[45] When human beings become agents through reflexive self-awareness, they express their agency by having reasons for acting, to which they assign weights. Choosing the dimensions of one's identity is a special case, in which the assigning of weight to a dimension is partly self-constitutive. But all acting for reasons is constitutive of the self in a broader sense, namely, by its shaping one's character and personality in a manner analogous to the shaping that law undergoes through the precedent set by earlier court decisions. Just as a judge does not merely apply the law but to some degree makes it through judicial discretion, so too a person does not merely discover weights but assigns them; one not only weighs reasons but also weights them. Set in train is a process of building a framework for future decisions that we are tentatively committed to. The lifelong process of self-definition in this broader sense is construedindeterministicallyby Nozick. The weighting is "up to us" in the sense that it is undetermined by antecedent causal factors, even though subsequent action is fully caused by the reasons one has accepted. 
He compares assigning weights in this indeterministic sense to "the currently orthodox interpretation of quantum mechanics", following von Neumann in understanding a quantum mechanical system as in a superposition or probability mixture of states, which changes continuously in accordance with quantum mechanical equations of motion and discontinuously via measurement or observation that "collapses the wave packet" from a superposition to a particular state. Analogously, a person before decision has reasons without fixed weights: he is in a superposition of weights. The process of decision reduces the superposition to a particular state that causes action. One particularly influential contemporary theory of libertarian free will is that of Robert Kane.[30][46][47] Kane argued that "(1) the existence of alternative possibilities (or the agent's power to do otherwise) is a necessary condition for acting freely, and that (2) determinism is not compatible with alternative possibilities (it precludes the power to do otherwise)".[48] The crux of Kane's position is grounded not in a defense of alternative possibilities (AP) but in the notion of what Kane refers to as ultimate responsibility (UR). Thus, AP is a necessary but insufficient criterion for free will.[49] It is necessary that there be (metaphysically) real alternatives for our actions, but that is not enough; our actions could be random without being in our control. The control is found in "ultimate responsibility". Ultimate responsibility entails that agents must be the ultimate creators (or originators) and sustainers of their own ends and purposes. There must be more than one way for a person's life to turn out (AP). More importantly, whichever way it turns out must be based in the person's willing actions. Kane defines it as follows: (UR) An agent is ultimately responsible for some (event or state) E's occurring only if (R) the agent is personally responsible for E's occurring in a sense which entails that something the agent voluntarily (or willingly) did or omitted either was, or causally contributed to, E's occurrence and made a difference to whether or not E occurred; and (U) for every X and Y (where X and Y represent occurrences of events and/or states) if the agent is personally responsible for X and if Y is an arche (sufficient condition, cause or motive) for X, then the agent must also be personally responsible for Y. In short, "an agent must be responsible for anything that is a sufficient reason (condition, cause or motive) for the action's occurring."[50] What allows for ultimacy of creation in Kane's picture are what he refers to as "self-forming actions" or SFAs—those moments of indecision during which people experience conflicting wills. These SFAs are the undetermined, regress-stopping voluntary actions or refrainings in the life histories of agents that are required for UR. UR does not require that every act done of our own free will be undetermined and thus that, for every act or choice, we could have done otherwise; it requires only that certain of our choices and actions be undetermined (and thus that we could have done otherwise), namely SFAs. These form our character or nature; they inform our future choices, reasons and motivations in action. If a person has had the opportunity to make a character-forming decision (SFA), they are responsible for the actions that are a result of their character. Randolph Clarke objects that Kane's depiction of free will is not truly libertarian but rather a form of compatibilism.
The objection asserts that although the outcome of an SFA is not determined, one's history up to the eventis; so the fact that an SFA will occur is also determined. The outcome of the SFA is based on chance, and from that point on one's life is determined. This kind of freedom, says Clarke, is no different from the kind of freedom argued for by compatibilists, who assert that even though our actions are determined, they are free because they are in accordance with our own wills, much like the outcome of an SFA.[51] Kane responds that the difference between causal indeterminism and compatibilism is "ultimate control—the originative control exercised by agents when it is 'up to them' which of a set of possible choices or actions will now occur, and up to no one and nothing else over which the agents themselves do not also have control".[52]UR assures that the sufficient conditions for one's actions do not lie before one's own birth. Galen Strawsonholds that there is a fundamental sense in whichfree willis impossible, whetherdeterminismis true or not. He argues for this position with what he calls his "basic argument", which aims to show that no-one is ever ultimately morally responsible for their actions, and hence that no one has free will in the sense that usually concerns us. In his book defending compatibilism,Freedom Evolves, Daniel Dennett spends a chapter criticising Kane's theory.[53]Kane believes freedom is based on certain rare and exceptional events, which he calls self-forming actions or SFAs. Dennett notes that there is no guarantee such an event will occur in an individual's life. If it does not, the individual does not in fact have free will at all, according to Kane. Yet they will seem the same as anyone else. Dennett finds an essentiallyindetectablenotion of free will to be incredible. Metaphysical libertarianism has faced significant criticism from both scientific and philosophical perspectives. One major objection comes from neuroscience. Experiments by Benjamin Libet and others suggest that the brain may initiate decisions before subjects become consciously aware of them[54], raising questions about whether conscious free will exists at all. Critics argue this challenges the libertarian notion of uncaused or agent-caused actions. Another prominent critique is the "luck objection." This argument claims that if an action is not determined by prior causes, then it seems to happen by chance. In this view, libertarian freedom risks reducing choice to randomness, undermining meaningful moral responsibility.[55] Compatibilists, such as Daniel Dennett, argue that free will is compatible with determinism and that libertarianism wrongly assumes that causal determinism automatically negates responsibility. They maintain that what matters is whether a person's actions stem from their internal motivations—not whether those actions are ultimately uncaused.[56] Some philosophers also raise metaphysical concerns about agent-causation, arguing that positing the agent as a "first cause" introduces mysterious or incoherent forms of causation into an otherwise naturalistic worldview.[57]
https://en.wikipedia.org/wiki/Libertarianism_(metaphysics)
Mass action in sociology refers to situations in which numerous people behave simultaneously in a similar way but individually and without coordination. For example, at any given moment, many thousands of people are shopping; without any coordination between themselves, they are nonetheless performing the same mass action. Another, more complicated example would be one based on a work of the 19th-century German sociologist Max Weber, The Protestant Ethic and the Spirit of Capitalism: Weber wrote that capitalism evolved when the Protestant ethic influenced a large number of people to create their own enterprises and engage in trade and the gathering of wealth. In other words, the Protestant ethic was a force behind an unplanned and uncoordinated mass action that led to the development of capitalism. A bank run is a mass action with sweeping implications. Upon hearing news of a bank's anticipated insolvency, many bank depositors may simultaneously rush down to a bank branch to withdraw their deposits.[1] More developed forms of mass action are group behavior and group action. In epidemiological (disease) models, assuming the "law of mass action" means assuming that individuals are homogeneously mixed and every individual is about as likely to interact with every other individual. This is a common assumption in models such as the SIR model. This idea serves as the main plot theme in author Isaac Asimov's work, Foundation. In the early books of the series, the main character, Hari Seldon, uses the principle of mass action to foresee the imminent fall of the Galactic Empire, which encompasses the entire Milky Way, and a dark age lasting thirty thousand years before a second great empire arises. (In later books the principle is augmented with more recent developments in mathematical sociology.) With this, he hopes to reduce that dark age to only one thousand years, ostensibly by creating an Encyclopedia Galactica to retain all current knowledge.
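To make the epidemiological use of the mass-action assumption above concrete, here is a minimal SIR sketch in Python; the infection term beta*S*I/N is the homogeneous-mixing ("law of mass action") contact term, and the parameter values, step size, and function name are arbitrary choices for illustration rather than anything taken from the article.

```python
# Minimal SIR epidemic sketch: the beta * S * I / N term encodes the
# "law of mass action" (homogeneous mixing) assumption.
# All parameter values are arbitrary illustrative choices.

def simulate_sir(s0=990.0, i0=10.0, r0=0.0, beta=0.3, gamma=0.1,
                 days=160, dt=1.0):
    n = s0 + i0 + r0
    s, i, r = s0, i0, r0
    history = [(0.0, s, i, r)]
    t = 0.0
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / n * dt   # mass-action contact term
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        t += dt
        history.append((t, s, i, r))
    return history

if __name__ == "__main__":
    trajectory = simulate_sir()
    t, s, i, r = max(trajectory, key=lambda row: row[2])  # epidemic peak
    print(f"peak infections ~{i:.0f} at day {t:.0f}")
```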
https://en.wikipedia.org/wiki/Mass_action_(sociology)
George Edward MooreOMFBA(4 November 1873 – 24 October 1958) was an English philosopher, who withBertrand Russell,Ludwig Wittgensteinand earlierGottlob Fregewas among the initiators ofanalytic philosophy. He and Russell began de-emphasizing theidealismwhich was then prevalent among British philosophers and became known for advocatingcommon-senseconcepts and contributing toethics,epistemologyandmetaphysics. He was said to have had an "exceptional personality and moral character".[6]Ray Monkdubbed him "the most revered philosopher of his era".[7] As Professor of Philosophy at theUniversity of Cambridge, he influenced but abstained from theBloomsbury Group, an informal set of intellectuals. He edited the journalMind. He was a member of theCambridge Apostlesfrom 1894 to 1901,[8]a fellow of theBritish Academyfrom 1918, and was chairman of the Cambridge University Moral Sciences Club in 1912–1944.[9][10]Ahumanist, he presided over the British Ethical Union (nowHumanists UK) in 1935–1936.[11] George Edward Moore was born inUpper Norwood, in south-east London, on 4 November 1873, the middle child of seven of Daniel Moore, a medical doctor, and Henrietta Sturge.[12][13][14]His grandfather was the authorGeorge Moore. His eldest brother wasThomas Sturge Moore, a poet, writer and engraver.[12][15][16] He was educated atDulwich College[17]and, in 1892, began attendingTrinity College, Cambridge, to learnclassicsandmoral sciences. Histriposresults were adouble first.[18]He became a Fellow of Trinity in 1898 and was laterUniversity of CambridgeProfessor of Mental Philosophy and Logicfrom 1925 to 1939. Moore is known best now for defendingethical non-naturalism, his emphasis oncommon sensefor philosophical method, and theparadox that bears his name. He was admired by and influenced by other philosophers and some of theBloomsbury Group. But unlike his colleague and admirer Bertrand Russell, who for some years thought Moore fulfilled his "ideal of genius",[19]he is mostly unknown presently except among academic philosophers. Moore's essays are known for their clarity and circumspection of writing style and methodical and patient treatment of philosophical problems. He was critical of modern philosophy for lack ofprogress, which he saw as a stark contrast to the dramatic advances in thenatural sciencessince theRenaissance. Among Moore's most famous works are hisPrincipia Ethica,[20]and his essays, "The Refutation of Idealism", "A Defence of Common Sense", and "A Proof of the External World". Moore was an important and admired member of the secretiveCambridge Apostles, a discussion group drawn from the British intellectual elite. At the time another member, 22-year-old Bertrand Russell, wrote "I almost worship him as if he were a god. I have never felt such an extravagant admiration for anybody",[7]and would later write that "for some years he fulfilled my ideal of genius. He was in those days beautiful and slim, with a look almost of inspiration as deeply passionate asSpinoza's".[21] From 1918 to 1919, Moore was chairman of theAristotelian Society, a group committed to the systematic study of philosophy, its historical development and its methods and problems.[22]He was appointed to theOrder of Meritin 1951.[23] Moore died in England in theEvelyn Nursing Homeon 24 October 1958.[24]He was cremated at Cambridge Crematorium on 28 October 1958 and his ashes interred at theParish of the Ascension Burial Groundin the city. His wife, Dorothy Ely (1892–1977), was buried there. 
Together, they had two sons, the poetNicholas Mooreand the composer Timothy Moore.[25][26] His influential workPrincipia Ethicais one of the main inspirations of the reaction againstethical naturalism(seeethical non-naturalism) and is partly responsible for the twentieth-century concern withmeta-ethics.[27] Moore asserted that philosophical arguments can suffer from a confusion between the use of a term in a particular argument and the definition of that term (in all arguments). He named this confusion thenaturalistic fallacy. For example, an ethical argument may claim that if an item has certain properties, then that item is 'good.' Ahedonistmay argue that 'pleasant' items are 'good' items. Other theorists may argue that 'complex' things are 'good' things. Moore contends that, even if such arguments are correct, they do not provide definitions for the term 'good'. The property of 'goodness' cannot be defined. It can only be shown and grasped. Any attempt to define it (X is good if it has property Y) will simply shift the problem (Why is Y-ness good in the first place?). Moore'sargumentfor the indefinability of 'good' (and thus for the fallaciousness in the "naturalistic fallacy") is often termed theopen-question argument; it is presented in§13 ofPrincipia Ethica. The argument concerns the nature of statements such as "Anything that is pleasant is also good" and the possibility of asking questions such as "Is itgoodthat x is pleasant?". According to Moore, these questions areopenand these statements aresignificant; and they will remain so no matter what is substituted for "pleasure". Moore concludes from this that any analysis of value is bound to fail. In other words, if value could be analysed, then such questions and statements would be trivial and obvious. Since they are anything but trivial and obvious, value must be indefinable. Critics of Moore's arguments sometimes claim that he is appealing to general puzzles concerning analysis (cf. theparadox of analysis), rather than revealing anything special about value. The argument clearly depends on the assumption that if 'good' were definable, it would be ananalytic truthabout 'good', an assumption that many contemporary moral realists likeRichard BoydandPeter Railtonreject. Other responses appeal to theFregeandistinction betweensense and reference, allowing that value concepts are special andsui generis, but insisting that value properties are nothing but natural properties (this strategy is similar to that taken bynon-reductive materialistsinphilosophy of mind). Moore contended that goodness cannot be analysed in terms of any other property. InPrincipia Ethica, he writes: Therefore, we cannot define 'good' by explaining it in other words. We can only indicate athingor anactionand say "That is good". Similarly, we cannot describe to a person born totally blind exactly what yellow is. We can only show a sighted person a piece of yellow paper or a yellow scrap of cloth and say "That is yellow". In addition to categorising 'good' as indefinable, Moore also emphasized that it is a non-natural property. This means that it cannot be empirically or scientifically tested or verified—it is not analyzable by "natural science". 
Moore argued that, once arguments based on thenaturalistic fallacyhad been discarded, questions of intrinsic goodness could be settled only by appeal to what he (followingSidgwick) termed "moral intuitions":self-evidentpropositions which recommend themselves to moral thought, but which are not susceptible to either direct proof or disproof (Principia,§ 45). As a result of his opinion, he has often been described by later writers as an advocate ofethical intuitionism. Moore, however, wished to distinguish his opinions from the opinions usually described as "Intuitionist" whenPrincipia Ethicawas written: In order to express the fact that ethical propositions of myfirstclass [propositions about what is good as an end in itself] are incapable of proof or disproof, I have sometimes followed Sidgwick's usage in calling them 'Intuitions.' But I beg that it may be noticed that I am not an 'Intuitionist,' in the ordinary sense of the term. Sidgwick himself seems never to have been clearly aware of the immense importance of the difference which distinguishes his Intuitionism from the common doctrine, which has generally been called by that name. The Intuitionist proper is distinguished by maintaining that propositions of mysecondclass—propositions which assert that a certain action isrightor aduty—are incapable of proof or disproof by any enquiry into the results of such actions. I, on the contrary, am no less anxious to maintain that propositions ofthiskind arenot'Intuitions,' than to maintain that propositions of myfirstclassareIntuitions. Moore distinguished his view from the opinion ofdeontologicalintuitionists, who claimed that "intuitions" could determine questions about whatactionsare right or required byduty. Moore, as aconsequentialist, argued that "duties" and moral rules could be determined by investigating theeffectsof particular actions or kinds of actions (Principia,§ 89), and so were matters for empirical investigation rather than direct objects of intuition (Principia,§ 90). According to Moore, "intuitions" revealed not the rightness or wrongness of specific actions, but only what items were good in themselves, asends to be pursued. Moore holds thatright actionsare those producing the most good.[28]The difficulty with this is that the consequences of most actions are too complex for us to properly take into account, especially the long-term consequences. Because of this, Moore suggests that the definition of duty is limited to what generally produces better results than probable alternatives in a comparatively near future.[29]: §109Whether a given rule of action is also adutydepends to some extent on the conditions of the corresponding society butdutiesagree mostly with what common-sense recommends.[29]: §95Virtues, like honesty, can in turn be defined aspermanent dispositionsto perform duties.[29]: §109 One of the most important parts of Moore's philosophical development was his differing with theidealismthat dominated British philosophy (as represented by the works of his former teachersF. H. BradleyandJohn McTaggart), and his defence of what he regarded as a "common sense" type ofrealism. In his 1925 essay "A Defence of Common Sense", he argued against idealism andscepticismtoward the external world, on the grounds that they could not give reasons to accept that their metaphysical premises were more plausible than the reasons we have for accepting the common sense claims about our knowledge of the world, which sceptics and idealists must deny. 
He famously put the point into dramatic relief with his 1939 essay "Proof of an External World", in which he gave a common sense argument against scepticism by raising his right hand and saying "Here is one hand" and then raising his left and saying "And here is another", then concluding that there are at least two external objects in the world, and therefore that he knows (by this argument) that an external world exists. Not surprisingly, not everyone preferring sceptical doubts found Moore's method of argument entirely convincing; Moore, however, defends his argument on the grounds that sceptical arguments seem invariably to require an appeal to "philosophical intuitions" that we have considerably less reason to accept than we have for the common sense claims that they supposedly refute. The "Here is one hand" argument also influencedLudwig Wittgenstein, who spent his last years working out a new method for Moore's argument in the remarks that were published posthumously asOn Certainty.) Moore is also remembered for drawing attention to the peculiar inconsistency involved in uttering a sentence such as "It is raining, but I do not believe it is raining", a puzzle now commonly termed "Moore's paradox". The puzzle is that it seems inconsistent for anyone toassertsuch a sentence; but there doesn't seem to be anylogical contradictionbetween "It is raining" and "I don't believe that it is raining", because the former is a statement about the weather and the latter a statement about a person's belief about the weather, and it is perfectly logically possible that it may rain whilst a person does not believe that it is raining. In addition to Moore's own work on the paradox, the puzzle also inspired a great deal of work byLudwig Wittgenstein, who described the paradox as the most impressive philosophical insight that Moore had ever introduced. It is said[by whom?]that when Wittgenstein first heard this paradox one evening (which Moore had earlier stated in a lecture), he rushed round to Moore's lodgings, got him out of bed and insisted that Moore repeat the entire lecture to him. Moore's description of the principle of theorganic wholeis extremely straightforward, nonetheless, and a variant on a pattern that began with Aristotle: According to Moore, a moral actor cannot survey the 'goodness' inherent in the various parts of a situation, assign a value to each of them, and then generate a sum in order to get an idea of its total value. A moral scenario is a complex assembly of parts, and its total value is often created by the relations between those parts, and not by their individual value. The organic metaphor is thus very appropriate: biological organisms seem to have emergent properties which cannot be found anywhere in their individual parts. For example, a human brain seems to exhibit a capacity for thought when none of its neurons exhibit any such capacity. In the same way, a moral scenario can have a value different than the sum of its component parts. To understand the application of the organic principle to questions of value, it is perhaps best to consider Moore's primary example, that of a consciousness experiencing a beautiful object. To see how the principle works, a thinker engages in "reflective isolation", the act of isolating a given concept in a kind of null context and determining its intrinsic value. In our example, we can easily see that, of themselves, beautiful objects and consciousnesses are not particularly valuable things. 
They might have some value, but when we consider the total value of a consciousness experiencing a beautiful object, it seems to exceed the simple sum of these values. Hence the value of a whole must not be assumed to be the same as the sum of the values of its parts.
https://en.wikipedia.org/wiki/G._E._Moore#Organic_wholes
The Society of Mindis both the title of a 1986 book and the name of a theory of naturalintelligenceas written and developed byMarvin Minsky.[1] In his book of the same name, Minsky constructs a model of human intelligence step by step, built up from the interactions of simple parts calledagents, which are themselves mindless. He describes the postulatedinteractionsas constituting a "society of mind", hence the title.[2] The work, which first appeared in 1986, was the first comprehensive description of Minsky's "society of mind" theory, which he began developing in the early 1970s. It is composed of 270 self-contained essays which are divided into 30 general chapters. The book was also made into a CD-ROM version. In the process of explaining the society of mind, Minsky introduces a wide range of ideas and concepts. He develops theories about how processes such aslanguage,memory, andlearningwork, and also covers concepts such asconsciousness, the sense ofself, andfree will; because of this, many viewThe Society of Mindas a work of philosophy. The book was not written to prove anything specific aboutAIorcognitive science, and does not reference physical brain structures. Instead, it is a collection of ideas about how the mind and thinking work on the conceptual level. Minsky first started developing the theory withSeymour Papertin the early 1970s. Minsky said that the biggest source of ideas about the theory came from his work in trying to create a machine that uses a robotic arm, a video camera, and a computer to build with children's blocks.[3] A core tenet of Minsky's philosophy is that "minds are whatbrainsdo". The society of mind theory views the human mind and any other naturally evolvedcognitivesystems as a vast society of individually simple processes known asagents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simpleformal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results. This idea is perhaps best summarized by the following quote: What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. —Marvin Minsky,The Society of Mind, p. 308
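Minsky states the theory at a conceptual level and prescribes no implementation, but the core idea, individually trivial agents whose fixed interactions add up to a more capable "agency", can be loosely gestured at in code. The toy sketch below is only an illustration under that reading, not Minsky's model; the agent names (see, grasp, move) and the block-stacking world echo his block-building example but are otherwise invented.

```python
# Toy "society of agents" sketch. Each agent is individually trivial;
# the Builder agency only sequences them. Purely illustrative.

class Agent:
    def __init__(self, name, action):
        self.name = name
        self.action = action          # a simple callable: world -> world

    def act(self, world):
        return self.action(world)

def see(world):   return {**world, "target": world["blocks"][0]}
def grasp(world): return {**world, "holding": world["target"]}
def move(world):
    return {**world,
            "tower": world["tower"] + [world["holding"]],
            "blocks": world["blocks"][1:],
            "holding": None}

class Builder:
    """An 'agency': a fixed sequence of simpler agents."""
    def __init__(self):
        self.agents = [Agent("see", see), Agent("grasp", grasp), Agent("move", move)]

    def build_tower(self, world):
        while world["blocks"]:
            for agent in self.agents:
                world = agent.act(world)
        return world

world = {"blocks": ["A", "B", "C"], "tower": [], "holding": None, "target": None}
print(Builder().build_tower(world)["tower"])   # ['A', 'B', 'C']
```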
https://en.wikipedia.org/wiki/Society_of_Mind
The term system of systems refers to a collection of task-oriented or dedicated systems that pool their resources and capabilities together to create a new, more complex system which offers more functionality and performance than simply the sum of the constituent systems. Currently, systems of systems is a critical research discipline for which frames of reference, thought processes, quantitative analysis, tools, and design methods are incomplete;[1] the methodology for defining, abstracting, modeling, and analyzing such problems is typically referred to as system of systems engineering. Commonly proposed descriptions—not necessarily definitions—of systems of systems[2] have appeared in the literature. Taken together, all these descriptions suggest that a complete system of systems engineering framework is needed to improve decision support for system of systems problems. Specifically, an effective system of systems engineering framework is needed to help decision makers determine whether related infrastructure, policy and/or technology considerations as an interrelated whole are good, bad or neutral over time.[10] The need to solve system of systems problems is urgent, not only because of the growing complexity of today's challenges, but also because such problems require large monetary and resource investments with multi-generational consequences. While the individual systems constituting a system of systems can be very different and operate independently, their interactions typically expose and deliver important emergent properties. These emergent patterns have an evolving nature that stakeholders must recognize, analyze and understand. The system of systems approach does not advocate particular tools, methods or practices; instead, it promotes a new way of thinking for solving grand challenges where the interactions of technology, policy, and economics are the primary drivers. System of systems study is related to the general study of designing, complexity and systems engineering, but also brings to the fore the additional challenge of design. Systems of systems typically exhibit the behaviors of complex systems, but not all complex problems fall in the realm of systems of systems. Inherent to system of systems problems are several combinations of traits, not all of which are exhibited by every such problem.[11][12] The first five of these traits are known as Maier's criteria[13] for identifying system of systems challenges; the remaining three have been proposed from the study of the mathematical implications of modeling and analyzing system of systems challenges by Daniel DeLaurentis[14] and his co-researchers at Purdue University.[15] Research into effective approaches to system of systems problems is ongoing. The system of systems approach, while still being investigated predominantly in the defense sector, is also seeing application in such fields as national air and auto transportation[22] and space exploration. Other fields where it can be applied include health care, design of the Internet, software integration, and energy management[19] and power systems.[23] Social-ecological interpretations of resilience, where different levels of our world (e.g., the Earth system, the political system) are interpreted as interconnected or nested systems, take a systems-of-systems approach. An application in business can be found in supply chain resilience. Collaboration among a wide array of organizations is helping to drive the definition of the system of systems problem class and the development of methodology for the modeling and analysis of system of systems problems.
There are ongoing projects throughout many commercial entities, research institutions, academic programs, and government agencies. Major stakeholders in the development of this concept are: For example, DoD recently established the National Centers for System of Systems Engineering[24]to develop a formal methodology for system-of-systems engineering for applications in defense-related projects. In another example, according to theExploration Systems Architecture Study, NASA established the Exploration Systems Mission Directorate (ESMD) organization to lead the development of a new exploration "system-of-systems" to accomplish the goals outlined by President G.W. Bush in the 2004 Vision for Space Exploration. A number of research projects and support actions, sponsored by theEuropean Commission, were performed in theSeventh Framework Programme. These target Strategic Objective IST-2011.3.3 in theFP7ICT Work Programme (New paradigms for embedded systems, monitoring and control towards complex systems engineering). This objective had a specific focus on the "design, development and engineering of System-of-Systems". These projects included: Ongoing European projects which are using a System of Systems approach include:
https://en.wikipedia.org/wiki/System_of_systems
Spontaneous order, also namedself-organizationin thehard sciences, is the spontaneousemergenceof order out of seeming chaos. The term "self-organization" is more often used forphysical changesandbiological processes, while "spontaneous order" is typically used to describe the emergence of various kinds of social orders inhumansocial networksfrom the behavior of a combination of self-interested individuals who are not intentionally trying to create order throughplanning. Proposed examples of systems which evolved through spontaneous order or self-organization include theevolution of life on Earth,language,crystal structure, theInternet,Wikipedia, andfree marketeconomy.[1][2] In economics and the social sciences, spontaneous order has been defined byHayekas "the result of human actions, not of human design".[3] Ineconomics, spontaneous order has been defined as an equilibrium behavior among self-interested individuals, which is most likely to evolve and survive, obeying thenatural selectionprocess "survival of the likeliest".[4] According toMurray Rothbard, the philosopherZhuangzi(c.369–286 BC) was the first to propose the idea of spontaneous order. Zhuangzi rejected the authoritarianism ofConfucianism, writing that there "has been such a thing as letting mankind alone; there has never been such a thing as governing mankind [with success]." He articulated an early form of spontaneous order, asserting that "good order results spontaneously when things are let alone", a concept later "developed particularly byProudhonin the nineteenth [century]".[5] In 1767, the sociologist and historianAdam Fergusonwithin the context ofScottish Enlightenmentdescribed society as the "result of human action, but not the execution of any human design".[6][7] Jacobs has suggested that the term "spontaneous order" was effectively coined byMichael Polanyiin his essay, "The Growth of Thought in Society," Economica 8 (November 1941): 428–56.[8] TheAustrian School of Economics, led byCarl Menger,Ludwig von MisesandFriedrich Hayekmade it a centerpiece in its social and economic thought. Hayek's theory of spontaneous order is the product of two related but distinct influences that do not always tend in the same direction. As an economic theorist, his explanations can be given a rational explanation. But as a legal and social theorist, he leans, by contrast, very heavily on a conservative and traditionalist approach which instructs us to submit blindly to a flow of events over which we can have little control.[9] Manyclassical-liberaltheorists,[10]such as Hayek, have argued thatmarket economiesare a spontaneous order, and that they represent "a more efficient allocation of societal resources than any design could achieve."[11]They claim this spontaneous order (referred to as theextended orderin Hayek'sThe Fatal Conceit) is superior to any order a human mind can design due to the specifics of the information required.[12]Centralized statistical data, they suppose, cannot convey this information because the statistics are created by abstracting away from the particulars of the situation.[13] According toNorman P. Barry, this is illustrated in the concept of theinvisible handproposed byAdam SmithinThe Wealth of Nations.[1] Lawrence Reed, president of theFoundation for Economic Education, alibertarianthink tankin the United States, argues that spontaneous order "is what happens when you leave people alone—when entrepreneurs... see the desires of people... and then provide for them." 
He further claims that "[entrepreneurs] respond to market signals, to prices. Prices tell them what's needed and how urgently and where. And it's infinitely better and more productive than relying on a handful of elites in some distant bureaucracy."[14] Anarchistsargue that thestateis in fact an artificial creation of the ruling elite, and that true spontaneous order would arise if it were eliminated. This is construed by some but not all as the ushering in of organization byanarchist law. In the anarchist view, such spontaneous order would involve the voluntary cooperation of individuals. According to theOxford Dictionary of Sociology, "the work of manysymbolic interactionistsis largely compatible with the anarchist vision, since it harbours a view of society as spontaneous order."[15] The concept of spontaneous order can also be seen in the works of the RussianSlavophilemovements and specifically in the works ofFyodor Dostoyevsky. The concept of an organic social manifestation as a concept in Russia expressed under the idea ofsobornost. Sobornost was also used byLeo Tolstoyas an underpinning to the ideology ofChristian anarchism. The concept was used to describe the uniting force behind the peasant or serfObshchinain pre-Soviet Russia.[16] Perhaps the most prominent exponent[17]of spontaneous order isFriedrich Hayek. In addition to arguing the economy is a spontaneous order, which he termed acatallaxy, he argued that common law[18]and the brain[19]are also types of spontaneous orders. InThe Republic of Science,[20]Michael Polanyialso argued thatscienceis a spontaneous order, a theory further developed by Bill Butos and Thomas McQuade in a variety of papers. Gus DiZerega has argued thatdemocracyis the spontaneous order form of government,[21]David Emmanuel Andersson has argued thatreligion in places like the United Statesis a spontaneous order,[22]and Troy Camplin argues that artistic and literary production are spontaneous orders.[23]Paul Krugmanhas also contributed to spontaneous order theory in his bookThe Self-Organizing Economy,[24]in which he claims that cities are self-organizing systems.Credibility thesissuggests that the credibility of social institutions is the driving factor behind the endogenous self-organization of institutions and their persistence.[25] Different rules of game would cause different types of spontaneous order. If an economic society obeys the equal-opportunity rules, the resulting spontaneous order is reflected as an exponential income distribution; that is, for an equal-opportunity economic society, the exponential income distribution is most likely to evolve and survive.[4]By analyzing datasets of household income from 66 countries and Hong Kong SAR, ranging from Europe to Latin America, North America and Asia, Tao et al found that, for all of these countries, the income structure for the great majority of populations (low and middle income classes) follows an exponential income distribution.[26] Roland Kley writes about Hayek's theory of spontaneous order that "the foundations of Hayek's liberalism are so incoherent" because the "idea of spontaneous order lacks distinctness and internal structure."[27]The three components of Hayek's theory are lack of intentionality, the "primacy of tacit or practical knowledge", and the "natural selection of competitive traditions." 
While the first feature, that social institutions may arise in some unintended fashion, is indeed an essential element of spontaneous order, the latter two are only implications, not essential elements.[28] Hayek's theory has also been criticized for not offering a moral argument, and his overall outlook contains "incompatible strands that he never seeks to reconcile in a systematic manner."[29] Abby Innes has criticised these economic ideas as amounting to a fatal confrontation between economic libertarianism and reality, arguing that they represent a form of materialist utopia that has much in common with Soviet Russia.[30]
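As a rough illustration of the claim above that equal-opportunity exchange tends toward an exponential income distribution, the sketch below runs a generic random pairwise-exchange model of the kind used in econophysics; it is not the specific model of Tao et al., and the agent count, number of rounds, and splitting rule are arbitrary assumptions.

```python
# Toy random-exchange economy: agents repeatedly meet in pairs and split
# their combined money at a random ratio (the same rule for everyone,
# i.e. "equal opportunity"). The stationary distribution of such models
# is approximately exponential. Illustrative only.
import random

def simulate_exchange(n_agents=1000, rounds=200_000, seed=0):
    rng = random.Random(seed)
    wealth = [1.0] * n_agents          # everyone starts equal
    for _ in range(rounds):
        a, b = rng.randrange(n_agents), rng.randrange(n_agents)
        if a == b:
            continue
        pot = wealth[a] + wealth[b]
        share = rng.random()
        wealth[a], wealth[b] = share * pot, (1.0 - share) * pot
    return wealth

wealth = simulate_exchange()
mean = sum(wealth) / len(wealth)
# For an exponential distribution, roughly 63% of agents fall below the mean.
below_mean = sum(w < mean for w in wealth) / len(wealth)
print(f"mean wealth {mean:.2f}, fraction below mean {below_mean:.2f}")
```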
https://en.wikipedia.org/wiki/Spontaneous_order
Biosemiotics(from theGreekβίοςbios, "life" and σημειωτικόςsēmeiōtikos, "observant of signs") is a field ofsemioticsandbiologythat studies the prelinguistic meaning-making, biologicalinterpretationprocesses, production ofsignsandcodesandcommunicationprocesses in the biological realm.[1] Biosemiotics integrates the findings of biology and semiotics and proposes aparadigmatic shiftin the scientific view oflife, in whichsemiosis(sign process, includingmeaningand interpretation) is one of its immanent and intrinsic features.[2]The termbiosemioticwas first used byFriedrich S. Rothschildin 1962,[3]butThomas Sebeok,Thure von Uexküll,Jesper Hoffmeyerand many others have implemented the term and field.[4]The field is generally divided between theoretical and applied biosemiotics. Insights from biosemiotics have also been adopted in thehumanitiesandsocial sciences, includinghuman-animal studies, human-plant studies[5][6]and cybersemiotics.[7] Biosemiotics is the study of meaning making processes in the living realm, or, to elaborate, a study of According to the basic types of semiosis under study, biosemiotics can be divided into According to the dominant aspect of semiosis under study, the following labels have been used: biopragmatics, biosemantics, and biosyntactics. Apart fromCharles Sanders Peirce(1839–1914) andCharles W. Morris(1903–1979), early pioneers of biosemiotics wereJakob von Uexküll(1864–1944),Heini Hediger(1908–1992),Giorgio Prodi(1928–1987),Marcel Florkin(1900–1979) andFriedrich S. Rothschild(1899–1995); the founding fathers of the contemporary interdiscipline wereThomas Sebeok(1920–2001) andThure von Uexküll(1908–2004).[12] In the 1980s a circle of mathematicians active in Theoretical Biology,René Thom(Institut des Hautes Etudes Scientifiques), Yannick Kergosien (Dalhousie UniversityandInstitut des Hautes Etudes Scientifiques), andRobert Rosen(Dalhousie University, also a former member of the Buffalo group withHoward H. Pattee), explored the relations between Semiotics and Biology using such headings as "Nature Semiotics",[13][14]"Semiophysics",[15]or "Anticipatory Systems"[16]and taking a modeling approach. The contemporary period (as initiated byCopenhagen-Tartu school)[17]include biologistsJesper Hoffmeyer,Kalevi Kull,Claus Emmeche,Terrence Deacon, semioticiansMartin Krampen, Paul Cobley, philosophers Donald Favareau,John Deely, John Collier and complex systems scientistsHoward H. Pattee,Michael Conrad,Luis M. Rocha,Cliff JoslynandLeón Croizat. In 2001, an annual international conference for biosemiotic research known as theGatherings in Biosemiotics[18]was inaugurated, and has taken place every year since. In 2004, a group of biosemioticians –Marcello Barbieri,Claus Emmeche,Jesper Hoffmeyer,Kalevi Kull, and Anton Markoš – decided to establish an international journal of biosemiotics. Under their editorship, theJournal of Biosemioticswas launched byNova Science Publishersin 2005 (two issues published), and with the same five co-editorsBiosemioticswas launched bySpringerin 2008. The book seriesBiosemiotics(Springer), edited by Claus Emmeche, Donald Favareau, Kalevi Kull, and Alexei Sharov, began in 2007 and 27 volumes have been published in the series by 2024. 
The International Society for Biosemiotic Studies was established in 2005 by Donald Favareau and the five editors listed above.[19] A collective programmatic paper on the basic theses of biosemiotics appeared in 2009,[20] and in 2010 an 800-page textbook and anthology, Essential Readings in Biosemiotics, was published, with bibliographies and commentary by Donald Favareau.[1] One of the roots of biosemiotics has been medical semiotics. In 2016, Springer published Biosemiotic Medicine: Healing in the World of Meaning, edited by Farzad Goli as part of Studies in Neuroscience, Consciousness and Spirituality.[21] Since the work of Jakob von Uexküll and Martin Heidegger, several scholars in the humanities have engaged with or appropriated ideas from biosemiotics in their own projects; conversely, biosemioticians have critically engaged with or reformulated humanistic theories using ideas from biosemiotics and complexity theory. For instance, Andreas Weber has reformulated some of Hans Jonas's ideas using concepts from biosemiotics,[22] and biosemiotics has been used to interpret the poetry of John Burnside.[23] Since 2021, the American philosopher Jason Josephson Storm has drawn on biosemiotics and empirical research on animal communication to propose hylosemiotics, a theory of ontology and communication that Storm believes could allow the humanities to move beyond the linguistic turn.[24] John Deely's work also represents an engagement between humanistic and biosemiotic approaches. Deely was trained as a historian and not a biologist, but he discussed biosemiotics and zoosemiotics extensively in his introductory works on semiotics and clarified terms that are relevant for biosemiotics.[25] Although his idea of physiosemiotics was criticized by practicing biosemioticians, Paul Cobley, Donald Favareau, and Kalevi Kull wrote that "the debates on this conceptual point between Deely and the biosemiotics community were always civil and marked by a mutual admiration for the contributions of the other towards the advancement of our understanding of sign relations."[26]
https://en.wikipedia.org/wiki/Biosemiotics
Computational biologyrefers to the use of techniques incomputer science,data analysis,mathematical modelingandcomputational simulationsto understandbiological systemsand relationships.[1]An intersection ofcomputer science,biology, anddata science, the field also has foundations inapplied mathematics,molecular biology,cell biology,chemistry, andgenetics.[2] Bioinformatics, the analysis of informatics processes inbiological systems, began in the early 1970s. At this time, research inartificial intelligencewas usingnetwork modelsof the human brain in order to generate newalgorithms. This use ofbiological datapushed biological researchers to use computers to evaluate and compare large data sets in their own field.[3] By 1982, researchers shared information viapunch cards. The amount of data grew exponentially by the end of the 1980s, requiring new computational methods for quickly interpreting relevant information.[3] Perhaps the best-known example of computational biology, theHuman Genome Project, officially began in 1990.[4]By 2003, the project had mapped around 85% of the human genome, satisfying its initial goals.[5]Work continued, however, and by 2021 level "a complete genome" was reached with only 0.3% remaining bases covered by potential issues.[6][7]The missing Ychromosomewas added in January 2022. Since the late 1990s, computational biology has become an important part of biology, leading to numerous subfields.[8]Today, theInternational Society for Computational Biologyrecognizes 21 different 'Cofmmunities of Special Interest', each representing a slice of the larger field.[9]In addition to helping sequence the human genome, computational biology has helped create accuratemodelsof thehuman brain,map the 3D structure of genomes, and model biological systems.[3] In 2000, despite a lack of initial expertise in programming and data management, Colombia began applying computational biology from an industrial perspective, focusing on plant diseases. This research has contributed to understanding how to counteract diseases in crops like potatoes and studying the genetic diversity of coffee plants.[10]By 2007, concerns about alternative energy sources and global climate change prompted biologists to collaborate with systems and computer engineers. Together, they developed a robust computational network and database to address these challenges. In 2009, in partnership with the University of Los Angeles, Colombia also created aVirtual Learning Environment (VLE)to improve the integration of computational biology and bioinformatics.[10] In Poland, computational biology is closely linked to mathematics and computational science, serving as a foundation for bioinformatics and biological physics. The field is divided into two main areas: one focusing on physics and simulation and the other on biological sequences.[11]The application of statistical models in Poland has advanced techniques for studying proteins and RNA, contributing to global scientific progress. Polish scientists have also been instrumental in evaluating protein prediction methods, significantly enhancing the field of computational biology. Over time, they have expanded their research to cover topics such as protein-coding analysis and hybrid structures, further solidifying Poland's influence on the development of bioinformatics worldwide.[11] Computational anatomy is the study of anatomical shape and form at the visible orgross anatomical50−100μ{\displaystyle 50-100\mu }scale ofmorphology. 
It involves the development of computational mathematical and data-analytical methods for modeling and simulating biological structures. It focuses on the anatomical structures being imaged, rather than the medical imaging devices. Due to the availability of dense 3D measurements via technologies such asmagnetic resonance imaging, computational anatomy has emerged as a subfield ofmedical imagingandbioengineeringfor extracting anatomical coordinate systems at the morpheme scale in 3D. The original formulation of computational anatomy is as a generative model of shape and form from exemplars acted upon via transformations.[12]Thediffeomorphismgroup is used to study different coordinate systems viacoordinate transformationsas generated via theLagrangian and Eulerian velocities of flowfrom one anatomical configuration inR3{\displaystyle {\mathbb {R} }^{3}}to another. It relates withshape statisticsandmorphometrics, with the distinction thatdiffeomorphismsare used to map coordinate systems, whose study is known as diffeomorphometry. Mathematical biology is the use of mathematical models of living organisms to examine the systems that govern structure, development, and behavior inbiological systems. This entails a more theoretical approach to problems, rather than its more empirically-minded counterpart ofexperimental biology.[13]Mathematical biology draws ondiscrete mathematics,topology(also useful for computational modeling),Bayesian statistics,linear algebraandBoolean algebra.[14] These mathematical approaches have enabled the creation ofdatabasesand other methods for storing, retrieving, and analyzing biological data, a field known asbioinformatics. Usually, this process involvesgeneticsand analyzinggenes. Gathering and analyzing large datasets have made room for growing research fields such asdata mining,[14]and computational biomodeling, which refers to buildingcomputer modelsandvisual simulationsof biological systems. This allows researchers to predict how such systems will react to different environments, which is useful for determining if a system can "maintain their state and functions against external and internal perturbations".[15]While current techniques focus on small biological systems, researchers are working on approaches that will allow for larger networks to be analyzed and modeled. A majority of researchers believe this will be essential in developing modern medical approaches to creating new drugs and genetherapy.[15]A useful modeling approach is to usePetri netsvia tools such asesyN.[16] Along similar lines, until recent decadestheoretical ecologyhas largely dealt withanalyticmodels that were detached from thestatistical modelsused byempiricalecologists. However, computational methods have aided in developing ecological theory viasimulationof ecological systems, in addition to increasing application of methods fromcomputational statisticsin ecological analyses. Systems biology consists of computing the interactions between various biological systems ranging from the cellular level to entire populations with the goal of discovering emergent properties. This process usually involves networkingcell signalingandmetabolic pathways. Systems biology often uses computational techniques from biological modeling andgraph theoryto study these complex interactions at cellular levels.[14] Computational biology has assisted evolutionary biology by: Computational genomics is the study of thegenomesofcellsandorganisms. TheHuman Genome Projectis one example of computational genomics. 
This project looks to sequence the entire human genome into a set of data. Once fully implemented, this could allow for doctors to analyze the genome of an individualpatient.[18]This opens the possibility of personalized medicine, prescribing treatments based on an individual's pre-existing genetic patterns. Researchers are looking to sequence the genomes of animals, plants,bacteria, and all other types of life.[19] One of the main ways that genomes are compared is bysequence homology. Homology is the study of biological structures and nucleotide sequences in different organisms that come from a commonancestor. Research suggests that between 80 and 90% of genes in newly sequencedprokaryoticgenomes can be identified this way.[19] Sequence alignmentis another process for comparing and detecting similarities between biological sequences or genes. Sequence alignment is useful in a number of bioinformatics applications, such as computing thelongest common subsequenceof two genes or comparing variants of certaindiseases.[citation needed] An untouched project in computational genomics is the analysis of intergenic regions, which comprise roughly 97% of the human genome.[19]Researchers are working to understand the functions of non-coding regions of the human genome through the development of computational and statistical methods and via large consortia projects such asENCODEand theRoadmap Epigenomics Project. Understanding how individualgenescontribute to thebiologyof an organism at themolecular,cellular, and organism levels is known asgene ontology. TheGene Ontology Consortium's mission is to develop an up-to-date, comprehensive, computational model ofbiological systems, from the molecular level to larger pathways, cellular, and organism-level systems. The Gene Ontology resource provides a computational representation of current scientific knowledge about the functions of genes (or, more properly, theproteinand non-codingRNAmolecules produced by genes) from many different organisms, from humans to bacteria.[20] 3D genomics is a subsection in computational biology that focuses on the organization and interaction of genes within aeukaryotic cell. One method used to gather 3D genomic data is throughGenome Architecture Mapping(GAM). GAM measures 3D distances ofchromatinand DNA in the genome by combiningcryosectioning, the process of cutting a strip from the nucleus to examine the DNA, with laser microdissection. A nuclear profile is simply this strip or slice that is taken from the nucleus. Each nuclear profile contains genomic windows, which are certain sequences ofnucleotides- the base unit of DNA. GAM captures a genome network of complex, multi enhancer chromatin contacts throughout a cell.[21] Computational biology also plays a pivotal role in identifyingbiomarkersfor diseases such as cardiovascular conditions. By integrating various 'Omic' data - such asgenomics,proteomics, andmetabolomics- researchers can uncover potential biomarkers that aid in disease diagnosis, prognosis, and treatment strategies. For instance, metabolomic analyses have identified specific metabolites capable of distinguishing betweencoronary artery diseaseandmyocardial infarction, thereby enhancing diagnostic precision.[22] Computationalneuroscienceis the study of brain function in terms of the information processing properties of thenervous system. 
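Before the article turns to computational neuroscience below, the sequence-alignment task mentioned above (computing the longest common subsequence of two genes) can be illustrated with the textbook dynamic-programming solution; the implementation and the toy nucleotide strings are generic examples, not tools referenced by the article.

```python
# Longest common subsequence of two sequences (e.g., nucleotide strings)
# via the classic dynamic-programming recurrence. Illustrative sketch.

def longest_common_subsequence(a: str, b: str) -> str:
    m, n = len(a), len(b)
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Trace back through the table to recover one LCS.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(longest_common_subsequence("GATTACA", "GCATGCU"))  # prints "GATC" (an LCS of length 4)
```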
A subset of neuroscience, it looks to model the brain to examine specific aspects of the neurological system.[23]Models of the brain include: It is the work of computational neuroscientists to improve thealgorithmsand data structures currently used to increase the speed of such calculations. Computationalneuropsychiatryis an emerging field that uses mathematical and computer-assisted modeling of brain mechanisms involved inmental disorders. Several initiatives have demonstrated that computational modeling is an important contribution to understand neuronal circuits that could generate mental functions and dysfunctions.[25][26][27] Computational pharmacology is "the study of the effects of genomic data to find links between specificgenotypesand diseases and thenscreening drug data".[28]Thepharmaceutical industryrequires a shift in methods to analyze drug data. Pharmacologists were able to useMicrosoft Excelto compare chemical and genomic data related to the effectiveness of drugs. However, the industry has reached what is referred to as the Excel barricade. This arises from the limited number of cells accessible on aspreadsheet. This development led to the need for computational pharmacology. Scientists and researchers develop computational methods to analyze these massivedata sets. This allows for an efficient comparison between the notable data points and allows for more accurate drugs to be developed.[29] Analysts project that if major medications fail due to patents, that computational biology will be necessary to replace current drugs on the market. Doctoral students in computational biology are being encouraged to pursue careers in industry rather than take Post-Doctoral positions. This is a direct result of major pharmaceutical companies needing more qualified analysts of the large data sets required for producing new drugs.[29] Computational biology plays a crucial role in discovering signs of new, previously unknown living creatures and incancerresearch. This field involves large-scale measurements of cellular processes, includingRNA,DNA, and proteins, which pose significant computational challenges. To overcome these, biologists rely on computational tools to accurately measure and analyze biological data.[30]In cancer research, computational biology aids in the complex analysis oftumorsamples, helping researchers develop new ways to characterize tumors and understand various cellular properties. The use of high-throughput measurements, involving millions of data points from DNA, RNA, and other biological structures, helps in diagnosing cancer at early stages and in understanding the key factors that contribute to cancer development. Areas of focus include analyzing molecules that are deterministic in causing cancer and understanding how the human genome relates to tumor causation.[30][31] Computational toxicology is a multidisciplinary area of study, which is employed in the early stages of drug discovery and development to predict the safety and potential toxicity of drug candidates. Computational biology has become instrumental in revolutionizingdrug discoveryprocesses. By integrating computational systems biology approaches, researchers can model complex biological systems, facilitating the identification of novel drug targets and the prediction of drug responses. 
These methodologies enable the simulation ofintracellularandintercellular signalingevents using data from genomic, proteomic, or metabolomic experiments, thereby streamlining the drug development pipeline and reducing associated costs.[32] Moreover, the convergence of computational biology with artificial intelligence (AI) has further accelerated drug design. AI-driven models can analyze vast datasets to predict molecular behavior, optimize lead compounds, and anticipate potential side effects, thereby enhancing the efficiency and effectiveness of drug discovery.[33] Computational biologists use a wide range of software and algorithms to carry out their research. Unsupervised learningis a type of algorithm that finds patterns in unlabeled data. One example isk-means clustering, which aims to partitionndata points intokclusters, in which each data point belongs to the cluster with the nearest mean. Another version is thek-medoidsalgorithm, which, when selecting a cluster center or cluster centroid, will pick one of its data points in the set, and not just an average of the cluster. The algorithm follows these steps: One example of this in biology is used in the 3D mapping of a genome. Information of a mouse's HIST1 region of chromosome 13 is gathered fromGene Expression Omnibus.[34]This information contains data on which nuclear profiles show up in certain genomic regions. With this information, theJaccard distancecan be used to find a normalized distance between all the loci. Graph analytics, ornetwork analysis, is the study of graphs that represent connections between different objects. Graphs can represent all kinds of networks in biology such asprotein-protein interactionnetworks, regulatory networks, Metabolic and biochemical networks and much more. There are many ways to analyze these networks. One of which is looking atcentralityin graphs. Finding centrality in graphs assigns nodes rankings to their popularity or centrality in the graph. This can be useful in finding which nodes are most important. For example, given data on the activity of genes over a time period, degree centrality can be used to see what genes are most active throughout the network, or what genes interact with others the most throughout the network. This contributes to the understanding of the roles certain genes play in the network. There are many ways to calculate centrality in graphs all of which can give different kinds of information on centrality. Finding centralities in biology can be applied in many different circumstances, some of which are gene regulatory, protein interaction and metabolic networks.[35] Supervised learningis a type of algorithm that learns from labeled data and learns how to assign labels to future data that is unlabeled. In biology supervised learning can be helpful when we have data that we know how to categorize and we would like to categorize more data into those categories. A common supervised learning algorithm is therandom forest, which uses numerousdecision treesto train a model to classify a dataset. Forming the basis of the random forest, a decision tree is a structure which aims to classify, or label, some set of data using certain known features of that data. A practical biological example of this would be taking an individual's genetic data and predicting whether or not that individual is predisposed to develop a certain disease or cancer. 
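As a companion to the k-means description above (the decision-tree walk-through resumes after this sketch), here is a minimal, dependency-free version of Lloyd's algorithm; the two-dimensional toy points stand in for features derived from genomic data and, like the function name and parameters, are invented for the example.

```python
# Minimal k-means clustering sketch (Lloyd's algorithm): assign each point
# to the nearest centroid, recompute centroids as cluster means, repeat.
# The 2-D toy points below are invented placeholders.
import random

def kmeans(points, k, iterations=100, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)                 # pick k initial centroids
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:                              # assign to nearest centroid
            distances = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[distances.index(min(distances))].append(p)
        new_centroids = []
        for idx, cluster in enumerate(clusters):      # recompute centroids
            if cluster:
                new_centroids.append(tuple(sum(dim) / len(cluster) for dim in zip(*cluster)))
            else:
                new_centroids.append(centroids[idx])  # keep an empty cluster's centroid
        if new_centroids == centroids:                # stop once assignments settle
            break
        centroids = new_centroids
    return centroids, clusters

points = [(1.0, 1.2), (0.8, 1.1), (1.1, 0.9), (5.0, 5.2), (5.3, 4.9), (4.8, 5.1)]
centroids, clusters = kmeans(points, k=2)
print(centroids)
```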
At each internal node the algorithm checks the dataset for exactly one feature, a specific gene in the previous example, and then branches left or right based on the result. Then at each leaf node, the decision tree assigns a class label to the dataset. So in practice, the algorithm walks a specific root-to-leaf path based on the input dataset through the decision tree, which results in the classification of that dataset. Commonly, decision trees have target variables that take on discrete values, like yes/no, in which case it is referred to as aclassification tree, but if the target variable is continuous then it is called aregression tree. To construct a decision tree, it must first be trained using a training set to identify which features are the best predictors of the target variable. Open source softwareprovides a platform for computational biology where everyone can access and benefit from software developed in research.[36]PLOScites four main reasons for the use of open source software: There are several large conferences that are concerned with computational biology. Some notable examples areIntelligent Systems for Molecular Biology,European Conference on Computational BiologyandResearch in Computational Molecular Biology. There are also numerous journals dedicated to computational biology. Some notable examples includeJournal of Computational BiologyandPLOS Computational Biology, a peer-reviewedopen access journalthat has many notable research projects in the field of computational biology. They provide reviews onsoftware, tutorials for open source software, and display information on upcoming computational biology conferences.[citation needed]Other journals relevant to this field includeBioinformatics,Computers in Biology and Medicine,BMC Bioinformatics,Nature Methods,Nature Communications,Scientific Reports,PLOS One, etc. Computational biology,bioinformaticsandmathematical biologyare all interdisciplinary approaches to thelife sciencesthat draw from quantitative disciplines such as mathematics andinformation science. TheNIHdescribes computational/mathematical biology as the use of computational/mathematical approaches to address theoretical and experimental questions in biology and, by contrast, bioinformatics as the application of information science to understand complex life-sciences data.[1] Specifically, the NIH defines Computational biology: The development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, behavioral, and social systems.[1] Bioinformatics: Research, development, or application of computational tools and approaches for expanding the use of biological, medical, behavioral or health data, including those to acquire, store, organize, archive, analyze, or visualize such data.[1] While each field is distinct, there may be significant overlap at their interface,[1]so much so that to many, bioinformatics and computational biology are terms that are used interchangeably. The terms computational biology andevolutionary computationhave a similar name, but are not to be confused. Unlike computational biology, evolutionary computation is not concerned with modeling and analyzing biological data. It instead creates algorithms based on the ideas of evolution across species. Sometimes referred to asgenetic algorithms, the research of this field can be applied to computational biology. 
While evolutionary computation is not inherently a part of computational biology, computational evolutionary biology is a subfield of it.[38]
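To make the preceding algorithm descriptions concrete, the following is a minimal sketch, under illustrative assumptions, of (a) computing normalized Jaccard distances between binary locus profiles of the kind retrieved from Gene Expression Omnibus before clustering, and (b) a random forest classifier predicting disease predisposition from genetic features. The data, feature counts, and parameter choices below are hypothetical; only the general workflow follows the text above.

```python
# Minimal sketch (hypothetical data): Jaccard distances for clustering loci,
# and a random forest predicting disease predisposition from genetic features.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# (a) Unsupervised: binary matrix of loci x nuclear profiles (1 = locus detected in that profile).
loci_profiles = rng.integers(0, 2, size=(100, 40))                        # 100 loci, 40 nuclear profiles
jaccard = squareform(pdist(loci_profiles.astype(bool), metric="jaccard"))  # normalized pairwise distances
# k-means clusters the binary profiles directly; a k-medoids variant could instead
# consume the precomputed Jaccard distance matrix and pick real data points as centers.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(loci_profiles)
print("cluster sizes:", np.bincount(clusters))

# (b) Supervised: random forest on hypothetical genotype features with known labels.
genotypes = rng.integers(0, 3, size=(500, 20))   # 500 individuals, 20 variant features (0/1/2 copies)
predisposed = rng.integers(0, 2, size=500)       # hypothetical disease labels
X_tr, X_te, y_tr, y_te = train_test_split(genotypes, predisposed, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", forest.score(X_te, y_te))
```

With real data, the accuracy on a held-out set, rather than on the training set, is what indicates whether the learned trees generalize beyond the individuals used for training.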
https://en.wikipedia.org/wiki/Computational_biology
Modelling biological systemsis a significant task ofsystems biologyandmathematical biology.[a]Computational systems biology[b][1]aims to develop and use efficientalgorithms,data structures,visualizationand communication tools with the goal ofcomputer modellingof biological systems. It involves the use ofcomputer simulationsof biological systems, includingcellularsubsystems (such as thenetworks of metabolitesandenzymeswhich comprisemetabolism,signal transductionpathways andgene regulatory networks), to both analyze and visualize the complex connections of these cellular processes.[2] An unexpectedemergent propertyof acomplex systemmay be a result of the interplay of the cause-and-effect among simpler, integrated parts (seebiological organisation). Biological systems manifest many important examples of emergent properties in the complex interplay of components. Traditional study of biological systems requires reductive methods in which quantities of data are gathered by category, such as concentration over time in response to a certain stimulus. Computers are critical to analysis and modelling of these data. The goal is to create accurate real-time models of a system's response to environmental and internal stimuli, such as a model of a cancer cell in order to find weaknesses in its signalling pathways, or modelling of ion channel mutations to see effects on cardiomyocytes and in turn, the function of a beating heart. By far the most widely accepted standard format for storing and exchanging models in the field is theSystems Biology Markup Language (SBML).[3]TheSBML.orgwebsite includes a guide to many important software packages used in computational systems biology. A large number of models encoded in SBML can be retrieved fromBioModels. Other markup languages with different emphases includeBioPAX,CellMLandMorpheusML.[4] Creating a cellular model has been a particularly challenging task ofsystems biologyandmathematical biology. It involves the use ofcomputer simulationsof the manycellularsubsystems such as thenetworks of metabolites,enzymeswhich comprisemetabolismandtranscription,translation, regulation and induction of gene regulatory networks.[5] The complex network of biochemical reaction/transport processes and their spatial organization make the development of apredictive modelof a living cell a grand challenge for the 21st century, listed as such by theNational Science Foundation(NSF) in 2006.[6] A whole cell computational model for the bacteriumMycoplasma genitalium, including all its 525 genes, gene products, and their interactions, was built by scientists from Stanford University and the J. Craig Venter Institute and published on 20 July 2012 in Cell.[7] A dynamic computer model of intracellular signaling was the basis for Merrimack Pharmaceuticals to discover the target for their cancer medicine MM-111.[8] Membrane computingis the task of modelling specifically acell membrane. An open source simulation of C. elegans at the cellular level is being pursued by theOpenWormcommunity. So far the physics engineGepettohas been built and models of the neural connectome and a muscle cell have been created in the NeuroML format.[9] Protein structure prediction is the prediction of the three-dimensional structure of aproteinfrom itsamino acidsequence—that is, the prediction of a protein'stertiary structurefrom itsprimary structure. 
It is one of the most important goals pursued bybioinformaticsandtheoretical chemistry.Protein structure predictionis of high importance inmedicine(for example, indrug design) andbiotechnology(for example, in the design of novelenzymes). Every two years, the performance of current methods is assessed in theCASPexperiment. TheBlue Brain Projectis an attempt to create a synthetic brain byreverse-engineeringthemammalian braindown to the molecular level. The aim of this project, founded in May 2005 by the Brain and Mind Institute of theÉcole PolytechniqueinLausanne, Switzerland, is to study the brain's architectural and functional principles. The project is headed by the Institute's director, Henry Markram. Using aBlue Genesupercomputerrunning Michael Hines'sNEURON software, the simulation does not consist simply of anartificial neural network, but involves a partially biologically realistic model ofneurons.[10][11]It is hoped by its proponents that it will eventually shed light on the nature ofconsciousness. There are a number of sub-projects, including theCajal Blue Brain, coordinated by theSupercomputing and Visualization Center of Madrid(CeSViMa), and others run by universities and independent laboratories in the UK, U.S., and Israel. The Human Brain Project builds on the work of the Blue Brain Project.[12][13]It is one of six pilot projects in the Future Emerging Technologies Research Program of the European Commission,[14]competing for a billion euro funding. The last decade has seen the emergence of a growing number of simulations of the immune system.[15][16] TheVirtual Liverproject is a 43 million euro research program funded by the German Government, made up of seventy research group distributed across Germany. The goal is to produce a virtual liver, a dynamic mathematical model that represents human liverphysiology, morphology and function.[17] Electronic trees (e-trees) usually useL-systemsto simulate growth. L-systems are very important in the field ofcomplexity scienceandA-life. A universally accepted system for describing changes in plant morphology at the cellular or modular level has yet to be devised.[18]The most widely implemented tree generating algorithms are described in the papers"Creation and Rendering of Realistic Trees"andReal-Time Tree Rendering. Ecosystem models aremathematicalrepresentations ofecosystems. Typically they simplify complexfoodwebsdown to their major components ortrophic levels, and quantify these as either numbers oforganisms,biomassor theinventory/concentrationof some pertinentchemical element(for instance,carbonor anutrientspeciessuch asnitrogenorphosphorus). The purpose of models inecotoxicologyis the understanding, simulation and prediction of effects caused by toxicants in the environment. Most current models describe effects on one of many different levels of biological organization (e.g. organisms or populations). A challenge is the development of models that predict effects across biological scales.Ecotoxicology and modelsdiscusses some types of ecotoxicological models and provides links to many others. It is possible to model the progress of most infectious diseases mathematically to discover the likely outcome of anepidemicor to help manage them byvaccination. This field tries to findparametersfor variousinfectious diseasesand to use those parameters to make useful calculations about the effects of a massvaccinationprogramme.
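As a concrete illustration of the epidemic-modelling approach described above, the following is a minimal SIR (susceptible–infected–recovered) sketch with assumed, purely illustrative parameters; the vaccinated fraction simply moves part of the population out of the susceptible compartment before the outbreak begins.

```python
# Minimal SIR epidemic sketch with an assumed vaccination coverage (illustrative parameters only).
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1          # assumed transmission and recovery rates per day (R0 = beta/gamma = 3)
vaccinated_fraction = 0.4       # fraction immunized before the outbreak (assumption)

def sir(t, y):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

s0 = 1.0 - vaccinated_fraction - 1e-3   # susceptible share remaining after vaccination
y0 = [s0, 1e-3, vaccinated_fraction]    # start with 0.1% infected; vaccinated counted as immune
sol = solve_ivp(sir, (0, 365), y0, t_eval=np.linspace(0, 365, 366))

peak_day = sol.t[np.argmax(sol.y[1])]
print(f"Peak infected fraction {sol.y[1].max():.3f} on day {peak_day:.0f}")
```

Varying the assumed vaccination coverage and re-running the integration shows how a mass vaccination programme lowers and delays the epidemic peak, which is the kind of calculation the text above refers to.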
https://en.wikipedia.org/wiki/Computational_biomodeling
Medical cybernetics is a branch of cybernetics, shaped strongly by the development of the computer,[1] that applies the concepts of cybernetics to medical research and practice. At the intersection of systems biology, systems medicine and clinical applications, it covers an emerging working program for applying systems and communication theory, connectionism and decision theory to biomedical research and health-related questions. Medical cybernetics searches for quantitative descriptions of biological dynamics.[2] It investigates intercausal networks in human biology, medical decision making and information-processing structures in the living organism. Its approaches range from systems- and communication-theoretic modelling of physiological dynamics to connectionist and decision-theoretic methods for clinical reasoning.
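One elementary instance of the quantitative, decision-theoretic reasoning referred to above is Bayesian updating of a diagnostic probability from a test result. The prevalence, sensitivity, and specificity in the sketch below are made-up values chosen only to show the calculation, not figures from any actual test.

```python
# Bayes' rule for a diagnostic test: hypothetical prevalence, sensitivity and specificity.
def posterior_given_positive(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Probability of disease given a positive test result."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative numbers only: 2% prevalence, 90% sensitivity, 95% specificity.
print(f"P(disease | positive test) = {posterior_given_positive(0.02, 0.90, 0.95):.2%}")
# -> roughly 27%, illustrating why base rates matter in medical decision making.
```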
https://en.wikipedia.org/wiki/Medical_cybernetics
The following is a list ofsoftware packagesandapplicationsforbiocyberneticsresearch.
https://en.wikipedia.org/wiki/List_of_biomedical_cybernetics_software
Behavior-based robotics(BBR) orbehavioral roboticsis an approach inroboticsthat focuses on robots that are able to exhibit complex-appearing behaviors despite little internalvariable stateto model its immediate environment, mostly gradually correcting its actions via sensory-motor links. Behavior-based robotics sets itself apart from traditional artificial intelligence by using biological systems as a model. Classicartificial intelligencetypically uses a set of steps to solve problems, it follows a path based on internal representations of events compared to the behavior-based approach. Rather than use preset calculations to tackle a situation, behavior-based robotics relies on adaptability. This advancement has allowed behavior-based robotics to become commonplace in researching and data gathering.[1] Most behavior-based systems are alsoreactive, which means they need no programming of what a chair looks like, or what kind of surface the robot is moving on. Instead, all the information is gleaned from the input of the robot's sensors. The robot uses that information to gradually correct its actions according to the changes in immediate environment. Behavior-based robots (BBR) usually show more biological-appearing actions than theircomputing-intensive counterparts, which are very deliberate in their actions. A BBR often makes mistakes, repeats actions, and appears confused, but can also show the anthropomorphic quality of tenacity. Comparisons between BBRs andinsectsare frequent because of these actions. BBRs are sometimes considered examples ofweak artificial intelligence, although some have claimed they are models of all intelligence.[2] Most behavior-based robots are programmed with a basic set of features to start them off. They are given a behavioral repertoire to work with dictating what behaviors to use and when, obstacle avoidance and battery charging can provide a foundation to help the robots learn and succeed. Rather than build world models, behavior-based robots simply react to their environment and problems within that environment. They draw upon internal knowledge learned from their past experiences combined with their basic behaviors to resolve problems.[1][3] The school of behavior-based robots owes much to work undertaken in the 1980s at theMassachusetts Institute of TechnologybyRodney Brooks, who with students and colleagues built a series of wheeled and legged robots utilizing thesubsumption architecture. Brooks' papers, often written with lighthearted titles such as "Planning is just a way of avoiding figuring out what to do next", theanthropomorphicqualities of his robots, and the relatively low cost of developing such robots, popularized the behavior-based approach. Brooks' work builds—whether by accident or not—on two prior milestones in the behavior-based approach. In the 1950s,W. Grey Walter, an English scientist with a background inneurologicalresearch, built a pair ofvacuum tube-based robots that were exhibited at the 1951Festival of Britain, and which have simple but effective behavior-based control systems. The second milestone isValentino Braitenberg's1984 book, "Vehicles – Experiments in Synthetic Psychology" (MIT Press). He describes a series of thought experiments demonstrating how simply wired sensor/motor connections can result in some complex-appearing behaviors such as fear and love. Later work in BBR is from theBEAM roboticscommunity, which has built upon the work ofMark Tilden. 
Tilden was inspired by the reduction in the computational power needed for walking mechanisms from Brooks' experiments (which used one microcontroller for each leg), and further reduced the computational requirements to that of logic chips, transistor-based electronics, and analog circuit design. A different direction of development includes extensions of behavior-based robotics to multi-robot teams.[4] The focus in this work is on developing simple generic mechanisms that result in coordinated group behavior, either implicitly or explicitly.
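The control style described above, in which behavior emerges from direct sensor-to-action rules rather than from an internal world model, can be sketched as a small prioritized, subsumption-like control loop. The sensor readings, thresholds, and behaviors below are hypothetical placeholders, not part of any particular robot architecture.

```python
# Minimal subsumption-style reactive controller (hypothetical sensors and actions).
from dataclasses import dataclass
import random

@dataclass
class Sensors:
    front_distance: float   # metres to nearest obstacle ahead
    battery_level: float    # 0.0 (empty) .. 1.0 (full)

def recharge(s):        # highest-priority behavior: seek the charger when the battery is low
    return "seek_charger" if s.battery_level < 0.2 else None

def avoid(s):           # next priority: turn away from nearby obstacles
    return "turn_left" if s.front_distance < 0.5 else None

def wander(s):          # default behavior: keep moving
    return "forward"

BEHAVIORS = [recharge, avoid, wander]   # ordered by priority; higher layers subsume lower ones

def control_step(sensors: Sensors) -> str:
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action
    return "stop"

# Simulated run: the robot reacts only to its current sensor readings, with no world model.
for _ in range(5):
    reading = Sensors(front_distance=random.uniform(0.1, 2.0),
                      battery_level=random.uniform(0.0, 1.0))
    print(reading, "->", control_step(reading))
```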
https://en.wikipedia.org/wiki/Behavior_based_robotics
Bionicsorbiologically inspired engineeringis the application of biological methods and systems found innatureto the study and design ofengineeringsystems and moderntechnology.[1] The wordbionic, coined byJack E. Steelein August 1958, is aportmanteaufrombiologyandelectronics[2]which was popularized by the 1970s U.S. television seriesThe Six Million Dollar ManandThe Bionic Woman, both based on the novelCyborgbyMartin Caidin. All three stories feature humans given various superhuman powers by theirelectromechanicalimplants. According to proponents of bionic technology, thetransfer of technologybetween lifeforms and manufactured objects is desirable because evolutionary pressure typically forces living organisms—fauna and flora—to become optimized and efficient. For example, dirt- and water-repellent paint (coating) was inspired by the hydrophobic properties of thelotus flowerplant (thelotus effect).[3] The term "biomimetic" is preferred for references to chemical reactions, such as reactions that, in nature, involve biologicalmacromolecules(e.g., enzymes or nucleic acids) whose chemistry can be replicatedin vitrousing much smaller molecules.[4] Examples of bionics in engineering include the hulls of boats imitating the thick skin of dolphins orsonar,radar, and medicalultrasoundimaging imitatinganimal echolocation. In the field ofcomputer science, the study of bionics has producedartificial neurons,artificial neural networks,[5]andswarm intelligence. Bionics also influencedEvolutionary computationbut took the idea further by simulating evolutionin silicoand producing optimized solutions that had never appeared in nature. A 2006 research article estimated that "at present there is only a 12% overlap betweenbiologyand technology in terms of the mechanisms used".[6][clarification needed] The name "biomimetics" was coined byOtto Schmittin the 1950s. The term "bionics" was later introduced byJack E. Steelein August 1958 while working at theAeronautics Division HouseatWright-Patterson Air Force BaseinDayton, Ohio.[7]However, terms like biomimicry or biomimetics are preferred in order to avoid confusion with the medical term "bionics." Coincidentally,Martin Caidinused the word for his 1972 novelCyborg, which was adapted into the television film and subsequent seriesThe Six Million Dollar Man. Caidin was a long-time aviation industry writer before turning to fiction full-time. The study of bionics often emphasizes implementing a function found in nature rather than imitating biological structures. For example, in computer science,cyberneticsmodels the feedback and control mechanisms that are inherent in intelligent behavior, whileartificial intelligencemodels the intelligent function regardless of the particular way it can be achieved. The conscious copying of examples and mechanisms from natural organisms and ecologies is a form of appliedcase-based reasoning, treating nature itself as a database of solutions that already work. Proponents argue that theselective pressureplaced on allnatural life formsminimizes and removes failures. Although almost allengineeringcould be said to be a form ofbiomimicry, the modern origins of this field are usually attributed toBuckminster Fullerand its later codification as a house or field of study toJanine Benyus. There are generally three biological levels in the fauna or flora after which technology can be modeled: Bionicsrefers to the flow of concepts frombiologytoengineeringand vice versa. 
Hence, there are two slightly different points of view regarding the meaning of the word. In medicine,bionicsmeans the replacement or enhancement oforgansor other body parts by mechanical versions. Bionic implants differ from mereprosthesesby mimicking the original function very closely, or even surpassing it. The German equivalent of bionics,Bionik, always adheres to the broader meaning, in that it tries to develop engineering solutions from biological models. This approach is motivated by the fact that biological solutions will usually be optimized byevolutionaryforces. While the technologies that make bionic implants possible are developing gradually, a few successful bionic devices already exist, a well known one being the Australian-invented multi-channelcochlear implant(bionic ear), a device fordeafpeople. Since the bionic ear, many bionic devices have emerged and work is progressing on bionics solutions for other sensory disorders (e.g. vision and balance). Bionic research has recently provided treatments for medical problems such as neurological and psychiatric conditions, for exampleParkinson's diseaseandepilepsy.[23] In 1997,ColombianresearcherAlvaro Rios Povedadeveloped an upper limb and handprosthesiswithsensory feedback. This technology allows amputee patients to handle prosthetic hand systems in a more natural way.[24] By 2004 fully functionalartificial heartswere developed. Significant progress is expected with the advent ofnanotechnology. A well-known example of a proposed nanodevice is arespirocyte, an artificial red cell designed (though not yet built) byRobert Freitas. During his eight years in the Department of Bioengineering at theUniversity of Pennsylvania,Kwabena Boahendeveloped asiliconretinathat was able to process images in the same manner as a living retina. He confirmed the results by comparing the electrical signals from his silicon retina to the electrical signals produced by asalamandereye while the two retinas were looking at the same image. On July 21, 2015, theBBC's medical correspondentFergus Walshreported, "surgeons in Manchester have performed the first bionic eye implant in a patient with the most common cause of sight loss in the developed world. Ray Flynn, 80, has dry age-relatedmacular degenerationwhich has led to the total loss of his central vision. He is using a retinal implant that converts video images from a miniature video camera worn on his glasses. He can now make out the direction of white lines on a computer screen using the retinal implant." The implant, known as theArgus IIand manufactured in the US by the companySecond Sight Medical Products, had been used previously in patients who were blind as the result of the rare inherited degenerative eye diseaseretinitis pigmentosa.[25] In 2016,Tilly Lockey(born October 7, 2005) was fitted with a pair of bionic "Hero Arms" manufactured byOpenBionics, a UK bionics enterprise. The Hero Arm is a lightweight myoelectric prosthesis for below-elbow amputee adults and children aged eight and above. Tilly Lockey, who at 15 months had both her arms amputated after being diagnosed withmeningococcal sepsisstrain B, describes the Hero Arms as “really realistic, to the point where it was quite creepy how realistic they were.”[26] On February 17, 2020, Darren Fuller, a military veteran, became the first person to receive a bionic arm under a public healthcare system.[27]Fuller lost the lower section of his right arm while serving term inAfghanistanduring an incident that involved mortar ammunition in 2008. 
Business biomimetics is the latest development in the application of biomimetics. Specifically, it applies principles and practice from biological systems to business strategy, process, organization design, and strategic thinking. It has been successfully used by a range of industries in FMCG, defense, central government, packaging, and business services. Based on the work of Phil Richardson at the University of Bath,[28] the approach was launched at the House of Lords in May 2009. Generally, biomimetics is used as a creativity technique that studies biological prototypes to get ideas for engineering solutions. In chemistry, a biomimetic synthesis is a chemical synthesis inspired by biochemical processes. Another, more recent meaning of the term bionics refers to merging organism and machine. This approach results in a hybrid system combining biological and engineering parts, which can also be referred to as a cybernetic organism (cyborg). Practical realization of this was demonstrated in Kevin Warwick's implant experiments, which brought ultrasound input into his own nervous system.
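Swarm intelligence, mentioned earlier as one computational outgrowth of bionics, can be illustrated with a bare-bones particle swarm optimizer. The objective function and parameter values below are arbitrary illustrations chosen only to show the mechanism, not a description of any particular bionic system.

```python
# Bare-bones particle swarm optimization (illustrative parameters; arbitrary test function).
import numpy as np

def objective(x):                      # simple sphere function to minimize
    return np.sum(x * x, axis=-1)

rng = np.random.default_rng(1)
n_particles, dims, iterations = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5              # inertia and attraction weights (common textbook values)

pos = rng.uniform(-5, 5, (n_particles, dims))
vel = np.zeros_like(pos)
best_pos = pos.copy()                  # each particle's best position so far
best_val = objective(pos)
g_best = best_pos[np.argmin(best_val)] # best position found by the whole swarm

for _ in range(iterations):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (g_best - pos)
    pos += vel
    vals = objective(pos)
    improved = vals < best_val
    best_pos[improved], best_val[improved] = pos[improved], vals[improved]
    g_best = best_pos[np.argmin(best_val)]

print("best value found:", objective(g_best))
```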
https://en.wikipedia.org/wiki/Bionics
Acognitive modelis a representation of one or morecognitive processesin humans or other animals for the purposes of comprehension and prediction. There are many types of cognitivemodels, and they can range from box-and-arrow diagrams to a set of equations to software programs that interact with the same tools that humans use to complete tasks (e.g., computer mouse and keyboard).[1][page needed]In terms ofinformation processing,cognitive modelingis modeling of human perception, reasoning, memory and action.[2][3] Cognitive models can be developed within or without acognitive architecture, though the two are not always easily distinguishable. In contrast to cognitive architectures, cognitive models tend to be focused on a single cognitive phenomenon or process (e.g., list learning), how two or more processes interact (e.g., visual search and decision making), or making behavioral predictions for a specific task or tool (e.g., how instituting a new software package will affect productivity). Cognitive architectures tend to be focused on the structural properties of the modeled system, and help constrain the development of cognitive models within the architecture.[4]Likewise, model development helps to inform limitations and shortcomings of the architecture. Some of the most popular architectures for cognitive modeling includeACT-R,Clarion,LIDA, andSoar. Cognitive modeling historically developed withincognitive psychology/cognitive science(includinghuman factors), and has received contributions from the fields ofmachine learningandartificial intelligenceamong others. A number of key terms are used to describe the processes involved in the perception, storage, and production of speech. Typically, they are used by speech pathologists while treating a child patient. The input signal is the speech signal heard by the child, usually assumed to come from an adult speaker. The output signal is the utterance produced by the child. The unseen psychological events that occur between the arrival of an input signal and the production of speech are the focus of psycholinguistic models. Events that process the input signal are referred to as input processes, whereas events that process the production of speech are referred to as output processes. Some aspects of speech processing are thought to happen online—that is, they occur during the actual perception or production of speech and thus require a share of the attentional resources dedicated to the speech task. Other processes, thought to happen offline, take place as part of the child's background mental processing rather than during the time dedicated to the speech task. In this sense, online processing is sometimes defined as occurring in real-time, whereas offline processing is said to be time-free (Hewlett, 1990). In box-and-arrow psycholinguistic models, each hypothesized level of representation or processing can be represented in a diagram by a “box,” and the relationships between them by “arrows,” hence the name. Sometimes (as in the models of Smith, 1973, and Menn, 1978, described later in this paper) the arrows represent processes additional to those shown in boxes. Such models make explicit the hypothesized information- processing activities carried out in a particular cognitive function (such as language), in a manner analogous to computer flowcharts that depict the processes and decisions carried out by a computer program. 
Box-and-arrow models differ widely in the number of unseen psychological processes they describe and thus in the number of boxes they contain. Some have only one or two boxes between the input and output signals (e.g., Menn, 1978; Smith, 1973), whereas others have multiple boxes representing complex relationships between a number of different information-processing events (e.g., Hewlett, 1990; Hewlett, Gibbon, & Cohen- McKenzie, 1998; Stackhouse & Wells, 1997). The most important box, however, and the source of much ongoing debate, is that representing the underlying representation (or UR). In essence, an underlying representation captures information stored in a child's mind about a word he or she knows and uses. As the following description of several models will illustrate, the nature of this information and thus the type(s) of representation present in the child's knowledge base have captured the attention of researchers for some time. (Elise Baker et al. Psycholinguistic Models of Speech Development and Their Application to Clinical Practice. Journal of Speech, Language, and Hearing Research. June 2001. 44. p 685–702.) Acomputational modelis a mathematical model incomputational sciencethat requires extensive computational resources to study the behavior of a complex system by computer simulation. Computational cognitive models examine cognition and cognitive functions by developing process-based computational models formulated as sets of mathematical equations or computer simulations.[5]The system under study is often a complexnonlinear systemfor which simple, intuitive analytical solutions are not readily available. Rather than deriving a mathematical analytical solution to the problem, experimentation with the model is done by changing the parameters of the system in the computer, and studying the differences in the outcome of the experiments. Theories of operation of the model can be derived/deduced from these computational experiments. Examples of common computational models areweather forecastingmodels,earth simulatormodels,flight simulatormodels, molecularprotein foldingmodels, andneural networkmodels. Asymbolicmodel is expressed in characters, usually non-numeric ones, that require translation before they can be used. A cognitive model issubsymbolicif it is made by constituent entities that are not representations in their turn, e.g., pixels, sound images as perceived by the ear, signal samples; subsymbolic units in neural networks can be considered particular cases of this category. Hybrid computers are computers that exhibit features of analog computers and digital computers. The digital component normally serves as the controller and provides logical operations, while the analog component normally serves as a solver of differential equations. See more details athybrid intelligent system. In the traditionalcomputational approach,representationsare viewed as static structures of discretesymbols.Cognitiontakes place by transforming static symbol structures indiscrete, sequential steps.Sensoryinformation is transformed into symbolic inputs, which produce symbolic outputs that get transformed intomotoroutputs. The entire system operates in an ongoing cycle. What is missing from this traditional view is that human cognition happenscontinuouslyand in real time. Breaking down the processes into discrete time steps may not fullycapturethis behavior. 
An alternative approach is to define a system with (1) a state of the system at any given time, (2) a behavior, defined as the change over time in overall state, and (3) a state set orstate space, representing the totality of overall states the system could be in.[6]The system is distinguished by the fact that a change in any aspect of the system state depends on other aspects of the same or other system states.[7] A typicaldynamicalmodel isformalizedby severaldifferential equationsthat describe how the system's state changes over time. By doing so, the form of the space of possibletrajectoriesand the internal and external forces that shape a specific trajectory that unfold over time, instead of the physical nature of the underlyingmechanismsthat manifest this dynamics, carry explanatory force. On this dynamical view, parametric inputs alter the system's intrinsic dynamics, rather than specifying an internal state that describes some external state of affairs. Early work in the application of dynamical systems to cognition can be found in the model ofHopfield networks.[8][9]These networks were proposed as a model forassociative memory. They represent the neural level ofmemory, modeling systems of around 30 neurons which can be in either an on or off state. By letting thenetworklearn on its own, structure and computational properties naturally arise. Unlike previous models, “memories” can be formed and recalled by inputting a small portion of the entire memory. Time ordering of memories can also be encoded. The behavior of the system is modeled withvectorswhich can change values, representing different states of the system. This early model was a major step toward a dynamical systems view of human cognition, though many details had yet to be added and more phenomena accounted for. By taking into account theevolutionary developmentof the humannervous systemand the similarity of thebrainto other organs,Elmanproposed thatlanguageand cognition should be treated as a dynamical system rather than a digital symbol processor.[10]Neural networks of the type Elman implemented have come to be known asElman networks. Instead of treating language as a collection of staticlexicalitems andgrammarrules that are learned and then used according to fixed rules, the dynamical systems view defines thelexiconas regions of state space within a dynamical system. Grammar is made up ofattractorsand repellers that constrain movement in the state space. This means that representations are sensitive to context, with mental representations viewed as trajectories through mental space instead of objects that are constructed and remain static. Elman networks were trained with simple sentences to represent grammar as a dynamical system. Once a basic grammar had been learned, the networks could then parse complex sentences by predicting which words would appear next according to the dynamical model.[11] A classic developmental error has been investigated in the context of dynamical systems:[12][13]TheA-not-B erroris proposed to be not a distinct error occurring at a specific age (8 to 10 months), but a feature of a dynamic learning process that is also present in older children. Children 2 years old were found to make an error similar to the A-not-B error when searching for toys hidden in a sandbox. After observing the toy being hidden in location A and repeatedly searching for it there, the 2-year-olds were shown a toy hidden in a new location B. 
When they looked for the toy, they searched in locations that were biased toward location A. This suggests that there is an ongoing representation of the toy's location that changes over time. The child's past behavior influences its model of locations of the sandbox, and so an account of behavior and learning must take into account how the system of the sandbox and the child's past actions is changing over time.[13] One proposed mechanism of a dynamical system comes from analysis of continuous-timerecurrent neural networks(CTRNNs). By focusing on the output of the neural networks rather than their states and examining fully interconnected networks, three-neuroncentral pattern generator(CPG) can be used to represent systems such as leg movements during walking.[14]This CPG contains threemotor neuronsto control the foot, backward swing, and forward swing effectors of the leg. Outputs of the network represent whether the foot is up or down and how much force is being applied to generatetorquein the leg joint. One feature of this pattern is that neuron outputs are eitheroff or onmost of the time. Another feature is that the states are quasi-stable, meaning that they will eventually transition to other states. A simple pattern generator circuit like this is proposed to be a building block for a dynamical system. Sets of neurons that simultaneously transition from one quasi-stable state to another are defined as a dynamic module. These modules can in theory be combined to create larger circuits that comprise a complete dynamical system. However, the details of how this combination could occur are not fully worked out. Modern formalizations of dynamical systems applied to the study of cognition vary. One such formalization, referred to as “behavioral dynamics”,[15]treats theagentand the environment as a pair ofcoupleddynamical systems based on classical dynamical systems theory. In this formalization, the information from theenvironmentinforms the agent's behavior and the agent's actions modify the environment. In the specific case ofperception-action cycles, the coupling of the environment and the agent is formalized by twofunctions. The first transforms the representation of the agents action into specific patterns of muscle activation that in turn produce forces in the environment. The second function transforms the information from the environment (i.e., patterns of stimulation at the agent's receptors that reflect the environment's current state) into a representation that is useful for controlling the agents actions. Other similar dynamical systems have been proposed (although not developed into a formal framework) in which the agent's nervous systems, the agent's body, and the environment are coupled together[16][17] Behavioral dynamics have been applied to locomotive behavior.[15][18][19]Modeling locomotion with behavioral dynamics demonstrates that adaptive behaviors could arise from the interactions of an agent and the environment. According to this framework, adaptive behaviors can be captured by two levels of analysis. At the first level of perception and action, an agent and an environment can be conceptualized as a pair of dynamical systems coupled together by the forces the agent applies to the environment and by the structured information provided by the environment. Thus, behavioral dynamics emerge from the agent-environment interaction. At the second level of time evolution, behavior can be expressed as a dynamical system represented as a vector field. 
In this vector field, attractors reflect stable behavioral solutions, where as bifurcations reflect changes in behavior. In contrast to previous work on central pattern generators, this framework suggests that stable behavioral patterns are an emergent, self-organizing property of the agent-environment system rather than determined by the structure of either the agent or the environment. In an extension of classicaldynamical systems theory,[20]rather than coupling the environment's and the agent's dynamical systems to each other, an “open dynamical system” defines a “total system”, an “agent system”, and a mechanism to relate these two systems. The total system is a dynamical system that models an agent in an environment, whereas the agent system is a dynamical system that models an agent's intrinsic dynamics (i.e., the agent's dynamics in the absence of an environment). Importantly, the relation mechanism does not couple the two systems together, but rather continuously modifies the total system into the decoupled agent's total system. By distinguishing between total and agent systems, it is possible to investigate an agent's behavior when it is isolated from the environment and when it is embedded within an environment. This formalization can be seen as a generalization from the classical formalization, whereby the agent system can be viewed as the agent system in an open dynamical system, and the agent coupled to the environment and the environment can be viewed as the total system in an open dynamical system. In the context of dynamical systems andembodied cognition, representations can be conceptualized as indicators or mediators. In the indicator view, internal states carry information about the existence of an object in the environment, where the state of a system during exposure to an object is the representation of that object. In the mediator view, internal states carry information about the environment which is used by the system in obtaining its goals. In this more complex account, the states of the system carries information that mediates between the information the agent takes in from the environment, and the force exerted on the environment by the agents behavior. The application of open dynamical systems have been discussed for four types of classical embodied cognition examples:[21] The interpretations of these examples rely on the followinglogic: (1) the total system captures embodiment; (2) one or more agent systems capture the intrinsic dynamics of individual agents; (3) the complete behavior of an agent can be understood as a change to the agent's intrinsic dynamics in relation to its situation in the environment; and (4) the paths of an open dynamical system can be interpreted as representational processes. These embodied cognition examples show the importance of studying the emergent dynamics of an agent-environment systems, as well as the intrinsic dynamics of agent systems. Rather than being at odds with traditional cognitive science approaches, dynamical systems are a natural extension of these methods and should be studied in parallel rather than in competition.
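The associative-memory behavior attributed to Hopfield networks above, in which stored patterns are recalled from a partial or corrupted cue, can be reproduced in a few lines. The network size, stored patterns, and amount of corruption below are arbitrary illustrations.

```python
# Minimal Hopfield-style associative memory (illustrative patterns; Hebbian outer-product learning).
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_patterns = 30, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_neurons))   # the stored "memories"

# Hebbian learning: sum of outer products of the stored patterns, no self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(cue, steps=10):
    state = cue.copy()
    for _ in range(steps):                      # synchronous updates for simplicity
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Corrupt one stored pattern (flip ~20% of units) and try to recall it from the partial cue.
cue = patterns[0].copy()
flip = rng.choice(n_neurons, size=6, replace=False)
cue[flip] *= -1
print("recovered original pattern:", np.array_equal(recall(cue), patterns[0]))
```

The network state here is simply a vector that evolves over discrete update steps toward an attractor, which is the sense in which Hopfield networks anticipated the dynamical-systems view of cognition described above.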
https://en.wikipedia.org/wiki/Cognitive_modeling
Cognitive scienceis theinterdisciplinary, scientific study of themindand its processes.[2]It examines the nature, the tasks, and the functions ofcognition(in a broad sense). Mental faculties of concern to cognitive scientists includeperception,memory,attention,reasoning,language, andemotion. To understand these faculties, cognitive scientists borrow from fields such aspsychology,economics,artificial intelligence,neuroscience,linguistics, andanthropology.[3]The typical analysis of cognitive science spans many levels of organization, from learning and decision-making to logic and planning; fromneuralcircuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."[3] The cognitive sciences began as an intellectual movement in the 1950s, called thecognitive revolution. Cognitive science has a prehistory traceable back to ancient Greek philosophical texts (seePlato'sMenoandAristotle'sDe Anima); Modern philosophers such asDescartes,David Hume,Immanuel Kant,Benedict de Spinoza,Nicolas Malebranche,Pierre Cabanis,LeibnizandJohn Locke, rejectedscholasticismwhile mostly having never read Aristotle, and they were working with an entirely different set of tools and core concepts than those of the cognitive scientist.[citation needed] The modern culture of cognitive science can be traced back to the earlycyberneticistsin the 1930s and 1940s, such asWarren McCullochandWalter Pitts, who sought to understand the organizing principles of the mind. McCulloch and Pitts developed the first variants of what are now known asartificial neural networks, models of computation inspired by the structure ofbiological neural networks.[4] Another precursor was the early development of thetheory of computationand thedigital computerin the 1940s and 1950s.Kurt Gödel,Alonzo Church,Alan Turing, andJohn von Neumannwere instrumental in these developments. The modern computer, orVon Neumann machine, would play a central role in cognitive science, both as a metaphor for the mind, and as a tool for investigation.[5] The first instance of cognitive science experiments being done at an academic institution took place atMIT Sloan School of Management, established byJ.C.R. Lickliderworking within the psychology department and conducting experiments using computer memory as models for human cognition.[6][unreliable source?]In 1959,Noam Chomskypublished a scathing review ofB. F. Skinner's bookVerbal Behavior.[7]At the time, Skinner'sbehavioristparadigm dominated the field of psychology within the United States. Most psychologists focused on functional relations between stimulus and response, without positing internal representations. 
Chomsky argued that in order to explain language, we needed a theory likegenerative grammar, which not only attributed internal representations but characterized their underlying order.[citation needed] The termcognitive sciencewas coined byChristopher Longuet-Higginsin his 1973 commentary on theLighthill report, which concerned the then-current state ofartificial intelligenceresearch.[8]In the same decade, the journalCognitive Scienceand theCognitive Science Societywere founded.[9]The founding meeting of theCognitive Science Societywas held at theUniversity of California, San Diegoin 1979, which resulted in cognitive science becoming an internationally visible enterprise.[10]In 1972,Hampshire Collegestarted the first undergraduate education program in Cognitive Science, led by Neil Stillings. In 1982, with assistance from Professor Stillings,Vassar Collegebecame the first institution in the world to grant an undergraduate degree in Cognitive Science.[11]In 1986, the first Cognitive Science Department in the world was founded at theUniversity of California, San Diego.[10] In the 1970s and early 1980s, as access to computers increased,artificial intelligenceresearch expanded. Researchers such asMarvin Minskywould write computer programs in languages such asLISPto attempt to formally characterize the steps that human beings went through, for instance, in making decisions and solving problems, in the hope of better understanding humanthought, and also in the hope of creating artificial minds. This approach is known as "symbolic AI". Eventually the limits of the symbolic AI research program became apparent. For instance, it seemed to be unrealistic to comprehensively list human knowledge in a form usable by a symbolic computer program. The late 80s and 90s saw the rise ofneural networksandconnectionismas a research paradigm. Under this point of view, often attributed toJames McClellandandDavid Rumelhart, the mind could be characterized as a set of complex associations, represented as a layered network. Critics argue that there are some phenomena which are better captured by symbolic models, and that connectionist models are often so complex as to have little explanatory power. Recently symbolic and connectionist models have been combined, making it possible to take advantage of both forms of explanation.[12][13]While both connectionism and symbolic approaches have proven useful for testing various hypotheses and exploring approaches to understanding aspects of cognition and lower level brain functions, neither are biologically realistic and therefore, both suffer from a lack of neuroscientific plausibility.[14][15][16][17][18][19][20]Connectionism has proven useful for exploring computationally how cognition emerges in development and occurs in the human brain, and has provided alternatives to strictly domain-specific / domain general approaches. For example, scientists such as Jeff Elman, Liz Bates, and Annette Karmiloff-Smith have posited that networks in the brain emerge from the dynamic interaction between them and environmental input.[21] Recent developments inquantum computation, including the ability to run quantum circuits on quantum computers such asIBM Quantum Platform, has accelerated work using elements from quantum mechanics in cognitive models.[22][23] A central tenet of cognitive science is that a complete understanding of the mind/brain cannot be attained by studying only a single level. An example would be the problem of remembering a phone number and recalling it later. 
One approach to understanding this process would be to study behavior through direct observation, ornaturalistic observation. A person could be presented with a phone number and be asked to recall it after some delay of time; then the accuracy of the response could be measured. Another approach to measure cognitive ability would be to study the firings of individualneuronswhile a person is trying to remember the phone number. Neither of these experiments on its own would fully explain how the process of remembering a phone number works. Even if the technology to map out every neuron in the brain in real-time were available and it were known when each neuron fired it would still be impossible to know how a particular firing of neurons translates into the observed behavior. Thus an understanding of how these two levels relate to each other is imperative.Francisco Varela, inThe Embodied Mind: Cognitive Science and Human Experience, argues that "the new sciences of the mind need to enlarge their horizon to encompass both lived human experience and the possibilities for transformation inherent in human experience".[24]On the classic cognitivist view, this can be provided by a functional level account of the process. Studying a particular phenomenon from multiple levels creates a better understanding of the processes that occur in the brain to give rise to a particular behavior.Marr[25]gave a famous description of three levels of analysis: Cognitive science is an interdisciplinary field with contributors from various fields, includingpsychology,neuroscience,linguistics,philosophy of mind,computer science,anthropologyandbiology. Cognitive scientists work collectively in hope of understanding the mind and its interactions with the surrounding world much like other sciences do. The field regards itself as compatible with the physical sciences and uses thescientific methodas well assimulationormodeling, often comparing the output of models with aspects of human cognition. Similarly to the field of psychology, there is some doubt whether there is a unified cognitive science, which have led some researchers to prefer 'cognitive sciences' in plural.[26][27] Many, but not all, who consider themselves cognitive scientists hold afunctionalistview of the mind—the view that mental states and processes should be explained by their function – what they do.[28]According to themultiple realizabilityaccount of functionalism, even non-human systems such as robots and computers can be ascribed as having cognition.[citation needed] The term "cognitive" in "cognitive science" is used for "any kind of mental operation or structure that can be studied in precise terms" (LakoffandJohnson, 1999). This conceptualization is very broad, and should not be confused with how "cognitive" is used in some traditions ofanalytic philosophy, where "cognitive" has to do only with formal rules andtruth-conditional semantics. The earliest entries for the word "cognitive" in theOEDtake it to mean roughly"pertaining to the action or process of knowing". The first entry, from 1586, shows the word was at one time used in the context of discussions ofPlatonictheories ofknowledge. Most in cognitive science, however, presumably do not believe their field is the study of anything as certain as the knowledge sought by Plato.[29] Cognitive science is a large field, and covers a wide array of topics on cognition. 
However, it should be recognized that cognitive science has not always been equally concerned with every topic that might bear relevance to the nature and operation of minds. Classical cognitivists have largely de-emphasized or avoided social and cultural factors, embodiment, emotion, consciousness,animal cognition, andcomparativeandevolutionarypsychologies. However, with the decline ofbehaviorism, internal states such as affects and emotions, as well as awareness and covert attention became approachable again. For example, situated andembodied cognitiontheories take into account the current state of the environment as well as the role of the body in cognition. With the newfound emphasis on information processing, observable behavior was no longer the hallmark of psychological theory, but the modeling or recording of mental states.[citation needed] Below are some of the main topics that cognitive science is concerned with; seeList of cognitive science topicsfor a more exhaustive list. Artificial intelligence (AI) involves the study of cognitive phenomena in machines. One of the practical goals of AI is to implement aspects of human intelligence in computers. Computers are also widely used as a tool with which to study cognitive phenomena.Computational modelinguses simulations to study how human intelligence may be structured.[30](See§ Computational modeling.) There is some debate in the field as to whether the mind is best viewed as a huge array of small but individually feeble elements (i.e. neurons), or as a collection of higher-level structures such as symbols, schemes, plans, and rules. The former view usesconnectionismto study the mind, whereas the latter emphasizessymbolic artificial intelligence. One way to view the issue is whether it is possible to accurately simulate a human brain on a computer without accurately simulating the neurons that make up the human brain. Attention is the selection of important information. The human mind is bombarded with millions of stimuli and it must have a way of deciding which of this information to process. Attention is sometimes seen as a spotlight, meaning one can only shine the light on a particular set of information. Experiments that support this metaphor include thedichotic listeningtask (Cherry, 1957) and studies ofinattentional blindness(Mack and Rock, 1998). In the dichotic listening task, subjects are bombarded with two different messages, one in each ear, and told to focus on only one of the messages. At the end of the experiment, when asked about the content of the unattended message, subjects cannot report it.[31] The psychological construct of attention is sometimes confused with the concept ofintentionalitydue to some degree of semantic ambiguity in theirdefinitions. 
At the beginning of experimental research on attention,Wilhelm Wundtdefined this term as "that psychical process, which is operative in the clear perception of the narrow region of the content of consciousness."[32]His experiments showed the limits of attention in space and time, which were 3-6 letters during an exposition of 1/10 s.[32]Because this notion develops within the framework of the original meaning during a hundred years of research, the definition of attention would reflect the sense when it accounts for the main features initially attributed to this term – it is a process of controlling thought that continues over time.[33]While intentionality is the power of minds to be about something,[34]attention is the concentration of awareness on somephenomenonduring a period of time, which is necessary to elevate the clearperceptionof the narrow region of the content ofconsciousnessand which is feasible to control this focus inmind.[32] The significance of knowledge about the scope of attention for studyingcognitionis that it defines the intellectual functions of cognition such as apprehension, judgment, reasoning, and working memory. The development of attention scope increases the set of faculties responsible for themindrelies on how it perceives, remembers, considers, and evaluates in making decisions.[35]The ground of this statement is that the more details (associated with an event) the mind may grasp for their comparison, association, and categorization, the closer apprehension, judgment, and reasoning of the event are in accord with reality.[36]According to Latvian professor Sandra Mihailova and professor Igor Val Danilov, the more elements of the phenomenon (or phenomena ) the mind can keep in the scope of attention simultaneously, the more significant number of reasonable combinations within that event it can achieve, enhancing the probability of better understanding features and particularity of the phenomenon (phenomena).[36]For example, three items in the focal point of consciousness yield six possible combinations (3 factorial) and four items – 24 (4 factorial) combinations. The number of reasonable combinations becomes significant in the case of a focal point with six items with 720 possible combinations (6 factorial).[36] Embodied cognitionapproaches to cognitive science emphasize the role of body and environment in cognition. This includes both neural and extra-neural bodily processes, and factors that range from affective and emotional processes,[37]to posture, motor control,proprioception, and kinaesthesis,[38]to autonomic processes that involve heartbeat[39]and respiration,[40]to the role of the enteric gut microbiome.[41]It also includes accounts of how the body engages with or is coupled to social and physical environments. 4E (embodied, embedded, extended and enactive) cognition[42][43]includes a broad range of views about brain-body-environment interaction, from causal embeddedness to stronger claims about how the mind extends to include tools and instruments, as well as the role of social interactions, action-oriented processes, and affordances. 
4E theories range from those closer to classic cognitivism (so-called "weak" embodied cognition[44]) to stronger extended[45]and enactive versions that are sometimes referred to as radical embodied cognitive science.[46][47] A hypothesis of pre-perceptual multimodal integration supports embodied cognition approaches and converges two competing naturalist and constructivist viewpoints about cognition and the development of emotions.[48]According to this hypothesis supported by empirical data, cognition and emotion development are initiated by the association of affective cues with stimuli responsible for triggering the neuronal pathways of simple reflexes.[48]This pre-perceptual multimodal integration can succeed owing to neuronal coherence in mother-child dyads beginning from pregnancy.[48]These cognitive-reflex and emotion-reflex stimuli conjunctions further form simple innate neuronal assemblies, shaping the cognitive and emotional neuronal patterns in statistical learning that are continuously connected with the neuronal pathways of reflexes.[48] The ability to learn and understand language is an extremely complex process. Language is acquired within the first few years of life, and all humans under normal circumstances are able to acquire language proficiently. A major driving force in the theoretical linguistic field is discovering the nature that language must have in the abstract in order to be learned in such a fashion. Some of the driving research questions in studying how the brain itself processes language include: (1) To what extent is linguistic knowledge innate or learned?, (2) Why is it more difficult for adults to acquire a second-language than it is for infants to acquire their first-language?, and (3) How are humans able to understand novel sentences? The study of language processing ranges from the investigation of the sound patterns of speech to the meaning of words and whole sentences.Linguisticsoften divides language processing intoorthography,phonetics,phonology,morphology,syntax,semantics, andpragmatics. Many aspects of language can be studied from each of these components and from their interaction.[49][better source needed] The study of language processing incognitive scienceis closely tied to the field of linguistics. Linguistics was traditionally studied as a part of the humanities, including studies of history, art and literature. In the last fifty years or so, more and more researchers have studied knowledge and use of language as a cognitive phenomenon, the main problems being how knowledge of language can be acquired and used, and what precisely it consists of.[50]Linguistshave found that, while humans form sentences in ways apparently governed by very complex systems, they are remarkably unaware of the rules that govern their own speech. Thus linguists must resort to indirect methods to determine what those rules might be, if indeed rules as such exist. In any event, if speech is indeed governed by rules, they appear to be opaque to any conscious consideration. Learning and development are the processes by which we acquire knowledge and information over time. Infants are born with little or no knowledge (depending on how knowledge is defined), yet they rapidly acquire the ability to use language, walk, andrecognize people and objects. Research in learning and development aims to explain the mechanisms by which these processes might take place. 
A major question in the study of cognitive development is the extent to which certain abilities areinnateor learned. This is often framed in terms of thenature and nurturedebate. Thenativistview emphasizes that certain features are innate to an organism and are determined by itsgeneticendowment. Theempiricistview, on the other hand, emphasizes that certain abilities are learned from the environment. Although clearly both genetic and environmental input is needed for a child to develop normally, considerable debate remains abouthowgenetic information might guide cognitive development. In the area oflanguage acquisition, for example, some (such asSteven Pinker)[51]have argued that specific information containing universal grammatical rules must be contained in the genes, whereas others (such as Jeffrey Elman and colleagues inRethinking Innateness) have argued that Pinker's claims are biologically unrealistic. They argue that genes determine the architecture of a learning system, but that specific "facts" about how grammar works can only be learned as a result of experience. Memory allows us to store information for later retrieval. Memory is often thought of as consisting of both a long-term and short-term store. Long-term memory allows us to store information over prolonged periods (days, weeks, years). We do not yet know the practical limit of long-term memory capacity. Short-term memory allows us to store information over short time scales (seconds or minutes). Memory is also often grouped into declarative and procedural forms.Declarative memory—grouped into subsets ofsemanticandepisodic forms of memory—refers to our memory for facts and specific knowledge, specific meanings, and specific experiences (e.g. "Are apples food?", or "What did I eat for breakfast four days ago?").Procedural memoryallows us to remember actions and motor sequences (e.g. how to ride a bicycle) and is often dubbed implicit knowledge or memory . Cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears oncognitive processes, and the interrelationship between cognition and memory. One example of this could be, what mental processes does a person go through to retrieve a long-lost memory? Or, what differentiates between the cognitive process of recognition (seeing hints of something before remembering it, or memory in context) and recall (retrieving a memory, as in "fill-in-the-blank")? Perception is the ability to take in information via thesenses, and process it in some way.Visionandhearingare two dominant senses that allow us to perceive the environment. Some questions in the study of visual perception, for example, include: (1) How are we able to recognize objects?, (2) Why do we perceive a continuous visual environment, even though we only see small bits of it at any one time? One tool for studying visual perception is by looking at how people processoptical illusions. The image on the right of a Necker cube is an example of a bistable percept, that is, the cube can be interpreted as being oriented in two different directions. The study ofhaptic(tactile),olfactory, andgustatorystimuli also fall into the domain of perception. Action is taken to refer to the output of a system. In humans, this is accomplished through motor responses. Spatial planning and movement, speech production, and complex motor movements are all aspects of action. 
Consciousness, at its simplest, isawarenessof a state or object, either internal to oneself or in one's external environment.[52]However, its nature has led to millennia of analyses, explanations, and debate among philosophers, scientists, and theologians. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with themind, and at other times, an aspect of it. In the past, it was one's "inner life", the world ofintrospection, of private thought,imagination, andvolition.[53]Today, it often includes any kind ofcognition,experience, feeling, orperception. It may be awareness, awareness of awareness,metacognition, orself-awareness, either continuously changing or not.[54][55]The disparate range of research, notions, and speculations raises a curiosity about whether the right questions are being asked.[56] Many different methodologies are used to study cognitive science. As the field is highly interdisciplinary, research often cuts across multiple areas of study, drawing on research methods frompsychology,neuroscience,computer scienceandsystems theory. In order to have a description of what constitutes intelligent behavior, one must study behavior itself. This type of research is closely tied to that incognitive psychologyandpsychophysics. By measuring behavioral responses to different stimuli, one can understand something about how those stimuli are processed. Lewandowski & Strohmetz (2009) reviewed a collection of innovative uses of behavioral measurement in psychology including behavioral traces, behavioral observations, and behavioral choice.[57]Behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present (e.g., litter in a parking lot or readings on an electric meter). Behavioral observations involve the direct witnessing of the actor engaging in the behavior (e.g., watching how close a person sits next to another person). Behavioral choices are when a person selects between two or more options (e.g., voting behavior, choice of a punishment for another participant). Brain imaging involves analyzing activity within the brain while performing various tasks. This allows us to link behavior and brain function to help understand how information is processed. Different types of imaging techniques vary in their temporal (time-based) and spatial (location-based) resolution. Brain imaging is often used incognitive neuroscience. Computational modelsrequire a mathematically and logically formal representation of a problem. Computer models are used in the simulation and experimental verification of different specific and generalpropertiesofintelligence. Computational modeling can help us understand the functional organization of a particular cognitive phenomenon. Approaches to cognitive modeling can be categorized as: (1) symbolic, on abstract mental functions of an intelligent mind by means of symbols; (2) subsymbolic, on the neural and associative properties of the human brain; and (3) across the symbolic–subsymbolic border, including hybrid. All the above approaches tend either to be generalized to the form of integrated computational models of a synthetic/abstract intelligence (i.e.cognitive architecture) in order to be applied to the explanation and improvement of individual and social/organizationaldecision-makingandreasoning[59][60]or to focus on single simulative programs (or microtheories/"middle-range" theories) modelling specific cognitive faculties (e.g. 
vision, language, categorization etc.). Research methods borrowed directly fromneuroscienceandneuropsychologycan also help us to understand aspects of intelligence. These methods allow us to understand how intelligent behavior is implemented in a physical system. Cognitive science has given rise to models of humancognitive biasandriskperception, and has been influential in the development ofbehavioral finance, part ofeconomics. It has also given rise to a new theory of thephilosophy of mathematics(related to denotational mathematics), and many theories ofartificial intelligence,persuasionandcoercion. It has made its presence known in thephilosophy of languageandepistemologyas well as constituting a substantial wing of modernlinguistics. Fields of cognitive science have been influential in understanding the brain's particular functional systems (and functional deficits) ranging from speech production to auditory processing and visual perception. It has made progress in understanding how damage to particular areas of the brain affect cognition, and it has helped to uncover the root causes and results of specific dysfunction, such asdyslexia,anopsia, andhemispatial neglect. Some of the more recognized names in cognitive science are usually either the most controversial or the most cited. Within philosophy, some familiar names includeDaniel Dennett, who writes from a computational systems perspective,[80]John Searle, known for his controversialChinese roomargument,[81]andJerry Fodor, who advocatesfunctionalism.[82] Others includeDavid Chalmers, who advocatesDualismand is also known for articulatingthe hard problem of consciousness, andDouglas Hofstadter, famous for writingGödel, Escher, Bach, which questions the nature of words and thought. In the realm of linguistics,Noam ChomskyandGeorge Lakoffhave been influential (both have also become notable as political commentators). Inartificial intelligence,Marvin Minsky,Herbert A. Simon, andAllen Newellare prominent. Popular names in the discipline of psychology includeGeorge A. Miller,James McClelland,Philip Johnson-Laird,Lawrence Barsalou,Vittorio Guidano,Howard GardnerandSteven Pinker. AnthropologistsDan Sperber,Edwin Hutchins,Bradd Shore,James WertschandScott Atran, have been involved in collaborative projects with cognitive and social psychologists, political scientists and evolutionary biologists in attempts to develop general theories of culture formation, religion, and political association. Computational theories (with models and simulations) have also been developed, byDavid Rumelhart,James McClellandandPhilip Johnson-Laird. Epistemicsis a term coined in 1969 by theUniversity of Edinburghwith the foundation of its School of Epistemics. Epistemics is to be distinguished fromepistemologyin that epistemology is the philosophical theory of knowledge, whereas epistemics signifies the scientific study of knowledge. Christopher Longuet-Higginshas defined it as "the construction of formal models of the processes (perceptual, intellectual, and linguistic) by which knowledge and understanding are achieved and communicated."[83]In his 1978 essay "Epistemics: The Regulative Theory of Cognition",[84]Alvin I. Goldmanclaims to have coined the term "epistemics" to describe a reorientation of epistemology. Goldman maintains that his epistemics is continuous with traditional epistemology and the new term is only to avoid opposition. 
Epistemics, in Goldman's version, differs only slightly from traditional epistemology in its alliance with the psychology of cognition; epistemics stresses the detailed study of mental processes and information-processing mechanisms that lead to knowledge or beliefs. In the mid-1980s, the School of Epistemics was renamed as The Centre for Cognitive Science (CCS). In 1998, CCS was incorporated into the University of Edinburgh'sSchool of Informatics.[85] One of the core aims of cognitive science is to achieve an integrated theory of cognition. This requires integrative mechanisms explaining how the information processing that occurs simultaneously in spatially segregated (sub-)cortical areas in the brain is coordinated and bound together to give rise to coherent perceptual and symbolic representations. One approach is to solve this "Binding problem"[86][87][88](that is, the problem of dynamically representing conjunctions of informational elements, from the most basic perceptual representations ("feature binding") to the most complex cognitive representations, like symbol structures ("variable binding")), by means of integrative synchronization mechanisms. In other words, one of the coordinating mechanisms appears to be the temporal (phase) synchronization of neural activity based on dynamical self-organizing processes in neural networks, described by theBinding-by-synchrony(BBS) Hypothesis from neurophysiology.[89][90][91][92]Connectionist cognitive neuroarchitectures have been developed that use integrative synchronization mechanisms to solve this binding problem in perceptual cognition and in language cognition.[93][94][95]In perceptual cognition the problem is to explain how elementary object properties and object relations, like the object color or the object form, can be dynamically bound together or can be integrated to a representation of this perceptual object by means of a synchronization mechanism ("feature binding", "feature linking"). In language cognition the problem is to explain how semantic concepts and syntactic roles can be dynamically bound together or can be integrated to complex cognitive representations like systematic and compositional symbol structures and propositions by means of a synchronization mechanism ("variable binding") (see also the "Symbolism vs. connectionism debate" inconnectionism). However, despite significant advances in understanding the integrated theory of cognition (specifically theBinding problem), the debate on this issue of beginning cognition is still in progress. 
From the different perspectives noted above, this problem can be reduced to the issue of how organisms at the simple-reflexes stage of development overcome the threshold of environmental chaos in sensory stimuli: electromagnetic waves, chemical interactions, and pressure fluctuations.[96] The so-called Primary Data Entry (PDE) thesis casts doubt on the ability of such an organism to overcome this cue threshold on its own.[97] In terms of mathematical tools, the PDE thesis underlines the insuperably high threshold posed by the cacophony of environmental stimuli (the stimulus noise) for young organisms at the onset of life.[97] It argues that neither the temporal (phase) synchronization of neural activity based on dynamical self-organizing processes in neural networks, nor any dynamic binding or integration into a representation of a perceptual object by means of a synchronization mechanism, can help organisms distinguish a relevant cue (an informative stimulus) and so overcome this noise threshold.[97]
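The binding-by-synchrony idea discussed above turns on phase synchronization arising from self-organizing dynamics in coupled neural populations. As a rough illustration only, and not a model drawn from the cited literature, the following Python sketch runs a small Kuramoto-style simulation in which weakly coupled oscillators stand in for feature-selective populations; the oscillator count, natural frequencies, and coupling strength are arbitrary choices made for the demonstration.

import math, random

# Toy Kuramoto-style simulation: n oscillators with natural frequencies omega[i]
# and global coupling K. Strong coupling pulls the phases together (a crude
# stand-in for "binding by synchrony"); with K = 0 the phases stay dispersed.
def phase_coherence(n=8, coupling=2.0, steps=2000, dt=0.01, seed=0):
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = [1.0 + 0.2 * rng.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        theta = [theta[i] + dt * (omega[i] + coupling *
                 sum(math.sin(theta[j] - theta[i]) for j in range(n)) / n)
                 for i in range(n)]
    # Order parameter r in [0, 1]: r near 1 means the population is phase-locked.
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

print("coherence with coupling:", round(phase_coherence(coupling=2.0), 3))    # close to 1
print("coherence without coupling:", round(phase_coherence(coupling=0.0), 3))  # typically much lower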
https://en.wikipedia.org/wiki/Cognitive_science
Adigital organismis aself-replicatingcomputer programthatmutatesandevolves. Digitalorganismsare used as a tool to study the dynamics of Darwinianevolution, and to test or verify specific hypotheses ormathematical modelsof evolution. The study of digital organisms is closely related to the area ofartificial life. Digital organisms can be traced back to the gameDarwin, developed in 1961 at Bell Labs, in which computer programs had to compete with each other by trying to stop others fromexecuting.[1]A similar implementation that followed this was the gameCore War. In Core War, it turned out that one of the winningstrategieswas to replicate as fast as possible, which deprived the opponent of allcomputational resources. Programs in the Core War game were also able to mutate themselves and each other by overwriting instructions in the simulated "memory" in which the game took place. This allowed competing programs to embed damaging instructions in each other that caused errors (terminating the process that read it), "enslaved processes" (making an enemy program work for you), or even change strategies mid-game and heal themselves. Steen RasmussenatLos Alamos National Laboratorytook the idea from Core War one step further in his core world system by introducing a genetic algorithm that automatically wrote programs. However, Rasmussen did not observe the evolution of complex and stable programs. It turned out that theprogramming languagein which core world programs were written was very brittle, and more often than not mutations would completely destroy the functionality of a program. The first to solve the issue of program brittleness wasThomas S. Raywith hisTierrasystem, which was similar to core world. Ray made some key changes to the programming language such that mutations were much less likely to destroy a program. With these modifications, he observed for the first time computer programs that did indeed evolve in a meaningful and complex way. Later,Chris Adami, Titus Brown, andCharles Ofriastarted developing theirAvidasystem,[2]which was inspired by Tierra but again had some crucial differences. In Tierra, all programs lived in the sameaddress spaceand could potentially execute or otherwise interfere with each other's code. In Avida, on the other hand, each program lives in its own address space. Because of this modification, experiments with Avida became much cleaner and easier to interpret than those with Tierra. With Avida, digital organism research has begun to be accepted as a valid contribution to evolutionary biology by a growing number of evolutionary biologists. Evolutionary biologistRichard LenskiofMichigan State Universityhas used Avida extensively in his work. Lenski, Adami, and their colleagues have published in journals such asNature[3]and theProceedings of the National Academy of Sciences(USA).[4] In 1996, Andy Pargellis created a Tierra-like system calledAmoebathat evolved self-replication from a randomly seeded initial condition. More recentlyREvoSim- asoftware packagebased around binary digital organisms - has allowed evolutionary simulations of large populations that can be run for geological timescales.[5]
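None of the systems described above reduces to a few lines of code, but the loop they share—copy, mutate, and compete for limited resources—can be caricatured in a short Python sketch. This is a deliberately minimal toy, not Tierra or Avida: genomes are bit strings rather than executable programs, the replication probability is an invented stand-in for metabolic efficiency, and the population cap plays the role of finite computational resources.

import random

rng = random.Random(1)
GENOME_LEN, CAP, MUT_RATE = 32, 200, 0.02

def replicate(genome):
    # Copy with per-bit mutation, mimicking an imperfect copy instruction.
    return [(b ^ 1) if rng.random() < MUT_RATE else b for b in genome]

def fitness(genome):
    # Invented "metabolic rate": genomes with more 1-bits replicate more often.
    return sum(genome) / len(genome)

population = [[0] * GENOME_LEN for _ in range(10)]  # seeded with poor replicators
for generation in range(200):
    offspring = [replicate(g) for g in population if rng.random() < 0.1 + 0.8 * fitness(g)]
    population.extend(offspring)
    if len(population) > CAP:
        # Finite resources: random culling back to the cap, like CPU time in Core War.
        population = rng.sample(population, CAP)

print("best fitness after 200 generations:", round(max(fitness(g) for g in population), 2))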
https://en.wikipedia.org/wiki/Digital_organism
Fuzzy logic is a form of many-valued logic in which the truth value of variables may be any real number between 0 and 1. It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false.[1] By contrast, in Boolean logic, the truth values of variables may only be the integer values 0 or 1. The term fuzzy logic was introduced with the 1965 proposal of fuzzy set theory by mathematician Lotfi Zadeh.[2][3] Fuzzy logic had, however, been studied since the 1920s, as infinite-valued logic—notably by Łukasiewicz and Tarski.[4] Fuzzy logic is based on the observation that people make decisions based on imprecise and non-numerical information. Fuzzy models or fuzzy sets are mathematical means of representing vagueness and imprecise information (hence the term fuzzy). These models have the capability of recognising, representing, manipulating, interpreting, and using data and information that are vague and lack certainty.[5][6] Fuzzy logic has been applied to many fields, from control theory to artificial intelligence. Classical logic only permits conclusions that are either true or false. However, there are also propositions with variable answers, such as one might find when asking a group of people to identify a color. In such instances, the truth appears as the result of reasoning from inexact or partial knowledge in which the sampled answers are mapped on a spectrum.[7] Both degrees of truth and probabilities range between 0 and 1 and hence may seem identical at first, but fuzzy logic uses degrees of truth as a mathematical model of vagueness, while probability is a mathematical model of ignorance.[8] A basic application might characterize various sub-ranges of a continuous variable. For instance, a temperature measurement for anti-lock brakes might have several separate membership functions defining particular temperature ranges needed to control the brakes properly. Each function maps the same temperature value to a truth value in the 0 to 1 range. These truth values can then be used to determine how the brakes should be controlled.[9] Fuzzy set theory provides a means for representing uncertainty. In fuzzy logic applications, non-numeric values are often used to facilitate the expression of rules and facts.[10] A linguistic variable such as age may accept values such as young and its antonym old. Because natural languages do not always contain enough value terms to express a fuzzy value scale, it is common practice to modify linguistic values with adjectives or adverbs. For example, we can use the hedges rather and somewhat to construct the additional values rather old or somewhat young.[11] The most well-known system is the Mamdani rule-based one.[12] It proceeds in three steps: (1) fuzzify all input values into fuzzy membership functions; (2) execute all applicable rules in the rulebase to compute the fuzzy output functions; and (3) defuzzify the fuzzy output functions to obtain "crisp" output values. Fuzzification is the process of assigning the numerical input of a system to fuzzy sets with some degree of membership. This degree of membership may be anywhere within the interval [0,1]. If it is 0 then the value does not belong to the given fuzzy set, and if it is 1 then the value completely belongs within the fuzzy set. Any value between 0 and 1 represents the degree of uncertainty that the value belongs in the set. These fuzzy sets are typically described by words, and so by assigning the system input to fuzzy sets, we can reason with it in a linguistically natural manner. For example, the meanings of the expressions cold, warm, and hot can be represented by functions mapping a temperature scale; a point on that scale then has three "truth values"—one for each of the three functions.
Consider a particular temperature and the three truth values it receives from these functions. If the "hot" function returns zero for this temperature, it may be interpreted as "not hot"; i.e. the temperature has zero membership in the fuzzy set "hot". If the "warm" function returns 0.2, the temperature may be described as "slightly warm", and if the "cold" function returns 0.8, as "fairly cold". The temperature therefore has 0.2 membership in the fuzzy set "warm" and 0.8 membership in the fuzzy set "cold". The degree of membership assigned for each fuzzy set is the result of fuzzification. Fuzzy sets are often defined as triangle or trapezoid-shaped curves, as each value will have a slope where the value is increasing, a peak where the value is equal to 1 (which can have a length of 0 or greater) and a slope where the value is decreasing.[13] They can also be defined using a sigmoid function.[14] One common case is the standard logistic function, defined as S(x) = 1/(1 + e−x){\displaystyle S(x)=1/(1+e^{-x})}, which has the symmetry property S(x) + S(−x) = 1{\displaystyle S(x)+S(-x)=1}. From this it follows that (S(x)+S(−x))⋅(S(y)+S(−y))⋅(S(z)+S(−z))=1{\displaystyle (S(x)+S(-x))\cdot (S(y)+S(-y))\cdot (S(z)+S(-z))=1} Fuzzy logic works with membership values in a way that mimics Boolean logic. To this end, replacements for the basic operators ("gates") AND, OR, NOT must be available. There are several ways to do this. A common replacement is the set of Zadeh operators: AND(x, y) = MIN(x, y), OR(x, y) = MAX(x, y), and NOT(x) = 1 − x. For TRUE/1 and FALSE/0, these fuzzy expressions produce the same result as the Boolean expressions. There are also other operators, more linguistic in nature, called hedges that can be applied. These are generally adverbs such as very or somewhat, which modify the meaning of a set using a mathematical formula.[15] However, an arbitrary choice table does not always define a fuzzy logic function. In the paper (Zaitsev, et al),[16] a criterion has been formulated to recognize whether a given choice table defines a fuzzy logic function, and a simple algorithm of fuzzy logic function synthesis has been proposed based on introduced concepts of constituents of minimum and maximum. A fuzzy logic function represents a disjunction of constituents of minimum, where a constituent of minimum is a conjunction of variables of the current area greater than or equal to the function value in this area (to the right of the function value in the inequality, including the function value). Another set of AND/OR operators is based on multiplication, where AND(x, y) = x·y and NOT(x) = 1 − x, so that, by De Morgan's laws, OR(x, y) = NOT(AND(NOT(x), NOT(y))) = 1 − (1 − x)·(1 − y). Given any two of AND/OR/NOT, it is possible to derive the third. The generalization of AND is an instance of a t-norm. IF-THEN rules map input or computed truth values to desired output truth values. Example: a rule such as "IF temperature IS hot THEN fan_speed IS high" means that, given a certain temperature, the fuzzy variable hot has a certain truth value, which is copied to the high variable. Should an output variable occur in several THEN parts, then the values from the respective IF parts are combined using the OR operator. The goal of defuzzification is to get a continuous variable from fuzzy truth values.[17][18] This would be easy if the output truth values were exactly those obtained from fuzzification of a given number. Since, however, all output truth values are computed independently, in most cases they do not represent such a set of numbers.[18] One then has to decide on a number that best matches the "intention" encoded in the truth value. For example, for several truth values of fan_speed, an actual speed must be found that best fits the computed truth values of the variables 'slow', 'moderate' and so on.[18] There is no single algorithm for this purpose.
A common algorithm is the centroid (centre-of-gravity) method: each output membership function is cut at its computed truth value, the resulting curves are combined using the OR operator, and the x-coordinate of the centre of gravity of the combined area is taken as the crisp output. The TSK (Takagi–Sugeno–Kang) system[19] is similar to Mamdani, but the defuzzification process is included in the execution of the fuzzy rules. The rules are also adapted, so that the consequent of each rule is instead represented by a polynomial function (usually constant or linear). An example of a rule with a constant output is one whose consequent simply sets the output to a constant (e.g. 2); in this case, the output of the rule will be equal to the constant of the consequent. In most scenarios we would have an entire rule base, with 2 or more rules. If this is the case, the output of the entire rule base will be the average of the consequents of each rule i (Yi), weighted according to the membership value of its antecedent (hi): ∑i(hi⋅Yi)/∑ihi{\displaystyle {\frac {\sum _{i}(h_{i}\cdot Y_{i})}{\sum _{i}h_{i}}}} An example of a rule with a linear output is instead one whose consequent is a linear function of its inputs; in this case, the output of the rule will be the result of evaluating the function in the consequent. The variables within the function represent the membership values after fuzzification, not the crisp values. As before, if we have an entire rule base with 2 or more rules, the total output will be the weighted average of the outputs of the individual rules. The main advantage of using TSK over Mamdani is that it is computationally efficient and works well within other algorithms, such as PID control and optimization algorithms. It can also guarantee the continuity of the output surface. However, Mamdani is more intuitive and easier for people to work with. Hence, TSK is usually used within other complex methods, such as in adaptive neuro fuzzy inference systems. Since the fuzzy system output is a consensus of all of the inputs and all of the rules, fuzzy logic systems can be well behaved when input values are not available or are not trustworthy. Weightings can optionally be added to each rule in the rulebase, and these weightings can be used to regulate the degree to which a rule affects the output values. Rule weightings can be based upon the priority, reliability or consistency of each rule. They may be static or can be changed dynamically, even based upon the output from other rules. Fuzzy logic is used in control systems to allow experts to contribute vague rules such as "if you are close to the destination station and moving fast, increase the train's brake pressure"; these vague rules can then be numerically refined within the system. Many of the early successful applications of fuzzy logic were implemented in Japan. An early notable application was on the Sendai Subway 1000 series, in which fuzzy logic was able to improve the economy, comfort, and precision of the ride. It has also been used for handwriting recognition in Sony pocket computers, helicopter flight aids, subway system controls, improving automobile fuel efficiency, single-button washing machine controls, automatic power controls in vacuum cleaners, and early recognition of earthquakes through the Institute of Seismology Bureau of Meteorology, Japan.[20] Neural-network-based artificial intelligence and fuzzy logic are, when analyzed, the same thing—the underlying logic of neural networks is fuzzy. A neural network will take a variety of valued inputs, give them different weights in relation to each other, combine intermediate values a certain number of times, and arrive at a decision with a certain value. Nowhere in that process is there anything like the sequences of either-or decisions which characterize non-fuzzy mathematics, computer programming, and digital electronics.
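To make the pipeline described above concrete, here is a small self-contained Python sketch of a fuzzy controller in the spirit of the temperature and fan_speed example. Everything specific in it is invented for illustration—the triangular membership functions, the rule base, and the output speeds—and it combines the Zadeh MIN/MAX/1 − x operators for the antecedents with the weighted-average defuzzification of a zero-order TSK system rather than centroid defuzzification.

def tri(x, a, b, c):
    # Triangular membership function with feet at a and c and peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzification: illustrative temperature sets (degrees Celsius).
def cold(t): return tri(t, -10.0, 0.0, 15.0)
def warm(t): return tri(t, 5.0, 17.5, 30.0)
def hot(t):  return tri(t, 20.0, 35.0, 50.0)

# Zadeh operators for combining memberships in compound antecedents.
AND, OR, NOT = min, max, lambda x: 1.0 - x

def fan_speed(t):
    # Hypothetical rule base with constant (TSK-style) consequents in rpm:
    #   IF cold THEN slow; IF warm AND NOT hot THEN moderate; IF hot THEN fast.
    rules = [
        (cold(t), 200.0),
        (AND(warm(t), NOT(hot(t))), 900.0),
        (hot(t), 1800.0),
    ]
    num = sum(h * y for h, y in rules)
    den = sum(h for h, _ in rules)
    return num / den if den > 0 else 0.0  # weighted-average defuzzification

for t in (2, 12, 24, 38):
    print(t, "degrees ->", round(fan_speed(t)), "rpm")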
In the 1980s, researchers were divided about the most effective approach tomachine learning:decision tree learningor neural networks. The former approach uses binary logic, matching the hardware on which it runs, but despite great efforts it did not result in intelligent systems. Neural networks, by contrast, did result in accurate models of complex situations and soon found their way onto a multitude of electronic devices.[21]They can also now be implemented directly on analog microchips, as opposed to the previous pseudo-analog implementations on digital chips. The greater efficiency of these compensates for the intrinsic lesser accuracy of analog in various use cases. Fuzzy logic is an important concept inmedical decision making. Since medical and healthcare data can be subjective or fuzzy, applications in this domain have a great potential to benefit a lot by using fuzzy-logic-based approaches. Fuzzy logic can be used in many different aspects within the medical decision making framework. Such aspects include[22][23][24][clarification needed]inmedical image analysis, biomedical signal analysis,segmentation of images[25]or signals, andfeature extraction/ selection of images[25]or signals.[26] The biggest question in this application area is how much useful information can be derived when using fuzzy logic. A major challenge is how to derive the required fuzzy data. This is even more challenging when one has to elicit such data from humans (usually, patients). As has been said "The envelope of what can be achieved and what cannot be achieved in medical diagnosis, ironically, is itself a fuzzy one" How to elicit fuzzy data, and how to validate the accuracy of the data is still an ongoing effort, strongly related to the application of fuzzy logic. The problem of assessing the quality of fuzzy data is a difficult one. This is why fuzzy logic is a highly promising possibility within the medical decision making application area but still requires more research to achieve its full potential.[27] One of the common application areas of fuzzy logic is image-basedcomputer-aided diagnosisin medicine.[28]Computer-aided diagnosis is a computerized set of inter-related tools that can be used to aid physicians in their diagnostic decision-making. Once fuzzy relations are defined, it is possible to develop fuzzyrelational databases. The first fuzzy relational database, FRDB, appeared inMaria Zemankova's dissertation (1983). Later, some other models arose like the Buckles-Petry model, the Prade-Testemale Model, the Umano-Fukami model or the GEFRED model by J. M. Medina, M. A. Vila et al. Fuzzy querying languages have been defined, such as theSQLfby P. Bosc et al. and theFSQLby J. Galindo et al. These languages define some structures in order to include fuzzy aspects in the SQL statements, like fuzzy conditions, fuzzy comparators, fuzzy constants, fuzzy constraints, fuzzy thresholds, linguistic labels etc. Inmathematical logic, there are severalformal systemsof "fuzzy logic", most of which are in the family oft-norm fuzzy logics. The most important propositional fuzzy logics are: Similar to the waypredicate logicis created frompropositional logic, predicate fuzzy logics extend fuzzy systems byuniversalandexistential quantifiers. The semantics of the universal quantifier int-norm fuzzy logicsis theinfimumof the truth degrees of the instances of the quantified subformula, while the semantics of the existential quantifier is thesupremumof the same. 
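For a finite domain, the quantifier semantics just described—the universal quantifier as the infimum and the existential quantifier as the supremum of the truth degrees of the instances—can be written down directly. The predicate and its truth degrees below are made up purely for illustration:

# Truth degrees of a fuzzy predicate "tall(x)" over a finite domain (illustrative values).
tall = {"ann": 0.9, "bob": 0.6, "cem": 0.3}

def forall(pred):
    # Universal quantifier: infimum (here, minimum) of the instance truth degrees.
    return min(pred.values())

def exists(pred):
    # Existential quantifier: supremum (here, maximum) of the instance truth degrees.
    return max(pred.values())

print(forall(tall))  # 0.3 -> "everyone is tall" is true only to degree 0.3
print(exists(tall))  # 0.9 -> "someone is tall" is true to degree 0.9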
The notions of a "decidable subset" and "recursively enumerablesubset" are basic ones for classical mathematics andclassical logic. Thus the question of a suitable extension of them tofuzzy set theoryis a crucial one. The first proposal in such a direction was made by E. S. Santos by the notions offuzzyTuring machine,Markov normal fuzzy algorithmandfuzzy program(see Santos 1970). Successively, L. Biacino and G. Gerla argued that the proposed definitions are rather questionable. For example, in[29]one shows that the fuzzy Turing machines are not adequate for fuzzy language theory since there are natural fuzzy languages intuitively computable that cannot be recognized by a fuzzy Turing Machine. Then they proposed the following definitions. Denote byÜthe set of rational numbers in [0,1]. Then a fuzzy subsets:S→{\displaystyle \rightarrow }[0,1] of a setSis recursively enumerable if a recursive maph:S×N→{\displaystyle \rightarrow }Üexists such that, for everyxinS, the functionh(x,n) is increasing with respect tonands(x) = limh(x,n). We say thatsisdecidableif bothsand its complement –sare recursively enumerable. An extension of such a theory to the general case of the L-subsets is possible (see Gerla 2006). The proposed definitions are well related to fuzzy logic. Indeed, the following theorem holds true (provided that the deduction apparatus of the considered fuzzy logic satisfies some obvious effectiveness property). Any "axiomatizable" fuzzy theory is recursively enumerable. In particular, thefuzzy setof logically true formulas is recursively enumerable in spite of the fact that the crisp set of valid formulas is not recursively enumerable, in general. Moreover, any axiomatizable and complete theory is decidable. It is an open question to give support for a "Church thesis" forfuzzy mathematics, the proposed notion of recursive enumerability for fuzzy subsets is the adequate one. In order to solve this, an extension of the notions of fuzzy grammar and fuzzyTuring machineare necessary. Another open question is to start from this notion to find an extension ofGödel's theorems to fuzzy logic. Fuzzy logic and probability address different forms of uncertainty. While both fuzzy logic and probability theory can represent degrees of certain kinds of subjective belief,fuzzy set theoryuses the concept of fuzzy set membership, i.e., how much an observation is within a vaguely defined set, and probability theory uses the concept ofsubjective probability, i.e., frequency of occurrence or likelihood of some event or condition[clarification needed]. The concept of fuzzy sets was developed in the mid-twentieth century atBerkeley[30]as a response to the lack of a probability theory for jointly modelling uncertainty andvagueness.[31] Bart Koskoclaims in Fuzziness vs. Probability[32]that probability theory is a subtheory of fuzzy logic, as questions of degrees of belief in mutually-exclusive set membership in probability theory can be represented as certain cases of non-mutually-exclusive graded membership in fuzzy theory. In that context, he also derivesBayes' theoremfrom the concept of fuzzy subsethood.Lotfi A. Zadehargues that fuzzy logic is different in character from probability, and is not a replacement for it. 
He fuzzified probability to fuzzy probability and also generalized it to possibility theory.[33] More generally, fuzzy logic is one of many different extensions to classical logic intended to deal with issues of uncertainty outside the scope of classical logic, the inapplicability of probability theory in many domains, and the paradoxes of Dempster–Shafer theory. Computational theorist Leslie Valiant uses the term ecorithms to describe how many less exact systems and techniques like fuzzy logic (and "less robust" logic) can be applied to learning algorithms. Valiant essentially redefines machine learning as evolutionary. In general use, ecorithms are algorithms that learn from their more complex environments (hence eco-) to generalize, approximate and simplify solution logic. Like fuzzy logic, they are methods used to cope with continuous variables or systems too complex to completely enumerate or understand discretely or exactly.[34] Ecorithms and fuzzy logic also have the common property of dealing with possibilities more than probabilities, although feedback and feed forward, basically stochastic weights, are a feature of both when dealing with, for example, dynamical systems. Another logical system where truth values are real numbers between 0 and 1 and where the AND and OR operators are replaced with MIN and MAX is Gödel's G∞ logic. This logic has many similarities with fuzzy logic but defines negation differently and has an internal implication. Negation ¬G{\displaystyle \neg _{G}} and implication →G{\displaystyle {\xrightarrow[{G}]{}}} are defined as follows: ¬Gx = 1 if x = 0 and ¬Gx = 0 otherwise, while x →G y = 1 if x ≤ y and x →G y = y otherwise. This turns the resulting logical system into a model for intuitionistic logic, making it particularly well-behaved among all possible choices of logical systems with real numbers between 0 and 1 as truth values. In this case, implication may be interpreted as "x is less true than y" and negation as "x is less true than 0" or "x is strictly false", and for any x{\displaystyle x} and y{\displaystyle y}, we have that AND(x, x →G y) = AND(x, y){\displaystyle AND(x,x\mathrel {\xrightarrow[{G}]{}} y)=AND(x,y)}. In particular, in Gödel logic negation is no longer an involution and double negation maps any nonzero value to 1. Compensatory fuzzy logic (CFL) is a branch of fuzzy logic with modified rules for conjunction and disjunction. When the truth value of one component of a conjunction or disjunction is increased or decreased, the other component is decreased or increased to compensate. This increase or decrease in truth value may be offset by the increase or decrease in another component. An offset may be blocked when certain thresholds are met. Proponents[who?] claim that CFL allows for better computational semantic behaviors and mimics natural language.[vague][35] According to Jesús Cejas Montero (2011), compensatory fuzzy logic consists of four continuous operators: conjunction (c), disjunction (d), fuzzy strict order (or), and negation (n); the geometric mean and its dual serve as the conjunctive and disjunctive operators, respectively.[36] IEEE 1855 (IEEE Standard 1855–2016) specifies a language named Fuzzy Markup Language (FML)[37] developed by the IEEE Standards Association. FML allows modelling a fuzzy logic system in a human-readable and hardware-independent way. FML is based on the eXtensible Markup Language (XML). With FML, designers of fuzzy systems have a unified and high-level methodology for describing interoperable fuzzy systems.
IEEE STANDARD 1855–2016 uses theW3CXML Schemadefinition language to define the syntax and semantics of the FML programs. Prior to the introduction of FML, fuzzy logic practitioners could exchange information about their fuzzy algorithms by adding to their software functions the ability to read, correctly parse, and store the result of their work in a form compatible with theFuzzy Control Language(FCL) described and specified by Part 7 ofIEC 61131.[38][39]
https://en.wikipedia.org/wiki/Fuzzy_logic
Gene expression programming (GEP)incomputer programmingis anevolutionary algorithmthat creates computer programs or models. These computer programs are complextree structuresthat learn and adapt by changing their sizes, shapes, and composition, much like a living organism. And like living organisms, the computer programs of GEP are also encoded in simple linearchromosomesof fixed length. Thus, GEP is agenotype–phenotype system, benefiting from a simplegenometo keep and transmit the genetic information and a complexphenotypeto explore the environment and adapt to it. Evolutionary algorithmsuse populations of individuals, select individuals according to fitness, and introduce genetic variation using one or moregenetic operators. Their use in artificial computational systems dates back to the 1950s where they were used to solve optimization problems (e.g. Box 1957[1]and Friedman 1959[2]). But it was with the introduction ofevolution strategiesby Rechenberg in 1965[3]that evolutionary algorithms gained popularity. A good overview text on evolutionary algorithms is the book "An Introduction to Genetic Algorithms" by Mitchell (1996).[4] Gene expression programming[5]belongs to the family ofevolutionary algorithmsand is closely related togenetic algorithmsandgenetic programming. From genetic algorithms it inherited the linear chromosomes of fixed length; and from genetic programming it inherited the expressiveparse treesof varied sizes and shapes. In gene expression programming the linear chromosomes work as the genotype and the parse trees as the phenotype, creating agenotype/phenotype system. This genotype/phenotype system ismultigenic, thus encoding multiple parse trees in each chromosome. This means that the computer programs created by GEP are composed of multiple parse trees. Because these parse trees are the result of gene expression, in GEP they are calledexpression trees. Masood Nekoei, et al. utilized this expression programming style in ABC optimization to conduct ABCEP as a method that outperformed other evolutionary algorithms.ABCEP The genome of gene expression programming consists of a linear, symbolic string or chromosome of fixed length composed of one or more genes of equal size. These genes, despite their fixed length, code for expression trees of different sizes and shapes. An example of a chromosome with two genes, each of size 9, is the string (position zero indicates the start of each gene): where “L” represents the natural logarithm function and “a”, “b”, “c”, and “d” represent the variables and constants used in a problem. As shownabove, the genes of gene expression programming have all the same size. However, these fixed length strings code forexpression treesof different sizes. This means that the size of the coding regions varies from gene to gene, allowing for adaptation and evolution to occur smoothly. For example, the mathematical expression: can also be represented as anexpression tree: where "Q” represents the square root function. This kind of expression tree consists of the phenotypic expression of GEP genes, whereas the genes are linear strings encoding these complex structures. For this particular example, the linear string corresponds to: which is the straightforward reading of the expression tree from top to bottom and from left to right. These linear strings are called k-expressions (fromKarva notation). Going from k-expressions to expression trees is also very simple. 
For example, a k-expression built from two different terminals (the variables "a" and "b"), two different functions of two arguments ("*" and "+"), and a function of one argument ("Q") expresses directly into the corresponding expression tree. The k-expressions of gene expression programming correspond to the region of genes that gets expressed. This means that there might be sequences in the genes that are not expressed, which is indeed true for most genes. The reason for these noncoding regions is to provide a buffer of terminals so that all k-expressions encoded in GEP genes always correspond to valid programs or expressions. The genes of gene expression programming are therefore composed of two different domains – a head and a tail – each with different properties and functions. The head is used mainly to encode the functions and variables chosen to solve the problem at hand, whereas the tail, while also used to encode the variables, provides essentially a reservoir of terminals to ensure that all programs are error-free. For GEP genes the length of the tail is given by the formula t = h (nmax − 1) + 1, where h is the head's length and nmax is the maximum arity. For example, for a gene created using the set of functions F = {Q, +, −, ∗, /} and the set of terminals T = {a, b}, nmax = 2. And if we choose a head length of 15, then t = 15 × (2 − 1) + 1 = 16, which gives a gene length g of 15 + 16 = 31. A randomly generated gene of this kind encodes an expression tree that may use only a fraction of the gene – in this example, only 8 of the 31 elements that constitute the gene. It's not hard to see that, despite their fixed length, each gene has the potential to code for expression trees of different sizes and shapes, with the simplest composed of only one node (when the first element of a gene is a terminal) and the largest composed of as many nodes as there are elements in the gene (when all the elements in the head are functions with maximum arity). It's also not hard to see that it is trivial to implement all kinds of genetic modification (mutation, inversion, insertion, recombination, and so on) with the guarantee that all resulting offspring encode correct, error-free programs. The chromosomes of gene expression programming are usually composed of more than one gene of equal length. Each gene codes for a sub-expression tree (sub-ET) or sub-program. The sub-ETs can then interact with one another in different ways, forming a more complex program – for instance, a program composed of three sub-ETs. In the final program the sub-ETs could be linked by addition or some other function, as there are no restrictions on the kind of linking function one might choose. Some examples of more complex linkers include taking the average, the median, or the midrange, thresholding the sum to make a binomial classification, applying the sigmoid function to compute a probability, and so on. These linking functions are usually chosen a priori for each problem, but they can also be evolved elegantly and efficiently by the cellular system[6][7] of gene expression programming. In gene expression programming, homeotic genes control the interactions of the different sub-ETs or modules of the main program. The expression of such genes results in different main programs or cells; that is, they determine which genes are expressed in each cell and how the sub-ETs of each cell interact with one another.
In other words, homeotic genes determine which sub-ETs are called upon and how often in which main program or cell and what kind of connections they establish with one another. Homeotic genes have exactly the same kind of structural organization as normal genes and they are built using an identical process. They also contain a head domain and a tail domain, with the difference that the heads contain now linking functions and a special kind of terminals – genic terminals – that represent the normal genes. The expression of the normal genes results as usual in different sub-ETs, which in the cellular system are called ADFs (automatically defined functions). As for the tails, they contain only genic terminals, that is, derived features generated on the fly by the algorithm. For example, the chromosome in the figure has three normal genes and one homeotic gene and encodes a main program that invokes three different functions a total of four times, linking them in a particular way. From this example it is clear that the cellular system not only allows the unconstrained evolution of linking functions but also code reuse. And it shouldn't be hard to implementrecursionin this system. Multicellular systems are composed of more than onehomeotic gene. Each homeotic gene in this system puts together a different combination of sub-expression trees or ADFs, creating multiple cells or main programs. For example, the program shown in the figure was created using a cellular system with two cells and three normal genes. The applications of these multicellular systems are multiple and varied and, like themultigenic systems, they can be used both in problems with just one output and in problems with multiple outputs. The head/tail domain of GEP genes (both normal and homeotic) is the basic building block of all GEP algorithms. However, gene expression programming also explores other chromosomal organizations that are more complex than the head/tail structure. Essentially these complex structures consist of functional units or genes with a basic head/tail domain plus one or more extra domains. These extra domains usually encode random numerical constants that the algorithm relentlessly fine-tunes in order to find a good solution. For instance, these numerical constants may be the weights or factors in a function approximation problem (see theGEP-RNC algorithmbelow); they may be the weights and thresholds of a neural network (see theGEP-NN algorithmbelow); the numerical constants needed for the design of decision trees (see theGEP-DT algorithmbelow); the weights needed for polynomial induction; or the random numerical constants used to discover the parameter values in a parameter optimization task. The fundamental steps of the basic gene expression algorithm are listed below in pseudocode: The first four steps prepare all the ingredients that are needed for the iterative loop of the algorithm (steps 5 through 10). Of these preparative steps, the crucial one is the creation of the initial population, which is created randomly using the elements of the function and terminal sets. Like all evolutionary algorithms, gene expression programming works with populations of individuals, which in this case are computer programs. Therefore, some kind of initial population must be created to get things started. Subsequent populations are descendants, viaselectionandgenetic modification, of the initial population. 
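As a rough sketch of the mechanics described above—building random head/tail genes from the function and terminal sets, and reading the coding region of a gene breadth-first (the Karva reading) into something that can be evaluated—the Python fragment below may help. It is not taken from any particular GEP implementation: the function set, head length, random seed, and the protected square root are illustrative choices, and the tail length follows the formula t = h(nmax − 1) + 1 discussed earlier.

import math
import random

FUNCS = {"+": 2, "-": 2, "*": 2, "Q": 1}   # "Q" is the square root function, as in the text
TERMS = ["a", "b"]
rng = random.Random(42)

def random_gene(h=7):
    # Head positions may hold functions or terminals; tail positions hold terminals only.
    # With t = h * (n_max - 1) + 1 every gene decodes to a valid expression tree.
    n_max = max(FUNCS.values())
    t = h * (n_max - 1) + 1
    head = [rng.choice(list(FUNCS) + TERMS) for _ in range(h)]
    tail = [rng.choice(TERMS) for _ in range(t)]
    return head + tail

def evaluate(gene, env):
    # Karva reading: consume the gene level by level (breadth-first), then fold bottom-up.
    arity = lambda sym: FUNCS.get(sym, 0)
    levels, i, need = [], 0, 1
    while need > 0:
        layer = gene[i:i + need]
        levels.append(layer)
        i += need
        need = sum(arity(s) for s in layer)
    ops = {"+": lambda x, y: x + y, "-": lambda x, y: x - y,
           "*": lambda x, y: x * y, "Q": lambda x: math.sqrt(abs(x))}  # protected sqrt
    values = []
    for layer in reversed(levels):
        new, j = [], 0
        for sym in layer:
            k = arity(sym)
            if k == 0:
                new.append(env[sym])
            else:
                new.append(ops[sym](*values[j:j + k]))
                j += k
        values = new
    return values[0]

gene = random_gene()
print("".join(gene), "->", evaluate(gene, {"a": 3.0, "b": 4.0}))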
In the genotype/phenotype system of gene expression programming, it is only necessary to create the simple linear chromosomes of the individuals without worrying about the structural soundness of the programs they code for, as their expression always results in syntactically correct programs. Fitness functions and selection environments (called training datasets inmachine learning) are the two facets of fitness and are therefore intricately connected. Indeed, the fitness of a program depends not only on thecost functionused to measure its performance but also on the training data chosen to evaluate fitness The selection environment consists of the set of training records, which are also called fitness cases. These fitness cases could be a set of observations or measurements concerning some problem, and they form what is called the training dataset. The quality of the training data is essential for the evolution of good solutions. A good training set should be representative of the problem at hand and also well-balanced, otherwise the algorithm might get stuck at some local optimum. In addition, it is also important to avoid using unnecessarily large datasets for training as this will slow things down unnecessarily. A good rule of thumb is to choose enough records for training to enable a good generalization in the validation data and leave the remaining records for validation and testing. Broadly speaking, there are essentially three different kinds of problems based on the kind of prediction being made: The first type of problem goes by the name ofregression; the second is known asclassification, withlogistic regressionas a special case where, besides the crisp classifications like "Yes" or "No", a probability is also attached to each outcome; and the last one is related toBoolean algebraandlogic synthesis. Inregression, the response or dependent variable is numeric (usually continuous) and therefore the output of a regression model is also continuous. So it's quite straightforward to evaluate the fitness of the evolving models by comparing the output of the model to the value of the response in the training data. There are several basicfitness functionsfor evaluating model performance, with the most common being based on the error or residual between the model output and the actual value. Such functions include themean squared error,root mean squared error,mean absolute error, relative squared error, root relative squared error, relative absolute error, and others. All these standard measures offer a fine granularity or smoothness to the solution space and therefore work very well for most applications. But some problems might require a coarser evolution, such as determining if a prediction is within a certain interval, for instance less than 10% of the actual value. However, even if one is only interested in counting the hits (that is, a prediction that is within the chosen interval), making populations of models evolve based on just the number of hits each program scores is usually not very efficient due to the coarse granularity of thefitness landscape. Thus the solution usually involves combining these coarse measures with some kind of smooth function such as the standard error measures listed above. Fitness functions based on thecorrelation coefficientandR-squareare also very smooth. 
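The error-based fitness measures listed above are simple to state in code. The sketch below shows one possibility, not the formula of any specific GEP implementation: a mean-squared-error score mapped into a bounded "higher is better" fitness, plus the coarse hits-within-an-interval count described in the text; the 10% tolerance and the 1/(1 + MSE) scaling are arbitrary illustrative choices.

def mse(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)

def fitness_mse(pred, target):
    # Smooth fitness in (0, 1]: 1 means a perfect fit, and it decays gradually with error.
    return 1.0 / (1.0 + mse(pred, target))

def fitness_hits(pred, target, tolerance=0.10):
    # Coarse fitness: count predictions that fall within 10% of the actual value.
    return sum(1 for p, t in zip(pred, target) if abs(p - t) <= tolerance * abs(t))

target = [10.0, 20.0, 30.0]
pred = [11.0, 19.0, 34.0]
print(fitness_mse(pred, target))   # smooth score, here 1/7
print(fitness_hits(pred, target))  # 2 of the 3 predictions are within 10%

A correlation-based measure such as R-square could be plugged into the same scheme in place of the mean squared error.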
For regression problems, these functions work best by combining them with other measures because, by themselves, they only tend to measurecorrelation, not caring for the range of values of the model output. So by combining them with functions that work at approximating the range of the target values, they form very efficient fitness functions for finding models with good correlation and good fit between predicted and actual values. The design of fitness functions forclassificationandlogistic regressiontakes advantage of three different characteristics of classification models. The most obvious is just counting the hits, that is, if a record is classified correctly it is counted as a hit. This fitness function is very simple and works well for simple problems, but for more complex problems or datasets highly unbalanced it gives poor results. One way to improve this type of hits-based fitness function consists of expanding the notion of correct and incorrect classifications. In a binary classification task, correct classifications can be 00 or 11. The "00" representation means that a negative case (represented by "0”) was correctly classified, whereas the "11" means that a positive case (represented by "1”) was correctly classified. Classifications of the type "00" are called true negatives (TN) and "11" true positives (TP). There are also two types of incorrect classifications and they are represented by 01 and 10. They are called false positives (FP) when the actual value is 0 and the model predicts a 1; and false negatives (FN) when the target is 1 and the model predicts a 0. The counts of TP, TN, FP, and FN are usually kept on a table known as theconfusion matrix. So by counting the TP, TN, FP, and FN and further assigning different weights to these four types of classifications, it is possible to create smoother and therefore more efficient fitness functions. Some popular fitness functions based on the confusion matrix includesensitivity/specificity,recall/precision,F-measure,Jaccard similarity,Matthews correlation coefficient, and cost/gain matrix which combines the costs and gains assigned to the 4 different types of classifications. These functions based on the confusion matrix are quite sophisticated and are adequate to solve most problems efficiently. But there is another dimension to classification models which is key to exploring more efficiently the solution space and therefore results in the discovery of better classifiers. This new dimension involves exploring the structure of the model itself, which includes not only the domain and range, but also the distribution of the model output and the classifier margin. By exploring this other dimension of classification models and then combining the information about the model with the confusion matrix, it is possible to design very sophisticated fitness functions that allow the smooth exploration of the solution space. For instance, one can combine some measure based on the confusion matrix with themean squared errorevaluated between the raw model outputs and the actual values. Or combine theF-measurewith theR-squareevaluated for the raw model output and the target; or the cost/gain matrix with thecorrelation coefficient, and so on. More exotic fitness functions that explore model granularity include the area under theROC curveand rank measure. Also related to this new dimension of classification models, is the idea of assigning probabilities to the model output, which is what is done inlogistic regression. 
Then it is also possible to use these probabilities and evaluate themean squared error(or some other similar measure) between the probabilities and the actual values, then combine this with the confusion matrix to create very efficient fitness functions for logistic regression. Popular examples of fitness functions based on the probabilities includemaximum likelihood estimationandhinge loss. In logic there is no model structure (as definedabovefor classification and logistic regression) to explore: the domain and range of logical functions comprises only 0's and 1's or false and true. So, the fitness functions available forBoolean algebracan only be based on the hits or on the confusion matrix as explained in the sectionabove. Roulette-wheel selectionis perhaps the most popular selection scheme used in evolutionary computation. It involves mapping the fitness of each program to a slice of the roulette wheel proportional to its fitness. Then the roulette is spun as many times as there are programs in the population in order to keep the population size constant. So, with roulette-wheel selection programs are selected both according to fitness and the luck of the draw, which means that some times the best traits might be lost. However, by combining roulette-wheel selection with the cloning of the best program of each generation, one guarantees that at least the very best traits are not lost. This technique of cloning the best-of-generation program is known as simple elitism and is used by most stochastic selection schemes. The reproduction of programs involves first the selection and then the reproduction of their genomes. Genome modification is not required for reproduction, but without it adaptation and evolution won't take place. The selection operator selects the programs for the replication operator to copy. Depending on the selection scheme, the number of copies one program originates may vary, with some programs getting copied more than once while others are copied just once or not at all. In addition, selection is usually set up so that the population size remains constant from one generation to another. The replication of genomes in nature is very complex and it took scientists a long time to discover theDNA double helixand propose a mechanism for its replication. But the replication of strings is trivial in artificial evolutionary systems, where only an instruction to copy strings is required to pass all the information in the genome from generation to generation. The replication of the selected programs is a fundamental piece of all artificial evolutionary systems, but for evolution to occur it needs to be implemented not with the usual precision of a copy instruction, but rather with a few errors thrown in. Indeed, genetic diversity is created withgenetic operatorssuch asmutation,recombination,transposition, inversion, and many others. In gene expression programming mutation is the most important genetic operator.[8]It changes genomes by changing an element by another. The accumulation of many small changes over time can create great diversity. In gene expression programming mutation is totally unconstrained, which means that in each gene domain any domain symbol can be replaced by another. For example, in the heads of genes any function can be replaced by a terminal or another function, regardless of the number of arguments in this new function; and a terminal can be replaced by a function or another terminal. 
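A sketch of this domain-aware point mutation is shown below, reusing the illustrative FUNCS and TERMS sets and the head/tail layout from the earlier gene-creation sketch (the same caveats apply). Head positions may receive any function or terminal, while tail positions may only receive terminals, which is what keeps every mutated gene decodable.

import random

FUNCS = {"+": 2, "-": 2, "*": 2, "Q": 1}
TERMS = ["a", "b"]

def mutate(gene, head_len, rate=0.05, seed=7):
    rng = random.Random(seed)
    out = list(gene)
    for i in range(len(out)):
        if rng.random() < rate:
            if i < head_len:
                out[i] = rng.choice(list(FUNCS) + TERMS)  # head: any symbol is allowed
            else:
                out[i] = rng.choice(TERMS)  # tail: terminals only, preserving validity
    return out

parent = list("Q*+abab") + list("abababab")  # hypothetical gene: head of 7 symbols, tail of 8 terminals
child = mutate(parent, head_len=7)
print("".join(parent), "->", "".join(child))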
Recombinationusually involves two parent chromosomes to create two new chromosomes by combining different parts from the parent chromosomes. And as long as the parent chromosomes are aligned and the exchanged fragments are homologous (that is, occupy the same position in the chromosome), the new chromosomes created by recombination will always encode syntactically correct programs. Different kinds of crossover are easily implemented either by changing the number of parents involved (there's no reason for choosing only two); the number of split points; or the way one chooses to exchange the fragments, for example, either randomly or in some orderly fashion. For example, gene recombination, which is a special case of recombination, can be done by exchanging homologous genes (genes that occupy the same position in the chromosome) or by exchanging genes chosen at random from any position in the chromosome. Transpositioninvolves the introduction of an insertion sequence somewhere in a chromosome. In gene expression programming insertion sequences might appear anywhere in the chromosome, but they are only inserted in the heads of genes. This method guarantees that even insertion sequences from the tails result in error-free programs. For transposition to work properly, it must preserve chromosome length and gene structure. So, in gene expression programming transposition can be implemented using two different methods: the first creates a shift at the insertion site, followed by a deletion at the end of the head; the second overwrites the local sequence at the target site and therefore is easier to implement. Both methods can be implemented to operate between chromosomes or within a chromosome or even within a single gene. Inversion is an interesting operator, especially powerful forcombinatorial optimization.[9]It consists of inverting a small sequence within a chromosome. In gene expression programming it can be easily implemented in all gene domains and, in all cases, the offspring produced is always syntactically correct. For any gene domain, a sequence (ranging from at least two elements to as big as the domain itself) is chosen at random within that domain and then inverted. Several other genetic operators exist and in gene expression programming, with its different genes and gene domains, the possibilities are endless. For example, genetic operators such as one-point recombination, two-point recombination, gene recombination, uniform recombination, gene transposition, root transposition, domain-specific mutation, domain-specific inversion, domain-specific transposition, and so on, are easily implemented and widely used. Numerical constants are essential elements of mathematical and statistical models and therefore it is important to allow their integration in the models designed by evolutionary algorithms. Gene expression programming solves this problem very elegantly through the use of an extra gene domain – the Dc – for handling random numerical constants (RNC). By combining this domain with a special terminal placeholder for the RNCs, a richly expressive system can be created. Structurally, the Dc comes after the tail, has a length equal to the size of the tailt, and is composed of the symbols used to represent the RNCs. For example, below is shown a simple chromosome composed of only one gene a head size of 7 (the Dc stretches over positions 15–22): where the terminal "?” represents the placeholder for the RNCs. 
This kind of chromosome is expressed exactly as shownabove, giving: Then the ?'s in the expression tree are replaced from left to right and from top to bottom by the symbols (for simplicity represented by numerals) in the Dc, giving: The values corresponding to these symbols are kept in an array. (For simplicity, the number represented by the numeral indicates the order in the array.) For instance, for the following 10-element array of RNCs: the expression tree above gives: This elegant structure for handling random numerical constants is at the heart of different GEP systems, such asGEP neural networksandGEP decision trees. Like thebasic gene expression algorithm, the GEP-RNC algorithm is also multigenic and its chromosomes are decoded as usual by expressing one gene after another and then linking them all together by the same kind of linking process. The genetic operators used in the GEP-RNC system are an extension to the genetic operators of the basic GEP algorithm (seeabove), and they all can be straightforwardly implemented in these new chromosomes. On the other hand, the basic operators of mutation, inversion, transposition, and recombination are also used in the GEP-RNC algorithm. Furthermore, special Dc-specific operators such as mutation, inversion, and transposition are also used to aid in a more efficient circulation of the RNCs among individual programs. In addition, there is also a special mutation operator that allows the permanent introduction of variation in the set of RNCs. The initial set of RNCs is randomly created at the beginning of a run, which means that, for each gene in the initial population, a specified number of numerical constants, chosen from a certain range, are randomly generated. Then their circulation and mutation are enabled by the genetic operators. Anartificial neural network(ANN or NN) is a computational device that consists of many simple connected units or neurons. The connections between the units are usually weighted by real-valued weights. These weights are the primary means of learning in neural networks and a learning algorithm is usually used to adjust them. Structurally, a neural network has three different classes of units: input units, hidden units, and output units. An activation pattern is presented at the input units and then spreads in a forward direction from the input units through one or more layers of hidden units to the output units. The activation coming into one unit from another unit is multiplied by the weights on the links over which it spreads. All incoming activation is then added together and the unit becomes activated only if the incoming result is above the unit's threshold. In summary, the basic components of a neural network are the units, the connections between the units, the weights, and the thresholds. So, in order to fully simulate an artificial neural network one must somehow encode these components in a linear chromosome and then be able to express them in a meaningful way. In GEP neural networks (GEP-NN or GEP nets), the network architecture is encoded in the usual structure of a head/tail domain.[10]The head contains special functions/neurons that activate the hidden and output units (in the GEP context, all these units are more appropriately called functional units) and terminals that represent the input units. The tail, as usual, contains only terminals/input units. 
Besides the head and the tail, these neural network genes contain two additional domains, Dw and Dt, for encoding the weights and thresholds of the neural network. Structurally, the Dw comes after the tail and its lengthdwdepends on the head sizehand maximum aritynmaxand is evaluated by the formula: The Dt comes after Dw and has a lengthdtequal tot. Both domains are composed of symbols representing the weights and thresholds of the neural network. For each NN-gene, the weights and thresholds are created at the beginning of each run, but their circulation and adaptation are guaranteed by the usual genetic operators ofmutation,transposition,inversion, andrecombination. In addition, special operators are also used to allow a constant flow of genetic variation in the set of weights and thresholds. For example, below is shown a neural network with two input units (i1andi2), two hidden units (h1andh2), and one output unit (o1). It has a total of six connections with six corresponding weights represented by the numerals 1–6 (for simplicity, the thresholds are all equal to 1 and are omitted): This representation is the canonical neural network representation, but neural networks can also be represented by a tree, which, in this case, corresponds to: where "a” and "b” represent the two inputsi1andi2and "D” represents a function with connectivity two. This function adds all its weighted arguments and then thresholds this activation in order to determine the forwarded output. This output (zero or one in this simple case) depends on the threshold of each unit, that is, if the total incoming activation is equal to or greater than the threshold, then the output is one, zero otherwise. The above NN-tree can be linearized as follows: where the structure in positions 7–12 (Dw) encodes the weights. The values of each weight are kept in an array and retrieved as necessary for expression. As a more concrete example, below is shown a neural net gene for theexclusive-orproblem. It has a head size of 3 and Dw size of 6: Its expression results in the following neural network: which, for the set of weights: it gives: which is a perfect solution to the exclusive-or function. Besides simple Boolean functions with binary inputs and binary outputs, the GEP-nets algorithm can handle all kinds of functions or neurons (linear neuron, tanh neuron, atan neuron, logistic neuron, limit neuron, radial basis and triangular basis neurons, all kinds of step neurons, and so on). Also interesting is that the GEP-nets algorithm can use all these neurons together and let evolution decide which ones work best to solve the problem at hand. So, GEP-nets can be used not only in Boolean problems but also inlogistic regression,classification, andregression. In all cases, GEP-nets can be implemented not only withmultigenic systemsbut alsocellular systems, both unicellular and multicellular. Furthermore, multinomial classification problems can also be tackled in one go by GEP-nets both with multigenic systems and multicellular systems. Decision trees(DT) are classification models where a series of questions and answers are mapped using nodes and directed edges. Decision trees have three types of nodes: a root node, internal nodes, and leaf or terminal nodes. The root node and all internal nodes represent test conditions for different attributes or variables in a dataset. Leaf nodes specify the class label for all different paths in the tree. 
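The thresholded functional units described above for GEP nets can be simulated directly. The sketch below hand-wires a small two-hidden-unit network for the exclusive-or problem; the particular weights are chosen purely for illustration and are not the ones encoded by the gene discussed in the text, although the unit behaviour (sum the weighted inputs, output 1 if the total reaches the threshold of 1) follows the description above.

```python
def unit(inputs, weights, threshold=1.0):
    """A 'D'-style functional unit: weighted sum thresholded into a binary output."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def xor_net(i1, i2):
    """Hand-wired two-hidden-unit network; weights are illustrative assumptions."""
    h1 = unit((i1, i2), (1.0, 1.0))      # fires if at least one input is on
    h2 = unit((i1, i2), (0.5, 0.5))      # fires only if both inputs are on
    return unit((h1, h2), (1.0, -1.0))   # h1 AND NOT h2  ->  exclusive-or

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Running the loop prints the exclusive-or truth table, showing how a solution of this kind depends only on the connection weights and the common threshold of 1.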
Most decision tree induction algorithms involve selecting an attribute for the root node and then making the same kind of informed decision about all the other nodes in the tree. Decision trees can also be created by gene expression programming,[11]with the advantage that all the decisions concerning the growth of the tree are made by the algorithm itself without any kind of human input. There are basically two different types of DT algorithms: one for inducing decision trees with only nominal attributes and another for inducing decision trees with both numeric and nominal attributes. This aspect of decision tree induction also carries to gene expression programming and there are two GEP algorithms for decision tree induction: the evolvable decision trees (EDT) algorithm for dealing exclusively with nominal attributes and the EDT-RNC (EDT with random numerical constants) for handling both nominal and numeric attributes. In the decision trees induced by gene expression programming, the attributes behave as function nodes in thebasic gene expression algorithm, whereas the class labels behave as terminals. This means that attribute nodes also have associated with them a specific arity or number of branches that will determine their growth and, ultimately, the growth of the tree. Class labels behave like terminals, which means that for ak-class classification task, a terminal set withkterminals is used, representing thekdifferent classes. The rules for encoding a decision tree in a linear genome are very similar to the rules used to encode mathematical expressions (seeabove). So, for decision tree induction the genes also have a head and a tail, with the head containing attributes and terminals and the tail containing only terminals. This again ensures that all decision trees designed by GEP are always valid programs. Furthermore, the size of the tailtis also dictated by the head sizehand the number of branches of the attribute with the most branchesnmaxand is evaluated by the equation: t=h(nmax−1)+1{\displaystyle t=h(n_{max}-1)+1} For example, consider the decision tree below to decide whether to play outside: It can be linearly encoded as: where “H” represents the attribute Humidity, “O” the attribute Outlook, “W” the attribute Windy, and “a” and “b” the class labels "Yes" and "No" respectively. Note that the edges connecting the nodes are properties of the data, specifying the type and number of branches of each attribute, and therefore don't have to be encoded. The process of decision tree induction with gene expression programming starts, as usual, with an initial population of randomly created chromosomes. Then the chromosomes are expressed as decision trees and their fitness evaluated against a training dataset. According to fitness they are then selected to reproduce with modification. The genetic operators are exactly the same as those used in a conventional unigenic system, for example,mutation,inversion,transposition, andrecombination. Decision trees with both nominal and numeric attributes are also easily induced with gene expression programming using the framework describedabovefor dealing with random numerical constants. The chromosomal architecture includes an extra domain for encoding random numerical constants, which are used as thresholds for splitting the data at each branching node. 
For example, the gene below with a head size of 5 (the Dc starts at position 16): encodes the decision tree shown below: In this system, every node in the head, irrespective of its type (numeric attribute, nominal attribute, or terminal), has associated with it a random numerical constant, which for simplicity in the example above is represented by a numeral 0–9. These random numerical constants are encoded in the Dc domain and their expression follows a very simple scheme: from top to bottom and from left to right, the elements in Dc are assigned one-by-one to the elements in the decision tree. So, for the following array of RNCs: the decision tree above results in: which can also be represented more colorfully as a conventional decision tree: GEP has been criticized for not being a major improvement over othergenetic programmingtechniques. In many experiments, it did not perform better than existing methods.[12]
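The Dc bookkeeping used by both the GEP-RNC algorithm and the EDT-RNC variant described above can be sketched in a few lines: scan the expressed part of the gene, and whenever a constant placeholder is met, take the next Dc symbol and use it as an index into the array of random numerical constants. In the sketch below the function set, its arities, the gene, the head size and the constant array are all invented for the example; a real implementation would take them from the problem at hand.

```python
ARITY = {'+': 2, '-': 2, '*': 2, '/': 2}   # terminals (a, b, ?) have arity 0

def expressed_region(gene_coding):
    """Return the part of the gene actually used (breadth-first ORF scan)."""
    needed = 1
    for i, sym in enumerate(gene_coding):
        needed += ARITY.get(sym, 0) - 1
        if needed == 0:
            return gene_coding[:i + 1]
    return gene_coding

def bind_constants(gene, head, tail, rncs):
    """Replace each expressed '?' by a constant selected through the Dc domain."""
    coding, dc = gene[:head + tail], gene[head + tail:]
    out, next_dc = [], 0
    for sym in expressed_region(coding):
        if sym == '?':
            out.append(str(rncs[int(dc[next_dc])]))   # Dc symbols index the RNC array
            next_dc += 1
        else:
            out.append(sym)
    return out

# Invented example: head 7, tail 8, Dc 8, plus a 10-element array of constants.
gene = "*+?*a?a" + "?aaabaab" + "52170389"
rncs = [0.5, 1.7, 2.9, 3.1, 4.2, 5.0, 6.6, 7.4, 8.8, 9.3]
print(" ".join(bind_constants(gene, 7, 8, rncs)))
```

Because the placeholders are consumed in the same left-to-right order in which the expression is read, the constants end up attached to the nodes exactly as described above, whether they act as numerical constants in a mathematical model or as splitting thresholds in a decision tree.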
https://en.wikipedia.org/wiki/Gene_expression_programming
Gerald Maurice Edelman(/ˈɛdəlmən/; July 1, 1929 – May 17, 2014) was an Americanbiologistwho shared the 1972Nobel Prize in Physiology or Medicinefor work withRodney Robert Porteron theimmune system.[1]Edelman's Nobel Prize-winning research concerned discovery of the structure ofantibodymolecules.[2]In interviews, he has said that the way the components of the immune system evolve over the life of the individual is analogous to the way the components of the brain evolve in a lifetime. There is a continuity in this way between his work on the immune system, for which he won theNobel Prize, and his later work inneuroscienceand inphilosophy of mind. Gerald Edelman was born in 1929[3]inOzone Park, Queens, New York, toJewishparents,physicianEdward Edelman, and Anna (née Freedman) Edelman, who worked in the insurance industry.[4]He studied violin for years, but eventually realized that he did not have the inner drive needed to pursue a career as a concert violinist, and decided to go into medical research instead.[5]He attended public schools in New York, graduating fromJohn Adams High School,[6]and then attendedUrsinus College, where he graduatedmagna cum laudewith aB.S.in 1950. He received anM.D.from theUniversity of Pennsylvania School of Medicinein 1954.[4] After a year at the Johnson Foundation for Medical Physics, Edelman became aresidentat theMassachusetts General Hospital; he then practiced medicine in France while serving withUS Army Medical Corps.[4]In 1957, Edelman joined theRockefeller Institute for Medical Researchas a graduate fellow, working in the laboratory of Henry Kunkel and receiving aPh.D.in 1960.[4]The institute made him the assistant (later associate) dean of graduate studies; he became a professor at the school in 1966.[4]In 1992, he moved toCaliforniaand became a professor ofneurobiologyatThe Scripps Research Institute.[7] After his Nobel prize award, Edelman began research into the regulation of primarycellular processes, particularly the control of cell growth and the development ofmulti-celled organisms, focusing on cell-to-cell interactions in earlyembryonic developmentand in the formation and function of the nervous system. These studies led to the discovery ofcell adhesion molecules(CAMs), which guide the fundamental processes that help an animal achieve its shape and form, and by which nervous systems are built. 
One of the most significant discoveries made in this research is that the precursorgenefor the neural cell adhesion molecule gave rise in evolution to the entire molecular system ofadaptive immunity.[8] For his efforts, Edelman was an elected member of both theAmerican Academy of Arts and Sciences(1968) and theAmerican Philosophical Society(1977).[9][10] While in Paris serving in the Army, Edelman read a book that sparked his interest inantibodies.[11]He decided that, since the book said so little about antibodies, he would investigate them further upon returning to the United States, which led him to studyphysical chemistryfor his 1960 Ph.D.[11]Research by Edelman and his colleagues andRodney Robert Porterin the early 1960s produced fundamental breakthroughs in the understanding of the antibody's chemical structure, opening a door for further study.[12]For this work, Edelman and Porter shared theNobel Prize in Physiology or Medicinein 1972.[1] In its Nobel Prize press release in 1972, theKarolinska Institutetlauded Edelman and Porter's work as a major breakthrough: The impact of Edelman's and Porter's discoveries is explained by the fact that they provided a clear picture of the structure and mode of action of a group of biologically particularly important substances. By this they laid a firm foundation for truly rational research, something that was previously largely lacking in immunology. Their discoveries represent clearly a break-through that immediately incited a fervent research activity the whole world over, in all fields of immunological science, yielding results of practical value for clinical diagnostics and therapy.[13] Edelman's early research on the structure of antibody proteins revealed thatdisulfide bondslink together the protein subunits.[2]The protein subunits of antibodies are of two types, the larger heavy chains and the smaller light chains. Two light and two heavy chains are linked together by disulfide bonds to form a functional antibody. Using experimental data from his own research and the work of others, Edelman developed molecular models of antibody proteins.[14]A key feature of these models included the idea that theantigenbinding domains of antibodies (Fab) includeamino acidsfrom both thelightandheavyprotein subunits. The inter-chain disulfide bonds help bring together the two parts of the antigen binding domain. Edelman and his colleagues usedcyanogen bromideandproteasesto fragment the antibody protein subunits into smaller pieces that could be analyzed for determination of theiramino acid sequence.[15][16]At the time when the first complete antibody sequence was determined (1969)[17]it was the largest complete protein sequence that had ever been determined. The availability of amino acid sequences of antibody proteins allowed recognition of the fact that the body can produce many different antibody proteins with similar antibody constant regions and divergent antibodyvariable regions. Topobiology is Edelman's theory which asserts that morphogenesis is driven by differential adhesive interactions among heterogeneous cell populations and it explains how a single cell can give rise to a complex multi-cellular organism. As proposed by Edelman in 1988, topobiology is the process that sculpts and maintains differentiated tissues and is acquired by the energetically favored segregation of cells through heterologous cellular interactions. 
In his later career, Edelman was noted for his theory ofconsciousness, documented in a trilogy of technical books and in several subsequent books written for a general audience, includingBright Air, Brilliant Fire(1992),[18][19]A Universe of Consciousness(2001, withGiulio Tononi),Wider than the Sky(2004) andSecond Nature: Brain Science and Human Knowledge(2007). InSecond NatureEdelman defines human consciousness as: The first of Edelman's technical books,The Mindful Brain(1978),[20]develops his theory ofNeural Darwinism, which is built around the idea of plasticity in the neural network in response to the environment. The second book,Topobiology(1988),[21]proposes a theory of how the original neuronal network of a newborn'sbrainis established during development of theembryo.The Remembered Present(1990)[22]contains an extended exposition of his theory ofconsciousness. In his books, Edelman proposed a biological theory of consciousness, based on his studies of the immune system. He explicitly roots his theory withinCharles Darwin's Theory ofNatural Selection, citing the key tenets of Darwin's population theory, which postulates that individual variation within species provides the basis for the natural selection that eventually leads to the evolution of new species.[23]He explicitly rejecteddualismand also dismissed newer hypotheses such as the so-called'computational' model of consciousness, which liken the brain's functions to the operations of a computer. Edelman argued that mind and consciousness are purely biological phenomena, arising from complex cellular processes within the brain, and that the development of consciousness and intelligence can be explained by Darwinian theory. Edelman's theory seeks to explain consciousness in terms of the morphology of the brain. A brain comprises a massive population of neurons (approx. 100billioncells) each with an enormous number of synaptic connections to other neurons. During development, the subset of connections that survive the initial phases of growth and development will make approximately 100trillionconnections with each other. A sample of brain tissue the size of a match head contains about a billion connections, and if we consider how these neuronal connections might be variously combined, the number of possible permutations becomes hyper-astronomical – in the order of ten followed by millions of zeros.[24]The young brain contains many more neural connections than will ultimately survive to maturity, and Edelman argued that this redundant capacity is needed because neurons are the only cells in the body that cannot be renewed and because only those networks best adapted to their ultimate purpose will be selected as they organize into neuronal groups. Edelman's theory of neuronal group selection, also known as 'Neural Darwinism', has three basic tenets—Developmental Selection, Experiential Selection and Reentry. Edelman and Gally were the first to point out the pervasiveness ofdegeneracyin biological systems and the fundamental role that degeneracy plays in facilitating evolution.[27] Edelman founded and directedThe Neurosciences Institute, a nonprofit research center inSan Diegothat between 1993 and 2012 studied the biological bases of higher brain function in humans. He served on the scientific board of the World Knowledge Dialogue project.[28] Edelman was a member of theUSA Science and Engineering Festival's advisory board.[29] Edelman married Maxine M. 
Morrison in 1950.[4]They have two sons,Eric, a visual artistin New York City, andDavid, an adjunct professor of neuroscienceatUniversity of San Diego. Their daughter,Judith Edelman, is abluegrassmusician,[30]recording artist, and writer. Some observers[who?]have noted that a character inRichard Powers'The Echo Makermay be a nod at Edelman. Later in his life, he hadprostate cancerandParkinson's disease.[31]Edelman died on May 17, 2014, inLa Jolla, California, aged 84.[3][32][33]
https://en.wikipedia.org/wiki/Gerald_Edelman
Janine M. Benyus(born 1958) is an Americannatural scienceswriter, innovation consultant, and author. After writing books on wildlife and animal behavior, she coined the termBiomimicryto describe intentional problem-solving design inspired by nature. Her bookBiomimicry(1997) attracted widespread attention from businesspeople in design, architecture, and engineering as well as from scientists. Benyus argues that by following biomimetic approaches, designers can develop products that will perform better, be less expensive, use less energy, and leave companies less open to legal risk.[1][2] Born inNew Jersey, Benyus graduatedsumma cum laudefromRutgers Universitywith degrees innatural resource managementandEnglish literature/writing.[3]Benyus has taught interpretive writing and lectured at theUniversity of Montana, and worked towards restoring and protecting wild lands.[4]She serves on a number ofland usecommittees in her rural county, and is president of Living Education, a nonprofit dedicated to place-based living and learning.[5]Benyus lives inStevensville, Montana.[6] Benyus has written a number of books on animals and their behavior, but is best known forBiomimicry: Innovation Inspired by Nature(1997). In this book she develops the basic thesis that human beings should consciously emulate nature's genius in their designs. She encourages people to ask "What would Nature do?" and to look at natural forms, processes, and ecosystems in nature[7][8]to see what works and what lasts.[1] If you go into the world with an attitude of deep and reverent observation, you don't go with a pre-formed hypothesis. I am much more excited by staying open so that I can absorb something I could never have imagined.... That deep observation is a different kind of scientific inquiry. It may allow me to find something new while someone who is prejudging, someone with a hypothesis, will only see what affirms the hypothesis. If you go out waiting to be amazed, more may be revealed.[9] Benyus articulates an approach that strongly emphasizessustainabilitywithin biomimicry practice, sometimes referred to as Conditions Conducive to Life (CCL).[10]Benyus has described the development of sustainable solutions in terms of "Life’s Principles", emphasizing that organisms in nature have evolved methods of working that are not destructive of themselves and their environment. “Nature runs on sunlight, uses only the energy it needs, fits form to function, recycles everything, rewards cooperation, banks on diversity, demands local expertise, curbs excess from within and taps the power of limits”.[11] In 1998, Benyus andDayna Baumeisterco-founded the Biomimicry Guild[1][12]as an innovation consultancy. 
Their goal was to help innovators learn from and emulate natural models in order to designsustainableproducts, processes, and policies that create conditions conducive to life.[13][1] In 2006, Benyus co-foundedThe Biomimicry Institutewith Dayna Baumeister andBryony Schwan.[14]Benyus is President of thenon-profit organization,[15]whose mission is to naturalize biomimicry in the culture by promoting the transfer of ideas, designs, and strategies from biology to sustainable human systems design.[2]In 2008 the Biomimicry Institute launched AskNature.org, "an encyclopedia of nature's solutions to common design problems".[16]The Biomimicry Institute has become a key communicator in the field of biomimetics, connecting 12,576 member practitioners and organizations in 36 regional networks and 21 countries through its Biomimicry Global Network as of 2020.[2] In 2010, Benyus, Dayna Baumeister, Bryony Schwan, and Chris Allen formed Biomimicry 3.8, connecting their for-profit and nonprofit work by creating abenefit corporation. Biomimicry 3.8, which achievedB-corp certification,[17][18][19]offers consultancy, professional training, development for educators,[17]and "inspirational speaking".[20][21][22]Among its more than 250 clients areNike,Kohler,Seventh GenerationandC40 Cities.[23][12]By 2013, over 100 universities had joined the Biomimicry Educator’s Network, offering training in biomimetics.[17]In 2014, the profit and non-profit aspects again became separate entities, with Biomimicry 3.8 engaging in for-profit consultancy and the Biomimicry Institute as a non-profit organization.[24] Benyus has served on various boards, including the Board of Directors for theU.S. Green Building Counciland the advisory boards of theRay C. Anderson FoundationandProject Drawdown. Benyus is an affiliate faculty member in The Biomimicry Center atArizona State University.[25] Benyus' work has been used as the basis for films[26]including the two-part filmBiomimicry: Learning from Nature(2002), directed by Paul Lang and David Springbett for CBC'sThe Nature of Thingsand presented byDavid Suzuki.[27]She was one of the experts in the filmDirt! The Movie(2009), which was narrated byJamie Lee Curtis.[28]
https://en.wikipedia.org/wiki/Janine_Benyus
Mark A. O'Neill(born 3 November 1959) is an Englishcomputational biologistwith interests inartificial intelligence,systems biology,complex systemsandimage analysis. He is the creator and lead programmer on a number of computational projects including theDigital Automated Identification SYstem (DAISY)forautomated species identificationandPUPS P3, anorganic computingenvironment forLinux. O'Neill was educated atThe King's School, Grantham,Sheffield UniversityandUniversity College London.[1] O'Neill's interests lie at the interface of biology and computing. He has worked in the areas ofartificial lifeandbiologically inspired computing. In particular, he has attempted to answer the question "can one createsoftware agentswhich are capable of carrying a useful computational payload and which respond to their environment with the flexibility of a livingorganism?" He has also investigated how computational methods may be used to analyze biological and quasi-biological systems, for example,ecosystemsandeconomies. O'Neill is also interested inethology, especially the emergent social ecosystems which occur as a result ofsocial networkingon theinternet. His recent projects include the use of artificial intelligence techniques to look at complex socio-economic data.[2] On the computer science front, O'Neill continues to develop and contribute to a number of otheropen sourceand commercial software projects and is involved in the design of cluster/parallel computer hardware via his company,Tumbling Dice Ltd. Long-running projects include DAISY;[3]PUPS P3, an organic computing environment for Linux;Cryopid, a Linux process freezer; the Mensor digital terrain model generation system; andRanaVision, a vision-based motion detection system. He has also worked with public domain agent-based social interaction models such asSugarscapeand artificial life simulators, for example physis, which is a development ofTierra. O'Neill has been a keen naturalist since childhood. In addition to his interests in complex systems and computer science, he is a member of theRoyal Entomological Societyand an expert in the rearing and ecology ofhawk moths. He is also currently convenor of the Electronic and Computing Technology Special Interest Group (SIG) for the Royal Entomological Society. He is also interested in the use ofprecision agriculturemethodologies to monitor agri-ecosystems,[4]and has been an active participant in a series of projects looking at the automatic tracking ofbumblebees,[5][6]and other insects[7][8]using vision, and using bothnetwork analysisandremote sensingtechniques to monitor ecosystem health. Latterly, he has become interested in applying these techniques in the commercial sphere to look at issues ofcorporate responsibilityandsustainabilityin industries like mining and agriculture which have significant ecological footprints. He has also been involved in bothcomputational neuroscienceandsystems biology, the former association resulting in many papers while working atOxford University. Work in the latter area led to the successful flotation in 2007 of a systems biology company, e-Therapeutics, where O'Neill was a senior scientist, assisted with the establishment of the company, and was named in a number of seminalpatents. O'Neill is a fellow of theBritish Computer Society, theInstitute of Engineering and Technology, and theRoyal Astronomical Society. He is also achartered engineer, achartered IT professionaland a member of theInstitute of Directors. 
He was one of the recipients of theBCS Award for Computing Technologyin 1992.
https://en.wikipedia.org/wiki/Mark_A._O%27Neill
Mathematical and theoretical biology, orbiomathematics, is a branch ofbiologywhich employs theoretical analysis,mathematical modelsand abstractions of livingorganismsto investigate the principles that govern the structure, development and behavior of the systems, as opposed toexperimental biologywhich deals with conducting experiments to test scientific theories.[1]The field is sometimes calledmathematical biologyorbiomathematicsto stress the mathematical side, ortheoretical biologyto stress the biological side.[2]Theoretical biology focuses more on the development of theoretical principles for biology while mathematical biology focuses on the use of mathematical tools to study biological systems, even though the two terms are sometimes used interchangeably; overlapping fields includeArtificial Immune SystemsandAmorphous Computation.[3][4] Mathematical biology aims at the mathematical representation and modeling ofbiological processes, using techniques and tools ofapplied mathematics. It can be useful in boththeoreticalandpracticalresearch. Describing systems in a quantitative manner means their behavior can be better simulated, and hence properties can be predicted that might not be evident to the experimenter; this requiresmathematical models. Because of the complexity ofliving systems, theoretical biology employs several fields of mathematics,[5]and has contributed to the development of new techniques. Mathematics has been used in biology as early as the 13th century, whenFibonacciused the famousFibonacci seriesto describe a growing population of rabbits. In the 18th century,Daniel Bernoulliapplied mathematics to describe the effect of smallpox on the human population.Thomas Malthus' 1798 essay on the growth of the human population was based on the concept of exponential growth.Pierre François Verhulstformulated the logistic growth model in 1836.[citation needed] Fritz Müllerdescribed the evolutionary benefits of what is now calledMüllerian mimicryin 1879, in an account notable for being the first use of a mathematical argument inevolutionary ecologyto show how powerful the effect of natural selection would be, unless one includesMalthus's discussion of the effects ofpopulation growththat influencedCharles Darwin: Malthus argued that growth would be exponential (he uses the word "geometric") while resources (the environment'scarrying capacity) could only grow arithmetically.[6] The term "theoretical biology" was first used as a monograph title byJohannes Reinkein 1901, and soon after byJakob von Uexküllin 1920. One founding text is considered to beOn Growth and Form(1917) byD'Arcy Thompson,[7]and other early pioneers includeRonald Fisher,Hans Leo Przibram,Vito Volterra,Nicolas RashevskyandConrad Hal Waddington.[8] Interest in the field has grown rapidly from the 1960s onwards. Some reasons for this include: Several areas of specialized research in mathematical and theoretical biology[10][11][12][13][14]as well as external links to related projects in various universities are concisely presented in the following subsections, including also a large number of appropriate validating references from a list of several thousands of published authors contributing to this field. Many of the included examples are characterised by highly complex, nonlinear, and supercomplex mechanisms, as it is being increasingly recognised that the result of such interactions may only be understood through a combination of mathematical, logical, physical/chemical, molecular and computational models. 
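The two historical growth models mentioned above are simple enough to compare numerically. The sketch below iterates a discrete-time version of Malthusian (exponential) growth next to Verhulst's logistic model; the growth rate, carrying capacity and starting population are arbitrary values chosen only for the illustration.

```python
def malthusian(n, r):
    """Discrete exponential growth: the population grows by a fixed fraction r per step."""
    return n + r * n

def logistic(n, r, k):
    """Discrete logistic growth: growth slows as n approaches the carrying capacity k."""
    return n + r * n * (1.0 - n / k)

n_exp = n_log = 10.0        # initial population (arbitrary)
r, k = 0.3, 1000.0          # growth rate and carrying capacity (arbitrary)
for generation in range(40):
    n_exp = malthusian(n_exp, r)
    n_log = logistic(n_log, r, k)

# The exponential population keeps growing without bound,
# while the logistic population levels off near the carrying capacity k.
print(round(n_exp), round(n_log))
```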
Abstract relational biology (ARB) is concerned with the study of general, relational models of complex biological systems, usually abstracting out specific morphological, or anatomical, structures. Some of the simplest models in ARB are the Metabolic-Replication, or (M,R)--systems introduced by Robert Rosen in 1957–1958 as abstract, relational models of cellular and organismal organization. Other approaches include the notion ofautopoiesisdeveloped byMaturanaandVarela,Kauffman's Work-Constraints cycles, and more recently the notion of closure of constraints.[15] Algebraic biology (also known as symbolic systems biology) applies the algebraic methods ofsymbolic computationto the study of biological problems, especially ingenomics,proteomics, analysis ofmolecular structuresand study ofgenes.[16][17][18] An elaboration of systems biology to understand the more complex life processes was developed since 1970 in connection with molecular set theory, relational biology and algebraic biology. A monograph on this topic summarizes an extensive amount of published research in this area up to 1986,[19][20][21]including subsections in the following areas:computer modelingin biology and medicine, arterial system models,neuronmodels, biochemical andoscillationnetworks, quantum automata,quantum computersinmolecular biologyandgenetics,[22]cancer modelling,[23]neural nets,genetic networks, abstract categories in relational biology,[24]metabolic-replication systems,category theory[25]applications in biology and medicine,[26]automata theory,cellular automata,[27]tessellationmodels[28][29]and complete self-reproduction,chaotic systemsinorganisms, relational biology and organismic theories.[16][30] Modeling cell and molecular biology This area has received a boost due to the growing importance ofmolecular biology.[13] Modelling physiological systems Computational neuroscience(also known as theoretical neuroscience or mathematical neuroscience) is the theoretical study of the nervous system.[43][44] Ecologyandevolutionary biologyhave traditionally been the dominant fields of mathematical biology. Evolutionary biology has been the subject of extensive mathematical theorizing. The traditional approach in this area, which includes complications from genetics, ispopulation genetics. Most population geneticists consider the appearance of newallelesbymutation, the appearance of newgenotypesbyrecombination, and changes in the frequencies of existing alleles and genotypes at a small number ofgeneloci. Wheninfinitesimaleffects at a large number of gene loci are considered, together with the assumption oflinkage equilibriumorquasi-linkage equilibrium, one derivesquantitative genetics.Ronald Fishermade fundamental advances in statistics, such asanalysis of variance, via his work on quantitative genetics. Another important branch of population genetics that led to the extensive development ofcoalescent theoryisphylogenetics. Phylogenetics is an area that deals with the reconstruction and analysis of phylogenetic (evolutionary) trees and networks based on inherited characteristics[45]Traditional population genetic models deal with alleles and genotypes, and are frequentlystochastic. Many population genetics models assume that population sizes are constant. Variable population sizes, often in the absence of genetic variation, are treated by the field ofpopulation dynamics. 
Work in this area dates back to the 19th century, and even as far as 1798 whenThomas Malthusformulated the first principle of population dynamics, which later became known as theMalthusian growth model. TheLotka–Volterra predator-prey equationsare another famous example. Population dynamics overlap with another active area of research in mathematical biology:mathematical epidemiology, the study of infectious disease affecting populations. Various models of the spread ofinfectionshave been proposed and analyzed, and provide important results that may be applied to health policy decisions. Inevolutionary game theory, developed first byJohn Maynard SmithandGeorge R. Price, selection acts directly on inherited phenotypes, without genetic complications. This approach has been mathematically refined to produce the field ofadaptive dynamics. The earlier stages of mathematical biology were dominated by mathematicalbiophysics, described as the application of mathematics in biophysics, often involving specific physical/mathematical models of biosystems and their components or compartments. The following is a list of mathematical descriptions and their assumptions. A fixed mapping between an initial state and a final state. Starting from an initial condition and moving forward in time, a deterministic process always generates the same trajectory, and no two trajectories cross in state space. A random mapping between an initial state and a final state, making the state of the system arandom variablewith a correspondingprobability distribution. One classic work in this area isAlan Turing's paper onmorphogenesisentitledThe Chemical Basis of Morphogenesis, published in 1952 in thePhilosophical Transactions of the Royal Society. A model of a biological system is converted into a system of equations, although the word 'model' is often used synonymously with the system of corresponding equations. The solution of the equations, by either analytical or numerical means, describes how the biological system behaves either over time or atequilibrium. There are many different types of equations and the type of behavior that can occur is dependent on both the model and the equations used. The model often makes assumptions about the system. The equations may also make assumptions about the nature of what may occur. Molecular set theory is a mathematical formulation of the wide-sensechemical kineticsof biomolecular reactions in terms of sets of molecules and their chemical transformations represented by set-theoretical mappings between molecular sets. It was introduced byAnthony Bartholomay, and its applications were developed in mathematical biology and especially in mathematical medicine.[52]In a more general sense, Molecular set theory is the theory of molecular categories defined as categories of molecular sets and their chemical transformations represented as set-theoretical mappings of molecular sets. The theory has also contributed to biostatistics and the formulation of clinical biochemistry problems in mathematical formulations of pathological, biochemical changes of interest to Physiology, Clinical Biochemistry and Medicine.[52] Theoretical approaches to biological organization aim to understand the interdependence between the parts of organisms. They emphasize the circularities that these interdependences lead to. Theoretical biologists developed several concepts to formalize this idea. 
For example, abstract relational biology (ARB)[53]is concerned with the study of general, relational models of complex biological systems, usually abstracting out specific morphological, or anatomical, structures. Some of the simplest models in ARB are the Metabolic-Replication, or(M,R)-systems introduced byRobert Rosenin 1957–1958 as abstract, relational models of cellular and organismal organization.[54] The eukaryoticcell cycleis very complex and has been the subject of intense study, since its misregulation leads tocancers. It is possibly a good example of a mathematical model as it deals with simple calculus but gives valid results. Two research groups[55][56]have produced several models of the cell cycle simulating several organisms. They have recently produced a generic eukaryotic cell cycle model that can represent a particular eukaryote depending on the values of the parameters, demonstrating that the idiosyncrasies of the individual cell cycles are due to different protein concentrations and affinities, while the underlying mechanisms are conserved (Csikasz-Nagy et al., 2006). By means of a system ofordinary differential equationsthese models show the change in time (dynamical system) of the proteins inside a single typical cell; this type of model is called adeterministic process(whereas a model describing a statistical distribution of protein concentrations in a population of cells is called astochastic process). To obtain these equations, an iterative series of steps must be done: first, the several models and observations are combined to form a consensus diagram and the appropriate kinetic laws are chosen to write the differential equations, such asrate kineticsfor stoichiometric reactions,Michaelis-Menten kineticsfor enzyme substrate reactions andGoldbeter–Koshland kineticsfor ultrasensitive transcription factors. Afterwards, the parameters of the equations (rate constants, enzyme efficiency coefficients and Michaelis constants) must be fitted to match observations; when they cannot be fitted, the kinetic equation is revised, and when that is not possible, the wiring diagram is modified. The parameters are fitted and validated using observations of both wild type and mutants, such as protein half-life and cell size. To fit the parameters, the differential equations must be studied. This can be done either by simulation or by analysis. In a simulation, given a startingvector(list of the values of the variables), the progression of the system is calculated by solving the equations at each time-frame in small increments. In analysis, the properties of the equations are used to investigate the behavior of the system depending on the values of the parameters and variables. A system of differential equations can be represented as avector field, where each vector describes the change in the concentrations of two or more proteins, determining where and how fast the trajectory (simulation) is heading. Vector fields can have several special points: astable point, called a sink, that attracts in all directions (forcing the concentrations to be at a certain value), anunstable point, either a source or asaddle point, which repels (forcing the concentrations to change away from a certain value), and a limit cycle, a closed trajectory towards which several trajectories spiral (making the concentrations oscillate). A better representation, which handles the large number of variables and parameters, is abifurcation diagramusingbifurcation theory. 
The presence of these special steady-state points at certain values of a parameter (e.g. mass) is represented by a point on the diagram. Once the parameter passes a certain value, a qualitative change, called a bifurcation, occurs in which the nature of the space changes, with profound consequences for the protein concentrations: the cell cycle has phases (partially corresponding to G1 and G2) in which mass, via a stable point, controls cyclin levels, and phases (the S and M phases) in which the concentrations change independently. Once the phase has changed at a bifurcation event (Cell cycle checkpoint), the system cannot go back to the previous levels, since at the current mass the vector field is profoundly different and the mass cannot be reversed back through the bifurcation event, making the checkpoint irreversible. In particular, the S and M checkpoints are regulated by means of special bifurcations called aHopf bifurcationand aninfinite period bifurcation.[citation needed]
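The simulation route described above, in which a starting vector is advanced in small time increments, can be illustrated with a generic two-variable system. The sketch below is not one of the published cell-cycle models: the toy negative-feedback equations and all parameter values are stand-ins chosen only to show a trajectory spiralling into a stable point (a sink).

```python
def step(x, y, dt=0.01):
    """One Euler step of a toy two-protein negative-feedback circuit (illustrative only).
    dx/dt = 2/(1+y) - x   (x is produced at a rate repressed by y, and degraded)
    dy/dt = x - y         (y is activated by x, and degraded)"""
    dx = 2.0 / (1.0 + y) - x
    dy = x - y
    return x + dt * dx, y + dt * dy

x, y = 0.2, 2.5                     # arbitrary starting concentrations
for _ in range(5000):               # advance the trajectory in small increments
    x, y = step(x, y)
print(round(x, 3), round(y, 3))     # the trajectory settles at the sink near (1, 1)
```

For this particular choice of equations the only steady state is at (1, 1) and it attracts nearby trajectories, so the simulated concentrations stop changing once they reach it; analysis of the same equations (e.g. of the Jacobian at the fixed point) confirms that the point is a stable spiral.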
https://en.wikipedia.org/wiki/Mathematical_biology
Natural computing,[1][2]also callednatural computation, is a terminology introduced to encompass three classes of methods: 1) those that take inspiration from nature for the development of novel problem-solving techniques; 2) those that are based on the use of computers to synthesize natural phenomena; and 3) those that employ natural materials (e.g., molecules) to compute. The main fields of research that compose these three branches areartificial neural networks,evolutionary algorithms,swarm intelligence,artificial immune systems, fractal geometry,artificial life,DNA computing, andquantum computing, among others. However, the field is more related toBiological Computation. Computational paradigms studied by natural computing are abstracted from natural phenomena as diverse asself-replication, the functioning of thebrain,Darwinian evolution,group behavior, theimmune system, the defining properties of life forms,cell membranes, andmorphogenesis. Besides traditionalelectronic hardware, these computational paradigms can be implemented on alternative physical media such as biomolecules (DNA, RNA), or trapped-ionquantum computingdevices. Dually, one can view processes occurring in nature as information processing. Such processes includeself-assembly,developmental processes,gene regulationnetworks,protein–protein interactionnetworks, biological transport (active transport,passive transport) networks, andgene assemblyinunicellular organisms. Efforts to understand biological systems also include engineering of semi-synthetic organisms, and understanding the universe itself from the point of view of information processing. Indeed, the idea was even advanced that information is more fundamental than matter or energy. The Zuse-Fredkin thesis, dating back to the 1960s, states that the entire universe is a hugecellular automatonwhich continuously updates its rules.[3][4]Recently it has been suggested that the whole universe is aquantum computerthat computes its own behaviour.[5]The universe/nature as computational mechanism is addressed by,[6]exploring nature with help the ideas of computability, and[7]studying natural processes as computations (information processing). The most established "classical" nature-inspired models of computation are cellular automata, neural computation, and evolutionary computation. More recent computational systems abstracted from natural processes include swarm intelligence, artificial immune systems, membrane computing, and amorphous computing. Detailed reviews can be found in many books .[8][9] A cellular automaton is adynamical systemconsisting of an array of cells. Space and time are discrete and each of the cells can be in a finite number ofstates. The cellular automaton updates the states of its cells synchronously according to the transition rules givena priori. The next state of a cell is computed by a transition rule and it depends only on its current state and the states of its neighbors. Conway's Game of Lifeis one of the best-known examples of cellular automata, shown to becomputationally universal. Cellular automata have been applied to modelling a variety of phenomena such as communication, growth, reproduction, competition, evolution and other physical and biological processes. 
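The synchronous update rule of a cellular automaton is straightforward to write down. The sketch below implements one step of Conway's Game of Life on a small toroidal grid; the grid size and the initial pattern (a glider) are choices made only for the example.

```python
def life_step(grid):
    """Synchronously update every cell from its current state and its 8 neighbours
    (toroidal boundary), using Conway's birth/survival rules."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neighbours = sum(grid[(r + dr) % rows][(c + dc) % cols]
                             for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                             if (dr, dc) != (0, 0))
            if grid[r][c] == 1:
                new[r][c] = 1 if neighbours in (2, 3) else 0   # survival
            else:
                new[r][c] = 1 if neighbours == 3 else 0        # birth
    return new

# A glider on an 8x8 toroidal grid.
grid = [[0] * 8 for _ in range(8)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(4):                 # after 4 steps the glider has moved one cell diagonally
    grid = life_step(grid)
print(*("".join("#" if cell else "." for cell in row) for row in grid), sep="\n")
```

The next state of each cell depends only on its current state and those of its neighbours, which is exactly the locality property that makes cellular automata useful for modelling growth, competition and other spatial processes.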
Neural computation is the field of research that emerged from the comparison betweencomputing machinesand the humannervous system.[10]This field aims both to understand how thebrainofliving organismsworks (brain theoryorcomputational neuroscience), and to design efficient algorithms based on the principles of how the human brain processes information (Artificial Neural Networks, ANN[11]). Anartificial neural networkis a network ofartificial neurons.[12]An artificial neuronAis equipped with a functionfA{\displaystyle f_{A}}, receivesnreal-valuedinputsx1,x2,…,xn{\displaystyle x_{1},x_{2},\ldots ,x_{n}}with respectiveweightsw1,w2,…,wn{\displaystyle w_{1},w_{2},\ldots ,w_{n}}, and it outputsfA(w1x1+w2x2+…+wnxn){\displaystyle f_{A}(w_{1}x_{1}+w_{2}x_{2}+\ldots +w_{n}x_{n})}. Some neurons are selected to be the output neurons, and the network function is the vectorial function that associates to theninput values, the outputs of themselected output neurons. Note that different choices of weights produce different network functions for the same inputs. Back-propagation is asupervised learning methodby which the weights of the connections in the network are repeatedly adjusted so as to minimize the difference between the vector of actual outputs and that of desired outputs.Learning algorithmsbased onbackwards propagation of errorscan be used to find optimal weights for giventopology of the networkand input-output pairs. Evolutionary computation[13]is a computational paradigm inspired byDarwinian evolution. An artificial evolutionary system is a computational system based on the notion of simulated evolution. It comprises a constant- or variable-size population of individuals, afitness criterion, and genetically inspired operators that produce the nextgenerationfrom the current one. The initial population is typically generated randomly or heuristically, and typical operators aremutationandrecombination. At each step, the individuals are evaluated according to the given fitness function (survival of the fittest). The next generation is obtained from selected individuals (parents) by using genetically inspired operators. The choice of parents can be guided by a selection operator which reflects the biological principle ofmate selection. This process of simulatedevolutioneventually converges towards a nearly optimal population of individuals, from the point of view of the fitness function. The study of evolutionary systems has historically evolved along three main branches:Evolution strategiesprovide a solution toparameter optimization problemsfor real-valued as well as discrete and mixed types of parameters.Evolutionary programmingoriginally aimed at creating optimal "intelligent agents" modelled, e.g., as finite state machines.Genetic algorithms[14]applied the idea of evolutionary computation to the problem of finding a (nearly-)optimal solution to a given problem. Genetic algorithms initially consisted of an input population of individuals encoded as fixed-length bit strings, the genetic operators mutation (bit flips) and recombination (combination of a prefix of a parent with the suffix of the other), and a problem-dependent fitness function. Genetic algorithms have been used to optimize computer programs, calledgenetic programming, and today they are also applied to real-valued parameter optimization problems as well as to many types ofcombinatorial tasks. 
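A genetic algorithm of the kind just described, with fixed-length bit strings, bit-flip mutation and recombination of a prefix of one parent with the suffix of the other, can be sketched as follows. The fitness function (counting 1-bits, the classic "OneMax" toy problem), the selection scheme and all rates and sizes are arbitrary choices made for the illustration, not part of any particular algorithm from the text.

```python
import random

LENGTH, POP, GENERATIONS = 20, 30, 60
rng = random.Random(1)

def fitness(bits):
    """Toy problem: the more 1-bits, the fitter the individual."""
    return sum(bits)

def mutate(bits, rate=0.02):
    """Bit-flip mutation."""
    return [b ^ 1 if rng.random() < rate else b for b in bits]

def recombine(p1, p2):
    """Combine a prefix of one parent with the suffix of the other."""
    cut = rng.randrange(1, LENGTH)
    return p1[:cut] + p2[cut:]

population = [[rng.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    graded = sorted(population, key=fitness, reverse=True)
    parents = graded[:POP // 2]                      # truncation selection (one simple option)
    population = [mutate(recombine(rng.choice(parents), rng.choice(parents)))
                  for _ in range(POP)]
print(max(fitness(ind) for ind in population))       # usually at or near the maximum of 20
```

Over the generations the fitter bit strings contribute more offspring, so the population converges towards strings of all ones, the optimum of this toy fitness function.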
Estimation of Distribution Algorithm(EDA), on the other hand, are evolutionary algorithms that substitute traditional reproduction operators by model-guided ones. Such models are learned from the population by employing machine learning techniques and represented as Probabilistic Graphical Models, from which new solutions can be sampled[15][16]or generated from guided-crossover.[17][18] Swarm intelligence,[19]sometimes referred to ascollective intelligence, is defined as the problem solving behavior that emerges from the interaction ofindividual agents(e.g.,bacteria,ants,termites,bees,spiders,fish,birds) which communicate with other agents by acting on theirlocal environments. Particle swarm optimizationapplies this idea to the problem of finding an optimal solution to a given problem by a search through a (multi-dimensional)solution space. The initial set-up is a swarm ofparticles, each representing a possible solution to the problem. Each particle has its ownvelocitywhich depends on its previous velocity (the inertia component), the tendency towards the past personal best position (the nostalgia component), and its tendency towards a global neighborhood optimum or local neighborhood optimum (the social component). Particles thus move through a multidimensional space and eventually converge towards a point between theglobal bestand their personal best. Particle swarm optimization algorithms have been applied to various optimization problems, and tounsupervised learning,game learning, andschedulingapplications. In the same vein,ant algorithmsmodel the foraging behaviour of ant colonies. To find the best path between the nest and a source of food, ants rely on indirect communication by laying apheromonetrail on the way back to the nest if they found food, respectively following the concentration of pheromones if they are looking for food. Ant algorithms have been successfully applied to a variety of combinatorial optimization problems over discrete search spaces. Artificial immune systems (a.k.a. immunological computation orimmunocomputing) are computational systems inspired by the natural immune systems of biological organisms. Viewed as an information processing system, thenatural immune systemof organisms performs many complex tasks inparallelanddistributed computingfashion.[20]These include distinguishing between self andnonself,[21]neutralizationof nonselfpathogens(viruses, bacteria,fungi, andparasites),learning,memory,associative retrieval,self-regulation, andfault-tolerance.Artificial immune systemsare abstractions of the natural immune system, emphasizing these computational aspects. Their applications includecomputer virus detection,anomaly detectionin a time series of data,fault diagnosis,pattern recognition, machine learning,bioinformatics, optimization,roboticsandcontrol. Membrane computinginvestigates computing models abstracted from thecompartmentalized structureof living cells affected bymembranes.[22]A generic membrane system (P-system) consists of cell-like compartments (regions) delimited bymembranes, that are placed in anested hierarchicalstructure. Each membrane-enveloped region contains objects, transformation rules which modify these objects, as well as transfer rules, which specify whether the objects will be transferred outside or stay inside the region. Regions communicate with each other via the transfer of objects. 
The computation by a membrane system starts with an initial configuration, where the number (multiplicity) of each object is set to some value for each region (multiset of objects). It proceeds by choosing,nondeterministicallyand in amaximally parallel manner, which rules are applied to which objects. The output of the computation is collected from ana prioridetermined output region. Applications of membrane systems include machine learning, modelling of biological processes (photosynthesis, certainsignaling pathways,quorum sensingin bacteria, cell-mediatedimmunity), as well as computer science applications such ascomputer graphics,public-key cryptography,approximationandsorting algorithms, and the analysis of variouscomputationally hard problems. In biological organisms,morphogenesis(the development of well-defined shapes and functional structures) is achieved by the interactions between cells guided by the geneticprogramencoded in the organism's DNA. Inspired by this idea,amorphous computingaims at engineering well-defined shapes and patterns, or coherent computational behaviours, from the local interactions of a multitude of simple unreliable, irregularly placed, asynchronous, identically programmed computing elements (particles).[23]As a programming paradigm, the aim is to find newprogramming techniquesthat would work well for amorphous computing environments. Amorphous computing also plays an important role as the basis for "cellular computing" (see the topicssynthetic biologyandcellular computing, below). The understanding that morphology performs computation is used to analyze the relationship between morphology and control and to theoretically guide the design of robots with reduced control requirements; it has been applied both in robotics and to the understanding of cognitive processes in living organisms; seeMorphological computation.[24] Cognitive computing (CC) is a new type of computing, typically with the goal of modelling functions of human sensing, reasoning, and response to stimulus; seeCognitive computing.[25] Cognitive capacities of present-day cognitive computing are far from human level. The same info-computational approach can be applied to other, simpler living organisms. Bacteria are an example of a cognitive system modelled computationally, seeEshel Ben-JacobandMicrobes-mind. Artificial life(ALife) is a research field whose ultimate goal is to understand the essential properties of living organisms[26]by building, within electronic computers or other artificial media,ab initiosystems that exhibit properties normally associated only with living organisms. Early examples includeLindenmayer systems(L-systems), which have been used to model plant growth and development. An L-system is a parallel rewriting system that starts with an initial word, and applies its rewriting rules in parallel to all letters of the word.[27] Pioneering experiments in artificial life included the design of evolving "virtual block creatures" acting in simulated environments with realistic features such askinetics,dynamics,gravity,collision, andfriction.[28]These artificial creatures were selected for their abilities to swim, or walk, or jump, and they competed for a common limited resource (controlling a cube). The simulation resulted in the evolution of creatures exhibiting surprising behaviour: some developed hands to grab the cube, others developed legs to move towards the cube. 
This computational approach was further combined with rapid manufacturing technology to actually build the physical robots that virtually evolved.[29]This marked the emergence of the field ofmechanical artificial life. The field ofsynthetic biologyexplores a biological implementation of similar ideas. Other research directions within the field of artificial life includeartificial chemistryas well as traditionally biological phenomena explored in artificial systems, ranging from computational processes such asco-evolutionaryadaptation and development, to physical processes such as growth,self-replication, andself-repair. All of the computational techniques mentioned above, while inspired by nature, have been implemented until now mostly on traditionalelectronic hardware. In contrast, the two paradigms introduced here,molecular computingandquantum computing, employ radically different types of hardware. Molecular computing(a.k.a. biomolecular computing, biocomputing, biochemical computing,DNA computing) is a computational paradigm in which data is encoded asbiomoleculessuch asDNA strands, and molecular biology tools act on the data to perform various operations (e.g.,arithmeticorlogical operations). The first experimental realization of special-purpose molecular computer was the 1994 breakthrough experiment byLeonard Adlemanwho solved a 7-node instance of theHamiltonian Path Problemsolely by manipulating DNA strands in test tubes.[30]DNA computations start from an initial input encoded as a DNA sequence (essentially a sequence over the four-letter alphabet {A, C, G, T}), and proceed by a succession of bio-operations such as cut-and-paste (byrestriction enzymesandligases), extraction of strands containing a certain subsequence (by using Watson-Crick complementarity), copy (by usingpolymerase chain reactionthat employs the polymerase enzyme), and read-out.[31]Recent experimental research succeeded in solving more complex instances ofNP-completeproblems such as a 20-variable instance of3SAT, and wet DNA implementations of finite state machines with potential applications to the design ofsmart drugs. One of the most notable contributions of research in this field is to the understanding ofself-assembly.[33]Self-assembly is thebottom-upprocess by which objects autonomously come together to form complex structures. Instances in nature abound, and includeatomsbinding by chemical bonds to formmolecules, and molecules formingcrystalsormacromolecules. Examples of self-assembly research topics include self-assembled DNA nanostructures[34]such asSierpinski triangles[35]or arbitrary nanoshapes obtained using theDNA origami[36]technique, and DNA nanomachines[37]such as DNA-based circuits (binary counter,bit-wise cumulative XOR), ribozymes for logic operations, molecular switches (DNA tweezers), and autonomous molecular motors (DNA walkers). Theoretical research in molecular computing has yielded several novel models of DNA computing (e.g.splicing systemsintroduced by Tom Head already in 1987) and their computational power has been investigated.[38]Various subsets of bio-operations are now known to be able to achieve the computational power ofTuring machines[citation needed]. A quantum computer[39]processes data stored as quantum bits (qubits), and uses quantum mechanical phenomena such assuperpositionandentanglementto perform computations. A qubit can hold a "0", a "1", or a quantum superposition of these. A quantum computer operates on qubits withquantum logic gates. 
ThroughShor's polynomial-time algorithmfor factoring integers andGrover's algorithmfor quantum database search, which has a quadratic time advantage, quantum computers were shown to potentially possess a significant benefit relative to electronic computers. Quantum cryptographyis not based on thecomplexity of the computation, but on the special properties ofquantum information, such as the fact that quantum information cannot be measured reliably and any attempt at measuring it results in an unavoidable and irreversible disturbance. A successful open-air experiment in quantum cryptography was reported in 2007, where data was transmitted securely over a distance of 144 km.[40]Quantum teleportationis another promising application, in which a quantum state (not matter or energy) is transferred to an arbitrary distant location. Implementations of practical quantum computers are based on various substrates such asion-traps,superconductors,nuclear magnetic resonance, etc. As of 2006, the largest quantum computing experiment used liquid state nuclear magnetic resonance quantum information processors, and could operate on up to 12 qubits.[41] The dual aspect of natural computation is that it aims to understand nature by regarding natural phenomena as information processing. Already in the 1960s, Zuse and Fredkin suggested the idea that the entire universe is a computational (information processing) mechanism, modelled as a cellular automaton which continuously updates its rules.[3][4]A recent quantum-mechanical approach of Lloyd views the universe as a quantum computer that computes its own behaviour,[5]while Vedral[42]suggests that information is the most fundamental building block of reality. The view of the universe/nature as a computational mechanism is elaborated in,[6]which explores nature with the help of the ideas of computability, whilst,[7]based on the idea of nature as a network of networks of information processes on different levels of organization, studies natural processes as computations (information processing). The main directions of research in this area aresystems biology,synthetic biologyandcellular computing. Computational systems biology (or simply systems biology) is an integrative and qualitative approach that investigates the complex communications and interactions taking place in biological systems. Thus, in systems biology, the focus of the study is theinteraction networksthemselves and the properties of biological systems that arise due to these networks, rather than the individual components of functional processes in an organism. This type of research on organic components has focused strongly on four different interdependent interaction networks:[43]gene-regulatory networks, biochemical networks, transport networks, and carbohydrate networks. Gene regulatory networkscomprise gene-gene interactions, as well as interactions between genes and other substances in the cell.Genesare transcribed intomessenger RNA(mRNA), and then translated intoproteinsaccording to thegenetic code. Each gene is associated with other DNA segments (promoters,enhancers, orsilencers) that act asbinding sitesforactivatorsorrepressorsforgene transcription. Genes interact with each other either through their gene products (mRNA, proteins) which can regulate gene transcription, or through smallRNA speciesthat can directly regulate genes. Thesegene-gene interactions, together with genes' interactions with other substances in the cell, form the most basic interaction network: thegene regulatory networks.
They perform information processing tasks within the cell, including the assembly and maintenance of other networks. Models of gene regulatory networks include random and probabilisticBoolean networks,asynchronous automata, andnetwork motifs. Another viewpoint is that the entire genomic regulatory system is a computational system, agenomic computer. This interpretation allows one to compare human-made electronic computation with computation as it occurs in nature.[44] In addition, unlike a conventional computer, robustness in a genomic computer is achieved by variousfeedback mechanismsby which poorly functional processes are rapidly degraded, poorly functional cells are killed byapoptosis, and poorly functional organisms are out-competed by more fit species. Biochemical networksrefer to the interactions between proteins, and they perform various mechanical and metabolic tasks inside a cell. Two or more proteins may bind to each other via binding of their interactions sites, and form a dynamic protein complex (complexation). These protein complexes may act ascatalystsfor other chemical reactions, or may chemically modify each other. Such modifications cause changes to available binding sites of proteins. There are tens of thousands of proteins in a cell, and they interact with each other. To describe such a massive scale interactions,Kohn maps[45]were introduced as a graphical notation to depict molecular interactions in succinct pictures. Other approaches to describing accurately and succinctly protein–protein interactions include the use oftextual bio-calculus[46]orpi-calculusenriched with stochastic features.[47] Transport networksrefer to the separation and transport of substances mediated by lipid membranes. Some lipids can self-assemble into biological membranes. A lipid membrane consists of alipid bilayerin which proteins and other molecules are embedded, being able to travel along this layer. Through lipid bilayers, substances are transported between the inside and outside of membranes to interact with other molecules. Formalisms depicting transport networks include membrane systems andbrane calculi.[48] Synthetic biology aims at engineering synthetic biological components, with the ultimate goal of assembling whole biological systems from their constituent components. The history of synthetic biology can be traced back to the 1960s, whenFrançois JacobandJacques Monoddiscovered the mathematical logic in gene regulation. Genetic engineering techniques, based onrecombinant DNAtechnology, are a precursor of today's synthetic biology which extends these techniques to entire systems of genes and gene products. Along with the possibility of synthesizing longer and longer DNA strands, the prospect of creating synthetic genomes with the purpose of building entirely artificialsynthetic organismsbecame a reality. Indeed, rapid assembly of chemically synthesized short DNA strands made it possible to generate a 5386bp synthetic genome of a virus.[49] Alternatively, Smith et al. found about 100 genes that can be removed individually from the genome ofMycoplasma Genitalium. This discovery paves the way to the assembly of a minimal but still viable artificial genome consisting of the essential genes only. 
A third approach to engineering semi-synthetic cells is the construction of a single type of RNA-like molecule with the ability of self-replication.[50]Such a molecule could be obtained by guiding the rapid evolution of an initial population of RNA-like molecules, by selection for the desired traits. Another effort in this field is towards engineering multi-cellular systems by designing, e.g.,cell-to-cell communication modulesused to coordinate living bacterial cell populations.[51] Computation in living cells (a.k.a.cellular computing, orin-vivo computing) is another approach to understand nature as computation. One particular study in this area is that of the computational nature of gene assembly in unicellular organisms calledciliates. Ciliates store a copy of their DNA containing functional genes in themacronucleus, and another "encrypted" copy in themicronucleus. Conjugation of two ciliates consists of the exchange of their micronuclear genetic information, leading to the formation of two new micronuclei, followed by each ciliate re-assembling the information from its new micronucleus to construct a new functional macronucleus. The latter process is calledgene assembly, or gene re-arrangement. It involves re-ordering some fragments of DNA (permutationsand possiblyinversion) and deleting other fragments from the micronuclear copy. From the computational point of view, the study of this gene assembly process led to many challenging research themes and results, such as the Turing universality of various models of this process.[52]From the biological point of view, a plausible hypothesis about the "bioware" that implements the gene-assembly process was proposed, based ontemplate guided recombination.[53][54] Other approaches to cellular computing include developing anin vivoprogrammable and autonomous finite-state automaton withE. coli,[55]designing and constructingin vivocellular logic gates and genetic circuits that harness the cell's existing biochemical processes (see for example[56]) and the global optimization ofstomataaperture in leaves, following a set of local rules resembling acellular automaton.[57] Many of the constituent research areas of natural computing have their own specialized journals and book series. Journals and book series dedicated to the broad field of Natural Computing include the journalsNatural Computing(Springer Verlag),Theoretical Computer Science, Series C: Theory of Natural Computing(Elsevier),the Natural Computing book series(Springer Verlag), and theHandbook of Natural Computing(G.Rozenberg, T.Back, J.Kok, Editors, Springer Verlag).
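Returning to the particle swarm optimization technique surveyed earlier, the following minimal Python sketch illustrates the velocity update with its inertia, nostalgia (personal best) and social (global best) components; the function name, coefficient values and the sphere-function example are illustrative assumptions rather than any published implementation.

import random

def pso_minimize(f, dim, n_particles=30, iterations=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize f over a box using the basic particle swarm update rule."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [f(p) for p in pos]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best

    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + nostalgia (personal best) + social (global best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the sphere function in 3 dimensions.
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=3)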
https://en.wikipedia.org/wiki/Natural_computation
Olaf Sporns(born 18 September 1963) is Provost Professor in Psychological and Brain Sciences atIndiana Universityand scientific co-director of the university's Network Science Institute.[1]He is the founding editor of theacademic journalNetwork Neuroscience, published byMIT Press.[citation needed][2] Sporns received his degree fromUniversity of TübingeninTübingen, West Germany, before going toNew Yorkto study at theRockefeller UniversityunderGerald Edelman. After receiving his doctorate, he followed Edelman to theNeurosciences InstituteinLa Jolla,California. His focus is in the area of computational cognitive neuroscience. His topics of study include functional integration and binding in the cerebral cortex, neural models of perception and action, network structure and dynamics, applications of information theory to the brain and embodied cognitive science using robotics.[3]He was awarded aGuggenheim Fellowshipin 2011 in the Natural Sciences category.[citation needed] One of Sporns's core areas of research is the complexity of the brain. One aspect in particular is howsmall-world networkeffects are seen in the neural connections which are decentralized in the brain.[4]Research in collaboration with scientists across the world has revealed that there are pathways in the brain that are very well connected.[5]This is insightful for understanding how the architecture of the brain may relate toschizophrenia,autismandAlzheimer's disease. Sporns is also interested in understanding the relationship between statistical properties of neuronal populations and perceptual data. How does an organism use and structure its environment in such a way as to achieve (statistically) complex input? To this end, he has run statistical analyses on movement patterns and input within simulations, videos and robotic devices.[citation needed] Sporns also has a research interest in reward models of the brain utilizing robots.[6]The reward models have shown ways in whichdopaminerelease is affected bydrug addiction. Though not directly related to his core research, in the early 2000s Sporns was interested indeveloping robots with human-like qualities in their ability to learn.[7]
https://en.wikipedia.org/wiki/Olaf_Sporns
Organic computingis computing that behaves and interacts with humans in anorganic manner. The term "organic" is used to describe the system's behavior, and does not imply that they are constructed fromorganic materials. It is based on the insight that we will soon be surrounded by large collections ofautonomous systems, which are equipped withsensorsandactuators, aware of their environment, communicate freely, and organize themselves in order to perform the actions and services that seem to be required. The goal is to construct such systems as robust, safe, flexible, and trustworthy as possible. In particular, a strong orientation towards human needs as opposed to a pure implementation of the technologically possible seems absolutely central. In order to achieve these goals, our technical systems will have to act more independently, flexibly, and autonomously, i.e. they will have to exhibit lifelike properties. We call such systems "organic". Hence, an "Organic Computing System" is a technical system which adapts dynamically to exogenous and endogenous change. It is characterized by the properties of self-organization,self-configuration, self-optimization,self-healing, self-protection,self-explaining, andcontext awareness. It can be seen as an extension of theAutonomic computingvision of IBM. In a variety of research projects the priority research programSPP 1183of the German Research Foundation (DFG) addresses fundamental challenges in the design of Organic Computing systems; its objective is a deeper understanding of emergent global behavior in self-organizing systems and the design of specific concepts and tools to support the construction of Organic Computing systems for technical applications.
https://en.wikipedia.org/wiki/Organic_computing
Chisanboporchisenbop(from Koreanchi (ji)finger +sanpŏp (sanbeop)calculation[1]지산법/指算法), sometimes calledFingermath,[2]is afinger countingmethod used to perform basicmathematicaloperations. According toThe Complete Book of Chisanbop[3]by Hang Young Pai, chisanbop was created in the 1940s inKoreaby Sung Jin Pai and revised by his son Hang Young Pai, who brought the system to theUnited Statesin 1977. With thechisanbopmethod it is possible to represent all numbers from 0 to 99 with the hands, rather than the usual 0 to 10, and to perform the addition, subtraction, multiplication and division of numbers.[4]The system has been described as being easier to use than a physical abacus for students with visual impairments.[5] Each finger has a value of one, while the thumb has a value of five. Therefore each hand can represent the digits 0-9, rather than the usual 0-5. The two hands combine to represent two digits; the right hand is the ones place, and the left hand is the tens place. This way, any number from 0 to 99 can be shown, and it's possible to count up to 99 instead of just 10. The hands can be held above a table, with the fingers pressing down on the table; or the hands can simply be held up, fingers extended, as with the more common practice of 0-10 counting.[6] Chisanbop can be used for teaching math, or simply for counting. The results for teaching math have been mixed. A school inShawnee Mission, Kansas, ran a pilot program with students in 1979. It was found that although they could add large numbers quickly, they could not add them in their heads. The program was dropped. Grace Burton of theUniversity of North Carolinasaid, "It doesn't teach the basic number facts, only to count faster. Adding and subtracting quickly are only a small part of mathematics."[7]
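The finger-value scheme described above (thumb worth 5, each finger worth 1, right hand for the ones place and left hand for the tens place) can be sketched in Python as follows; the function and field names are illustrative only.

def chisanbop_hands(n):
    """Return (left, right) finger counts for 0 <= n <= 99 in chisanbop style.

    Each hand encodes one decimal digit: the thumb counts 5 and each of the
    four fingers counts 1, so a digit d is shown as (d // 5) thumbs and
    (d % 5) fingers.  The left hand is the tens place, the right the ones.
    """
    if not 0 <= n <= 99:
        raise ValueError("chisanbop represents 0..99 only")
    tens, ones = divmod(n, 10)
    def hand(digit):
        thumbs, fingers = divmod(digit, 5)
        return {"thumb": thumbs, "fingers": fingers}
    return hand(tens), hand(ones)

# 73 -> left hand: thumb + 2 fingers (7 tens), right hand: 3 fingers (3 ones)
print(chisanbop_hands(73))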
https://en.wikipedia.org/wiki/Chisanbop
Finger binaryis a system forcountingand displayingbinary numberson thefingersof either or bothhands. Each finger represents one binary digit orbit. This allows counting from zero to 31 using the fingers of one hand, or 1023 using both: that is, up to{\displaystyle 2^{5}-1}or{\displaystyle 2^{10}-1}respectively. Modern computers typically store values as some whole number of 8-bitbytes, making the fingers of both hands together equivalent to 1¼ bytes of storage—in contrast to less than half a byte when using ten fingers to count up to 10.[1] In the binary number system, eachnumerical digithas two possible states (0 or 1) and each successive digit represents an increasingpower of two. Note: What follows is but one of several possible schemes for assigning the values 1, 2, 4, 8, 16, etc. to fingers, not necessarily the best. The rightmost digit represents two to thezeroth power(i.e., it is the "ones digit"); the digit to its left represents two to the first power (the "twos digit"); the next digit to the left represents two to the second power (the "fours digit"); and so on. (Thedecimal number systemis essentially the same, only that powers of ten are used: "ones digit", "tens digit", "hundreds digit", etc.) It is possible to useanatomical digitsto representnumerical digitsby using a raised finger to represent a binary digit in the "1" state and a lowered finger to represent it in the "0" state. Each successive finger represents a higher power of two. With palms oriented toward the counter's face, values are assigned to the fingers of the right hand alone, the left hand alone, or both hands together; alternative assignments apply with the palms oriented away from the counter. The values of each raised finger are added together to arrive at a total number. In the one-handed version, all fingers raised is thus31(16 + 8 + 4 + 2 + 1), and all fingers lowered (a fist) is 0. In the two-handed system, all fingers raised is1,023(512 + 256 + 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1) and two fists (no fingers raised) represents 0. It is also possible to have each hand represent an independent number between 0 and 31; this can be used to represent various types of paired numbers, such asmonthandday, X-Ycoordinates, or sports scores (such as fortable tennisorbaseball). Showing the time as hours and minutes is possible using 10 fingers, with the hour using four fingers (0–15, which suffices for a 12-hour clock) and the minutes using six fingers (0–59). Just as fractional and negative numbers can be represented in binary, they can be represented in finger binary. Representing negative numbers is extremely simple, by using the leftmost finger as asign bit: raised means the number is negative, in asign-magnitudesystem. Anywhere between −511 and +511 can be represented this way, using two hands. Note that, in this system, both a positive and a negative zero may be represented. If a convention were reached on palm up/palm down or fingers pointing up/down representing positive/negative, you could maintain{\displaystyle 2^{10}-1}in both positive and negative numbers (−1,023 to +1,023, with positive and negative zero still represented). Fractions can be stored natively in a binary format by having each finger represent a fractional power of two:{\displaystyle {\tfrac {1}{2^{x}}}}. (These are known asdyadic fractions.)
Using the left hand only: Using two hands: The total is calculated by adding all the values in the same way as regular (non-fractional) finger binary, then dividing by the largest fractional power being used (32 for one-handed fractional binary, 1024 for two-handed), andsimplifying the fractionas necessary. For example, with thumb and index finger raised on the left hand and no fingers raised on the right hand, this is (512 + 256)/1024 = 768/1024 = 3/4. If using only one hand (left or right), it would be (16 + 8)/32 = 24/32 = 3/4 also. The simplification process can itself be greatly simplified by performing abit shiftoperation: all digits to the right of the rightmost raised finger (i.e., all trailing zeros) are discarded and the rightmost raised finger is treated as the ones digit. The digits are added together using their now-shifted values to determine thenumeratorand the rightmost finger's original value is used to determine thedenominator. For instance, if the thumb and index finger on the left hand are the only raised digits, the rightmost raised finger (the index finger) becomes "1". The thumb, to its immediate left, is now the 2s digit; added together, they equal 3. The index finger's original value (1/4) determines the denominator: the result is 3/4. Combinedintegerand fractional values (i.e.,rational numbers) can be represented by setting aradix pointsomewhere between two fingers (for instance, between the left and right pinkies). All digits to the left of the radix point are integers; those to the right are fractional. Dyadic fractions, explained above, have limited use in a society based around decimal figures. A simple non-dyadic fraction such as 1/3 can be approximated as 341/1024 (0.3330078125), but the conversion between dyadic anddecimal(0.333) orvulgar(1/3) forms is complicated. Instead, either decimal or vulgar fractions can be represented natively in finger binary. Decimal fractions can be represented by using regular integer binary methods and dividing the result by 10, 100, 1000, or some other power of ten. Numbers between 0 and 102.3, 10.23, 1.023, etc. can be represented this way, in increments of 0.1, 0.01, 0.001, etc. Vulgar fractionscan be represented by using one hand to represent thenumeratorand one hand to represent thedenominator; a spectrum of rational numbers can be represented this way, ranging from 1/31 to 31/1 (as well as 0). In theory, it is possible to use other positions of the fingers to represent more than two states (0 and 1); for instance, aternary numeral system(base3) could be used by having a fully raised finger represent 2, fully lowered represent 0, and "curled" (half-lowered) represent 1. This would make it possible to count up to 242 (35−1) on one hand or 59,048 (310−1) on two hands. In practice, however, many people will find it difficult to hold all fingers independently (especially the middle and ring fingers) in more than two distinct positions.
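A minimal Python sketch of the integer part of finger binary, assuming the bit-to-finger assignment described above (each successive finger worth the next power of two); the names and the exact finger ordering are illustrative conventions, since the text notes that several assignment schemes are possible.

def fingers_from_number(n, hands=2):
    """Return the raised/lowered state of each finger for 0 <= n < 2**(5*hands).

    Bit i of n (least significant first) is assigned to finger i, so each
    successive finger stands for the next higher power of two; which physical
    finger carries which bit is purely a matter of convention.
    """
    bits = 5 * hands
    if not 0 <= n < 2 ** bits:
        raise ValueError(f"need 0 <= n < {2 ** bits}")
    return [(n >> i) & 1 for i in range(bits)]   # 1 = raised, 0 = lowered

def number_from_fingers(states):
    """Inverse mapping: sum the powers of two of the raised fingers."""
    return sum(bit << i for i, bit in enumerate(states))

assert number_from_fingers(fingers_from_number(1023)) == 1023  # all ten raised
assert number_from_fingers([1, 1, 0, 0, 0, 0, 0, 0, 0, 0]) == 3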
https://en.wikipedia.org/wiki/Finger_binary
Quinary(base 5orpental[1][2][3]) is anumeral systemwithfiveas thebase. A possible origination of a quinary system is that there are fivedigitson eitherhand. In the quinary place system, five numerals, from0to4, are used to represent anyreal number. According to this method,fiveis written as 10,twenty-fiveis written as 100, andsixtyis written as 220. As five is a prime number, only the reciprocals of the powers of five terminate, although its location between twohighly composite numbers(4and6) guarantees that many recurring fractions have relatively short periods. Many languages[4]use quinary number systems, includingGumatj,Nunggubuyu,[5]Kuurn Kopan Noot,[6]Luiseño,[7]andSaraveca. Gumatj has been reported to be a true "5–25" language, in which 25 is the higher group of 5. The Gumatj numerals are shown below:[5] However, Harald Hammarström reports that "one would not usually use exact numbers for counting this high in this language and there is a certain likelihood that the system was extended this high only at the time of elicitation with one single speaker," pointing to theBiwat languageas a similar case (previously attested as 5-20, but with one speaker recorded as making an innovation to turn it 5-25).[4] Adecimalsystem with two and five as a sub-bases is calledbiquinaryand is found inWolofandKhmer.Roman numeralsare an early biquinary system. The numbers1,5,10, and50are written asI,V,X, andLrespectively. Seven isVII, and seventy isLXX. The full list of symbols is: Note that these are not positional number systems. In theory, a number such as 73 could be written as IIIXXL (without ambiguity) and as LXXIII. To extend Roman numerals to beyond thousands, avinculum(horizontal overline) was added, multiplying the letter value by a thousand, e.g. overlinedM̅was one million. There is also no sign for zero. But with the introduction of inversions like IV and IX, it was necessary to keep the order from most to least significant. Many versions of theabacus, such as thesuanpanandsoroban, use a biquinary system to simulate a decimal system for ease of calculation.Urnfield culture numeralsand sometally marksystems are also biquinary. Units ofcurrenciesare commonly partially or wholly biquinary. Bi-quinary coded decimalis a variant of biquinary that was used on a number of early computers includingColossusand theIBM 650to represent decimal numbers. Fewcalculatorssupport calculations in the quinary system, except for someSharpmodels (including some of theEL-500WandEL-500Xseries, where it is named thepental system[1][2][3]) since about 2005, as well as the open-source scientific calculatorWP 34S.
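The positional quinary representation described above can be computed by repeated division by five. The short Python sketch below (function names are illustrative) reproduces the article's examples: five is written 10, twenty-five is written 100, and sixty is written 220.

def to_quinary(n):
    """Convert a non-negative integer to its base-5 (quinary) representation."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, 5)
        digits.append(str(r))
    return "".join(reversed(digits))

def from_quinary(s):
    """Convert a quinary digit string back to an integer."""
    return int(s, 5)

assert to_quinary(5) == "10"
assert to_quinary(25) == "100"
assert to_quinary(60) == "220"
assert from_quinary("220") == 60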
https://en.wikipedia.org/wiki/Quinary
TheFACOM 128was a relay-basedelectromechanical computerbuilt byFujitsu. Two models were made, namely the FACOM 128A, built in 1956, and the FACOM 128B, built in 1959.[1]As of 2019, a FACOM 128B is still in working order, maintained by Fujitsu staff at a facility inNumazuinShizuoka Prefecture.[2][3] The FACOM 128B processes numbers using abi-quinary coded decimalrepresentation.[4]
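As a rough illustration of bi-quinary coded decimal, the Python sketch below encodes a single decimal digit as a pair of one-hot groups, in the style described above for biquinary devices such as the abacus and the IBM 650; the exact wire-level encoding used by the FACOM 128's relays is not specified in the text, so this is an assumption-laden sketch rather than a description of that machine.

def biquinary_digit(d):
    """Encode one decimal digit as a (bi, quinary) pair of one-hot groups.

    Two "bi" positions stand for the values 0 and 5, five "quinary" positions
    stand for 0-4, and exactly one position is set in each group (the scheme
    commonly attributed to bi-quinary machines; details vary by machine).
    """
    if not 0 <= d <= 9:
        raise ValueError("a single decimal digit is expected")
    hi, lo = divmod(d, 5)            # hi selects 0 or 5, lo selects 0..4
    bi = [1 - hi, hi]                # one-hot over the values (0, 5)
    quinary = [1 if i == lo else 0 for i in range(5)]  # one-hot over 0..4
    return bi, quinary

# 7 = 5 + 2  ->  the bi group selects "5", the quinary group selects "2"
assert biquinary_digit(7) == ([0, 1], [0, 0, 1, 0, 0])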
https://en.wikipedia.org/wiki/FACOM_128
Incombinatorialmathematics, ade Bruijn sequenceof ordernon a size-kalphabetAis acyclic sequencein which every possible length-nstringonAoccurs exactly once as asubstring(i.e., as acontiguoussubsequence). Such a sequence is denoted byB(k,n)and has length{\displaystyle k^{n}}, which is also the number of distinct strings of lengthnonA. Each of these distinct strings, when taken as a substring ofB(k,n), must start at a different position, because substrings starting at the same position are not distinct. Therefore,B(k,n)must haveat least{\displaystyle k^{n}}symbols. And sinceB(k,n)hasexactly{\displaystyle k^{n}}symbols, de Bruijn sequences are optimally short with respect to the property of containing every string of lengthnat least once. The number of distinct de Bruijn sequencesB(k,n)is{\displaystyle {\frac {(k!)^{k^{n-1}}}{k^{n}}}.}For a binary alphabet this is22(n−1)−n{\displaystyle 2^{2^{(n-1)}-n}}, leading to the following sequence for positiven{\displaystyle n}:   1, 1, 2, 16, 2048, 67108864... (OEIS:A016031) The sequences are named after the Dutch mathematicianNicolaas Govert de Bruijn, who wrote about them in 1946.[1]As he later wrote,[2]the existence of de Bruijn sequences for each order together with the above properties was firstproved, for the case of alphabets with two elements, by Camille Flye Sainte-Marie (1894). The generalization to larger alphabets is due toTatyana van Aardenne-Ehrenfestand de Bruijn (1951).Automatafor recognizing these sequences are denoted as de Bruijn automata. In many applications,A= {0,1}. The earliest known example of a de Bruijn sequence comes fromSanskrit prosodywhere, since the work ofPingala, each possible three-syllable pattern of long and short syllables is given a name, such as 'y' for short–long–long and 'm' for long–long–long. To remember these names, the mnemonicyamātārājabhānasalagāmis used, in which each three-syllable pattern occurs starting at its name: 'yamātā' has a short–long–long pattern, 'mātārā' has a long–long–long pattern, and so on, until 'salagām' which has a short–short–long pattern. This mnemonic, equivalent to a de Bruijn sequence on binary 3-tuples, is of unknown antiquity, but is at least as old asCharles Philip Brown's 1869 book on Sanskrit prosody that mentions it and considers it "an ancient line, written byPāṇini".[3] In 1894, A. de Rivière raised the question in an issue of the French problem journalL'Intermédiaire des Mathématiciens, of the existence of a circular arrangement of zeroes and ones of size2n{\displaystyle 2^{n}}that contains all2n{\displaystyle 2^{n}}binary sequences of lengthn{\displaystyle n}. The problem was solved (in the affirmative), along with the count of22n−1−n{\displaystyle 2^{2^{n-1}-n}}distinct solutions, by Camille Flye Sainte-Marie in the same year.[2]This was largely forgotten, andMartin (1934)proved the existence of such cycles for general alphabet size in place of 2, with an algorithm for constructing them.
Finally, when in 1944Kees Posthumusconjecturedthe count22n−1−n{\displaystyle 2^{2^{n-1}-n}}for binary sequences, de Bruijn proved the conjecture in 1946, through which the problem became well known.[2] Karl Popperindependently describes these objects in hisThe Logic of Scientific Discovery(1934), calling them "shortest random-like sequences".[4] The de Bruijn sequences can be constructed by taking aHamiltonian pathof ann-dimensionalde Bruijn graphoverksymbols (or equivalently, anEulerian cycleof an (n− 1)-dimensional de Bruijn graph).[5] An alternative construction involves concatenating together, in lexicographic order, all theLyndon wordswhose length dividesn.[6] An inverseBurrows–Wheeler transformcan be used to generate the required Lyndon words in lexicographic order.[7] de Bruijn sequences can also be constructed usingshift registers[8]or viafinite fields.[9] Goal: to construct aB(2, 4) de Bruijn sequence of length 24= 16 using Eulerian (n− 1 = 4 − 1 = 3) 3-D de Bruijn graph cycle. Each edge in this 3-dimensional de Bruijn graph corresponds to a sequence of four digits: the three digits that label the vertex that the edge is leaving followed by the one that labels the edge. If one traverses the edge labeled 1 from 000, one arrives at 001, thereby indicating the presence of the subsequence 0001 in the de Bruijn sequence. To traverse each edge exactly once is to use each of the 16 four-digit sequences exactly once. For example, suppose we follow the following Eulerian path through these vertices: These are the output sequences of lengthk: This corresponds to the following de Bruijn sequence: The eight vertices appear in the sequence in the following way: ...and then we return to the starting point. Each of the eight 3-digit sequences (corresponding to the eight vertices) appears exactly twice, and each of the sixteen 4-digit sequences (corresponding to the 16 edges) appears exactly once. Mathematically, an inverseBurrows—Wheeler transformon a wordwgenerates a multi-set ofequivalence classesconsisting of strings and their rotations.[7]These equivalence classes of strings each contain aLyndon wordas a unique minimum element, so the inverse Burrows—Wheeler transform can be considered to generate a set of Lyndon words. It can be shown that if we perform the inverse Burrows—Wheeler transform on a wordwconsisting of the size-kalphabet repeatedkn−1times (so that it will produce a word the same length as the desired de Bruijn sequence), then the result will be the set of all Lyndon words whose length dividesn. It follows that arranging these Lyndon words in lexicographic order will yield a de Bruijn sequenceB(k,n), and that this will be the first de Bruijn sequence in lexicographic order. The following method can be used to perform the inverse Burrows—Wheeler transform, using itsstandard permutation: For example, to construct the smallestB(2,4) de Bruijn sequence of length 24= 16, repeat the alphabet (ab) 8 times yieldingw=abababababababab. Sort the characters inw, yieldingw′=aaaaaaaabbbbbbbb. Positionw′abovewas shown, and map each element inw′to the corresponding element inwby drawing a line. Number the columns as shown so we can read the cycles of the permutation: Starting from the left, the Standard Permutation notation cycles are:(1) (2 3 5 9) (4 7 13 10) (6 11) (8 15 14 12) (16). (Standard Permutation) Then, replacing each number by the corresponding letter inw′from that column yields:(a)(aaab)(aabb)(ab)(abbb)(b). 
These are all of the Lyndon words whose length divides 4, in lexicographic order, so dropping the parentheses givesB(2,4) = aaaabaabbababbbb. A shortPythonprogram can calculate a de Bruijn sequence, givenkandn, based on an algorithm fromFrank Ruskey'sCombinatorial Generation[10](a sketch of this construction is given below). Note that these sequences are understood to "wrap around" in a cycle. For example, the lexicographically least binary sequence of order 3,B(2, 3) = 00010111, contains 110 and 100 in this fashion. de Bruijn cycles are of general use in neuroscience and psychology experiments that examine the effect of stimulus order upon neural systems,[11]and can be specially crafted for use withfunctional magnetic resonance imaging.[12] The symbols of a de Bruijn sequence written around a circular object (such as a wheel of arobot) can be used to identify itsangleby examining thenconsecutive symbols facing a fixed point. This angle-encoding problem is known as the "rotating drum problem".[13]Gray codescan be used as similar rotary positional encoding mechanisms, a method commonly found inrotary encoders. A de Bruijn sequence can be used to quickly find the index of the least significant set bit ("right-most 1") or the most significant set bit ("left-most 1") in awordusingbitwise operationsand multiplication.[14]One such construction determines the index of the least significant set bit (equivalent to counting the number of trailing '0' bits) in a 32-bit unsigned integer: alowestBitIndex()function returns the index of the least-significant set bit inv, or zero ifvhas no set bits. The constant 0x077CB531U used in this construction is theB(2, 5) sequence 0000 0111 0111 1100 1011 0101 0011 0001 (spaces added for clarity). The operation(v & -v)zeros all bits except the least-significant bit set, resulting in a new value which is a power of 2. This power of 2 is multiplied (arithmetic modulo{\displaystyle 2^{32}}) by the de Bruijn sequence, thus producing a 32-bit product in which the bit sequence of the 5 MSBs is unique for each power of 2. The 5 MSBs are shifted into the LSB positions to produce a hash code in the range [0, 31], which is then used as an index intohash tableBitPositionLookup. The selected hash table value is the bit index of the least significant set bit inv. A similar construction determines the index of the most significant set bit in a 32-bit unsigned integer; here an alternative de Bruijn sequence (0x06EB14F9U) is used, with corresponding reordering of array values. The choice of this particular de Bruijn sequence is arbitrary, but the hash table values must be ordered to match the chosen de Bruijn sequence. AkeepHighestBit()function zeros all bits except the most-significant set bit, resulting in a value which is a power of 2, which is then processed as in the previous example. A de Bruijn sequence can be used to shorten a brute-force attack on aPIN-like code lock that does not have an "enter" key and accepts the lastndigits entered. For example, adigital door lockwith a 4-digit code (each digit having 10 possibilities, from 0 to 9) would haveB(10, 4) solutions, with length10000. Therefore, only at most10000+ 3 =10003(as the solutions are cyclic) presses are needed to open the lock, whereas trying all codes separately would require4 ×10000=40000presses. Anf-fold n-ary de Bruijn sequenceis an extension of the notionn-ary de Bruijn sequence, such that the sequence of the lengthfkn{\displaystyle fk^{n}}contains every possiblesubsequenceof the lengthnexactlyftimes.
For example, forn=2{\displaystyle n=2}the cyclic sequences 11100010 and 11101000 are two-fold binary de Bruijn sequences. The number of two-fold de Bruijn sequences,Nn{\displaystyle N_{n}}forn=1{\displaystyle n=1}isN1=2{\displaystyle N_{1}=2}, the other known numbers[16]areN2=5{\displaystyle N_{2}=5},N3=72{\displaystyle N_{3}=72}, andN4=43768{\displaystyle N_{4}=43768}. Ade Bruijn torusis a toroidal array with the property that everyk-arym-by-nmatrix occurs exactly once. Such a pattern can be used for two-dimensional positional encoding in a fashion analogous to that described above for rotary encoding. Position can be determined by examining them-by-nmatrix directly adjacent to the sensor, and calculating its position on the de Bruijn torus. Computing the position of a particular unique tuple or matrix in a de Bruijn sequence or torus is known as thede Bruijn decoding problem. Efficient⁠O(nlog⁡n){\displaystyle \color {Blue}O(n\log n)}⁠decoding algorithms exist for special, recursively constructed sequences[17]and extend to the two-dimensional case.[18]de Bruijn decoding is of interest, e.g., in cases where large sequences or tori are used for positional encoding.
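The Python construction referred to above (based on the recursive Lyndon-word algorithm from Frank Ruskey's Combinatorial Generation) is not reproduced in this copy; a minimal sketch consistent with that algorithm, using illustrative names, is:

def de_bruijn(k, n):
    """Return a lexicographically least de Bruijn sequence B(k, n).

    A sketch of the recursive Lyndon-word construction; the function and
    variable names are illustrative, not taken from any original listing.
    """
    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(str(d) for d in sequence)

print(de_bruijn(2, 3))  # 00010111 -- wrapping around, it contains 110 and 100
print(de_bruijn(2, 4))  # 0000100110101111, a B(2, 4) of length 16

With 0 read as a and 1 read as b, the second output is the same B(2, 4) sequence aaaabaabbababbbb obtained above from the inverse Burrows–Wheeler transform.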
https://en.wikipedia.org/wiki/De_Bruijn_sequence
TheSteinhaus–Johnson–Trotter algorithmorJohnson–Trotter algorithm, also calledplain changes, is analgorithmnamed afterHugo Steinhaus,Selmer M. JohnsonandHale F. Trotterthat generates all of thepermutationsofn{\displaystyle n}elements. Each two adjacent permutations in the resulting sequence differ by swapping two adjacent permuted elements. Equivalently, this algorithm finds aHamiltonian cyclein thepermutohedron, apolytopewhose vertices represent permutations and whose edges represent swaps. This method was known already to 17th-century Englishchange ringers, andRobert Sedgewickcalls it "perhaps the most prominent permutation enumeration algorithm".[1]A version of the algorithm can be implemented in such a way that the average time per permutation is constant. As well as being simple and computationally efficient, this algorithm has the advantage that subsequent computations on the generated permutations may be sped up by taking advantage of the similarity between consecutive permutations.[1] The sequence of permutations generated by the Steinhaus–Johnson–Trotter algorithm has a naturalrecursivestructure, that can be generated by a recursive algorithm. However the actual Steinhaus–Johnson–Trotter algorithm does not use recursion, instead computing the same sequence of permutations by a simple iterative method. A later improvement allows it to run in constant average time per permutation. The sequence of permutations for a given numbern{\displaystyle n}can be formed from the sequence of permutations forn−1{\displaystyle n-1}by placing the numbern{\displaystyle n}into each possible position in each of the shorter permutations. The Steinhaus–Johnson–Trotter algorithm follows this structure: the sequence of permutations it generates consists of(n−1)!{\displaystyle (n-1)!}blocks of permutations, so that within each block the permutations agree on the ordering of the numbers from 1 ton−1{\displaystyle n-1}and differ only in the position ofn{\displaystyle n}. The blocks themselves are ordered recursively, according to the Steinhaus–Johnson–Trotter algorithm for one less element. Within each block, the positions in whichn{\displaystyle n}is placed occur either in descending or ascending order, and the blocks alternate between these two orders: the placements ofn{\displaystyle n}in the first block are in descending order, in the second block they are in ascending order, in the third block they are in descending order, and so on.[2] Thus, from the single permutation on one element, one may place the number 2 in each possible position in descending order to form a list of two permutations on two elements, Then, one may place the number 3 in each of three different positions for these two permutations, in descending order for the first permutation 1 2, and then in ascending order for the permutation 2 1: The same placement pattern, alternating between descending and ascending placements ofn{\displaystyle n}, applies for any larger value ofn{\displaystyle n}.[2]In sequences of permutations with this recursive structure, each permutation differs from the previous one either by the single-position-at-a-time motion ofn{\displaystyle n}, or by a change of two smaller numbers inherited from the previous sequence of shorter permutations. In either case this difference is just the transposition of two adjacent elements. 
Whenn>1{\displaystyle n>1}the first and final elements of the sequence, also, differ in only two adjacent elements (the positions of the numbers1{\displaystyle 1}and2{\displaystyle 2}), as may be proven by induction. This sequence may be generated by arecursive algorithmthat constructs the sequence of smaller permutations and then performs all possible insertions of the largest number into the recursively-generated sequence.[2]The same ordering of permutations can also be described equivalently as the ordering generated by the following greedy algorithm.[3]Start with the identity permutation12…n{\displaystyle 1\;2\;\ldots \;n}. Now repeatedly transpose the largest possible entry with the entry to its left or right, such that in each step, a new permutation is created that has not been encountered in the list of permutations before. For example, in the casen=3{\displaystyle n=3}the sequence starts with123{\displaystyle 1\;2\;3}, then flips3{\displaystyle 3}with its left neighbor to get132{\displaystyle 1\;3\;2}. From this point, flipping3{\displaystyle 3}with its right neighbor2{\displaystyle 2}would yield the initial permutation123{\displaystyle 1\;2\;3}, so the sequence instead flips3{\displaystyle 3}with its left neighbor1{\displaystyle 1}and arrives at312{\displaystyle 3\;1\;2}, etc. The direction of the transposition (left or right) is always uniquely determined in this algorithm. However, the actual Steinhaus–Johnson–Trotter algorithm does not use recursion, and does not need to keep track of the permutations that it has already encountered. Instead, it computes the same sequence of permutations by a simpleiterative method. As described by Johnson, the algorithm for generating the next permutation from a given permutationπ{\displaystyle \pi }performs the following steps. When no numberi{\displaystyle i}can be found meeting the conditions of the second step of the algorithm, the algorithm has reached the final permutation of the sequence and terminates. This procedure may be implemented inO(n){\displaystyle O(n)}time per permutation.[4] Trotter gives an alternative implementation of an iterative algorithm for the same sequence, in lightly commentedALGOL 60notation.[5] Because this method generates permutations that alternate between being even and odd, it may easily be modified to generate only the even permutations or only the odd permutations: to generate the next permutation of the same parity from a given permutation, simply apply the same procedure twice.[6] A subsequent improvement byShimon Evenprovides an improvement to the running time of the algorithm by storing additional information for each element in the permutation: its position, and adirection(positive, negative, or zero) in which it is currently moving (essentially, this is the same information computed using the parity of the permutation in Johnson's version of the algorithm). Initially, the direction of the number 1 is zero, and all other elements have a negative direction: At each step, the algorithm finds the greatest element with a nonzero direction, and swaps it in the indicated direction: If this causes the chosen element to reach the first or last position within the permutation, or if the next element in the same direction is greater than the chosen element, the direction of the chosen element is set to zero: After each step, all elements greater than the chosen element (which previously had direction zero) have their directions set to indicate motion toward the chosen element. 
That is, positive for all elements between the start of the permutation and the chosen element, and negative for elements toward the end. Thus, in this example, after the number 2 moves, the number 3 becomes marked with a direction again: The remaining two steps of the algorithm forn=3{\displaystyle n=3}are: When all numbers become unmarked, the algorithm terminates.[7] This algorithm takes timeO(i){\displaystyle O(i)}for every step in which the greatest number to move isn−i+1{\displaystyle n-i+1}. Thus, the swaps involving the numbern{\displaystyle n}take only constant time; since these swaps account for all but a1/n{\displaystyle 1/n}fraction of all of the swaps performed by the algorithm, the average time per permutation generated is also constant, even though a small number of permutations will take a larger amount of time.[1] A more complexlooplessversion of the same procedure suitable forfunctional programmingallows it to be performed in constant time per permutation in every case; however, the modifications needed to eliminate loops from the procedure make it slower in practice.[8] The set of all permutations ofn{\displaystyle n}items may be represented geometrically by apermutohedron, thepolytopeformed from theconvex hullofn!{\displaystyle n!}vectors, the permutations of the vector(1,2,…n){\displaystyle (1,2,\dots n)}. Although defined in this way inn{\displaystyle n}-dimensional space, it is actually an(n−1){\displaystyle (n-1)}-dimensional polytope; for example, the permutohedron on four items is a three-dimensional polyhedron, thetruncated octahedron. If each vertex of the permutohedron is labeled by theinverse permutationto the permutation defined by its vertex coordinates, the resulting labeling describes aCayley graphof thesymmetric groupof permutations onn{\displaystyle n}items, as generated by the permutations that swap adjacent pairs of items. Thus, each two consecutive permutations in the sequence generated by the Steinhaus–Johnson–Trotter algorithm correspond in this way to two vertices that form the endpoints of an edge in the permutohedron, and the whole sequence of permutations describes aHamiltonian pathin the permutohedron, a path that passes through each vertex exactly once. If the sequence of permutations is completed by adding one more edge from the last permutation to the first one in the sequence, the result is instead a Hamiltonian cycle.[9] AGray codefor numbers in a givenradixis a sequence that contains each number up to a given limit exactly once, in such a way that each pair of consecutive numbers differs by one in a single digit. 
Then!{\displaystyle n!}permutations of then{\displaystyle n}numbers from 1 ton{\displaystyle n}may be placed in one-to-one correspondence with then!{\displaystyle n!}numbers from 0 ton!−1{\displaystyle n!-1}by pairing each permutation with the sequence of numbersci{\displaystyle c_{i}}that count the number of positions in the permutation that are to the right of valuei{\displaystyle i}and that contain a value less thani{\displaystyle i}(that is, the number ofinversionsfor whichi{\displaystyle i}is the larger of the two inverted values), and then interpreting these sequences as numbers in thefactorial number system, that is, themixed radixsystem with radix sequence(1,2,3,4,…){\displaystyle (1,2,3,4,\dots )}For instance, the permutation(3,1,4,5,2){\displaystyle (3,1,4,5,2)}would give the valuesc1=0{\displaystyle c_{1}=0},c2=0{\displaystyle c_{2}=0},c3=2{\displaystyle c_{3}=2},c4=1{\displaystyle c_{4}=1}, andc5=1{\displaystyle c_{5}=1}. The sequence of these values,(0,0,2,1,1){\displaystyle (0,0,2,1,1)}, gives the number0×0!+0×1!+2×2!+1×3!+1×4!=34.{\displaystyle 0\times 0!+0\times 1!+2\times 2!+1\times 3!+1\times 4!=34.}Consecutive permutations in the sequence generated by the Steinhaus–Johnson–Trotter algorithm have numbers of inversions that differ by one, forming a Gray code for the factorial number system.[10] More generally, combinatorial algorithms researchers have defined a Gray code for a set of combinatorial objects to be an ordering for the objects in which each two consecutive objects differ in the minimal possible way. In this generalized sense, the Steinhaus–Johnson–Trotter algorithm generates a Gray code for the permutations themselves.[11] The method was known for much of its history as a method forchange ringingof church bells: it gives a procedure by which a set of bells can be rung through all possible permutations, changing the order of only two bells per change. These so-called "plain changes" or "plain hunt" were known by circa 1621 for four bells,[12]and the general method has been traced to an unpublished 1653 manuscript byPeter Mundy.[6]A 1677 book byFabian Stedmanlists the solutions for up to six bells.[13]More recently, change ringers have abided by a rule that no bell may stay in the same position for three consecutive permutations; this rule is violated by the plain changes, so other strategies that swap multiple bells per change have been devised.[14] The algorithm is named afterHugo Steinhaus,Selmer M. JohnsonandHale F. Trotter. Johnson and Trotter rediscovered the algorithm independently of each other in the early 1960s.[15]A 1958 book by Steinhaus, translated into English in 1964, describes a relatedimpossible puzzleof generating all permutations by a system of particles, each moving at constant speed along a line and swapping positions when one particle overtakes another.[16]A 1976 paper by Hu and Bien credited Steinhaus with formulating the algorithmic problem of generating all permutations,[17]and by 1989 his book had been (incorrectly) credited as one of the original publications of the algorithm.[18]
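A compact Python sketch of the iterative algorithm in its directed-element (Even) form described above: the generator yields each permutation of 1..n, and every step swaps one adjacent pair. The names are illustrative, and this is a sketch rather than a reproduction of Johnson's or Trotter's published code.

def plain_changes(n):
    """Yield all permutations of 1..n, each differing from the previous one
    by a single adjacent transposition (Steinhaus-Johnson-Trotter, with the
    directed-element bookkeeping attributed above to Shimon Even).
    """
    perm = list(range(1, n + 1))
    direction = [-1] * n          # -1 = looking left, +1 = looking right
    yield tuple(perm)
    while True:
        # find the largest "mobile" element: one whose neighbour in its
        # direction of travel is smaller than itself
        mobile = -1
        for i, v in enumerate(perm):
            j = i + direction[i]
            if 0 <= j < n and perm[j] < v and v > mobile:
                mobile, pos = v, i
        if mobile == -1:
            return                # no mobile element: last permutation reached
        j = pos + direction[pos]
        perm[pos], perm[j] = perm[j], perm[pos]
        direction[pos], direction[j] = direction[j], direction[pos]
        # reverse the direction of every element larger than the one moved
        for i, v in enumerate(perm):
            if v > mobile:
                direction[i] = -direction[i]
        yield tuple(perm)

print(list(plain_changes(3)))
# [(1, 2, 3), (1, 3, 2), (3, 1, 2), (3, 2, 1), (2, 3, 1), (2, 1, 3)]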
https://en.wikipedia.org/wiki/Steinhaus%E2%80%93Johnson%E2%80%93Trotter_algorithm
Incombinatorics, thefactorial number system, also calledfactoradic, is amixed radixnumeral systemadapted to numberingpermutations. It is also calledfactorial base, althoughfactorialsdo not function asbase, but asplace valueof digits. By converting a number less thann! to factorial representation, one obtains asequenceofndigits that can be converted to a permutation ofnelements in a straightforward way, either using them asLehmer codeor asinversiontable[1]representation; in the former case the resulting map fromintegersto permutations ofnelements lists them inlexicographical order. General mixed radix systems were studied byGeorg Cantor.[2] The term "factorial number system" is used byKnuth,[3]while the French equivalent "numération factorielle" was first used in 1888.[4]The term "factoradic", which is aportmanteauof factorial and mixed radix, appears to be of more recent date.[5] The factorial number system is amixed radixnumeral system: thei-th digit from the right has basei, which means that the digit must be strictly less thani, and that (taking into account the bases of the less significant digits) its value is to be multiplied by(i− 1)! (its place value). From this it follows that the rightmost digit is always 0, the second can be 0 or 1, the third 0, 1 or 2, and so on (sequenceA124252in theOEIS). The factorial number system is sometimes defined with the 0! place omitted because it is always zero (sequenceA007623in theOEIS). In this article, a factorial number representation will be flagged by a subscript "!". In addition, some examples will have digits delimited by a colon. For example, 3:4:1:0:1:0!stands for (The place value is the factorial of one less than the radix position, which is why the equation begins with 5! for a 6-digit factoradic number.) General properties of mixed radix number systems also apply to the factorial number system. For instance, one can convert a number into factorial representation producing digits from right to left, by repeatedly dividing the number by the radix (1, 2, 3, ...), taking the remainder as digits, and continuing with the integerquotient, until this quotient becomes 0. For example, 46310can be transformed into a factorial representation by these successive divisions: The process terminates when the quotient reaches zero. Reading the remainders backward gives 3:4:1:0:1:0!. In principle, this system may be extended to representrational numbers, though rather than the natural extension of place values (−1)!, (−2)!, etc., which are undefined, the symmetric choice of radix valuesn= 0, 1, 2, 3, 4, etc. after the point may be used instead. Again, the 0 and 1 places may be omitted as these are always zero. The corresponding place values are therefore 1/1, 1/1, 1/2, 1/6, 1/24, ..., 1/n!, etc. The following sortable table shows the 24 permutations of four elements with differentinversionrelated vectors. The left and right inversion countsl{\displaystyle l}andr{\displaystyle r}(the latter often calledLehmer code) are particularly eligible to be interpreted as factorial numbers.l{\displaystyle l}gives the permutation's position in reversecolexicographicorder (the default order of this table), and the latter the position inlexicographicorder (both counted from 0). Sorting by a column that has the omissible 0 on the right makes the factorial numbers in that column correspond to the index numbers in the immovable column on the left. The small columns are reflections of the columns next to them, and can be used to bring those in colexicographic order. 
The rightmost column shows the digit sums of the factorial numbers (OEIS:A034968in the tables default order). For another example, the greatest number that could be represented with six digits would be 543210!which equals 719 indecimal: Clearly the next factorial number representation after 5:4:3:2:1:0!is 1:0:0:0:0:0:0!which designates 6! = 72010, the place value for the radix-7 digit. So the former number, and its summed out expression above, is equal to: The factorial number system provides a unique representation for each natural number, with the given restriction on the "digits" used. No number can be represented in more than one way because the sum of consecutive factorials multiplied by their index is always the next factorial minus one: This can be easilyprovedwithmathematical induction, or simply by noticing that∀i,i⋅i!=(i+1−1)⋅i!=(i+1)!−i!{\displaystyle \forall i,i\cdot i!=(i+1-1)\cdot i!=(i+1)!-i!}: subsequent terms cancel each other, leaving the first and last term (seeTelescoping series). However, when usingArabic numeralsto write the digits (and not including the subscripts as in the above examples), their simple concatenation becomes ambiguous for numbers having a "digit" greater than 9. The smallest such example is the number 10 × 10! = 36,288,00010, which may be written A0000000000!=10:0:0:0:0:0:0:0:0:0:0!, but not 100000000000!= 1:0:0:0:0:0:0:0:0:0:0:0!which denotes 11! = 39,916,80010. Thus using letters A–Z to denote digits 10, 11, 12, ..., 35 as in other base-Nmake the largest representable number 36 × 36! − 1. For arbitrarily greater numbers one has to choose a base for representing individual digits, say decimal, and provide a separating mark between them (for instance by subscripting each digit by its base, also given in decimal, like 24031201, this number also can be written as 2:0:1:0!). In fact the factorial number system itself is not truly anumeral systemin the sense of providing a representation for all natural numbers using only a finite alphabet of symbols. There is a naturalmappingbetween the integers0, 1, ...,n! − 1(or equivalently the numbers withndigits in factorial representation) andpermutationsofnelements inlexicographicalorder, when the integers are expressed in factoradic form. This mapping has been termed theLehmer code(or inversion table). For example, withn= 3, such a mapping is In each case, calculating the permutation proceeds by using the leftmost factoradic digit (here, 0, 1, or 2) as the first permutation digit, then removing it from the list of choices (0, 1, and 2). Think of this new list of choices as zero indexed, and use each successive factoradic digit to choose from its remaining elements. If the second factoradic digit is "0" then the first element of the list is selected for the second permutation digit and is then removed from the list. Similarly, if the second factoradic digit is "1", the second is selected and then removed. The final factoradic digit is always "0", and since the list now contains only one element, it is selected as the last permutation digit. The process may become clearer with a longer example. Let's say we want the 2982nd permutation of the numbers 0 through 6. The number 2982 is 4:0:4:1:0:0:0!in factoradic, and that number picks out digits (4,0,6,2,1,3,5) in turn, via indexing a dwindling ordered set of digits and picking out each digit from the set at each turn: A natural index for thedirect productof twopermutation groupsis theconcatenationof two factoradic numbers, with two subscript "!"s. 
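The conversion by successive division and the Lehmer-code indexing of permutations described above can be sketched in Python as follows; the helper names are illustrative. The assertions reproduce the article's own examples: 463 has factorial representation 3:4:1:0:1:0! and the 2982nd permutation of the numbers 0 through 6 is (4, 0, 6, 2, 1, 3, 5).

def to_factoradic(n, width):
    """Return the factorial-base digits of n, most significant first,
    padded to the given number of digits (the rightmost digit is always 0)."""
    digits = []
    radix = 1
    while n > 0 or len(digits) < width:
        n, r = divmod(n, radix)
        digits.append(r)
        radix += 1
    return list(reversed(digits))

def nth_permutation(elements, index):
    """Return the permutation of `elements` at the given lexicographic index,
    using the factoradic digits as a Lehmer code."""
    pool = list(elements)
    code = to_factoradic(index, len(pool))
    return [pool.pop(d) for d in code]

assert to_factoradic(463, 6) == [3, 4, 1, 0, 1, 0]
assert to_factoradic(2982, 7) == [4, 0, 4, 1, 0, 0, 0]
assert nth_permutation(range(7), 2982) == [4, 0, 6, 2, 1, 3, 5]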
Unlike single radix systems whose place values arebasenfor both positive and negative integraln, the factorial number base cannot be extended to negative place values as these would be (−1)!, (−2)! and so on, and these values are undefined (seefactorial). One possible extension is therefore to use 1/0!, 1/1!, 1/2!, 1/3!, ..., 1/n! etc. instead, possibly omitting the 1/0! and 1/1! places which are always zero. With this method, all rational numbers have a terminating expansion, whose length in 'digits' is less than or equal to the denominator of the rational number represented. This may be proven by considering that there exists a factorial for any integer and therefore the denominator divides into its own factorial even if it does not divide into any smaller factorial. By necessity, therefore, the factoradic expansion of the reciprocal of aprimehas a length of exactly that prime (less one if the 1/1! place is omitted). Other terms are given as the sequenceA046021on the OEIS. It can also be proven that the last 'digit' or term of the representation of a rational with prime denominator is equal to the difference between the numerator and the prime denominator. Similar to how checking the divisibility of 4 in base 10 requires looking at only the last two digits, checking the divisibility of any number in factorial number system requires looking at only a finite number of digits. That is, it has adivisibility rulefor each number. There is also a non-terminating equivalent for every rational number akin to the fact that in decimal 0.24999... = 0.25 = 1/4 and0.999... = 1, etc., which can be created by reducing the final term by 1 and then filling in the remaining infinite number of terms with the highest value possible for the radix of that position. In the following selection of examples, spaces are used to separate the place values, otherwise represented in decimal. The rational numbers on the left are also in decimal: There are also a small number of constants that have patterned representations with this method:
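A short Python sketch of the fractional expansion just described, using exact rational arithmetic: the digit at the 1/n! place is obtained by multiplying the remaining fraction by n and taking the integer part. The always-zero 1/0! and 1/1! places are omitted, as in the text above; the function name is illustrative.

```python
from fractions import Fraction

def factoradic_fraction(p, q):
    """Digits at the 1/2!, 1/3!, ... places of the proper fraction p/q."""
    x = Fraction(p, q)
    assert 0 <= x < 1
    digits = []
    n = 2
    while x != 0:
        x *= n
        d = x.numerator // x.denominator   # integer part; satisfies 0 <= d <= n - 1
        digits.append(d)
        x -= d
        n += 1
    return digits

print(factoradic_fraction(1, 3))   # [0, 2]             : 1/3 = 0/2! + 2/3!
print(factoradic_fraction(1, 7))   # [0, 0, 3, 2, 0, 6] : terminates at the 1/7! place
```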
https://en.wikipedia.org/wiki/Factorial_number_system
Incoding theory,decodingis the process of translating received messages intocodewordsof a givencode. There have been many common methods of mapping messages to codewords. These are often used to recover messages sent over anoisy channel, such as abinary symmetric channel. C⊂F2n{\displaystyle C\subset \mathbb {F} _{2}^{n}}is considered abinary codewith the lengthn{\displaystyle n};x,y{\displaystyle x,y}shall be elements ofF2n{\displaystyle \mathbb {F} _{2}^{n}}; andd(x,y){\displaystyle d(x,y)}is the distance between those elements. One may be given the messagex∈F2n{\displaystyle x\in \mathbb {F} _{2}^{n}}, thenideal observer decodinggenerates the codewordy∈C{\displaystyle y\in C}. The process results in this solution: For example, a person can choose the codewordy{\displaystyle y}that is most likely to be received as the messagex{\displaystyle x}after transmission. Each codeword does not have an expected possibility: there may be more than one codeword with an equal likelihood of mutating into the received message. In such a case, the sender and receiver(s) must agree ahead of time on a decoding convention. Popular conventions include: Given a received vectorx∈F2n{\displaystyle x\in \mathbb {F} _{2}^{n}}maximum likelihooddecodingpicks a codewordy∈C{\displaystyle y\in C}thatmaximizes that is, the codewordy{\displaystyle y}that maximizes the probability thatx{\displaystyle x}was received,given thaty{\displaystyle y}was sent. If all codewords are equally likely to be sent then this scheme is equivalent to ideal observer decoding. In fact, byBayes' theorem, Upon fixingP(xreceived){\displaystyle \mathbb {P} (x{\mbox{ received}})},x{\displaystyle x}is restructured andP(ysent){\displaystyle \mathbb {P} (y{\mbox{ sent}})}is constant as all codewords are equally likely to be sent. Therefore,P(xreceived∣ysent){\displaystyle \mathbb {P} (x{\mbox{ received}}\mid y{\mbox{ sent}})}is maximised as a function of the variabley{\displaystyle y}precisely whenP(ysent∣xreceived){\displaystyle \mathbb {P} (y{\mbox{ sent}}\mid x{\mbox{ received}})}is maximised, and the claim follows. As with ideal observer decoding, a convention must be agreed to for non-unique decoding. The maximum likelihood decoding problem can also be modeled as aninteger programmingproblem.[1] The maximum likelihood decoding algorithm is an instance of the "marginalize a product function" problem which is solved by applying thegeneralized distributive law.[2] Given a received codewordx∈F2n{\displaystyle x\in \mathbb {F} _{2}^{n}},minimum distance decodingpicks a codewordy∈C{\displaystyle y\in C}to minimise theHamming distance: i.e. choose the codewordy{\displaystyle y}that is as close as possible tox{\displaystyle x}. Note that if the probability of error on adiscrete memoryless channelp{\displaystyle p}is strictly less than one half, thenminimum distance decodingis equivalent tomaximum likelihood decoding, since if then: which (sincepis less than one half) is maximised by minimisingd. Minimum distance decoding is also known asnearest neighbour decoding. It can be assisted or automated by using astandard array. Minimum distance decoding is a reasonable decoding method when the following conditions are met: These assumptions may be reasonable for transmissions over abinary symmetric channel. They may be unreasonable for other media, such as a DVD, where a single scratch on the disk can cause an error in many neighbouring symbols or codewords. As with other decoding methods, a convention must be agreed to for non-unique decoding. 
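As a concrete illustration of minimum distance (nearest neighbour) decoding, here is a brute-force Python sketch over a small code; the toy code is an arbitrary example, not one taken from the text.

```python
def hamming_distance(x, y):
    return sum(a != b for a, b in zip(x, y))

def minimum_distance_decode(received, code):
    """Return a codeword at minimum Hamming distance from the received word
    (ties are broken by a fixed, arbitrary convention)."""
    return min(code, key=lambda c: hamming_distance(received, c))

# A toy length-6 binary code used only for illustration.
code = [(0, 0, 0, 0, 0, 0), (1, 1, 1, 0, 0, 0), (0, 0, 0, 1, 1, 1), (1, 1, 1, 1, 1, 1)]
received = (1, 0, 1, 0, 0, 0)                    # one bit flipped in (1, 1, 1, 0, 0, 0)
print(minimum_distance_decode(received, code))   # (1, 1, 1, 0, 0, 0)
```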
Syndrome decodingis a highly efficient method of decoding alinear codeover anoisy channel, i.e. one on which errors are made. In essence, syndrome decoding isminimum distance decodingusing a reduced lookup table. This is allowed by the linearity of the code.[3] Suppose thatC⊂F2n{\displaystyle C\subset \mathbb {F} _{2}^{n}}is a linear code of lengthn{\displaystyle n}and minimum distanced{\displaystyle d}withparity-check matrixH{\displaystyle H}. Then clearlyC{\displaystyle C}is capable of correcting up to errors made by the channel (since if no more thant{\displaystyle t}errors are made then minimum distance decoding will still correctly decode the incorrectly transmitted codeword). Now suppose that a codewordx∈F2n{\displaystyle x\in \mathbb {F} _{2}^{n}}is sent over the channel and the error patterne∈F2n{\displaystyle e\in \mathbb {F} _{2}^{n}}occurs. Thenz=x+e{\displaystyle z=x+e}is received. Ordinary minimum distance decoding would lookup the vectorz{\displaystyle z}in a table of size|C|{\displaystyle |C|}for the nearest match - i.e. an element (not necessarily unique)c∈C{\displaystyle c\in C}with for ally∈C{\displaystyle y\in C}. Syndrome decoding takes advantage of the property of the parity matrix that: for allx∈C{\displaystyle x\in C}. Thesyndromeof the receivedz=x+e{\displaystyle z=x+e}is defined to be: To performML decodingin abinary symmetric channel, one has to look-up a precomputed table of size2n−k{\displaystyle 2^{n-k}}, mappingHe{\displaystyle He}toe{\displaystyle e}. Note that this is already of significantly less complexity than that of astandard array decoding. However, under the assumption that no more thant{\displaystyle t}errors were made during transmission, the receiver can look up the valueHe{\displaystyle He}in a further reduced table of size This is a family ofLas Vegas-probabilistic methods all based on the observation that it is easier to guess enough error-free positions, than it is to guess all the error-positions. The simplest form is due to Prange: LetG{\displaystyle G}be thek×n{\displaystyle k\times n}generator matrix ofC{\displaystyle C}used for encoding. Selectk{\displaystyle k}columns ofG{\displaystyle G}at random, and denote byG′{\displaystyle G'}the corresponding submatrix ofG{\displaystyle G}. With reasonable probabilityG′{\displaystyle G'}will have full rank, which means that if we letc′{\displaystyle c'}be the sub-vector for the corresponding positions of any codewordc=mG{\displaystyle c=mG}ofC{\displaystyle C}for a messagem{\displaystyle m}, we can recoverm{\displaystyle m}asm=c′G′−1{\displaystyle m=c'G'^{-1}}. Hence, if we were lucky that thesek{\displaystyle k}positions of the received wordy{\displaystyle y}contained no errors, and hence equalled the positions of the sent codeword, then we may decode. Ift{\displaystyle t}errors occurred, the probability of such a fortunate selection of columns is given by(n−tk)/(nk){\displaystyle \textstyle {\binom {n-t}{k}}/{\binom {n}{k}}}. This method has been improved in various ways, e.g. by Stern[4]andCanteautand Sendrier.[5] Partial response maximum likelihood (PRML) is a method for converting the weak analog signal from the head of a magnetic disk or tape drive into a digital signal. A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream that has been encoded usingforward error correctionbased on a convolutional code. TheHamming distanceis used as a metric for hard decision Viterbi decoders. ThesquaredEuclidean distanceis used as a metric for soft decision decoders. 
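The syndrome-table decoding described above can be sketched as follows (using NumPy, which is an assumption beyond the text). The [7,4] Hamming code with one common choice of parity-check matrix is used purely as an example of a single-error-correcting linear code.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code (one common choice).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
n = H.shape[1]

# Reduced lookup table: syndrome -> correctable error pattern
# (weight 0 and weight 1, since this code corrects a single error).
table = {tuple(np.zeros(3, dtype=int)): np.zeros(n, dtype=int)}
for i in range(n):
    e = np.zeros(n, dtype=int)
    e[i] = 1
    table[tuple(H @ e % 2)] = e

def syndrome_decode(z):
    """Correct a received length-7 word, assuming at most one bit error."""
    z = np.asarray(z)
    e = table.get(tuple(H @ z % 2))
    if e is None:
        raise ValueError("more errors than this code can correct")
    return (z + e) % 2

codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # satisfies H @ codeword = 0 (mod 2)
received = codeword.copy()
received[4] ^= 1                             # flip one bit during "transmission"
print(syndrome_decode(received))             # recovers [1 0 1 1 0 1 0]
```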
Optimal decision decoding algorithm (ODDA) for an asymmetric TWRC system.[6]
https://en.wikipedia.org/wiki/Decoding_methods#Minimum_distance_decoding
Inmathematics, theThue–MorseorProuhet–Thue–Morse sequenceis thebinary sequence(an infinite sequence of 0s and 1s) that can be obtained by starting with 0 and successively appending theBoolean complementof the sequence obtained thus far.[1]It is sometimes called thefair share sequencebecause of its applications tofair divisionorparity sequence. The first few steps of this procedure yield the strings 0, 01, 0110, 01101001, 0110100110010110, and so on, which are theprefixesof the Thue–Morse sequence. The full sequence begins: The sequence is named afterAxel Thue,Marston Morseand (in its extended form)Eugène Prouhet. There are several equivalent ways of defining the Thue–Morse sequence. To compute thenth elementtn, write the numberninbinary. If thenumber of onesin this binary expansion is odd thentn= 1, if even thentn= 0.[2]That is,tnis theeven parity bitforn.John H. Conwayet al. deemed numbersnsatisfyingtn= 1 to beodious(intended to be similar toodd) numbers, and numbers for whichtn= 0 to beevil(similar toeven) numbers. This method leads to a fast method for computing the Thue–Morse sequence: start witht0= 0, and then, for eachn, find the highest-order bit in the binary representation ofnthat is different from the same bit in the representation ofn− 1. If this bit is at an even index,tndiffers fromtn− 1, and otherwise it is the same astn− 1. InPython: The resulting algorithm takes constant time to generate each sequence element, using only alogarithmic number of bits(constant number of words) of memory.[3] The Thue–Morse sequence is the sequencetnsatisfying therecurrence relation for all non-negative integersn.[2] The Thue–Morse sequence is amorphic word:[4]it is the output of the followingLindenmayer system: The Thue–Morse sequence in the form given above, as a sequence ofbits, can be definedrecursivelyusing the operation ofbitwise negation. So, the first element is 0. Then once the first 2nelements have been specified, forming a strings, then the next 2nelements must form the bitwise negation ofs. Now we have defined the first 2n+1elements, and we recurse. Spelling out the first few steps in detail: So InPython: Which can then be converted to a (reversed) string as follows: The sequence can also be defined by: wheretjis thejth element if we start atj= 0. The Thue–Morse sequence contains manysquares: instances of the stringXX{\displaystyle XX}, whereX{\displaystyle X}denotes the stringA{\displaystyle A},A¯{\displaystyle {\overline {A}}},AA¯A{\displaystyle A{\overline {A}}A}, orA¯AA¯{\displaystyle {\overline {A}}A{\overline {A}}}, whereA=Tk{\displaystyle A=T_{k}}for somek≥0{\displaystyle k\geq 0}andA¯{\displaystyle {\overline {A}}}is the bitwise negation ofA{\displaystyle A}.[5]For instance, ifk=0{\displaystyle k=0}, thenA=T0=0{\displaystyle A=T_{0}=0}. The squareAA¯AAA¯A=010010{\displaystyle A{\overline {A}}AA{\overline {A}}A=010010}appears inT{\displaystyle T}starting at the 16th bit. Since all squares inT{\displaystyle T}are obtained by repeating one of these 4 strings, they all have length2n{\displaystyle 2^{n}}or3⋅2n{\displaystyle 3\cdot 2^{n}}for somen≥0{\displaystyle n\geq 0}.T{\displaystyle T}contains nocubes: instances ofXXX{\displaystyle XXX}. 
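Both constructions mentioned above (computing tn from the parity of the binary expansion of n, and building prefixes by repeatedly appending the Boolean complement) can be sketched in Python as follows.

```python
def thue_morse_term(n):
    """t_n is the parity of the number of 1s in the binary expansion of n."""
    return bin(n).count("1") % 2

def thue_morse_prefix(k):
    """The prefix of length 2**k, built by repeatedly appending the complement."""
    s = [0]
    for _ in range(k):
        s += [1 - b for b in s]
    return s

print([thue_morse_term(n) for n in range(16)])
print(thue_morse_prefix(4))
# both print 0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0
```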
There are also nooverlapping squares: instances of0X0X0{\displaystyle 0X0X0}or1X1X1{\displaystyle 1X1X1}.[6][7]Thecritical exponentofT{\displaystyle T}is 2.[8] The Thue–Morse sequence is auniformly recurrent word: given any finite stringXin the sequence, there is some lengthnX(often much longer than the length ofX) such thatXappears ineveryblock of lengthnX.[9][10]Notably, the Thue–Morse sequence is uniformly recurrentwithoutbeing eitherperiodicor eventually periodic (i.e., periodic after some initial nonperiodic segment).[11] The sequenceT2nis apalindromefor anyn. Furthermore, letqnbe a word obtained by counting the ones between consecutive zeros inT2n. For instance,q1= 2 andq2= 2102012. SinceTndoes not containoverlapping squares, the wordsqnare palindromicsquarefree words. TheThue–Morsemorphismμis defined on alphabet {0,1} by the substitution mapμ(0) = 01,μ(1) = 10: every 0 in a sequence is replaced with 01 and every 1 with 10.[12]IfTis the Thue–Morse sequence, thenμ(T) is alsoT. Thus,Tis afixed pointofμ. The morphismμis aprolongable morphismon thefree monoid{0,1}∗withTas fixed point:Tis essentially theonlyfixed point ofμ; the only other fixed point is the bitwise negation ofT, which is simply the Thue–Morse sequence on (1,0) instead of on (0,1). This property may be generalized to the concept of anautomatic sequence. Thegenerating seriesofTover thebinary fieldis theformal power series Thispower seriesis algebraic over the field of rational functions, satisfying the equation[13] The set ofevil numbers(numbersn{\displaystyle n}withtn=0{\displaystyle t_{n}=0}) forms a subspace of the nonnegative integers undernim-addition(bitwiseexclusive or). For the game ofKayles, evilnim-valuesoccur for few (finitely many) positions in the game, with all remaining positions having odious nim-values. TheProuhet–Tarry–Escott problemcan be defined as: given a positive integerNand a non-negative integerk,partitionthe setS= { 0, 1, ...,N-1 } into twodisjointsubsetsS0andS1that have equal sums of powers up to k, that is: This has a solution ifNis a multiple of 2k+1, given by: For example, forN= 8 andk= 2, The condition requiring thatNbe a multiple of 2k+1is not strictly necessary: there are some further cases for which a solution exists. However, it guarantees a stronger property: if the condition is satisfied, then the set ofkth powers of any set ofNnumbers inarithmetic progressioncan be partitioned into two sets with equal sums. This follows directly from the expansion given by thebinomial theoremapplied to the binomial representing thenth element of an arithmetic progression. For generalizations of the Thue–Morse sequence and the Prouhet–Tarry–Escott problem to partitions into more than two parts, see Bolker, Offner, Richman and Zara, "The Prouhet–Tarry–Escott problem and generalized Thue–Morse sequences".[14] Usingturtle graphics, a curve can be generated if an automaton is programmed with a sequence. When Thue–Morse sequence members are used in order to select program states: The resulting curve converges to theKoch curve, afractal curveof infinite length containing a finite area. This illustrates the fractal nature of the Thue–Morse Sequence.[15] It is also possible to draw the curve precisely using the following instructions:[16] In their book on the problem offair division,Steven BramsandAlan Taylorinvoked the Thue–Morse sequence but did not identify it as such. 
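Returning to the Prouhet–Tarry–Escott partition described above: the Thue–Morse sequence splits {0, ..., N−1} into evil and odious numbers with equal power sums. A quick Python check for N = 8, k = 2:

```python
def odious(n):
    """1 if n has an odd number of 1 bits (odious), 0 if even (evil)."""
    return bin(n).count("1") % 2

N, k = 8, 2
S0 = [n for n in range(N) if odious(n) == 0]   # evil numbers:   [0, 3, 5, 6]
S1 = [n for n in range(N) if odious(n) == 1]   # odious numbers: [1, 2, 4, 7]
for power in range(k + 1):
    print(sum(n ** power for n in S0), sum(n ** power for n in S1))
# prints equal pairs: 4 4, 14 14, 70 70
```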
When allocating a contested pile of items between two parties who agree on the items' relative values, Brams and Taylor suggested a method they calledbalanced alternation, ortaking turns taking turns taking turns . . ., as a way to circumvent the favoritism inherent when one party chooses before the other. An example showed how a divorcing couple might reach a fair settlement in the distribution of jointly-owned items. The parties would take turns to be the first chooser at different points in the selection process: Ann chooses one item, then Ben does, then Ben chooses one item, then Ann does.[17] Lionel LevineandKatherine E. Stange, in their discussion of how to fairly apportion a shared meal such as anEthiopian dinner, proposed the Thue–Morse sequence as a way to reduce the advantage of moving first. They suggested that “it would be interesting to quantify the intuition that the Thue–Morse order tends to produce a fair outcome.”[18] Robert Richman addressed this problem, but he too did not identify the Thue–Morse sequence as such at the time of publication.[19]He presented the sequencesTnasstep functionson the interval [0,1] and described their relationship to theWalshandRademacherfunctions. He showed that thenthderivativecan be expressed in terms ofTn. As a consequence, the step function arising fromTnisorthogonaltopolynomialsofordern− 1. A consequence of this result is that a resource whose value is expressed as amonotonicallydecreasingcontinuous functionis most fairly allocated using a sequence that converges to Thue–Morse as the function becomesflatter. An example showed how to pour cups ofcoffeeof equal strength from a carafe with anonlinearconcentrationgradient, prompting a whimsical article in the popular press.[20] Joshua Cooper and Aaron Dutle showed why the Thue–Morse order provides a fair outcome for discrete events.[21]They considered the fairest way to stage aGaloisduel, in which each of the shooters has equally poor shooting skills. Cooper and Dutle postulated that each dueler would demand a chance to fire as soon as the other'sa prioriprobabilityof winning exceeded their own. They proved that, as the duelers’ hitting probability approaches zero, the firing sequence converges to the Thue–Morse sequence. In so doing, they demonstrated that the Thue–Morse order produces a fair outcome not only for sequencesTnof length2n, but for sequences of any length. Thus the mathematics supports using the Thue–Morse sequence instead of alternating turns when the goal is fairness but earlier turns differ monotonically from later turns in some meaningful quality, whether that quality varies continuously[19]or discretely.[21] Sports competitions form an important class of equitable sequencing problems, because strict alternation often gives an unfair advantage to one team.Ignacio Palacios-Huertaproposed changing the sequential order to Thue–Morse to improve theex postfairness of various tournament competitions, such as the kicking sequence of apenalty shoot-outin soccer.[22]He did a set of field experiments with pro players and found that the team kicking first won 60% of games using ABAB (orT1), 54% using ABBA (orT2), and 51% using full Thue–Morse (orTn).  
As a result, ABBA is undergoingextensive trialsinFIFA (European and World Championships)and English Federation professional soccer (EFL Cup).[23]An ABBA serving pattern has also been found to improve the fairness oftennis tie-breaks.[24]Incompetitive rowing,T2is the only arrangement ofport- and starboard-rowingcrew members that eliminates transverse forces (and hence sideways wiggle) on a four-membered coxless racing boat, whileT3is one of only fourrigsto avoid wiggle on an eight-membered boat.[25] Fairness is especially important inplayer drafts. Many professional sports leagues attempt to achievecompetitive parityby giving earlier selections in each round to weaker teams. By contrast,fantasy football leagueshave no pre-existing imbalance to correct, so they often use a “snake” draft (forward, backward, etc.; orT1).[26]Ian Allan argued that a “third-round reversal” (forward, backward, backward, forward, etc.; orT2) would be even more fair.[27]Richman suggested that the fairest way for “captain A” and “captain B” to choose sides for apick-up game of basketballmirrorsT3: captain A has the first, fourth, sixth, and seventh choices, while captain B has the second, third, fifth, and eighth choices.[19] The initial2kbits of the Thue–Morse sequence are mapped to 0 by a wide class of polynomialhash functionsmodulo apower of two, which can lead tohash collisions.[28] Certain linear combinations of Dirichlet series whose coefficients are terms of the Thue–Morse sequence give rise to identities involving the Riemann Zeta function (Tóth, 2022[29]). For instance: where(tn)n≥0{\displaystyle (t_{n})_{n\geq 0}}is thenth{\displaystyle n^{\rm {th}}}term of the Thue–Morse sequence. In fact, for alls{\displaystyle s}with real part greater than1{\displaystyle 1}, we have The Thue–Morse sequence was first studied byEugène Prouhet[fr]in 1851,[30]who applied it tonumber theory. However, Prouhet did not mention the sequence explicitly; this was left toAxel Thuein 1906, who used it to found the study ofcombinatorics on words. The sequence was only brought to worldwide attention with the work ofMarston Morsein 1921, when he applied it todifferential geometry. The sequence has beendiscovered independentlymany times, not always by professional research mathematicians; for example,Max Euwe, achess grandmasterand mathematicsteacher, discovered it in 1929 in an application tochess: by using its cube-free property (see above), he showed how to circumvent thethreefold repetitionrule aimed at preventing infinitely protracted games by declaring repetition of moves a draw. At the time, consecutive identical board states were required to trigger the rule; the rule was later amended to the same board position reoccurring three times at any point, as the sequence shows that the consecutive criterion can be evaded forever.
https://en.wikipedia.org/wiki/Prouhet%E2%80%93Thue%E2%80%93Morse_sequence
Inlinear algebra, the computation of thepermanentof amatrixis a problem that is thought to be more difficult than the computation of thedeterminantof a matrix despite the apparent similarity of the definitions. The permanent is defined similarly to the determinant, as a sum of products of sets of matrix entries that lie in distinct rows and columns. However, where the determinant weights each of these products with a ±1 sign based on theparity of the set, the permanent weights them all with a +1 sign. While the determinant can be computed inpolynomial timebyGaussian elimination, it is generally believed that the permanent cannot be computed in polynomial time. Incomputational complexity theory,a theorem of Valiantstates that computing permanents is#P-hard, and even#P-completefor matrices in which all entries are 0 or 1Valiant (1979). This puts the computation of the permanent in a class of problems believed to be even more difficult to compute thanNP. It is known that computing the permanent is impossible for logspace-uniformACC0circuits.(Allender & Gore 1994) The development of both exact and approximate algorithms for computing the permanent of a matrix is an active area of research. The permanent of ann-by-nmatrixA= (ai,j) is defined as The sum here extends over all elements σ of thesymmetric groupSn, i.e. over allpermutationsof the numbers 1, 2, ...,n. This formula differs from the corresponding formula for the determinant only in that, in the determinant, each product is multiplied by thesign of the permutationσ while in this formula each product is unsigned. The formula may be directly translated into an algorithm that naively expands the formula, summing over all permutations and within the sum multiplying out each matrix entry. This requiresn!narithmetic operations. The best known[1]general exact algorithm is due toH. J. Ryser(1963). Ryser's method is based on aninclusion–exclusionformula that can be given[2]as follows: LetAk{\displaystyle A_{k}}be obtained fromAby deletingkcolumns, letP(Ak){\displaystyle P(A_{k})}be the product of the row-sums ofAk{\displaystyle A_{k}}, and letΣk{\displaystyle \Sigma _{k}}be the sum of the values ofP(Ak){\displaystyle P(A_{k})}over all possibleAk{\displaystyle A_{k}}. Then It may be rewritten in terms of the matrix entries as follows[3] Ryser's formula can be evaluated usingO(2n−1n2){\displaystyle O(2^{n-1}n^{2})}arithmetic operations, orO(2n−1n){\displaystyle O(2^{n-1}n)}by processing the setsS{\displaystyle S}inGray codeorder.[4] Another formula that appears to be as fast as Ryser's (or perhaps even twice as fast) is to be found in the two Ph.D. theses; see (Balasubramanian 1980), (Bax 1998); also (Bax & Franklin 1996). The methods to find the formula are quite different, being related to the combinatorics of the Muir algebra, and to finite difference theory respectively. Another way, connected with invariant theory is via thepolarization identityfor asymmetric tensor(Glynn 2010). The formula generalizes to infinitely many others, as found by all these authors, although it is not clear if they are any faster than the basic one. See (Glynn 2013). The simplest known formula of this type (when the characteristic of the field is not two) is where the outer sum is over all2n−1{\displaystyle 2^{n-1}}vectorsδ=(δ1=1,δ2,…,δn)∈{±1}n{\displaystyle \delta =(\delta _{1}=1,\delta _{2},\dots ,\delta _{n})\in \{\pm 1\}^{n}}. 
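A direct Python sketch of Ryser's inclusion–exclusion formula, evaluated naively over all column subsets (the Gray-code refinement mentioned above is not applied); the function name permanent_ryser is illustrative only.

```python
from itertools import combinations

def permanent_ryser(A):
    """Ryser's formula: sum over column subsets of signed products of row sums."""
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

print(permanent_ryser([[1, 2], [3, 4]]))                    # 1*4 + 2*3 = 10
print(permanent_ryser([[1, 1, 0], [0, 1, 1], [1, 0, 1]]))   # 2
```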
The number ofperfect matchingsin abipartite graphis counted by the permanent of the graph'sbiadjacency matrix, and the permanent of any 0-1 matrix can beinterpreted in this wayas the number of perfect matchings in a graph. Forplanar graphs(regardless of bipartiteness), theFKT algorithmcomputes the number of perfect matchings in polynomial time by changing the signs of a carefully chosen subset of the entries in theTutte matrixof the graph, so that thePfaffianof the resultingskew-symmetric matrix(thesquare rootof itsdeterminant) is the number of perfect matchings. This technique can be generalized to graphs that contain no subgraphhomeomorphicto thecomplete bipartite graphK3,3.[5] George Pólyahad asked the question[6]of when it is possible to change the signs of some of the entries of a 01 matrix A so that the determinant of the new matrix is the permanent of A. Not all 01 matrices are "convertible" in this manner; in fact it is known (Marcus & Minc (1961)) that there is no linear mapT{\displaystyle T}such thatper⁡T(A)=detA{\displaystyle \operatorname {per} T(A)=\det A}for alln×n{\displaystyle n\times n}matricesA{\displaystyle A}. The characterization of "convertible" matrices was given byLittle (1975)who showed that such matrices are precisely those that are the biadjacency matrix of bipartite graphs that have aPfaffian orientation: an orientation of the edges such that for every even cycleC{\displaystyle C}for whichG∖C{\displaystyle G\setminus C}has a perfect matching, there are an odd number of edges directed along C (and thus an odd number with the opposite orientation). It was also shown that these graphs are exactly those that do not contain a subgraph homeomorphic toK3,3{\displaystyle K_{3,3}}, as above. Modulo2, the permanent is the same as the determinant, as(−1)≡1(mod2).{\displaystyle (-1)\equiv 1{\pmod {2}}.}It can also be computed modulo2k{\displaystyle 2^{k}}in timeO(n4k−3){\displaystyle O(n^{4k-3})}fork≥2{\displaystyle k\geq 2}. However, it isUP-hardto compute the permanent modulo any number that is not a power of 2.Valiant (1979) There are various formulae given byGlynn (2010)for the computation modulo a primep. First, there is one using symbolic calculations with partial derivatives. Second, forp= 3 there is the following formula for an n×n-matrixA{\displaystyle A}, involving the matrix's principalminors(Kogan (1996)): whereAJ{\displaystyle A_{J}}is the submatrix ofA{\displaystyle A}induced by the rows and columns ofA{\displaystyle A}indexed byJ{\displaystyle J}, andJ¯{\displaystyle {\bar {J}}}is the complement ofJ{\displaystyle J}in{1,…,n}{\displaystyle \{1,\dots ,n\}}, while the determinant of the empty submatrix is defined to be 1. The expansion above can be generalized in an arbitrarycharacteristicpas the following pair of dual identities:per⁡(A)=(−1)n∑J1,…,Jp−1det(AJ1)⋯det(AJp−1)det(A)=(−1)n∑J1,…,Jp−1per⁡(AJ1)⋯per⁡(AJp−1){\displaystyle {\begin{aligned}\operatorname {per} (A)&=(-1)^{n}\sum _{{J_{1}},\ldots ,{J_{p-1}}}\det(A_{J_{1}})\dotsm \det(A_{J_{p-1}})\\\det(A)&=(-1)^{n}\sum _{{J_{1}},\ldots ,{J_{p-1}}}\operatorname {per} (A_{J_{1}})\dotsm \operatorname {per} (A_{J_{p-1}})\end{aligned}}}where in both formulas the sum is taken over all the (p− 1)-tuplesJ1,…,Jp−1{\displaystyle {J_{1}},\ldots ,{J_{p-1}}}that are partitions of the set{1,…,n}{\displaystyle \{1,\dots ,n\}}intop− 1 subsets, some of them possibly empty. 
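To illustrate the correspondence with perfect matchings, the following Python sketch evaluates the permanent of a 0-1 biadjacency matrix by naive expansion over permutations; the small graph is an arbitrary example.

```python
from itertools import permutations
from math import prod

def permanent_naive(A):
    """Direct expansion of the defining sum over all n! permutations."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

# Biadjacency matrix of a small bipartite graph (rows = left vertices,
# columns = right vertices); the graph is a 6-cycle, chosen only as an example.
B = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(permanent_naive(B))   # 2 = number of perfect matchings of this graph
```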
The former formula possesses an analog for the hafnian of a symmetricA{\displaystyle A}and an odd p: haf2⁡(A)=(−1)n∑J1,…,Jp−1det(AJ1)⋯det(AJp−1)(−1)|J1|+⋯+|J(p−1)/2|{\displaystyle \operatorname {haf} ^{2}(A)=(-1)^{n}\sum _{{J_{1}},\ldots ,{J_{p-1}}}\det(A_{J_{1}})\dotsm \det(A_{J_{p-1}})(-1)^{|J_{1}|+\dots +|J_{(p-1)/2}|}} with the sum taken over the same set of indexes. Moreover, in characteristic zero a similar convolution sum expression involving both the permanent and the determinant yields theHamiltonian cyclepolynomial (defined asham⁡(A)=∑σ∈Hn∏i=1nai,σ(i){\textstyle \operatorname {ham} (A)=\sum _{\sigma \in H_{n}}\prod _{i=1}^{n}a_{i,\sigma (i)}}whereHn{\displaystyle H_{n}}is the set of n-permutations having only one cycle):ham⁡(A)=∑J⊆{2,…,n}det(AJ)per⁡(AJ¯)(−1)|J|.{\displaystyle \operatorname {ham} (A)=\sum _{J\subseteq \{2,\dots ,n\}}\det(A_{J})\operatorname {per} (A_{\bar {J}})(-1)^{|J|}.} In characteristic 2 the latter equality turns intoham⁡(A)=∑J⊆{2,…,n}det(AJ)det⁡(AJ¯){\displaystyle \operatorname {ham} (A)=\sum _{J\subseteq \{2,\dots ,n\}}\det(A_{J})\operatorname {det} (A_{\bar {J}})}what therefore provides an opportunity to polynomial-time calculate theHamiltonian cyclepolynomial of anyunitaryU{\displaystyle U}(i.e. such thatUTU=I{\displaystyle U^{\textsf {T}}U=I}whereI{\displaystyle I}is the identityn×n-matrix), because each minor of such a matrix coincides with its algebraic complement:ham⁡(U)=det2⁡(U+I/1){\displaystyle \operatorname {ham} (U)=\operatorname {det} ^{2}(U+I_{/1})}whereI/1{\displaystyle I_{/1}}is the identityn×n-matrix with the entry of indexes 1,1 replaced by 0. Moreover, it may, in turn, be further generalized for a unitaryn×n-matrixU{\displaystyle U}ashamK⁡(U)=det2⁡(U+I/K){\displaystyle \operatorname {ham_{K}} (U)=\operatorname {det} ^{2}(U+I_{/K})}whereK{\displaystyle K}is a subset of {1, ...,n},I/K{\displaystyle I_{/K}}is the identityn×n-matrix with the entries of indexesk,kreplaced by 0 for allkbelonging toK{\displaystyle K}, and we definehamK⁡(A)=∑σ∈Hn(K)∏i=1nai,σ(i){\textstyle \operatorname {ham_{K}} (A)=\sum _{\sigma \in H_{n}(K)}\prod _{i=1}^{n}a_{i,\sigma (i)}}whereHn(K){\displaystyle H_{n}(K)}is the set of n-permutations whose each cycle contains at least one element ofK{\displaystyle K}. This formula also implies the following identities over fields of characteristic 3: for anyinvertibleA{\displaystyle A} for anyunitaryU{\displaystyle U}, that is, a square matrixU{\displaystyle U}such thatUTU=I{\displaystyle U^{\textsf {T}}U=I}whereI{\displaystyle I}is the identity matrix of the corresponding size, whereV{\displaystyle V}is the matrix whose entries are the cubes of the corresponding entries ofU{\displaystyle U}. It was also shown (Kogan (1996)) that, if we define a square matrixA{\displaystyle A}as k-semi-unitary whenrank⁡(ATA−I)=k{\displaystyle \operatorname {rank} (A^{\textsf {T}}A-I)=k}, the permanent of a 1-semi-unitary matrix is computable in polynomial time over fields of characteristic 3, while fork> 1 the problem becomes#3-P-complete. (A parallel theory concerns theHamiltonian cyclepolynomial in characteristic 2: while computing it on the unitary matrices is polynomial-time feasible, the problem is #2-P-complete for the k-semi-unitary ones for anyk> 0). 
The latter result was essentially extended in 2017 (Knezevic & Cohen (2017)) and it was proven that in characteristic 3 there is a simple formula relating the permanents of a square matrix and its partial inverse (forA11{\displaystyle A_{11}}andA22{\displaystyle A_{22}}being square,A11{\displaystyle A_{11}}beinginvertible): per⁡(A11A12A21A22)=det2⁡(A11)per⁡(A11−1A11−1A12A21A11−1A22−A21A11−1A12){\displaystyle \operatorname {per} {\begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix}}=\operatorname {det} ^{2}(A_{11})\operatorname {per} {\begin{pmatrix}A_{11}^{-1}&A_{11}^{-1}A_{12}\\A_{21}A_{11}^{-1}&A_{22}-A_{21}A_{11}^{-1}A_{12}\end{pmatrix}}} and it allows to polynomial-time reduce the computation of the permanent of ann×n-matrix with a subset ofkork− 1 rows expressible as linear combinations of another (disjoint) subset of k rows to the computation of the permanent of an (n−k)×(n−k)- or (n−k+ 1)×(n−k+ 1)-matrix correspondingly, hence having introduced a compression operator (analogical to the Gaussian modification applied for calculating the determinant) that "preserves" the permanent in characteristic 3. (Analogically, it would be worth noting that theHamiltonian cyclepolynomial in characteristic 2 does possess its invariant matrix compressions as well, taking into account the fact that ham(A) = 0 for anyn×n-matrixAhaving three equal rows or, ifn> 2, a pair of indexesi,jsuch that itsi-th andj-th rows are identical and itsi-th andj-th columns are identical too.) The closure of that operator defined as the limit of its sequential application together with the transpose transformation (utilized each time the operator leaves the matrix intact) is also an operator mapping, when applied to classes of matrices, one class to another. While the compression operator maps the class of 1-semi-unitary matrices to itself and the classes ofunitaryand 2-semi-unitary ones, the compression-closure of the 1-semi-unitary class (as well as the class of matrices received from unitary ones through replacing one row by an arbitrary row vector — the permanent of such a matrix is, via the Laplace expansion, the sum of the permanents of 1-semi-unitary matrices and, accordingly, polynomial-time computable) is yet unknown and tensely related to the general problem of the permanent's computational complexity in characteristic 3 and the chief question ofP versus NP: as it was shown in (Knezevic & Cohen (2017)), if such a compression-closure is the set of all square matrices over a field of characteristic 3 or, at least, contains a matrix class the permanent's computation on is#3-P-complete(like the class of 2-semi-unitary matrices) then the permanent is computable in polynomial time in this characteristic. 
Besides, the problem of finding and classifying any possible analogs of the permanent-preserving compressions existing in characteristic 3 for other prime characteristics was formulated (Knezevic & Cohen (2017)), while giving the following identity for ann×nmatrixA{\displaystyle A}and twon-vectors (having all their entries from the set {0, ...,p− 1})α{\displaystyle \alpha }andβ{\displaystyle \beta }such that∑i=1nαi=∑j=1nβj{\textstyle {\sum _{i=1}^{n}\alpha _{i}=\sum _{j=1}^{n}\beta _{j}}}, valid in an arbitrary prime characteristicp: per⁡(A(α,β))=detp−1(A)per⁡(A−1)((p−1)1→n−β,(p−1)1→n−α)(∏i=1nαi!)(∏j=1nβj!)(−1)n+∑i=1nαi{\displaystyle \operatorname {per} (A^{(\alpha ,\beta )})=\det ^{p-1}(A)\operatorname {per} (A^{-1})^{((p-1){\vec {1}}_{n}-\beta ,(p-1){\vec {1}}_{n}-\alpha )}\left(\prod _{i=1}^{n}\alpha _{i}!\right)\left(\prod _{j=1}^{n}\beta _{j}!\right)(-1)^{n+\sum _{i=1}^{n}\alpha _{i}}} where for ann×m-matrixM{\displaystyle M}, an n-vectorx{\displaystyle x}and an m-vectory{\displaystyle y}, both vectors having all their entries from the set {0, ...,p− 1},M(x,y){\displaystyle M^{(x,y)}}denotes the matrix received fromM{\displaystyle M}via repeatingxi{\displaystyle x_{i}}times itsi-th row fori= 1, ...,nandyj{\displaystyle y_{j}}times itsj-th column forj= 1, ...,m(if some row's or column's multiplicity equals zero it would mean that the row or column was removed, and thus this notion is a generalization of the notion of submatrix), and1→n{\displaystyle {\vec {1}}_{n}}denotes the n-vector all whose entries equal unity. This identity is an exact analog of the classical formula expressing a matrix's minor through a minor of its inverse and hence demonstrates (once more) a kind of duality between the determinant and the permanent as relative immanants. (Actually its own analogue for the hafnian of a symmetricA{\displaystyle A}and an odd prime p ishaf2⁡(A(α,α))=detp−1(A)haf2⁡(A−1)((p−1)1→n−α,(p−1)1→n−α)(∏i=1nαi!)2(−1)n(p−1)/2+n+∑i=1nαi{\textstyle \operatorname {haf} ^{2}(A^{(\alpha ,\alpha )})=\det ^{p-1}(A)\operatorname {haf} ^{2}(A^{-1})^{((p-1){\vec {1}}_{n}-\alpha ,(p-1){\vec {1}}_{n}-\alpha )}\left(\prod _{i=1}^{n}\alpha _{i}!\right)^{2}(-1)^{n(p-1)/2+n+\sum _{i=1}^{n}\alpha _{i}}}). 
And, as an even wider generalization for the partial inverse case in a prime characteristic p, forA11{\displaystyle A_{11}},A22{\displaystyle A_{22}}being square,A11{\displaystyle A_{11}}beinginvertibleand of sizen1{\displaystyle {n_{1}}}xn1{\displaystyle {n_{1}}}, and∑i=1nαi=∑j=1nβj{\textstyle {\sum _{i=1}^{n}\alpha _{i}=\sum _{j=1}^{n}\beta _{j}}}, there holds also the identity per⁡(A11A12A21A22)(α,β)=detp−1(A11)per⁡(A11−1A11−1A12A21A11−1A22−A21A11−1A12)((p−1)1→n−β,(p−1)1→n−α)(∏i=1nα1,i!)(∏j=1nβ1,j!)(−1)n1+∑i=1nα1,i{\displaystyle \operatorname {per} {\begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix}}^{(\alpha ,\beta )}={\det }^{p-1}(A_{11})\operatorname {per} {\begin{pmatrix}A_{11}^{-1}&A_{11}^{-1}A_{12}\\A_{21}A_{11}^{-1}&A_{22}-A_{21}A_{11}^{-1}A_{12}\end{pmatrix}}^{((p-1){\vec {1}}_{n}-\beta ,(p-1){\vec {1}}_{n}-\alpha )}\left(\prod _{i=1}^{n}\alpha _{1,i}!\right)\left(\prod _{j=1}^{n}\beta _{1,j}!\right)(-1)^{n_{1}+\sum _{i=1}^{n}\alpha _{1,i}}} where the common row/column multiplicity vectorsα{\displaystyle \alpha }andβ{\displaystyle \beta }for the matrixA{\displaystyle A}generate the corresponding row/column multiplicity vectorsαs{\displaystyle \alpha _{s}}andβt{\displaystyle \beta _{t}}, s,t = 1,2, for its blocks (the same concernsA{\displaystyle A}'s partial inverse in the equality's right side). When the entries ofAare nonnegative, the permanent can be computedapproximatelyinprobabilisticpolynomial time, up to an error of εM, whereMis the value of the permanent and ε > 0 is arbitrary. In other words, there exists afully polynomial-time randomized approximation scheme(FPRAS) (Jerrum, Sinclair & Vigoda (2001)). The most difficult step in the computation is the construction of an algorithm tosamplealmostuniformlyfrom the set of all perfect matchings in a given bipartite graph: in other words, a fully polynomial almost uniform sampler (FPAUS). This can be done using aMarkov chain Monte Carloalgorithm that uses aMetropolis ruleto define and run aMarkov chainwhose distribution is close to uniform, and whosemixing timeis polynomial. It is possible to approximately count the number of perfect matchings in a graph via theself-reducibilityof the permanent, by using the FPAUS in combination with a well-known reduction from sampling to counting due toJerrum, Valiant & Vazirani (1986). LetM(G){\displaystyle M(G)}denote the number of perfect matchings inG{\displaystyle G}. Roughly, for any particular edgee{\displaystyle e}inG{\displaystyle G}, by sampling many matchings inG{\displaystyle G}and counting how many of them are matchings inG∖e{\displaystyle G\setminus e}, one can obtain an estimate of the ratioρ=M(G)M(G∖e){\textstyle \rho ={\frac {M(G)}{M(G\setminus e)}}}. The numberM(G){\displaystyle M(G)}is thenρM(G∖e){\displaystyle \rho M(G\setminus e)}, whereM(G∖e){\displaystyle M(G\setminus e)}can be approximated by applying the same method recursively. Another class of matrices for which the permanent is of particular interest, is thepositive-semidefinite matrices.[7]Using a technique ofStockmeyer counting, they can be computed within the classBPPNP{\displaystyle {\textsf {BPP}}^{\textsf {NP}}}, but this is considered an infeasible class in general. It is NP-hard to approximate permanents of PSD matrices within a subexponential factor, and it is conjectured to beBPPNP{\displaystyle {\textsf {BPP}}^{\textsf {NP}}}-hard[8]If further constraints on thespectrumare imposed, there are more efficient algorithms known. 
One randomized algorithm is based on the model of boson sampling and uses tools from quantum optics to represent the permanent of a positive-semidefinite matrix as the expected value of a specific random variable, which is then approximated by its sample mean.[9] For a certain set of positive-semidefinite matrices, this algorithm approximates the permanent in polynomial time up to an additive error that improves on the additive error guaranteed by the standard classical polynomial-time algorithm of Gurvits.[10]
https://en.wikipedia.org/wiki/Ryser_formula
TheHilbert curve(also known as theHilbert space-filling curve) is acontinuousfractalspace-filling curvefirst described by the German mathematicianDavid Hilbertin 1891,[1]as a variant of the space-fillingPeano curvesdiscovered byGiuseppe Peanoin 1890.[2] Because it is space-filling, itsHausdorff dimensionis 2 (precisely, its image is theunit square, whose dimension is 2 in any definition of dimension; its graph is a compact sethomeomorphicto the closed unit interval, with Hausdorff dimension 1). The Hilbert curve is constructed as a limit ofpiecewise linear curves. The length of then{\displaystyle n}th curve is2n−12n{\displaystyle \textstyle 2^{n}-{1 \over 2^{n}}}, i.e., the length grows exponentially withn{\displaystyle n}, even though each curve is contained in a square with area1{\displaystyle 1}. Both the true Hilbert curve and its discrete approximations are useful because they give a mapping between 1D and 2D space that preserves locality fairly well.[4]This means that two data points which are close to each other in one-dimensional space are also close to each other after folding. The converse cannot always be true. Because of this locality property, the Hilbert curve is widely used in computer science. For example, the range ofIP addressesused by computers can be mapped into a picture using the Hilbert curve. Code to generate the image would map from 2D to 1D to find the color of each pixel, and the Hilbert curve is sometimes used because it keeps nearby IP addresses close to each other in the picture.[5]The locality property of the Hilbert curve has also been used to design algorithms for exploring regions with mobile robots[6][7]and indexing geospatial location data.[8] In an algorithm called Riemersma dithering,grayscalephotographs can be converted to aditheredblack-and-white image using thresholding, with the leftover amount from each pixel added to the next pixel along the Hilbert curve. Code to do this would map from 1D to 2D, and the Hilbert curve is sometimes used because it does not create the distracting patterns that would be visible to the eye if the order were simply left to right across each row of pixels.[9]Hilbert curves in higher dimensions are an instance of a generalization ofGray codes, and are sometimes used for similar purposes, for similar reasons. For multidimensional databases, Hilbert order has been proposed to be used instead ofZ orderbecause it has better locality-preserving behavior. For example, Hilbert curves have been used to compress and accelerateR-treeindexes[10](seeHilbert R-tree). They have also been used to help compress data warehouses.[11][12] The linear distance of any point along the curve can be converted to coordinates inndimensions for a givenn, and vice versa, using any of several standard mathematical techniques such as Skilling's method.[13][14] It is possible to implement Hilbert curves efficiently even when the data space does not form a square.[15]Moreover, there are several possible generalizations of Hilbert curves to higher dimensions.[16][17] The Hilbert Curve can be expressed by arewrite system(L-system). Here, "F" means "draw forward", "+" means "turn left 90°", "-" means "turn right 90°" (seeturtle graphics), and "A" and "B" are ignored during drawing. Graphics Gems II[18][promotion?]discusses Hilbert curve coherency, and provides implementation. The Hilbert Curve is commonly used amongrenderingimages or videos. 
Common programs such as Blender and Cinema 4D use the Hilbert curve when tracing objects and rendering a scene. The slicer software used to convert 3D models into toolpaths for a 3D printer typically offers the Hilbert curve as an option for an infill pattern.
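One way to realize the distance-to-coordinate conversion mentioned above (here in two dimensions) is the standard bit-manipulation construction sketched below in Python; this is one of several equivalent methods and is offered only as an illustration.

```python
def hilbert_d2xy(order, d):
    """Map a distance d along the Hilbert curve of a 2**order x 2**order grid
    to (x, y) cell coordinates, processing two bits of d per level."""
    x = y = 0
    s = 1
    while s < (1 << order):
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:                    # rotate/reflect the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

# The 16 cells of the order-2 curve, visited in order; consecutive indices map
# to adjacent cells, illustrating the locality property discussed above.
print([hilbert_d2xy(2, d) for d in range(16)])
```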
https://en.wikipedia.org/wiki/Hilbert_curve
Inmathematics, aDirac measureassigns a size to a set based solely on whether it contains a fixed elementxor not. It is one way of formalizing the idea of theDirac delta function, an important tool in physics and other technical fields. ADirac measureis ameasureδxon a setX(with anyσ-algebraofsubsetsofX) defined for a givenx∈Xand any(measurable) setA⊆Xby where1Ais theindicator functionofA. The Dirac measure is aprobability measure, and in terms of probability it represents thealmost sureoutcomexin thesample spaceX. We can also say that the measure is a singleatomatx; however, treating the Dirac measure as an atomic measure is not correct when we consider the sequential definition of Dirac delta, as the limit of adelta sequence[dubious–discuss]. The Dirac measures are theextreme pointsof the convex set of probability measures onX. The name is a back-formation from theDirac delta function; considered as aSchwartz distribution, for example on thereal line, measures can be taken to be a special kind of distribution. The identity which, in the form is often taken to be part of the definition of the "delta function", holds as a theorem ofLebesgue integration. Letδxdenote the Dirac measure centred on some fixed pointxin somemeasurable space(X, Σ). Suppose that(X,T)is atopological spaceand thatΣis at least as fine as theBorelσ-algebraσ(T)onX. Adiscrete measureis similar to the Dirac measure, except that it is concentrated at countably many points instead of a single point. More formally, ameasureon thereal lineis called adiscrete measure(in respect to theLebesgue measure) if itssupportis at most acountable set.
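A minimal Python sketch of the definition, treating sets as ordinary Python containers; this is only an illustration on finite sample spaces, not a general measure-theoretic implementation.

```python
def dirac_measure(x):
    """delta_x(A) = 1 if x is in A, else 0 (here A is any Python container)."""
    return lambda A: 1 if x in A else 0

delta = dirac_measure(2)
print(delta({1, 2, 3}), delta({5, 7}))           # 1 0

# On a finite space, integrating f against delta_x just evaluates f at x.
X = range(10)
f = lambda t: t ** 2
print(sum(f(t) * delta({t}) for t in X), f(2))   # both 4
```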
https://en.wikipedia.org/wiki/Dirac_measure
Inmathematics, particularly inlinear algebra,tensor analysis, anddifferential geometry, theLevi-Civita symbolorLevi-Civita epsilonrepresents a collection of numbers defined from thesign of a permutationof thenatural numbers1, 2, ...,n, for some positive integern. It is named after the Italian mathematician and physicistTullio Levi-Civita. Other names include thepermutationsymbol,antisymmetric symbol, oralternating symbol, which refer to itsantisymmetricproperty and definition in terms of permutations. The standard letters to denote the Levi-Civita symbol are the Greek lower caseepsilonεorϵ, or less commonly the Latin lower casee. Index notation allows one to display permutations in a way compatible with tensor analysis:εi1i2…in{\displaystyle \varepsilon _{i_{1}i_{2}\dots i_{n}}}whereeachindexi1,i2, ...,intakes values1, 2, ...,n. There arennindexed values ofεi1i2...in, which can be arranged into ann-dimensional array. The key defining property of the symbol istotal antisymmetryin the indices. When any two indices are interchanged, equal or not, the symbol is negated:ε…ip…iq…=−ε…iq…ip….{\displaystyle \varepsilon _{\dots i_{p}\dots i_{q}\dots }=-\varepsilon _{\dots i_{q}\dots i_{p}\dots }.} If any two indices are equal, the symbol is zero. When all indices are unequal, we have:εi1i2…in=(−1)pε12…n,{\displaystyle \varepsilon _{i_{1}i_{2}\dots i_{n}}=(-1)^{p}\varepsilon _{1\,2\,\dots n},}wherep(called the parity of the permutation) is the number of pairwise interchanges of indices necessary to unscramblei1,i2, ...,ininto the order1, 2, ...,n, and the factor(−1)pis called thesign, or signatureof the permutation. The valueε1 2 ...nmust be defined, else the particular values of the symbol for all permutations are indeterminate. Most authors chooseε1 2 ...n= +1, which means the Levi-Civita symbol equals the sign of a permutation when the indices are all unequal. This choice is used throughout this article. The term "n-dimensional Levi-Civita symbol" refers to the fact that the number of indices on the symbolnmatches thedimensionalityof thevector spacein question, which may beEuclideanornon-Euclidean, for example,R3{\displaystyle \mathbb {R} ^{3}}orMinkowski space. The values of the Levi-Civita symbol are independent of anymetric tensorandcoordinate system. Also, the specific term "symbol" emphasizes that it is not atensorbecause of how it transforms between coordinate systems; however it can be interpreted as atensor density. The Levi-Civita symbol allows thedeterminantof a square matrix, and thecross productof two vectors in three-dimensional Euclidean space, to be expressed inEinstein index notation. The Levi-Civita symbol is most often used in three and four dimensions, and to some extent in two dimensions, so these are given here before defining the general case. Intwo dimensions, the Levi-Civita symbol is defined by:εij={+1if(i,j)=(1,2)−1if(i,j)=(2,1)0ifi=j{\displaystyle \varepsilon _{ij}={\begin{cases}+1&{\text{if }}(i,j)=(1,2)\\-1&{\text{if }}(i,j)=(2,1)\\\;\;\,0&{\text{if }}i=j\end{cases}}}The values can be arranged into a 2 × 2antisymmetric matrix:(ε11ε12ε21ε22)=(01−10){\displaystyle {\begin{pmatrix}\varepsilon _{11}&\varepsilon _{12}\\\varepsilon _{21}&\varepsilon _{22}\end{pmatrix}}={\begin{pmatrix}0&1\\-1&0\end{pmatrix}}} Use of the two-dimensional symbol is common in condensed matter, and in certain specialized high-energy topics likesupersymmetry[1]andtwistor theory,[2]where it appears in the context of 2-spinors. 
Inthree dimensions, the Levi-Civita symbol is defined by:[3]εijk={+1if(i,j,k)is(1,2,3),(2,3,1),or(3,1,2),−1if(i,j,k)is(3,2,1),(1,3,2),or(2,1,3),0ifi=j,orj=k,ork=i{\displaystyle \varepsilon _{ijk}={\begin{cases}+1&{\text{if }}(i,j,k){\text{ is }}(1,2,3),(2,3,1),{\text{ or }}(3,1,2),\\-1&{\text{if }}(i,j,k){\text{ is }}(3,2,1),(1,3,2),{\text{ or }}(2,1,3),\\\;\;\,0&{\text{if }}i=j,{\text{ or }}j=k,{\text{ or }}k=i\end{cases}}} That is,εijkis1if(i,j,k)is aneven permutationof(1, 2, 3),−1if it is anodd permutation, and 0 if any index is repeated. In three dimensions only, thecyclic permutationsof(1, 2, 3)are all even permutations, similarly theanticyclic permutationsare all odd permutations. This means in 3d it is sufficient to take cyclic or anticyclic permutations of(1, 2, 3)and easily obtain all the even or odd permutations. Analogous to 2-dimensional matrices, the values of the 3-dimensional Levi-Civita symbol can be arranged into a3 × 3 × 3array: whereiis the depth (blue:i= 1;red:i= 2;green:i= 3),jis the row andkis the column. Some examples:ε132=−ε123=−1ε312=−ε213=−(−ε123)=1ε231=−ε132=−(−ε123)=1ε232=−ε232=0{\displaystyle {\begin{aligned}\varepsilon _{\color {BrickRed}{1}\color {Violet}{3}\color {Orange}{2}}=-\varepsilon _{\color {BrickRed}{1}\color {Orange}{2}\color {Violet}{3}}&=-1\\\varepsilon _{\color {Violet}{3}\color {BrickRed}{1}\color {Orange}{2}}=-\varepsilon _{\color {Orange}{2}\color {BrickRed}{1}\color {Violet}{3}}&=-(-\varepsilon _{\color {BrickRed}{1}\color {Orange}{2}\color {Violet}{3}})=1\\\varepsilon _{\color {Orange}{2}\color {Violet}{3}\color {BrickRed}{1}}=-\varepsilon _{\color {BrickRed}{1}\color {Violet}{3}\color {Orange}{2}}&=-(-\varepsilon _{\color {BrickRed}{1}\color {Orange}{2}\color {Violet}{3}})=1\\\varepsilon _{\color {Orange}{2}\color {Violet}{3}\color {Orange}{2}}=-\varepsilon _{\color {Orange}{2}\color {Violet}{3}\color {Orange}{2}}&=0\end{aligned}}} Infour dimensions, the Levi-Civita symbol is defined by:εijkl={+1if(i,j,k,l)is an even permutation of(1,2,3,4)−1if(i,j,k,l)is an odd permutation of(1,2,3,4)0otherwise{\displaystyle \varepsilon _{ijkl}={\begin{cases}+1&{\text{if }}(i,j,k,l){\text{ is an even permutation of }}(1,2,3,4)\\-1&{\text{if }}(i,j,k,l){\text{ is an odd permutation of }}(1,2,3,4)\\\;\;\,0&{\text{otherwise}}\end{cases}}} These values can be arranged into a4 × 4 × 4 × 4array, although in 4 dimensions and higher this is difficult to draw. 
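A small Python sketch (using NumPy, an assumption beyond the text) that builds the 3 × 3 × 3 array of values of the three-dimensional symbol and reproduces the example values listed above, with indices shifted to start at 0.

```python
import numpy as np

eps = np.zeros((3, 3, 3), dtype=int)
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:   # even (cyclic) permutations
    eps[i, j, k] = 1
for i, j, k in [(2, 1, 0), (0, 2, 1), (1, 0, 2)]:   # odd (anticyclic) permutations
    eps[i, j, k] = -1

# 0-indexed versions of the examples above: eps_132, eps_312, eps_231, eps_232
print(eps[0, 2, 1], eps[2, 0, 1], eps[1, 2, 0], eps[1, 2, 1])   # -1 1 1 0
```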
Some examples:ε1432=−ε1234=−1ε2134=−ε1234=−1ε4321=−ε1324=−(−ε1234)=1ε3243=−ε3243=0{\displaystyle {\begin{aligned}\varepsilon _{\color {BrickRed}{1}\color {RedViolet}{4}\color {Violet}{3}\color {Orange}{\color {Orange}{2}}}=-\varepsilon _{\color {BrickRed}{1}\color {Orange}{\color {Orange}{2}}\color {Violet}{3}\color {RedViolet}{4}}&=-1\\\varepsilon _{\color {Orange}{\color {Orange}{2}}\color {BrickRed}{1}\color {Violet}{3}\color {RedViolet}{4}}=-\varepsilon _{\color {BrickRed}{1}\color {Orange}{\color {Orange}{2}}\color {Violet}{3}\color {RedViolet}{4}}&=-1\\\varepsilon _{\color {RedViolet}{4}\color {Violet}{3}\color {Orange}{\color {Orange}{2}}\color {BrickRed}{1}}=-\varepsilon _{\color {BrickRed}{1}\color {Violet}{3}\color {Orange}{\color {Orange}{2}}\color {RedViolet}{4}}&=-(-\varepsilon _{\color {BrickRed}{1}\color {Orange}{\color {Orange}{2}}\color {Violet}{3}\color {RedViolet}{4}})=1\\\varepsilon _{\color {Violet}{3}\color {Orange}{\color {Orange}{2}}\color {RedViolet}{4}\color {Violet}{3}}=-\varepsilon _{\color {Violet}{3}\color {Orange}{\color {Orange}{2}}\color {RedViolet}{4}\color {Violet}{3}}&=0\end{aligned}}} More generally, inndimensions, the Levi-Civita symbol is defined by:[4]εa1a2a3…an={+1if(a1,a2,a3,…,an)is an even permutation of(1,2,3,…,n)−1if(a1,a2,a3,…,an)is an odd permutation of(1,2,3,…,n)0otherwise{\displaystyle \varepsilon _{a_{1}a_{2}a_{3}\ldots a_{n}}={\begin{cases}+1&{\text{if }}(a_{1},a_{2},a_{3},\ldots ,a_{n}){\text{ is an even permutation of }}(1,2,3,\dots ,n)\\-1&{\text{if }}(a_{1},a_{2},a_{3},\ldots ,a_{n}){\text{ is an odd permutation of }}(1,2,3,\dots ,n)\\\;\;\,0&{\text{otherwise}}\end{cases}}} Thus, it is thesign of the permutationin the case of a permutation, and zero otherwise. Using thecapital pi notationΠfor ordinary multiplication of numbers, an explicit expression for the symbol is:[citation needed]εa1a2a3…an=∏1≤i<j≤nsgn⁡(aj−ai)=sgn⁡(a2−a1)sgn⁡(a3−a1)⋯sgn⁡(an−a1)sgn⁡(a3−a2)sgn⁡(a4−a2)⋯sgn⁡(an−a2)⋯sgn⁡(an−an−1){\displaystyle {\begin{aligned}\varepsilon _{a_{1}a_{2}a_{3}\ldots a_{n}}&=\prod _{1\leq i<j\leq n}\operatorname {sgn}(a_{j}-a_{i})\\&=\operatorname {sgn}(a_{2}-a_{1})\operatorname {sgn}(a_{3}-a_{1})\dotsm \operatorname {sgn}(a_{n}-a_{1})\operatorname {sgn}(a_{3}-a_{2})\operatorname {sgn}(a_{4}-a_{2})\dotsm \operatorname {sgn}(a_{n}-a_{2})\dotsm \operatorname {sgn}(a_{n}-a_{n-1})\end{aligned}}}where thesignum function(denotedsgn) returns the sign of its argument while discarding theabsolute valueif nonzero. The formula is valid for all index values, and for anyn(whenn= 0orn= 1, this is theempty product). However, computing the formula above naively has atime complexityofO(n2), whereas the sign can be computed from the parity of the permutation from itsdisjoint cyclesin onlyO(nlog(n))cost. A tensor whose components in anorthonormal basisare given by the Levi-Civita symbol (a tensor ofcovariantrankn) is sometimes called apermutation tensor. Under the ordinary transformation rules for tensors the Levi-Civita symbol is unchanged under pure rotations, consistent with that it is (by definition) the same in all coordinate systems related by orthogonal transformations. However, the Levi-Civita symbol is apseudotensorbecause under anorthogonal transformationofJacobian determinant−1, for example, areflectionin an odd number of dimensions, itshouldacquire a minus sign if it were a tensor. As it does not change at all, the Levi-Civita symbol is, by definition, a pseudotensor. 
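The product-of-signs formula above translates directly into Python; this naive version has the quadratic cost noted in the text.

```python
from math import prod

def sign(x):
    return (x > 0) - (x < 0)

def levi_civita(*a):
    """n-dimensional symbol via the product-of-signs formula (indices 1..n)."""
    return prod(sign(a[j] - a[i]) for i in range(len(a)) for j in range(i + 1, len(a)))

print(levi_civita(1, 2, 3), levi_civita(2, 1, 3), levi_civita(1, 1, 3))   # 1 -1 0
print(levi_civita(1, 4, 3, 2))   # -1, matching the four-dimensional example above
```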
As the Levi-Civita symbol is a pseudotensor, the result of taking a cross product is apseudovector, not a vector.[5] Under a generalcoordinate change, the components of the permutation tensor are multiplied by theJacobianof thetransformation matrix. This implies that in coordinate frames different from the one in which the tensor was defined, its components can differ from those of the Levi-Civita symbol by an overall factor. If the frame is orthonormal, the factor will be ±1 depending on whether the orientation of the frame is the same or not.[5] In index-free tensor notation, the Levi-Civita symbol is replaced by the concept of theHodge dual.[citation needed] Summation symbols can be eliminated by usingEinstein notation, where an index repeated between two or more terms indicates summation over that index. For example, In the following examples, Einstein notation is used. In two dimensions, when alli,j,m,neach take the values 1 and 2:[3] In three dimensions, when alli,j,k,m,neach take values 1, 2, and 3:[3] The Levi-Civita symbol is related to theKronecker delta. In three dimensions, the relationship is given by the following equations (vertical lines denote the determinant):[4] A special case of this result occurs when one of the indices is repeated and summed over: In Einstein notation, the duplication of theiindex implies the sum oni. The previous is then denotedεijkεimn=δjmδkn−δjnδkm. If two indices are repeated (and summed over), this further reduces to: Inndimensions, when alli1, ...,in,j1, ...,jntake values1, 2, ...,n:[citation needed] where the exclamation mark (!) denotes thefactorial, andδα...β...is thegeneralized Kronecker delta. For anyn, the property follows from the facts that The particular case of (8) withk=n−2{\textstyle k=n-2}isεi1…in−2jkεi1…in−2lm=(n−2)!(δjlδkm−δjmδkl).{\displaystyle \varepsilon _{i_{1}\dots i_{n-2}jk}\varepsilon ^{i_{1}\dots i_{n-2}lm}=(n-2)!(\delta _{j}^{l}\delta _{k}^{m}-\delta _{j}^{m}\delta _{k}^{l})\,.} In general, forndimensions, one can write the product of two Levi-Civita symbols as:εi1i2…inεj1j2…jn=|δi1j1δi1j2…δi1jnδi2j1δi2j2…δi2jn⋮⋮⋱⋮δinj1δinj2…δinjn|.{\displaystyle \varepsilon _{i_{1}i_{2}\dots i_{n}}\varepsilon _{j_{1}j_{2}\dots j_{n}}={\begin{vmatrix}\delta _{i_{1}j_{1}}&\delta _{i_{1}j_{2}}&\dots &\delta _{i_{1}j_{n}}\\\delta _{i_{2}j_{1}}&\delta _{i_{2}j_{2}}&\dots &\delta _{i_{2}j_{n}}\\\vdots &\vdots &\ddots &\vdots \\\delta _{i_{n}j_{1}}&\delta _{i_{n}j_{2}}&\dots &\delta _{i_{n}j_{n}}\\\end{vmatrix}}.}Proof:Both sides change signs upon switching two indices, so without loss of generality assumei1≤⋯≤in,j1≤⋯≤jn{\displaystyle i_{1}\leq \cdots \leq i_{n},j_{1}\leq \cdots \leq j_{n}}. If someic=ic+1{\displaystyle i_{c}=i_{c+1}}then left side is zero, and right side is also zero since two of its rows are equal. Similarly forjc=jc+1{\displaystyle j_{c}=j_{c+1}}. Finally, ifi1<⋯<in,j1<⋯<jn{\displaystyle i_{1}<\cdots <i_{n},j_{1}<\cdots <j_{n}}, then both sides are 1. For (1), both sides are antisymmetric with respect ofijandmn. We therefore only need to consider the casei≠jandm≠n. By substitution, we see that the equation holds forε12ε12, that is, fori=m= 1andj=n= 2. (Both sides are then one). Since the equation is antisymmetric inijandmn, any set of values for these can be reduced to the above case (which holds). The equation thus holds for all values ofijandmn. Using (1), we have for (2) Here we used theEinstein summation conventionwithigoing from 1 to 2. Next, (3) follows similarly from (2). 
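The epsilon–delta identities above are easy to verify numerically; a short check using NumPy (an assumption beyond the text):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[2, 1, 0] = eps[0, 2, 1] = eps[1, 0, 2] = -1
delta = np.eye(3)

# eps_ijk eps_imn = delta_jm delta_kn - delta_jn delta_km
lhs = np.einsum('ijk,imn->jkmn', eps, eps)
rhs = np.einsum('jm,kn->jkmn', delta, delta) - np.einsum('jn,km->jkmn', delta, delta)
print(np.array_equal(lhs, rhs))                                        # True

# contracting a second pair of indices: eps_ijk eps_ijn = 2 delta_kn
print(np.array_equal(np.einsum('ijk,ijn->kn', eps, eps), 2 * delta))   # True
```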
To establish (5), notice that both sides vanish wheni≠j. Indeed, ifi≠j, then one can not choosemandnsuch that both permutation symbols on the left are nonzero. Then, withi=jfixed, there are only two ways to choosemandnfrom the remaining two indices. For any such indices, we have (no summation), and the result follows. Then (6) follows since3! = 6and for any distinct indicesi,j,ktaking values1, 2, 3, we have In linear algebra, thedeterminantof a3 × 3square matrixA= [aij]can be written[6] Similarly the determinant of ann×nmatrixA= [aij]can be written as[5] where eachirshould be summed over1, ...,n, or equivalently: where now eachirand eachjrshould be summed over1, ...,n. More generally, we have the identity[5] Let(e1,e2,e3){\displaystyle (\mathbf {e_{1}} ,\mathbf {e_{2}} ,\mathbf {e_{3}} )}apositively orientedorthonormal basis of a vector space. If(a1,a2,a3)and(b1,b2,b3)are the coordinates of thevectorsaandbin this basis, then their cross product can be written as a determinant:[5] hence also using the Levi-Civita symbol, and more simply: In Einstein notation, the summation symbols may be omitted, and theith component of their cross product equals[4] The first component is then by cyclic permutations of1, 2, 3the others can be derived immediately, without explicitly calculating them from the above formulae: From the above expression for the cross product, we have: Ifc= (c1,c2,c3)is a third vector, then thetriple scalar productequals From this expression, it can be seen that the triple scalar product is antisymmetric when exchanging any pair of arguments. For example, IfF= (F1,F2,F3)is a vector field defined on someopen setofR3{\displaystyle \mathbb {R} ^{3}}as afunctionofpositionx= (x1,x2,x3)(usingCartesian coordinates). Then theith component of thecurlofFequals[4] which follows from the cross product expression above, substituting components of thegradientvectoroperator(nabla). In any arbitrarycurvilinear coordinate systemand even in the absence of ametricon themanifold, the Levi-Civita symbol as defined above may be considered to be atensor densityfield in two different ways. It may be regarded as acontravarianttensor density of weight +1 or as a covariant tensor density of weight −1. Inndimensions using the generalized Kronecker delta,[7][8] Notice that these are numerically identical. In particular, the sign is the same. On apseudo-Riemannian manifold, one may define a coordinate-invariant covariant tensor field whose coordinate representation agrees with the Levi-Civita symbol wherever the coordinate system is such that the basis of the tangent space is orthonormal with respect to the metric and matches a selected orientation. This tensor should not be confused with the tensor density field mentioned above. The presentation in this section closely followsCarroll 2004. The covariant Levi-Civita tensor (also known as theRiemannian volume form) in any coordinate system that matches the selected orientation is wheregabis the representation of the metric in that coordinate system. We can similarly consider a contravariant Levi-Civita tensor by raising the indices with the metric as usual, but notice that if themetric signaturecontains an odd number of negative eigenvaluesq, then the sign of the components of this tensor differ from the standard Levi-Civita symbol:[9] wheresgn(det[gab]) = (−1)q,εa1…an{\displaystyle \varepsilon _{a_{1}\dots a_{n}}}is the usual Levi-Civita symbol discussed in the rest of this article, and we used the definition of the metricdeterminantin the derivation. 
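The determinant, cross-product and triple-product expressions described above can likewise be spelled out and checked with numpy; the following sketch is an added illustration, with the standard component formulas written out in the comments:

import numpy as np

# Same rank-3 epsilon as in the previous sketch.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[k, j, i] = 1.0, -1.0

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
a, b, c = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

# det A = epsilon_ijk A_{1i} A_{2j} A_{3k}
assert np.isclose(np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2]), np.linalg.det(A))

# (a x b)_i = epsilon_ijk a_j b_k
assert np.allclose(np.einsum('ijk,j,k->i', eps, a, b), np.cross(a, b))

# triple scalar product a . (b x c) = epsilon_ijk a_i b_j c_k, antisymmetric under exchange of arguments
triple = np.einsum('ijk,i,j,k->', eps, a, b, c)
assert np.isclose(triple, np.dot(a, np.cross(b, c)))
assert np.isclose(np.einsum('ijk,i,j,k->', eps, b, a, c), -triple)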
More explicitly, when the tensor and basis orientation are chosen such thatE01…n=+|det[gab]|{\textstyle E_{01\dots n}=+{\sqrt {\left|\det[g_{ab}]\right|}}}, we have thatE01…n=sgn⁡(det[gab])|det[gab]|{\displaystyle E^{01\dots n}={\frac {\operatorname {sgn}(\det[g_{ab}])}{\sqrt {\left|\det[g_{ab}]\right|}}}}. From this we can infer the identity, where is the generalized Kronecker delta. In Minkowski space (the four-dimensionalspacetimeofspecial relativity), the covariant Levi-Civita tensor is where the sign depends on the orientation of the basis. The contravariant Levi-Civita tensor is The following are examples of the general identity above specialized to Minkowski space (with the negative sign arising from the odd number of negatives in the signature of the metric tensor in either sign convention): This article incorporates material fromLevi-Civita permutation symbolonPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
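As a numerical illustration of the sign statement above (an added sketch; the metric convention diag(+1, −1, −1, −1) with c = 1 is an assumption made for the example), one can build the rank-4 symbol, set E_{0123} = +√|det g|, raise all four indices with the inverse metric, and confirm that E^{0123} = sgn(det g)/√|det g| = −1:

import numpy as np
from itertools import permutations

def perm_sign(p):
    # sign of a permutation of (0, ..., n-1) via the pairwise product formula
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            s *= 1 if p[j] > p[i] else -1
    return s

eps4 = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps4[p] = perm_sign(p)

g = np.diag([1.0, -1.0, -1.0, -1.0])       # Minkowski metric, det g = -1 (odd number of minuses)
g_inv = np.linalg.inv(g)
detg = np.linalg.det(g)

E_low = np.sqrt(abs(detg)) * eps4          # covariant Levi-Civita tensor, E_{0123} = +sqrt|det g|
E_up = np.einsum('ae,bf,cg,dh,efgh->abcd', g_inv, g_inv, g_inv, g_inv, E_low)

# E^{0123} = sgn(det g)/sqrt|det g| = -1, i.e. it differs in sign from epsilon_{0123} = +1
assert np.isclose(E_up[0, 1, 2, 3], np.sign(detg) / np.sqrt(abs(detg)))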
https://en.wikipedia.org/wiki/Levi-Civita_symbol
In physics, Minkowski space (or Minkowski spacetime) (/mɪŋˈkɔːfski, -ˈkɒf-/[1]) is the main mathematical description of spacetime in the absence of gravitation. It combines inertial space and time manifolds into a four-dimensional model. The model helps show how a spacetime interval between any two events is independent of the inertial frame of reference in which they are recorded. Mathematician Hermann Minkowski developed it from the work of Hendrik Lorentz, Henri Poincaré, and others, and said it "was grown on experimental physical grounds". Minkowski space is closely associated with Einstein's theories of special relativity and general relativity and is the most common mathematical structure by which special relativity is formalized. While the individual components in Euclidean space and time might differ due to length contraction and time dilation, in Minkowski spacetime, all frames of reference will agree on the total interval in spacetime between events.[nb 1] Minkowski space differs from four-dimensional Euclidean space insofar as it treats time differently from the three spatial dimensions. In 3-dimensional Euclidean space, the isometry group (maps preserving the regular Euclidean distance) is the Euclidean group. It is generated by rotations, reflections and translations. When time is appended as a fourth dimension, the further transformations of translations in time and Lorentz boosts are added, and the group of all these transformations is called the Poincaré group. Minkowski's model follows special relativity, where motion causes time dilation, changing the scale applied to the frame in motion, and shifts the phase of light. Minkowski space is a pseudo-Euclidean space equipped with an isotropic quadratic form called the spacetime interval or the Minkowski norm squared. An event in Minkowski space for which the spacetime interval is zero is on the null cone of the origin, called the light cone in Minkowski space. Using the polarization identity the quadratic form is converted to a symmetric bilinear form called the Minkowski inner product, though it is not a geometric inner product. Another misnomer is Minkowski metric,[2] but Minkowski space is not a metric space. The group of transformations for Minkowski space that preserves the spacetime interval (as opposed to the spatial Euclidean distance) is the Lorentz group (as opposed to the Galilean group). In his second relativity paper in 1905, Henri Poincaré showed[3] how, by taking time to be an imaginary fourth spacetime coordinate ict, where c is the speed of light and i is the imaginary unit, Lorentz transformations can be visualized as ordinary rotations of the four-dimensional Euclidean sphere. The four-dimensional spacetime can be visualized as a four-dimensional space, with each point representing an event in spacetime. The Lorentz transformations can then be thought of as rotations in this four-dimensional space, where the rotation axis corresponds to the direction of relative motion between the two observers and the rotation angle is related to their relative velocity. To understand this concept, one should consider the coordinates of an event in spacetime represented as a four-vector (t, x, y, z). A Lorentz transformation is represented by a matrix that acts on the four-vector, changing its components.
This matrix can be thought of as a rotation matrix in four-dimensional space, which rotates the four-vector around a particular axis.x2+y2+z2+(ict)2=constant.{\displaystyle x^{2}+y^{2}+z^{2}+(ict)^{2}={\text{constant}}.} Rotations in planes spanned by two space unit vectors appear in coordinate space as well as in physical spacetime as Euclidean rotations and are interpreted in the ordinary sense. The "rotation" in a plane spanned by a space unit vector and a time unit vector, while formally still a rotation in coordinate space, is aLorentz boostin physical spacetime withrealinertial coordinates. The analogy with Euclidean rotations is only partial since the radius of the sphere is actually imaginary, which turns rotations into rotations in hyperbolic space (seehyperbolic rotation). This idea, which was mentioned only briefly by Poincaré, was elaborated by Minkowski in a paper inGermanpublished in 1908 called "The Fundamental Equations for Electromagnetic Processes in Moving Bodies".[4]He reformulatedMaxwell equationsas a symmetrical set of equations in the four variables(x,y,z,ict)combined with redefined vector variables for electromagnetic quantities, and he was able to show directly and very simply their invariance under Lorentz transformation. He also made other important contributions and used matrix notation for the first time in this context. From his reformulation, he concluded that time and space should be treated equally, and so arose his concept of events taking place in a unified four-dimensionalspacetime continuum. In a further development in his 1908 "Space and Time" lecture,[5]Minkowski gave an alternative formulation of this idea that used a real time coordinate instead of an imaginary one, representing the four variables(x,y,z,t)of space and time in the coordinate form in a four-dimensional realvector space. Points in this space correspond to events in spacetime. In this space, there is a definedlight-coneassociated with each point, and events not on the light cone are classified by their relation to the apex asspacelikeortimelike. It is principally this view of spacetime that is current nowadays, although the older view involving imaginary time has also influenced special relativity. In the English translation of Minkowski's paper, the Minkowski metric, as defined below, is referred to as theline element. The Minkowski inner product below appears unnamed when referring toorthogonality(which he callsnormality) of certain vectors, and the Minkowski norm squared is referred to (somewhat cryptically, perhaps this is a translation dependent) as "sum". Minkowski's principal tool is theMinkowski diagram, and he uses it to define concepts and demonstrate properties of Lorentz transformations (e.g.,proper timeandlength contraction) and to provide geometrical interpretation to the generalization of Newtonian mechanics torelativistic mechanics. For these special topics, see the referenced articles, as the presentation below will be principally confined to the mathematical structure (Minkowski metric and from it derived quantities and thePoincaré groupas symmetry group of spacetime)followingfrom the invariance of the spacetime interval on the spacetime manifold as consequences of the postulates of special relativity, not to specific application orderivationof the invariance of the spacetime interval. 
This structure provides the background setting of all present relativistic theories, barring general relativity for which flat Minkowski spacetime still provides a springboard as curved spacetime is locally Lorentzian. Minkowski, aware of the fundamental restatement of the theory which he had made, said: "The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth, space by itself and time by itself are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality." Though Minkowski took an important step for physics, Albert Einstein saw its limitation: "At a time when Minkowski was giving the geometrical interpretation of special relativity by extending the Euclidean three-space to a quasi-Euclidean four-space that included time, Einstein was already aware that this is not valid, because it excludes the phenomenon of gravitation. He was still far from the study of curvilinear coordinates and Riemannian geometry, and the heavy mathematical apparatus entailed."[6] For further historical information see references Galison (1979), Corry (1997) and Walter (1999). Where v is velocity, x, y, and z are Cartesian coordinates in 3-dimensional space, c is the constant representing the universal speed limit, and t is time, the four-dimensional vector v = (ct, x, y, z) = (ct, r) is classified according to the sign of c²t² − r². A vector is timelike if c²t² > r², spacelike if c²t² < r², and null or lightlike if c²t² = r². This can be expressed in terms of the sign of η(v, v), also called scalar product, as well, which depends on the signature. The classification of any vector will be the same in all frames of reference that are related by a Lorentz transformation (but not by a general Poincaré transformation because the origin may then be displaced) because of the invariance of the spacetime interval under Lorentz transformation. The set of all null vectors at an event[nb 2] of Minkowski space constitutes the light cone of that event. Given a timelike vector v, there is a worldline of constant velocity associated with it, represented by a straight line in a Minkowski diagram. Once a direction of time is chosen,[nb 3] timelike and null vectors can be further decomposed into various classes. For timelike vectors, one has future-directed timelike vectors, whose first component is positive, and past-directed timelike vectors, whose first component is negative. Null vectors fall into three classes: the zero vector, whose components in any basis are (0, 0, 0, 0); future-directed null vectors, whose first component is positive; and past-directed null vectors, whose first component is negative. Together with spacelike vectors, there are 6 classes in all. An orthonormal basis for Minkowski space necessarily consists of one timelike and three spacelike unit vectors. If one wishes to work with non-orthonormal bases, it is possible to have other combinations of vectors. For example, one can easily construct a (non-orthonormal) basis consisting entirely of null vectors, called a null basis. Vector fields are called timelike, spacelike, or null if the associated vectors are timelike, spacelike, or null at each point where the field is defined. Time-like vectors have special importance in the theory of relativity as they correspond to events that are accessible to the observer at (0, 0, 0, 0) with a speed less than that of light. Of most interest are time-like vectors that are similarly directed, i.e. all either in the forward or in the backward cones. Such vectors have several properties not shared by space-like vectors. These arise because both forward and backward cones are convex, whereas the space-like region is not convex.
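The classification just described is easy to mechanize. In the small sketch below (an added example with c = 1; the helper name is ad hoc), a four-vector (ct, x, y, z) is tagged as timelike, spacelike or null by the sign of c²t² − r², and, once a direction of time is chosen, as future- or past-directed:

def classify(v):
    # v = (ct, x, y, z); classify by the sign of c^2 t^2 - r^2 (here c = 1)
    ct, x, y, z = v
    s = ct * ct - (x * x + y * y + z * z)
    if s > 0:
        kind = "timelike"
    elif s < 0:
        kind = "spacelike"
    else:
        kind = "null (lightlike)"
    if s >= 0 and ct != 0:                  # timelike and nonzero null vectors acquire a direction
        kind += ", future-directed" if ct > 0 else ", past-directed"
    return kind

print(classify((2.0, 1.0, 0.0, 0.0)))   # timelike, future-directed
print(classify((1.0, 1.0, 0.0, 0.0)))   # null (lightlike), future-directed
print(classify((0.0, 0.0, 0.0, 0.0)))   # null (lightlike): the zero vector, no direction
print(classify((1.0, 3.0, 0.0, 0.0)))   # spacelike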
Thescalar productof two time-like vectorsu1= (t1,x1,y1,z1)andu2= (t2,x2,y2,z2)isη(u1,u2)=u1⋅u2=c2t1t2−x1x2−y1y2−z1z2.{\displaystyle \eta (u_{1},u_{2})=u_{1}\cdot u_{2}=c^{2}t_{1}t_{2}-x_{1}x_{2}-y_{1}y_{2}-z_{1}z_{2}.} Positivity of scalar product: An important property is that the scalar product of two similarly directed time-like vectors is always positive. This can be seen from the reversedCauchy–Schwarz inequalitybelow. It follows that if the scalar product of two vectors is zero, then one of these, at least, must be space-like. The scalar product of two space-like vectors can be positive or negative as can be seen by considering the product of two space-like vectors having orthogonal spatial components and times either of different or the same signs. Using the positivity property of time-like vectors, it is easy to verify that a linear sum with positive coefficients of similarly directed time-like vectors is also similarly directed time-like (the sum remains within the light cone because of convexity). The norm of a time-like vectoru= (ct,x,y,z)is defined as‖u‖=η(u,u)=c2t2−x2−y2−z2{\displaystyle \left\|u\right\|={\sqrt {\eta (u,u)}}={\sqrt {c^{2}t^{2}-x^{2}-y^{2}-z^{2}}}} The reversed Cauchy inequalityis another consequence of the convexity of either light cone.[7]For two distinct similarly directed time-like vectorsu1andu2this inequality isη(u1,u2)>‖u1‖‖u2‖{\displaystyle \eta (u_{1},u_{2})>\left\|u_{1}\right\|\left\|u_{2}\right\|}or algebraically,c2t1t2−x1x2−y1y2−z1z2>(c2t12−x12−y12−z12)(c2t22−x22−y22−z22){\displaystyle c^{2}t_{1}t_{2}-x_{1}x_{2}-y_{1}y_{2}-z_{1}z_{2}>{\sqrt {\left(c^{2}t_{1}^{2}-x_{1}^{2}-y_{1}^{2}-z_{1}^{2}\right)\left(c^{2}t_{2}^{2}-x_{2}^{2}-y_{2}^{2}-z_{2}^{2}\right)}}} From this, the positive property of the scalar product can be seen. For two similarly directed time-like vectorsuandw, the inequality is[8]‖u+w‖≥‖u‖+‖w‖,{\displaystyle \left\|u+w\right\|\geq \left\|u\right\|+\left\|w\right\|,}where the equality holds when the vectors arelinearly dependent. The proof uses the algebraic definition with the reversed Cauchy inequality:[9]‖u+w‖2=‖u‖2+2(u,w)+‖w‖2≥‖u‖2+2‖u‖‖w‖+‖w‖2=(‖u‖+‖w‖)2.{\displaystyle {\begin{aligned}\left\|u+w\right\|^{2}&=\left\|u\right\|^{2}+2\left(u,w\right)+\left\|w\right\|^{2}\\[5mu]&\geq \left\|u\right\|^{2}+2\left\|u\right\|\left\|w\right\|+\left\|w\right\|^{2}=\left(\left\|u\right\|+\left\|w\right\|\right)^{2}.\end{aligned}}} The result now follows by taking the square root on both sides. It is assumed below that spacetime is endowed with a coordinate system corresponding to aninertial frame. This provides anorigin, which is necessary for spacetime to be modeled as a vector space. This addition is not required, and more complex treatments analogous to anaffine spacecan remove the extra structure. However, this is not the introductory convention and is not covered here. For an overview, Minkowski space is a4-dimensionalrealvector spaceequipped with a non-degenerate,symmetric bilinear formon thetangent spaceat each point in spacetime, here simply called theMinkowski inner product, withmetric signatureeither(+ − − −)or(− + + +). The tangent space at each event is a vector space of the same dimension as spacetime,4. In practice, one need not be concerned with the tangent spaces. The vector space structure of Minkowski space allows for the canonical identification of vectors in tangent spaces at points (events) with vectors (points, events) in Minkowski space itself. See e.g.Lee (2003, Proposition 3.8.) orLee (2012, Proposition 3.13.) 
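Both the reversed Cauchy inequality and the reversed triangle inequality ‖u + w‖ ≥ ‖u‖ + ‖w‖ can be checked numerically on randomly generated, similarly directed time-like vectors. The following sketch is an added illustration, with c = 1 and signature (+ − − −):

import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])                 # c = 1, signature (+ - - -)

def mink(u, v):
    return u @ eta @ v

def mink_norm(u):                                      # used here only for time-like u, eta(u, u) > 0
    return np.sqrt(mink(u, u))

rng = np.random.default_rng(1)

def random_future_timelike():
    x = rng.normal(size=3)
    t = np.linalg.norm(x) + rng.uniform(0.1, 2.0)      # t > |x| makes the vector future-directed time-like
    return np.concatenate(([t], x))

for _ in range(1000):
    u, w = random_future_timelike(), random_future_timelike()
    # reversed Cauchy inequality: eta(u, w) >= ||u|| ||w||, so the product of two such vectors is positive
    assert mink(u, w) >= mink_norm(u) * mink_norm(w) - 1e-9
    # reversed triangle inequality: ||u + w|| >= ||u|| + ||w||  (u + w stays inside the light cone)
    assert mink_norm(u + w) >= mink_norm(u) + mink_norm(w) - 1e-9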
These identifications are routinely done in mathematics. They can be expressed formally in Cartesian coordinates as[10](x0,x1,x2,x3)↔x0e0|p+x1e1|p+x2e2|p+x3e3|p↔x0e0|q+x1e1|q+x2e2|q+x3e3|q{\displaystyle {\begin{aligned}\left(x^{0},\,x^{1},\,x^{2},\,x^{3}\right)\ &\leftrightarrow \ \left.x^{0}\mathbf {e} _{0}\right|_{p}+\left.x^{1}\mathbf {e} _{1}\right|_{p}+\left.x^{2}\mathbf {e} _{2}\right|_{p}+\left.x^{3}\mathbf {e} _{3}\right|_{p}\\&\leftrightarrow \ \left.x^{0}\mathbf {e} _{0}\right|_{q}+\left.x^{1}\mathbf {e} _{1}\right|_{q}+\left.x^{2}\mathbf {e} _{2}\right|_{q}+\left.x^{3}\mathbf {e} _{3}\right|_{q}\end{aligned}}}with basis vectors in the tangent spaces defined byeμ|p=∂∂xμ|pore0|p=(1000), etc.{\displaystyle \left.\mathbf {e} _{\mu }\right|_{p}=\left.{\frac {\partial }{\partial x^{\mu }}}\right|_{p}{\text{ or }}\mathbf {e} _{0}|_{p}=\left({\begin{matrix}1\\0\\0\\0\end{matrix}}\right){\text{, etc}}.} Here,pandqare any two events, and the second basis vector identification is referred to asparallel transport. The first identification is the canonical identification of vectors in the tangent space at any point with vectors in the space itself. The appearance of basis vectors in tangent spaces as first-order differential operators is due to this identification. It is motivated by the observation that a geometrical tangent vector can be associated in a one-to-one manner with adirectional derivativeoperator on the set of smooth functions. This is promoted to adefinitionof tangent vectors in manifoldsnotnecessarily being embedded inRn. This definition of tangent vectors is not the only possible one, as ordinaryn-tuples can be used as well. A tangent vector at a pointpmay be defined, here specialized to Cartesian coordinates in Lorentz frames, as4 × 1column vectorsvassociated toeachLorentz frame related by Lorentz transformationΛsuch that the vectorvin a frame related to some frame byΛtransforms according tov→ Λv. This is thesameway in which the coordinatesxμtransform. Explicitly,x′μ=Λμνxν,v′μ=Λμνvν.{\displaystyle {\begin{aligned}x'^{\mu }&={\Lambda ^{\mu }}_{\nu }x^{\nu },\\v'^{\mu }&={\Lambda ^{\mu }}_{\nu }v^{\nu }.\end{aligned}}} This definition is equivalent to the definition given above under a canonical isomorphism. For some purposes, it is desirable to identify tangent vectors at a pointpwithdisplacement vectorsatp, which is, of course, admissible by essentially the same canonical identification.[11]The identifications of vectors referred to above in the mathematical setting can correspondingly be found in a more physical and explicitly geometrical setting inMisner, Thorne & Wheeler (1973). They offer various degrees of sophistication (and rigor) depending on which part of the material one chooses to read. The metric signature refers to which sign the Minkowski inner product yields when given space (spaceliketo be specific, defined further down) and time basis vectors (timelike) as arguments. Further discussion about this theoretically inconsequential but practically necessary choice for purposes of internal consistency and convenience is deferred to the hide box below. See also the page treatingsign conventionin Relativity. In general, but with several exceptions, mathematicians and general relativists prefer spacelike vectors to yield a positive sign,(− + + +), while particle physicists tend to prefer timelike vectors to yield a positive sign,(+ − − −). 
Authors covering several areas of physics, e.g.Steven WeinbergandLandau and Lifshitz((− + + +)and(+ − − −)respectively) stick to one choice regardless of topic. Arguments for the former convention include "continuity" from the Euclidean case corresponding to the non-relativistic limitc→ ∞. Arguments for the latter include that minus signs, otherwise ubiquitous in particle physics, go away. Yet other authors, especially of introductory texts, e.g.Kleppner & Kolenkow (1978), donotchoose a signature at all, but instead, opt to coordinatize spacetime such that the timecoordinate(but not time itself!) is imaginary. This removes the need for theexplicitintroduction of ametric tensor(which may seem like an extra burden in an introductory course), and one needsnotbe concerned withcovariant vectorsandcontravariant vectors(or raising and lowering indices) to be described below. The inner product is instead affected by a straightforward extension of thedot productinR3toR3×C. This works in the flat spacetime of special relativity, but not in the curved spacetime of general relativity, seeMisner, Thorne & Wheeler (1973, Box 2.1, Farewell toict) (who, by the way use(− + + +)). MTW also argues that it hides the trueindefinitenature of the metric and the true nature of Lorentz boosts, which are not rotations. It also needlessly complicates the use of tools ofdifferential geometrythat are otherwise immediately available and useful for geometrical description and calculation – even in the flat spacetime of special relativity, e.g. of the electromagnetic field. Mathematically associated with the bilinear form is atensorof type(0,2)at each point in spacetime, called theMinkowski metric.[nb 4]The Minkowski metric, the bilinear form, and the Minkowski inner product are all the same object; it is a bilinear function that accepts two (contravariant) vectors and returns a real number. In coordinates, this is the4×4matrix representing the bilinear form. For comparison, ingeneral relativity, aLorentzian manifoldLis likewise equipped with ametric tensorg, which is a nondegenerate symmetric bilinear form on the tangent spaceTpLat each pointpofL. In coordinates, it may be represented by a4×4matrixdepending on spacetime position. Minkowski space is thus a comparatively simple special case of aLorentzian manifold. Its metric tensor is in coordinates with the same symmetric matrix at every point ofM, and its arguments can, per above, be taken as vectors in spacetime itself. Introducing more terminology (but not more structure), Minkowski space is thus apseudo-Euclidean spacewith total dimensionn= 4andsignature(1, 3)or(3, 1). Elements of Minkowski space are calledevents. Minkowski space is often denotedR1,3orR3,1to emphasize the chosen signature, or justM. It is an example of apseudo-Riemannian manifold. Then mathematically, the metric is a bilinear form on an abstract four-dimensional real vector spaceV, that is,η:V×V→R{\displaystyle \eta :V\times V\rightarrow \mathbf {R} }whereηhas signature(+, -, -, -), and signature is a coordinate-invariant property ofη. The space of bilinear maps forms a vector space which can be identified withM∗⊗M∗{\displaystyle M^{*}\otimes M^{*}}, andηmay be equivalently viewed as an element of this space. By making a choice of orthonormal basis{eμ}{\displaystyle \{e_{\mu }\}},M:=(V,η){\displaystyle M:=(V,\eta )}can be identified with the spaceR1,3:=(R4,ημν){\displaystyle \mathbf {R} ^{1,3}:=(\mathbf {R} ^{4},\eta _{\mu \nu })}. 
The notation is meant to emphasize the fact thatMandR1,3{\displaystyle \mathbf {R} ^{1,3}}are not just vector spaces but have added structure.ημν=diag(+1,−1,−1,−1){\displaystyle \eta _{\mu \nu }={\text{diag}}(+1,-1,-1,-1)}. An interesting example of non-inertial coordinates for (part of) Minkowski spacetime is theBorn coordinates. Another useful set of coordinates is thelight-cone coordinates. The Minkowski inner product is not aninner product, since it has non-zeronull vectors. Since it is not adefinite bilinear formit is calledindefinite. The Minkowski metricηis the metric tensor of Minkowski space. It is a pseudo-Euclidean metric, or more generally, aconstantpseudo-Riemannian metric in Cartesian coordinates. As such, it is a nondegenerate symmetric bilinear form, a type(0, 2)tensor. It accepts two argumentsup,vp, vectors inTpM,p∈M, the tangent space atpinM. Due to the above-mentioned canonical identification ofTpMwithMitself, it accepts argumentsu,vwith bothuandvinM. As a notational convention, vectorsvinM, called4-vectors, are denoted in italics, and not, as is common in the Euclidean setting, with boldfacev. The latter is generally reserved for the3-vector part (to be introduced below) of a4-vector. The definition[12]u⋅v=η(u,v){\displaystyle u\cdot v=\eta (u,\,v)}yields an inner product-like structure onM, previously and also henceforth, called theMinkowski inner product, similar to the Euclideaninner product, but it describes a different geometry. It is also called therelativistic dot product. If the two arguments are the same,u⋅u=η(u,u)≡‖u‖2≡u2,{\displaystyle u\cdot u=\eta (u,u)\equiv \|u\|^{2}\equiv u^{2},}the resulting quantity will be called theMinkowski norm squared. The Minkowski inner product satisfies the following properties. The first two conditions imply bilinearity. The most important feature of the inner product and norm squared is thatthese are quantities unaffected by Lorentz transformations. In fact, it can be taken as the defining property of a Lorentz transformation in that it preserves the inner product (i.e. the value of the corresponding bilinear form on two vectors). This approach is taken more generally forallclassical groups definable this way inclassical group. There, the matrixΦis identical in the caseO(3, 1)(the Lorentz group) to the matrixηto be displayed below. Minkowski space is constructed so that thespeed of lightwill be the same constant regardless of the reference frame in which it is measured. This property results from the relation of the time axis to a space axis. Two eventsuandvareorthogonalwhen the bilinear form is zero for them:η(v,w) = 0. When bothuandvare both space-like, then they areperpendicular, but if one is time-like and the other space-like, then the relation ishyperbolic orthogonality. The relation is preserved in a change of reference frames and consequently the computation of light speed yields a constant result. The change of reference frame is called aLorentz boostand in mathematics it is ahyperbolic rotation. Each reference frame is associated with ahyperbolic angle, which is zero for the rest frame in Minkowski space. Such a hyperbolic angle has been labelledrapiditysince it is associated with the speed of the frame. 
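That preservation of the inner product can be taken as the defining property of a Lorentz transformation is easy to verify for an explicit boost. The sketch below (an added example, assuming the textbook form of a boost along x with c = 1 and signature (+ − − −)) checks ΛᵀηΛ = η and the invariance of η(u, v):

import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def boost_x(beta):
    # standard boost along the x-axis with velocity beta = v/c
    gamma = 1.0 / np.sqrt(1.0 - beta * beta)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

L = boost_x(0.6)

# defining property: the bilinear form is preserved, Lambda^T eta Lambda = eta
assert np.allclose(L.T @ eta @ L, eta)

# hence eta(Lu, Lv) = eta(u, v) for arbitrary four-vectors u, v
rng = np.random.default_rng(2)
u, v = rng.normal(size=4), rng.normal(size=4)
assert np.isclose((L @ u) @ eta @ (L @ v), u @ eta @ v)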
From thesecond postulate of special relativity, together with homogeneity of spacetime and isotropy of space, it follows that thespacetime intervalbetween two arbitrary events called1and2is:[13]c2(t1−t2)2−(x1−x2)2−(y1−y2)2−(z1−z2)2.{\displaystyle c^{2}\left(t_{1}-t_{2}\right)^{2}-\left(x_{1}-x_{2}\right)^{2}-\left(y_{1}-y_{2}\right)^{2}-\left(z_{1}-z_{2}\right)^{2}.}This quantity is not consistently named in the literature. The interval is sometimes referred to as the square root of the interval as defined here.[14][15] The invariance of the interval under coordinate transformations between inertial frames follows from the invariance ofc2t2−x2−y2−z2{\displaystyle c^{2}t^{2}-x^{2}-y^{2}-z^{2}}provided the transformations are linear. Thisquadratic formcan be used to define a bilinear formu⋅v=c2t1t2−x1x2−y1y2−z1z2{\displaystyle u\cdot v=c^{2}t_{1}t_{2}-x_{1}x_{2}-y_{1}y_{2}-z_{1}z_{2}}via thepolarization identity. This bilinear form can in turn be written asu⋅v=uT[η]v,{\displaystyle u\cdot v=u^{\textsf {T}}\,[\eta ]\,v,}where[η]is a4×4{\displaystyle 4\times 4}matrix associated withη. While possibly confusing, it is common practice to denote[η]with justη. The matrix is read off from the explicit bilinear form asη=(10000−10000−10000−1),{\displaystyle \eta =\left({\begin{array}{r}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{array}}\right)\!,}and the bilinear formu⋅v=η(u,v),{\displaystyle u\cdot v=\eta (u,v),}with which this section started by assuming its existence, is now identified. For definiteness and shorter presentation, the signature(− + + +)is adopted below. This choice (or the other possible choice) has no (known) physical implications. The symmetry group preserving the bilinear form with one choice of signature is isomorphic (under the map givenhere) with the symmetry group preserving the other choice of signature. This means that both choices are in accord with the two postulates of relativity. Switching between the two conventions is straightforward. If the metric tensorηhas been used in a derivation, go back to the earliest point where it was used, substituteηfor−η, and retrace forward to the desired formula with the desired metric signature. A standard or orthonormal basis for Minkowski space is a set of four mutually orthogonal vectors{e0,e1,e2,e3}such thatη(e0,e0)=−η(e1,e1)=−η(e2,e2)=−η(e3,e3)=1{\displaystyle \eta (e_{0},e_{0})=-\eta (e_{1},e_{1})=-\eta (e_{2},e_{2})=-\eta (e_{3},e_{3})=1}and for whichη(eμ,eν)=0{\displaystyle \eta (e_{\mu },e_{\nu })=0}whenμ≠ν.{\textstyle \mu \neq \nu \,.} These conditions can be written compactly in the formη(eμ,eν)=ημν.{\displaystyle \eta (e_{\mu },e_{\nu })=\eta _{\mu \nu }.} Relative to a standard basis, the components of a vectorvare written(v0,v1,v2,v3)where theEinstein notationis used to writev=vμeμ. The componentv0is called thetimelike componentofvwhile the other three components are called thespatial components. The spatial components of a4-vectorvmay be identified with a3-vectorv= (v1,v2,v3). In terms of components, the Minkowski inner product between two vectorsvandwis given by η(v,w)=ημνvμwν=v0w0+v1w1+v2w2+v3w3=vμwμ=vμwμ,{\displaystyle \eta (v,w)=\eta _{\mu \nu }v^{\mu }w^{\nu }=v^{0}w_{0}+v^{1}w_{1}+v^{2}w_{2}+v^{3}w_{3}=v^{\mu }w_{\mu }=v_{\mu }w^{\mu },}andη(v,v)=ημνvμvν=v0v0+v1v1+v2v2+v3v3=vμvμ.{\displaystyle \eta (v,v)=\eta _{\mu \nu }v^{\mu }v^{\nu }=v^{0}v_{0}+v^{1}v_{1}+v^{2}v_{2}+v^{3}v_{3}=v^{\mu }v_{\mu }.} Herelowering of an indexwith the metric was used. 
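In components the matrix [η] does double duty: it evaluates the bilinear form as uᵀ[η]v and lowers indices, vμ = ημν v^ν, so that η(v, w) = vμ w^μ. A minimal bookkeeping sketch (an added illustration using the diag(1, −1, −1, −1) matrix displayed above):

import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])        # the matrix [eta] read off above
eta_inv = np.linalg.inv(eta)                   # numerically equal to eta itself in this basis

v = np.array([2.0, 1.0, 0.5, -0.3])            # contravariant components v^mu
w = np.array([1.5, -0.2, 0.0, 1.0])

v_low = eta @ v                                # lowering the index: v_mu = eta_{mu nu} v^nu
assert np.isclose(v @ eta @ w, np.dot(v_low, w))   # eta(v, w) = v_mu w^mu
assert np.isclose(v @ eta @ v, np.dot(v_low, v))   # the Minkowski norm squared, v^mu v_mu
assert np.allclose(eta_inv @ v_low, v)             # raising the index recovers v^mu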
There are many possible choices of standard basis obeying the conditionη(eμ,eν)=ημν.{\displaystyle \eta (e_{\mu },e_{\nu })=\eta _{\mu \nu }.}Any two such bases are related in some sense by a Lorentz transformation, either by a change-of-basis matrixΛνμ{\displaystyle \Lambda _{\nu }^{\mu }}, a real4 × 4matrix satisfyingΛρμημνΛσν=ηρσ.{\displaystyle \Lambda _{\rho }^{\mu }\eta _{\mu \nu }\Lambda _{\sigma }^{\nu }=\eta _{\rho \sigma }.}orΛ, a linear map on the abstract vector space satisfying, for any pair of vectorsu,v,η(Λu,Λv)=η(u,v).{\displaystyle \eta (\Lambda u,\Lambda v)=\eta (u,v).} Then if two different bases exist,{e0,e1,e2,e3}and{e′0,e′1,e′2,e′3},eμ′=eνΛμν{\displaystyle e_{\mu }'=e_{\nu }\Lambda _{\mu }^{\nu }}can be represented aseμ′=eνΛμν{\displaystyle e_{\mu }'=e_{\nu }\Lambda _{\mu }^{\nu }}oreμ′=Λeμ{\displaystyle e_{\mu }'=\Lambda e_{\mu }}. While it might be tempting to think ofΛνμ{\displaystyle \Lambda _{\nu }^{\mu }}andΛas the same thing, mathematically, they are elements of different spaces, and act on the space of standard bases from different sides. Technically, a non-degenerate bilinear form provides a map between a vector space and its dual; in this context, the map is between the tangent spaces ofMand thecotangent spacesofM. At a point inM, the tangent and cotangent spaces aredual vector spaces(so the dimension of the cotangent space at an event is also4). Just as an authentic inner product on a vector space with one argument fixed, byRiesz representation theorem, may be expressed as the action of alinear functionalon the vector space, the same holds for the Minkowski inner product of Minkowski space.[17] Thus ifvμare the components of a vector in tangent space, thenημνvμ=vνare the components of a vector in the cotangent space (a linear functional). Due to the identification of vectors in tangent spaces with vectors inMitself, this is mostly ignored, and vectors with lower indices are referred to ascovariant vectors. In this latter interpretation, the covariant vectors are (almost always implicitly) identified with vectors (linear functionals) in the dual of Minkowski space. The ones with upper indices arecontravariant vectors. In the same fashion, the inverse of the map from tangent to cotangent spaces, explicitly given by the inverse ofηin matrix representation, can be used to defineraising of an index. The components of this inverse are denotedημν. It happens thatημν=ημν. These maps between a vector space and its dual can be denotedη♭(eta-flat) andη♯(eta-sharp) by the musical analogy.[18] Contravariant and covariant vectors are geometrically very different objects. The first can and should be thought of as arrows. A linear function can be characterized by two objects: itskernel, which is ahyperplanepassing through the origin, and its norm. Geometrically thus, covariant vectors should be viewed as a set of hyperplanes, with spacing depending on the norm (bigger = smaller spacing), with one of them (the kernel) passing through the origin. The mathematical term for a covariant vector is 1-covector or1-form(though the latter is usually reserved for covectorfields). One quantum mechanical analogy explored in the literature is that of ade Broglie wave(scaled by a factor of Planck's reduced constant) associated with amomentum four-vectorto illustrate how one could imagine a covariant version of a contravariant vector. 
The inner product of two contravariant vectors could equally well be thought of as the action of the covariant version of one of them on the contravariant version of the other. The inner product is then how many times the arrow pierces the planes.[16]The mathematical reference,Lee (2003), offers the same geometrical view of these objects (but mentions no piercing). Theelectromagnetic field tensoris adifferential 2-form, which geometrical description can as well be found in MTW. One may, of course, ignore geometrical views altogether (as is the style in e.g.Weinberg (2002)andLandau & Lifshitz 2002) and proceed algebraically in a purely formal fashion. The time-proven robustness of the formalism itself, sometimes referred to asindex gymnastics, ensures that moving vectors around and changing from contravariant to covariant vectors and vice versa (as well as higher order tensors) is mathematically sound. Incorrect expressions tend to reveal themselves quickly. Given a bilinear formη:M×M→R{\displaystyle \eta :M\times M\rightarrow \mathbf {R} }, the lowered version of a vector can be thought of as the partial evaluation ofη{\displaystyle \eta }, that is, there is an associated partial evaluation mapη(⋅,−):M→M∗;v↦η(v,⋅).{\displaystyle \eta (\cdot ,-):M\rightarrow M^{*};v\mapsto \eta (v,\cdot ).} The lowered vectorη(v,⋅)∈M∗{\displaystyle \eta (v,\cdot )\in M^{*}}is then the dual mapu↦η(v,u){\displaystyle u\mapsto \eta (v,u)}. Note it does not matter which argument is partially evaluated due to the symmetry ofη{\displaystyle \eta }. Non-degeneracy is then equivalent to injectivity of the partial evaluation map, or equivalently non-degeneracy indicates that the kernel of the map is trivial. In finite dimension, as is the case here, and noting that the dimension of a finite-dimensional space is equal to the dimension of the dual, this is enough to conclude the partial evaluation map is a linear isomorphism fromM{\displaystyle M}toM∗{\displaystyle M^{*}}. This then allows the definition of the inverse partial evaluation map,η−1:M∗→M,{\displaystyle \eta ^{-1}:M^{*}\rightarrow M,}which allows the inverse metric to be defined asη−1:M∗×M∗→R,η−1(α,β)=η(η−1(α),η−1(β)){\displaystyle \eta ^{-1}:M^{*}\times M^{*}\rightarrow \mathbf {R} ,\eta ^{-1}(\alpha ,\beta )=\eta (\eta ^{-1}(\alpha ),\eta ^{-1}(\beta ))}where the two different usages ofη−1{\displaystyle \eta ^{-1}}can be told apart by the argument each is evaluated on. This can then be used to raise indices. If a coordinate basis is used, the metricη−1is indeed the matrix inverse toη. The present purpose is to show semi-rigorously howformallyone may apply the Minkowski metric to two vectors and obtain a real number, i.e. to display the role of the differentials and how they disappear in a calculation. The setting is that of smooth manifold theory, and concepts such as convector fields and exterior derivatives are introduced. A full-blown version of the Minkowski metric in coordinates as a tensor field on spacetime has the appearanceημνdxμ⊗dxν=ημνdxμ⊙dxν=ημνdxμdxν.{\displaystyle \eta _{\mu \nu }dx^{\mu }\otimes dx^{\nu }=\eta _{\mu \nu }dx^{\mu }\odot dx^{\nu }=\eta _{\mu \nu }dx^{\mu }dx^{\nu }.} Explanation: The coordinate differentials are 1-form fields. They are defined as theexterior derivativeof the coordinate functionsxμ. These quantities evaluated at a pointpprovide a basis for the cotangent space atp. Thetensor product(denoted by the symbol⊗) yields a tensor field of type(0, 2), i.e. the type that expects two contravariant vectors as arguments. 
On the right-hand side, thesymmetric product(denoted by the symbol⊙or by juxtaposition) has been taken. The equality holds since, by definition, the Minkowski metric is symmetric.[19]The notation on the far right is also sometimes used for the related, but different,line element. It isnota tensor. For elaboration on the differences and similarities, seeMisner, Thorne & Wheeler (1973, Box 3.2 and section 13.2.) Tangentvectors are, in this formalism, given in terms of a basis of differential operators of the first order,∂∂xμ|p,{\displaystyle \left.{\frac {\partial }{\partial x^{\mu }}}\right|_{p},}wherepis an event. This operator applied to a functionfgives thedirectional derivativeoffatpin the direction of increasingxμwithxν,ν≠μfixed. They provide a basis for the tangent space atp. The exterior derivativedfof a functionfis acovector field, i.e. an assignment of a cotangent vector to each pointp, by definition such thatdf(X)=Xf,{\displaystyle df(X)=Xf,}for eachvector fieldX. A vector field is an assignment of a tangent vector to each pointp. In coordinatesXcan be expanded at each pointpin the basis given by the∂/∂xν|p. Applying this withf=xμ, the coordinate function itself, andX= ∂/∂xν, called acoordinate vector field, one obtainsdxμ(∂∂xν)=∂xμ∂xν=δνμ.{\displaystyle dx^{\mu }\left({\frac {\partial }{\partial x^{\nu }}}\right)={\frac {\partial x^{\mu }}{\partial x^{\nu }}}=\delta _{\nu }^{\mu }.} Since this relation holds at each pointp, thedxμ|pprovide a basis for the cotangent space at eachpand the basesdxμ|pand∂/∂xν|paredualto each other,dxμ|p(∂∂xν|p)=δνμ.{\displaystyle \left.dx^{\mu }\right|_{p}\left(\left.{\frac {\partial }{\partial x^{\nu }}}\right|_{p}\right)=\delta _{\nu }^{\mu }.}at eachp. Furthermore, one hasα⊗β(a,b)=α(a)β(b){\displaystyle \alpha \otimes \beta (a,b)=\alpha (a)\beta (b)}for general one-forms on a tangent spaceα,βand general tangent vectorsa,b. (This can be taken as a definition, but may also be proved in a more general setting.) Thus when the metric tensor is fed two vectors fieldsa,b, both expanded in terms of the basis coordinate vector fields, the result isημνdxμ⊗dxν(a,b)=ημνaμbν,{\displaystyle \eta _{\mu \nu }dx^{\mu }\otimes dx^{\nu }(a,b)=\eta _{\mu \nu }a^{\mu }b^{\nu },}whereaμ,bνare thecomponent functionsof the vector fields. The above equation holds at each pointp, and the relation may as well be interpreted as the Minkowski metric atpapplied to two tangent vectors atp. As mentioned, in a vector space, such as modeling the spacetime of special relativity, tangent vectors can be canonically identified with vectors in the space itself, and vice versa. This means that the tangent spaces at each point are canonically identified with each other and with the vector space itself. This explains how the right-hand side of the above equation can be employed directly, without regard to the spacetime point the metric is to be evaluated and from where (which tangent space) the vectors come from. This situation changes ingeneral relativity. There one hasg(p)μνdxμ|pdxν|p(a,b)=g(p)μνaμbν,{\displaystyle g(p)_{\mu \nu }\left.dx^{\mu }\right|_{p}\left.dx^{\nu }\right|_{p}(a,b)=g(p)_{\mu \nu }a^{\mu }b^{\nu },}where nowη→g(p), i.e.,gis still a metric tensor but now depending on spacetime and is a solution ofEinstein's field equations. Moreover,a,bmustbe tangent vectors at spacetime pointpand can no longer be moved around freely. Letx,y∈M. Here, Supposex∈Mis timelike. Then thesimultaneous hyperplaneforxis{y:η(x,y) = 0}. 
Since this hyperplane varies as x varies, there is a relativity of simultaneity in Minkowski space. A Lorentzian manifold is a generalization of Minkowski space in two ways. The total number of spacetime dimensions is not restricted to be 4 (it can be 2 or more), and a Lorentzian manifold need not be flat, i.e. it allows for curvature. Complexified Minkowski space is defined as Mc = M ⊕ iM.[20] Its real part is the Minkowski space of four-vectors, such as the four-velocity and the four-momentum, which are independent of the choice of orientation of the space. The imaginary part, on the other hand, may consist of four pseudovectors, such as angular velocity and magnetic moment, which change their direction with a change of orientation. A pseudoscalar i is introduced, which also changes sign with a change of orientation. Thus, elements of Mc are independent of the choice of the orientation. The inner product-like structure on Mc is defined as u⋅v = η(u, v) for any u, v ∈ Mc. A relativistic pure spin of an electron or any half spin particle is described by ρ ∈ Mc as ρ = u + is, where u is the four-velocity of the particle, satisfying u² = 1, and s is the 4D spin vector,[21] which is also the Pauli–Lubanski pseudovector satisfying s² = −1 and u⋅s = 0. Minkowski space refers to a mathematical formulation in four dimensions. However, the mathematics can easily be extended or simplified to create an analogous generalized Minkowski space in any number of dimensions. If n ≥ 2, n-dimensional Minkowski space is a vector space of real dimension n on which there is a constant Minkowski metric of signature (n − 1, 1) or (1, n − 1). These generalizations are used in theories where spacetime is assumed to have more or less than 4 dimensions. String theory and M-theory are two examples where n > 4. In string theory, there appear conformal field theories with 1 + 1 spacetime dimensions. de Sitter space can be formulated as a submanifold of generalized Minkowski space, as can the model spaces of hyperbolic geometry (see below). Because Minkowski spacetime is flat, its three spatial components always obey the Pythagorean theorem. Minkowski space is a suitable basis for special relativity, a good description of physical systems over finite distances in systems without significant gravitation. However, in order to take gravity into account, physicists use the theory of general relativity, which is formulated in the mathematics of differential geometry of differential manifolds. When this geometry is used as a model of spacetime, it is known as curved spacetime. Even in curved spacetime, Minkowski space is still a good description in an infinitesimal region surrounding any point (barring gravitational singularities).[nb 5] More abstractly, it can be said that in the presence of gravity spacetime is described by a curved 4-dimensional manifold for which the tangent space to any point is a 4-dimensional Minkowski space. Thus, the structure of Minkowski space is still essential in the description of general relativity. The meaning of the term geometry for the Minkowski space depends heavily on the context. Minkowski space is not endowed with Euclidean geometry, and not with any of the generalized Riemannian geometries with intrinsic curvature, those exposed by the model spaces in hyperbolic geometry (negative curvature) and the geometry modeled by the sphere (positive curvature). The reason is the indefiniteness of the Minkowski metric. Minkowski space is, in particular, not a metric space and not a Riemannian manifold with a Riemannian metric.
However, Minkowski space containssubmanifoldsendowed with a Riemannian metric yielding hyperbolic geometry. Model spaces of hyperbolic geometry of low dimension, say 2 or 3,cannotbe isometrically embedded in Euclidean space with one more dimension, i.e.R3{\displaystyle \mathbf {R} ^{3}}orR4{\displaystyle \mathbf {R} ^{4}}respectively, with the Euclidean metricg¯{\displaystyle {\overline {g}}}, preventing easy visualization.[nb 6][22]By comparison, model spaces with positive curvature are just spheres in Euclidean space of one higher dimension.[23]Hyperbolic spacescanbe isometrically embedded in spaces of one more dimension when the embedding space is endowed with the Minkowski metricη{\displaystyle \eta }. DefineHR1(n)⊂Mn+1{\displaystyle \mathbf {H} _{R}^{1(n)}\subset \mathbf {M} ^{n+1}}to be the upper sheet (ct>0{\displaystyle ct>0}) of thehyperboloidHR1(n)={(ct,x1,…,xn)∈Mn:c2t2−(x1)2−⋯−(xn)2=R2,ct>0}{\displaystyle \mathbf {H} _{R}^{1(n)}=\left\{\left(ct,x^{1},\ldots ,x^{n}\right)\in \mathbf {M} ^{n}:c^{2}t^{2}-\left(x^{1}\right)^{2}-\cdots -\left(x^{n}\right)^{2}=R^{2},ct>0\right\}}in generalized Minkowski spaceMn+1{\displaystyle \mathbf {M} ^{n+1}}of spacetime dimensionn+1.{\displaystyle n+1.}This is one of thesurfaces of transitivityof the generalized Lorentz group. Theinduced metricon this submanifold,hR1(n)=ι∗η,{\displaystyle h_{R}^{1(n)}=\iota ^{*}\eta ,}thepullbackof the Minkowski metricη{\displaystyle \eta }under inclusion, is aRiemannian metric. With this metricHR1(n){\displaystyle \mathbf {H} _{R}^{1(n)}}is aRiemannian manifold. It is one of the model spaces of Riemannian geometry, thehyperboloid modelofhyperbolic space. It is a space of constant negative curvature−1/R2{\displaystyle -1/R^{2}}.[24]The 1 in the upper index refers to an enumeration of the different model spaces of hyperbolic geometry, and thenfor its dimension. A2(2){\displaystyle 2(2)}corresponds to thePoincaré disk model, while3(n){\displaystyle 3(n)}corresponds to thePoincaré half-space modelof dimensionn.{\displaystyle n.} In the definition aboveι:HR1(n)→Mn+1{\displaystyle \iota :\mathbf {H} _{R}^{1(n)}\rightarrow \mathbf {M} ^{n+1}}is theinclusion mapand the superscript star denotes thepullback. The present purpose is to describe this and similar operations as a preparation for the actual demonstration thatHR1(n){\displaystyle \mathbf {H} _{R}^{1(n)}}actually is a hyperbolic space. Behavior of tensors under inclusion:For inclusion maps from a submanifoldSintoMand a covariant tensorαof orderkonMit holds thatι∗α(X1,X2,…,Xk)=α(ι∗X1,ι∗X2,…,ι∗Xk)=α(X1,X2,…,Xk),{\displaystyle \iota ^{*}\alpha \left(X_{1},\,X_{2},\,\ldots ,\,X_{k}\right)=\alpha \left(\iota _{*}X_{1},\,\iota _{*}X_{2},\,\ldots ,\,\iota _{*}X_{k}\right)=\alpha \left(X_{1},\,X_{2},\,\ldots ,\,X_{k}\right),}whereX1,X1, …,Xkare vector fields onS. The subscript star denotes the pushforward (to be introduced later), and it is in this special case simply the identity map (as is the inclusion map). The latter equality holds because a tangent space to a submanifold at a point is in a canonical way a subspace of the tangent space of the manifold itself at the point in question. One may simply writeι∗α=α|S,{\displaystyle \iota ^{*}\alpha =\alpha |_{S},}meaning (with slightabuse of notation) the restriction ofαto accept as input vectors tangent to somes∈Sonly. 
Pullback of tensors under general maps:The pullback of a covariantk-tensorα(one taking only contravariant vectors as arguments) under a mapF:M→Nis a linear mapF∗:TF(p)kN→TpkM,{\displaystyle F^{*}\colon T_{F(p)}^{k}N\rightarrow T_{p}^{k}M,}where for any vector spaceV,TkV=V∗⊗V∗⊗⋯⊗V∗⏟ktimes.{\displaystyle T^{k}V=\underbrace {V^{*}\otimes V^{*}\otimes \cdots \otimes V^{*}} _{k{\text{ times}}}.} It is defined byF∗(α)(X1,X2,…,Xk)=α(F∗X1,F∗X2,…,F∗Xk),{\displaystyle F^{*}(\alpha )\left(X_{1},\,X_{2},\,\ldots ,\,X_{k}\right)=\alpha \left(F_{*}X_{1},\,F_{*}X_{2},\,\ldots ,\,F_{*}X_{k}\right),}where the subscript star denotes thepushforwardof the mapF, andX1,X2, …, Xkare vectors inTpM. (This is in accord with what was detailed about the pullback of the inclusion map. In the general case here, one cannot proceed as simply becauseF∗X1≠X1in general.) The pushforward of vectors under general maps:Heuristically, pulling back a tensor top∈MfromF(p) ∈Nfeeding it vectors residing atp∈Mis by definition the same as pushing forward the vectors fromp∈MtoF(p) ∈Nfeeding them to the tensor residing atF(p) ∈N. Further unwinding the definitions, the pushforwardF∗:TMp→TNF(p)of a vector field under a mapF:M→Nbetween manifolds is defined byF∗(X)f=X(f∘F),{\displaystyle F_{*}(X)f=X(f\circ F),}wherefis a function onN. WhenM=Rm,N=Rnthe pushforward ofFreduces toDF:Rm→Rn, the ordinarydifferential, which is given by theJacobian matrixof partial derivatives of the component functions. The differential is the best linear approximation of a functionFfromRmtoRn. The pushforward is the smooth manifold version of this. It acts between tangent spaces, and is in coordinates represented by the Jacobian matrix of thecoordinate representationof the function. The corresponding pullback is thedual mapfrom the dual of the range tangent space to the dual of the domain tangent space, i.e. it is a linear map,F∗:TF(p)∗N→Tp∗M.{\displaystyle F^{*}\colon T_{F(p)}^{*}N\rightarrow T_{p}^{*}M.} In order to exhibit the metric, it is necessary to pull it back via a suitableparametrization. A parametrization of a submanifoldSof a manifoldMis a mapU⊂Rm→Mwhose range is an open subset ofS. IfShas the same dimension asM, a parametrization is just the inverse of a coordinate mapφ:M→U⊂Rm. The parametrization to be used is the inverse ofhyperbolic stereographic projection. This is illustrated in the figure to the right forn= 2. It is instructive to compare tostereographic projectionfor spheres. Stereographic projectionσ:HnR→Rnand its inverseσ−1:Rn→HnRare given byσ(τ,x)=u=RxR+τ,σ−1(u)=(τ,x)=(RR2+|u|2R2−|u|2,2R2uR2−|u|2),{\displaystyle {\begin{aligned}\sigma (\tau ,\mathbf {x} )=\mathbf {u} &={\frac {R\mathbf {x} }{R+\tau }},\\\sigma ^{-1}(\mathbf {u} )=(\tau ,\mathbf {x} )&=\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}},{\frac {2R^{2}\mathbf {u} }{R^{2}-|u|^{2}}}\right),\end{aligned}}}where, for simplicity,τ≡ct. The(τ,x)are coordinates onMn+1and theuare coordinates onRn. 
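A quick numerical check of the projection formulas just given (an added sketch, run for n = 2 and an arbitrary choice of R): σ⁻¹ should land on the upper sheet of the hyperboloid, and σ should undo it.

import numpy as np

R = 2.0

def sigma(tau, x):
    # hyperbolic stereographic projection from S = (-R, 0, ..., 0)
    return R * np.asarray(x) / (R + tau)

def sigma_inv(u):
    # inverse projection onto the upper sheet of tau^2 - |x|^2 = R^2
    u = np.asarray(u, dtype=float)
    d = R * R - u @ u
    return R * (R * R + u @ u) / d, 2 * R * R * u / d

rng = np.random.default_rng(3)
for _ in range(100):
    u = rng.uniform(-1.0, 1.0, size=2)                  # n = 2, and |u| < R
    tau, x = sigma_inv(u)
    assert np.isclose(tau * tau - x @ x, R * R)         # the image lies on the hyperboloid
    assert tau > 0                                      # on the upper sheet (tau = ct > 0)
    assert np.allclose(sigma(tau, x), u)                # sigma inverts sigma_inv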
LetHRn={(τ,x1,…,xn)⊂M:−τ2+(x1)2+⋯+(xn)2=−R2,τ>0}{\displaystyle \mathbf {H} _{R}^{n}=\left\{\left(\tau ,x^{1},\ldots ,x^{n}\right)\subset \mathbf {M} :-\tau ^{2}+\left(x^{1}\right)^{2}+\cdots +\left(x^{n}\right)^{2}=-R^{2},\tau >0\right\}}and letS=(−R,0,…,0).{\displaystyle S=(-R,0,\ldots ,0).} IfP=(τ,x1,…,xn)∈HRn,{\displaystyle P=\left(\tau ,x^{1},\ldots ,x^{n}\right)\in \mathbf {H} _{R}^{n},}then it is geometrically clear that the vectorPS→{\displaystyle {\overrightarrow {PS}}}intersects the hyperplane{(τ,x1,…,xn)∈M:τ=0}{\displaystyle \left\{\left(\tau ,x^{1},\ldots ,x^{n}\right)\in M:\tau =0\right\}}once in point denotedU=(0,u1(P),…,un(P))≡(0,u).{\displaystyle U=\left(0,u^{1}(P),\ldots ,u^{n}(P)\right)\equiv (0,\mathbf {u} ).} One hasS+SU→=U⇒SU→=U−S,S+SP→=P⇒SP→=P−S{\displaystyle {\begin{aligned}S+{\overrightarrow {SU}}&=U\Rightarrow {\overrightarrow {SU}}=U-S,\\S+{\overrightarrow {SP}}&=P\Rightarrow {\overrightarrow {SP}}=P-S\end{aligned}}}orSU→=(0,u)−(−R,0)=(R,u),SP→=(τ,x)−(−R,0)=(τ+R,x)..{\displaystyle {\begin{aligned}{\overrightarrow {SU}}&=(0,\mathbf {u} )-(-R,\mathbf {0} )=(R,\mathbf {u} ),\\{\overrightarrow {SP}}&=(\tau ,\mathbf {x} )-(-R,\mathbf {0} )=(\tau +R,\mathbf {x} ).\end{aligned}}.} By construction of stereographic projection one hasSU→=λ(τ)SP→.{\displaystyle {\overrightarrow {SU}}=\lambda (\tau ){\overrightarrow {SP}}.} This leads to the system of equationsR=λ(τ+R),u=λx.{\displaystyle {\begin{aligned}R&=\lambda (\tau +R),\\\mathbf {u} &=\lambda \mathbf {x} .\end{aligned}}} The first of these is solved forλand one obtains for stereographic projectionσ(τ,x)=u=RxR+τ.{\displaystyle \sigma (\tau ,\mathbf {x} )=\mathbf {u} ={\frac {R\mathbf {x} }{R+\tau }}.} Next, the inverseσ−1(u) = (τ,x)must be calculated. Use the same considerations as before, but now withU=(0,u)P=(τ(u),x(u)).,{\displaystyle {\begin{aligned}U&=(0,\mathbf {u} )\\P&=(\tau (\mathbf {u} ),\mathbf {x} (\mathbf {u} )).\end{aligned}},}one getsτ=R(1−λ)λ,x=uλ,{\displaystyle {\begin{aligned}\tau &={\frac {R(1-\lambda )}{\lambda }},\\\mathbf {x} &={\frac {\mathbf {u} }{\lambda }},\end{aligned}}}but now withλdepending onu. 
The condition forPlying in the hyperboloid is−τ2+|x|2=−R2,{\displaystyle -\tau ^{2}+|\mathbf {x} |^{2}=-R^{2},}or−R2(1−λ)2λ2+|u|2λ2=−R2,{\displaystyle -{\frac {R^{2}(1-\lambda )^{2}}{\lambda ^{2}}}+{\frac {|\mathbf {u} |^{2}}{\lambda ^{2}}}=-R^{2},}leading toλ=R2−|u|22R2.{\displaystyle \lambda ={\frac {R^{2}-|u|^{2}}{2R^{2}}}.} With thisλ, one obtainsσ−1(u)=(τ,x)=(RR2+|u|2R2−|u|2,2R2uR2−|u|2).{\displaystyle \sigma ^{-1}(\mathbf {u} )=(\tau ,\mathbf {x} )=\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}},{\frac {2R^{2}\mathbf {u} }{R^{2}-|u|^{2}}}\right).} One hashR1(n)=η|HR1(n)=(dx1)2+⋯+(dxn)2−dτ2{\displaystyle h_{R}^{1(n)}=\eta |_{\mathbf {H} _{R}^{1(n)}}=\left(dx^{1}\right)^{2}+\cdots +\left(dx^{n}\right)^{2}-d\tau ^{2}}and the mapσ−1:Rn→HR1(n);σ−1(u)=(τ(u),x(u))=(RR2+|u|2R2−|u|2,2R2uR2−|u|2).{\displaystyle \sigma ^{-1}:\mathbf {R} ^{n}\rightarrow \mathbf {H} _{R}^{1(n)};\quad \sigma ^{-1}(\mathbf {u} )=(\tau (\mathbf {u} ),\,\mathbf {x} (\mathbf {u} ))=\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}},\,{\frac {2R^{2}\mathbf {u} }{R^{2}-|u|^{2}}}\right).} The pulled back metric can be obtained by straightforward methods of calculus;(σ−1)∗η|HR1(n)=(dx1(u))2+⋯+(dxn(u))2−(dτ(u))2.{\displaystyle \left.\left(\sigma ^{-1}\right)^{*}\eta \right|_{\mathbf {H} _{R}^{1(n)}}=\left(dx^{1}(\mathbf {u} )\right)^{2}+\cdots +\left(dx^{n}(\mathbf {u} )\right)^{2}-\left(d\tau (\mathbf {u} )\right)^{2}.} One computes according to the standard rules for computing differentials (though one is really computing the rigorously defined exterior derivatives),dx1(u)=d(2R2u1R2−|u|2)=∂∂u12R2u1R2−|u|2du1+⋯+∂∂un2R2u1R2−|u|2dun+∂∂τ2R2u1R2−|u|2dτ,⋮dxn(u)=d(2R2unR2−|u|2)=⋯,dτ(u)=d(RR2+|u|2R2−|u|2)=⋯,{\displaystyle {\begin{aligned}dx^{1}(\mathbf {u} )&=d\left({\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}\right)={\frac {\partial }{\partial u^{1}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{1}+\cdots +{\frac {\partial }{\partial u^{n}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{n}+{\frac {\partial }{\partial \tau }}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}d\tau ,\\&\ \ \vdots \\dx^{n}(\mathbf {u} )&=d\left({\frac {2R^{2}u^{n}}{R^{2}-|u|^{2}}}\right)=\cdots ,\\d\tau (\mathbf {u} )&=d\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}}\right)=\cdots ,\end{aligned}}}and substitutes the results into the right hand side. 
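The value of λ obtained above can also be confirmed symbolically; the short sympy sketch below (an added check, writing u2 for |u|²) solves the hyperboloid condition for λ:

import sympy as sp

R = sp.symbols('R', positive=True)
lam = sp.symbols('lambda')
u2 = sp.symbols('u2', nonnegative=True)                 # u2 stands for |u|^2

# the condition -R^2 (1 - lambda)^2 / lambda^2 + |u|^2 / lambda^2 = -R^2 from the derivation above
condition = sp.Eq(-R**2 * (1 - lam)**2 / lam**2 + u2 / lam**2, -R**2)
solutions = sp.solve(condition, lam)

expected = (R**2 - u2) / (2 * R**2)
assert any(sp.simplify(s - expected) == 0 for s in solutions)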
This yields(σ−1)∗hR1(n)=4R2[(du1)2+⋯+(dun)2](R2−|u|2)2≡hR2(n).{\displaystyle \left(\sigma ^{-1}\right)^{*}h_{R}^{1(n)}={\frac {4R^{2}\left[\left(du^{1}\right)^{2}+\cdots +\left(du^{n}\right)^{2}\right]}{\left(R^{2}-|u|^{2}\right)^{2}}}\equiv h_{R}^{2(n)}.} One has∂∂u12R2u1R2−|u|2du1=2(R2−|u|2)+4R2(u1)2(R2−|u|2)2du1,∂∂u22R2u1R2−|u|2du2=4R2u1u2(R2−|u|2)2du2,{\displaystyle {\begin{aligned}{\frac {\partial }{\partial u^{1}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{1}&={\frac {2\left(R^{2}-|u|^{2}\right)+4R^{2}\left(u^{1}\right)^{2}}{\left(R^{2}-|u|^{2}\right)^{2}}}du^{1},\\{\frac {\partial }{\partial u^{2}}}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}du^{2}&={\frac {4R^{2}u^{1}u^{2}}{\left(R^{2}-|u|^{2}\right)^{2}}}du^{2},\end{aligned}}}and∂∂τ2R2u1R2−|u|2dτ2=0.{\displaystyle {\frac {\partial }{\partial \tau }}{\frac {2R^{2}u^{1}}{R^{2}-|u|^{2}}}d\tau ^{2}=0.} With this one may writedx1(u)=2R2(R2−|u|2)du1+4R2u1(u⋅du)(R2−|u|2)2,{\displaystyle dx^{1}(\mathbf {u} )={\frac {2R^{2}\left(R^{2}-|u|^{2}\right)du^{1}+4R^{2}u^{1}(\mathbf {u} \cdot d\mathbf {u} )}{\left(R^{2}-|u|^{2}\right)^{2}}},}from which(dx1(u))2=4R2(r2−|u|2)2(du1)2+16R4(R2−|u|2)(u⋅du)u1du1+16R4(u1)2(u⋅du)2(R2−|u|2)4.{\displaystyle \left(dx^{1}(\mathbf {u} )\right)^{2}={\frac {4R^{2}\left(r^{2}-|u|^{2}\right)^{2}\left(du^{1}\right)^{2}+16R^{4}\left(R^{2}-|u|^{2}\right)\left(\mathbf {u} \cdot d\mathbf {u} \right)u^{1}du^{1}+16R^{4}\left(u^{1}\right)^{2}\left(\mathbf {u} \cdot d\mathbf {u} \right)^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}.} Summing this formula one obtains(dx1(u))2+⋯+(dxn(u))2=4R2(R2−|u|2)2[(du1)2+⋯+(dun)2]+16R4(R2−|u|2)(u⋅du)(u⋅du)+16R4|u|2(u⋅du)2(R2−|u|2)4=4R2(R2−|u|2)2[(du1)2+⋯+(dun)2](R2−|u|2)4+R216R4(u⋅du)(R2−|u|2)4.{\displaystyle {\begin{aligned}&\left(dx^{1}(\mathbf {u} )\right)^{2}+\cdots +\left(dx^{n}(\mathbf {u} )\right)^{2}\\={}&{\frac {4R^{2}\left(R^{2}-|u|^{2}\right)^{2}\left[\left(du^{1}\right)^{2}+\cdots +\left(du^{n}\right)^{2}\right]+16R^{4}\left(R^{2}-|u|^{2}\right)(\mathbf {u} \cdot d\mathbf {u} )(\mathbf {u} \cdot d\mathbf {u} )+16R^{4}|u|^{2}(\mathbf {u} \cdot d\mathbf {u} )^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}\\={}&{\frac {4R^{2}\left(R^{2}-|u|^{2}\right)^{2}\left[\left(du^{1}\right)^{2}+\cdots +\left(du^{n}\right)^{2}\right]}{\left(R^{2}-|u|^{2}\right)^{4}}}+R^{2}{\frac {16R^{4}(\mathbf {u} \cdot d\mathbf {u} )}{\left(R^{2}-|u|^{2}\right)^{4}}}.\end{aligned}}} Similarly, forτone getsdτ=∑i=1n∂∂uiRR2+|u|2R2+|u|2dui+∂∂τRR2+|u|2R2+|u|2dτ=∑i=1nR44R2uidui(R2−|u|2),{\displaystyle d\tau =\sum _{i=1}^{n}{\frac {\partial }{\partial u^{i}}}R{\frac {R^{2}+|u|^{2}}{R^{2}+|u|^{2}}}du^{i}+{\frac {\partial }{\partial \tau }}R{\frac {R^{2}+|u|^{2}}{R^{2}+|u|^{2}}}d\tau =\sum _{i=1}^{n}R^{4}{\frac {4R^{2}u^{i}du^{i}}{\left(R^{2}-|u|^{2}\right)}},}yielding−dτ2=−(R4R4(u⋅du)(R2−|u|2)2)2=−R216R4(u⋅du)2(R2−|u|2)4.{\displaystyle -d\tau ^{2}=-\left(R{\frac {4R^{4}\left(\mathbf {u} \cdot d\mathbf {u} \right)}{\left(R^{2}-|u|^{2}\right)^{2}}}\right)^{2}=-R^{2}{\frac {16R^{4}(\mathbf {u} \cdot d\mathbf {u} )^{2}}{\left(R^{2}-|u|^{2}\right)^{4}}}.} Now add this contribution to finally get(σ−1)∗hR1(n)=4R2[(du1)2+⋯+(dun)2](R2−|u|2)2≡hR2(n).{\displaystyle \left(\sigma ^{-1}\right)^{*}h_{R}^{1(n)}={\frac {4R^{2}\left[\left(du^{1}\right)^{2}+\cdots +\left(du^{n}\right)^{2}\right]}{\left(R^{2}-|u|^{2}\right)^{2}}}\equiv h_{R}^{2(n)}.} This last equation shows that the metric on the ball is identical to the Riemannian metrich2(n)Rin thePoincaré ball model, another standard model of hyperbolic geometry. 
The pullback can be computed in a different fashion. By definition, {\displaystyle \left(\sigma ^{-1}\right)^{*}h_{R}^{1(n)}(V,\,V)=h_{R}^{1(n)}\left(\left(\sigma ^{-1}\right)_{*}V,\,\left(\sigma ^{-1}\right)_{*}V\right)=\eta |_{\mathbf {H} _{R}^{1(n)}}\left(\left(\sigma ^{-1}\right)_{*}V,\,\left(\sigma ^{-1}\right)_{*}V\right).} In coordinates, {\displaystyle \left(\sigma ^{-1}\right)_{*}V=\left(\sigma ^{-1}\right)_{*}\left(V^{i}{\frac {\partial }{\partial u^{i}}}\right)=V^{i}{\frac {\partial x^{j}}{\partial u^{i}}}{\frac {\partial }{\partial x^{j}}}+V^{i}{\frac {\partial \tau }{\partial u^{i}}}{\frac {\partial }{\partial \tau }}=Vx^{j}{\frac {\partial }{\partial x^{j}}}+V\tau {\frac {\partial }{\partial \tau }}.} One has, from the formula for {\displaystyle \sigma ^{-1}}, {\displaystyle {\begin{aligned}Vx^{j}&=V^{i}{\frac {\partial }{\partial u^{i}}}\left({\frac {2R^{2}u^{j}}{R^{2}-|u|^{2}}}\right)={\frac {2R^{2}V^{j}}{R^{2}-|u|^{2}}}+{\frac {4R^{2}u^{j}\langle \mathbf {V} ,\,\mathbf {u} \rangle }{\left(R^{2}-|u|^{2}\right)^{2}}},\quad \left({\text{here }}V|u|^{2}=2\sum _{k=1}^{n}V^{k}u^{k}\equiv 2\langle \mathbf {V} ,\,\mathbf {u} \rangle \right)\\V\tau &=V\left(R{\frac {R^{2}+|u|^{2}}{R^{2}-|u|^{2}}}\right)={\frac {4R^{3}\langle \mathbf {V} ,\,\mathbf {u} \rangle }{\left(R^{2}-|u|^{2}\right)^{2}}}.\end{aligned}}} Lastly, {\displaystyle \eta \left(\sigma _{*}^{-1}V,\,\sigma _{*}^{-1}V\right)=\sum _{j=1}^{n}\left(Vx^{j}\right)^{2}-(V\tau )^{2}={\frac {4R^{4}|V|^{2}}{\left(R^{2}-|u|^{2}\right)^{2}}}=h_{R}^{2(n)}(V,\,V),} and the same conclusion is reached.
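The identification of the pulled-back metric with the Poincaré ball metric can also be checked numerically. The following sketch is not part of the source; it assumes NumPy, the helper name sigma_inv is illustrative, and it simply implements the map given above, verifies the hyperboloid equation, and compares a finite-difference line element with 4R⁴|du|²/(R²−|u|²)².

```python
import numpy as np

def sigma_inv(u, R=1.0):
    """Inverse stereographic projection of the open ball |u| < R onto the
    hyperboloid -tau^2 + |x|^2 = -R^2, following the formula given above."""
    u = np.asarray(u, dtype=float)
    denom = R**2 - np.dot(u, u)
    tau = R * (R**2 + np.dot(u, u)) / denom
    x = 2 * R**2 * u / denom
    return tau, x

R = 2.0
u = np.array([0.3, -0.5, 0.7])              # a point in the ball of radius R
tau, x = sigma_inv(u, R)
print(-tau**2 + np.dot(x, x) + R**2)        # ~0: the image lies on the hyperboloid

# Finite-difference check of the pulled-back line element against
# 4 R^4 |du|^2 / (R^2 - |u|^2)^2 for a small displacement du.
du = 1e-6 * np.array([0.2, 0.1, -0.3])
tau2, x2 = sigma_inv(u + du, R)
ds2_ambient = np.dot(x2 - x, x2 - x) - (tau2 - tau)**2
ds2_poincare = 4 * R**4 * np.dot(du, du) / (R**2 - np.dot(u, u))**2
print(ds2_ambient, ds2_poincare)            # agree to leading order in du
```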
https://en.wikipedia.org/wiki/Minkowski_space#Standard_basis
The't Hooft symbolis a collection of numbers which allows one to express thegeneratorsof theSU(2)Lie algebra in terms of thegeneratorsof Lorentz algebra. The symbol is a blend between theKronecker deltaand theLevi-Civita symbol. It was introduced byGerard 't Hooft. It is used in the construction of theBPST instanton. ημνa{\displaystyle \eta _{\mu \nu }^{a}}is the 't Hooft symbol:ημνa={ϵaμνμ,ν=1,2,3−δaνμ=4δaμν=40μ=ν=4{\displaystyle \eta _{\mu \nu }^{a}={\begin{cases}\epsilon ^{a\mu \nu }&\mu ,\nu =1,2,3\\-\delta ^{a\nu }&\mu =4\\\delta ^{a\mu }&\nu =4\\0&\mu =\nu =4\end{cases}}}Whereδaν{\displaystyle \delta ^{a\nu }}andδaμ{\displaystyle \delta ^{a\mu }}are instances of the Kronecker delta, andϵaμν{\displaystyle \epsilon ^{a\mu \nu }}is the Levi–Civita symbol. In other words, they are defined by (a=1,2,3;μ,ν=1,2,3,4;ϵ1234=+1{\displaystyle a=1,2,3;~\mu ,\nu =1,2,3,4;~\epsilon _{1234}=+1}) ηaμν=ϵaμν4+δaμδν4−δaνδμ4η¯aμν=ϵaμν4−δaμδν4+δaνδμ4{\displaystyle {\begin{aligned}\eta _{a\mu \nu }&=\epsilon _{a\mu \nu 4}+\delta _{a\mu }\delta _{\nu 4}-\delta _{a\nu }\delta _{\mu 4}\\[1ex]{\bar {\eta }}_{a\mu \nu }&=\epsilon _{a\mu \nu 4}-\delta _{a\mu }\delta _{\nu 4}+\delta _{a\nu }\delta _{\mu 4}\end{aligned}}}where the latter are the anti-self-dual 't Hooft symbols. In matrix form, the 't Hooft symbols areη1μν=[000100100−100−1000],η2μν=[00−10000110000−100],η3μν=[0100−1000000100−10],{\displaystyle \eta _{1\mu \nu }={\begin{bmatrix}0&0&0&1\\0&0&1&0\\0&-1&0&0\\-1&0&0&0\end{bmatrix}},\quad \eta _{2\mu \nu }={\begin{bmatrix}0&0&-1&0\\0&0&0&1\\1&0&0&0\\0&-1&0&0\end{bmatrix}},\quad \eta _{3\mu \nu }={\begin{bmatrix}0&1&0&0\\-1&0&0&0\\0&0&0&1\\0&0&-1&0\end{bmatrix}},}and their anti-self-duals are the following:η¯1μν=[000−100100−1001000],η¯2μν=[00−10000−110000100],η¯3μν=[0100−1000000−10010].{\displaystyle {\bar {\eta }}_{1\mu \nu }={\begin{bmatrix}0&0&0&-1\\0&0&1&0\\0&-1&0&0\\1&0&0&0\end{bmatrix}},\quad {\bar {\eta }}_{2\mu \nu }={\begin{bmatrix}0&0&-1&0\\0&0&0&-1\\1&0&0&0\\0&1&0&0\end{bmatrix}},\quad {\bar {\eta }}_{3\mu \nu }={\begin{bmatrix}0&1&0&0\\-1&0&0&0\\0&0&0&-1\\0&0&1&0\end{bmatrix}}.} They satisfy the self-duality and the anti-self-duality properties:ηaμν=12ϵμνρσηaρσ,η¯aμν=−12ϵμνρση¯aρσ{\displaystyle \eta _{a\mu \nu }={\tfrac {1}{2}}\epsilon _{\mu \nu \rho \sigma }\eta _{a\rho \sigma }\ ,\qquad {\bar {\eta }}_{a\mu \nu }=-{\tfrac {1}{2}}\epsilon _{\mu \nu \rho \sigma }{\bar {\eta }}_{a\rho \sigma }} Some other properties are ηaμν=−ηaνμ,{\displaystyle \eta _{a\mu \nu }=-\eta _{a\nu \mu }\ ,}ϵabcηbμνηcρσ=δμρηaνσ+δνσηaμρ−δμσηaνρ−δνρηaμσ{\displaystyle \epsilon _{abc}\eta _{b\mu \nu }\eta _{c\rho \sigma }=\delta _{\mu \rho }\eta _{a\nu \sigma }+\delta _{\nu \sigma }\eta _{a\mu \rho }-\delta _{\mu \sigma }\eta _{a\nu \rho }-\delta _{\nu \rho }\eta _{a\mu \sigma }}ηaμνηaρσ=δμρδνσ−δμσδνρ+ϵμνρσ,{\displaystyle \eta _{a\mu \nu }\eta _{a\rho \sigma }=\delta _{\mu \rho }\delta _{\nu \sigma }-\delta _{\mu \sigma }\delta _{\nu \rho }+\epsilon _{\mu \nu \rho \sigma }\ ,}ηaμρηbμσ=δabδρσ+ϵabcηcρσ,{\displaystyle \eta _{a\mu \rho }\eta _{b\mu \sigma }=\delta _{ab}\delta _{\rho \sigma }+\epsilon _{abc}\eta _{c\rho \sigma }\ ,}ϵμνρθηaσθ=δσμηaνρ+δσρηaμν−δσνηaμρ,{\displaystyle \epsilon _{\mu \nu \rho \theta }\eta _{a\sigma \theta }=\delta _{\sigma \mu }\eta _{a\nu \rho }+\delta _{\sigma \rho }\eta _{a\mu \nu }-\delta _{\sigma \nu }\eta _{a\mu \rho }\ ,}ηaμνηaμν=12,ηaμνηbμν=4δab,ηaμρηaμσ=3δρσ.{\displaystyle \eta _{a\mu \nu }\eta _{a\mu \nu }=12\ ,\quad \eta _{a\mu \nu }\eta _{b\mu \nu }=4\delta _{ab}\ ,\quad \eta _{a\mu \rho 
}\eta _{a\mu \sigma }=3\delta _{\rho \sigma }\ .} The same identities hold for {\displaystyle {\bar {\eta }}}, except for {\displaystyle {\bar {\eta }}_{a\mu \nu }{\bar {\eta }}_{a\rho \sigma }=\delta _{\mu \rho }\delta _{\nu \sigma }-\delta _{\mu \sigma }\delta _{\nu \rho }-\epsilon _{\mu \nu \rho \sigma }} and {\displaystyle \epsilon _{\mu \nu \rho \theta }{\bar {\eta }}_{a\sigma \theta }=-\delta _{\sigma \mu }{\bar {\eta }}_{a\nu \rho }-\delta _{\sigma \rho }{\bar {\eta }}_{a\mu \nu }+\delta _{\sigma \nu }{\bar {\eta }}_{a\mu \rho }\ .} Because of their opposite duality properties, {\displaystyle \eta _{a\mu \nu }{\bar {\eta }}_{b\mu \nu }=0}. Many properties of these symbols are tabulated in the appendix of 't Hooft's paper[1] and in the article by Belitsky et al.[2]
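As a quick sanity check on these identities, one can build the symbols directly from the defining formula and test a few of the listed properties numerically. The sketch below is illustrative only, not taken from the references; it assumes NumPy, and the indices 0–3 stand for 1–4.

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol in four dimensions; indices 0..3 stand for 1..4, so eps[0,1,2,3] = +1.
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    inversions = sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])
    eps[p] = -1.0 if inversions % 2 else 1.0

def thooft(a, bar=False):
    """'t Hooft symbol from the defining formula
    eta_{a mu nu} = eps_{a mu nu 4} + delta_{a mu} delta_{nu 4} - delta_{a nu} delta_{mu 4},
    with the sign of the delta terms flipped for the anti-self-dual symbol."""
    s = -1.0 if bar else 1.0
    eta = np.zeros((4, 4))
    for mu in range(4):
        for nu in range(4):
            eta[mu, nu] = eps[a, mu, nu, 3] + s * ((mu == a) * (nu == 3) - (nu == a) * (mu == 3))
    return eta

eta = [thooft(a) for a in range(3)]
etabar = [thooft(a, bar=True) for a in range(3)]

# Self-duality and anti-self-duality: eta_a = +(1/2) eps eta_a, etabar_a = -(1/2) eps etabar_a.
dual = lambda m: 0.5 * np.einsum('mnrs,rs->mn', eps, m)
print(all(np.allclose(dual(eta[a]), eta[a]) for a in range(3)))          # True
print(all(np.allclose(dual(etabar[a]), -etabar[a]) for a in range(3)))   # True

# Contraction identities: eta_a . eta_b = 4 delta_ab and eta_a . etabar_b = 0.
print(np.allclose([[np.sum(eta[a] * eta[b]) for b in range(3)] for a in range(3)], 4 * np.eye(3)))
print(np.allclose([[np.sum(eta[a] * etabar[b]) for b in range(3)] for a in range(3)], 0.0))
```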
https://en.wikipedia.org/wiki/%27t_Hooft_symbol
In number theory, the unit function is a completely multiplicative function on the positive integers defined as {\displaystyle \varepsilon (n)={\begin{cases}1,&{\text{if }}n=1,\\0,&{\text{if }}n\neq 1.\end{cases}}} It is called the unit function because it is the identity element for Dirichlet convolution.[1] It may be described as the "indicator function of 1" within the set of positive integers. It is also written as {\displaystyle u(n)} (not to be confused with {\displaystyle \mu (n)}, which generally denotes the Möbius function).
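A small, illustrative sketch (not from the source) of the defining property: convolving the unit function with any arithmetic function returns that function unchanged.

```python
def unit(n):
    """The unit function epsilon(n): 1 at n = 1 and 0 for every n > 1."""
    return 1 if n == 1 else 0

def dirichlet_convolution(f, g, n):
    """(f * g)(n) = sum of f(d) * g(n // d) over all divisors d of n."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

# The unit function is the identity element for Dirichlet convolution:
# (unit * f)(n) == f(n) for any arithmetic function f.  Tested here with f(n) = n.
f = lambda n: n
print(all(dirichlet_convolution(unit, f, n) == f(n) for n in range(1, 100)))   # True
```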
https://en.wikipedia.org/wiki/Unit_function
Unary coding,[nb 1] or the unary numeral system, also sometimes called thermometer code, is an entropy encoding that represents a natural number, n, with a code of length n + 1 (or n), usually n ones followed by a zero (if natural number is understood as non-negative integer) or with n − 1 ones followed by a zero (if natural number is understood as strictly positive integer). For example, 5 is represented as 111110 or 11110. Some representations use n or n − 1 zeros followed by a one. The ones and zeros are interchangeable without loss of generality. Unary coding is both a prefix-free code and a self-synchronizing code. Unary coding is an optimally efficient encoding for the discrete probability distribution {\displaystyle P(n)=2^{-n}} for {\displaystyle n=1,2,3,...}. In symbol-by-symbol coding, it is optimal for any geometric distribution {\displaystyle P(n)=(k-1)k^{-n}} for which k ≥ φ = 1.61803398879..., the golden ratio, or, more generally, for any discrete distribution for which {\displaystyle P(n)\geq P(n+1)+P(n+2)} for {\displaystyle n=1,2,3,...}. Although it is the optimal symbol-by-symbol coding for such probability distributions, Golomb coding achieves better compression capability for the geometric distribution because it does not consider input symbols independently, but rather implicitly groups the inputs. For the same reason, arithmetic encoding performs better for general probability distributions, as in the last case above. One example of unary code use is in neuroscience: unary coding is used in the neural circuits responsible for birdsong production.[1][2] The nucleus in the brain of the songbirds that plays a part in both the learning and the production of bird song is the HVC (high vocal center). The command signals for different notes in the birdsong emanate from different points in the HVC. This coding works as space coding, which is an efficient strategy for biological circuits due to its inherent simplicity and robustness. All binary data is defined by the ability to represent unary numbers in alternating run-lengths of 1s and 0s. This conforms to the standard definition of unary, i.e. N digits of the same number, 1 or 0. All run-lengths by definition have at least one digit and thus represent strictly positive integers. Such codes are guaranteed to end validly on any length of data (when reading arbitrary data) and, in the (separate) write cycle, allow for the use and transmission of an extra bit (the one used for the first bit) while maintaining overall and per-integer unary code lengths of exactly N. There are also uniquely decodable unary codes that are not prefix codes and are not instantaneously decodable (they require look-ahead to decode); when writing unsigned integers these codes likewise allow the use and transmission of an extra bit (the one used for the first bit), and are thus able to transmit m integers times N unary bits plus 1 additional bit of information within m·N bits of data. Another family of unary codes is symmetric, can be read in either direction, and is instantaneously decodable in either direction. For unary values where the maximum is known, one can use canonical unary codes that are of a somewhat numerical nature and different from character-based codes. They involve starting with numerical '0' or '-1' ({\displaystyle 2^{n}-1}) and the maximum number of digits, then for each step reducing the number of digits by one and increasing/decreasing the result by numerical '1'. Canonical codes can require less processing time to decode when they are processed as numbers rather than as strings.
If the number of codes required for a given symbol length is greater than one, i.e. more non-unary codes of some length are required, those are obtained by increasing/decreasing the values numerically without reducing the length in that case. A generalized version of unary coding was presented by Subhash Kak to represent numbers much more efficiently than standard unary coding.[3] For example, generalized unary coding of the integers from 0 through 15 requires only 7 bits (where three bits are arbitrarily chosen in place of a single one in standard unary to show the number). The representation is cyclic, in that markers are used to represent higher integers in higher cycles. Generalized unary coding requires that the range of numbers to be represented be pre-specified, because this range determines the number of bits that are needed.
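For concreteness, a minimal encoder/decoder for the standard convention (n ones followed by a terminating zero) might look as follows. This is an illustrative sketch, not from the source; the n − 1 ones convention for strictly positive integers is handled analogously.

```python
def unary_encode(n: int) -> str:
    """Standard unary code for a non-negative integer: n ones followed by a terminating zero."""
    if n < 0:
        raise ValueError("unary coding is defined for natural numbers only")
    return "1" * n + "0"

def unary_decode(bits: str) -> list[int]:
    """Decode a concatenation of unary codewords; the code is prefix-free,
    so each codeword ends at the first zero."""
    values, count = [], 0
    for b in bits:
        if b == "1":
            count += 1
        else:                      # the terminating zero ends the current codeword
            values.append(count)
            count = 0
    return values

stream = "".join(unary_encode(n) for n in [5, 0, 3])
print(stream)                      # 11111001110
print(unary_decode(stream))        # [5, 0, 3]
```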
https://en.wikipedia.org/wiki/Unary_coding
In statistics, semiparametric regression includes regression models that combine parametric and nonparametric models. They are often used in situations where the fully nonparametric model may not perform well or when the researcher wants to use a parametric model but the functional form with respect to a subset of the regressors or the density of the errors is not known. Semiparametric regression models are a particular type of semiparametric modelling and, since semiparametric models contain a parametric component, they rely on parametric assumptions and may be misspecified and inconsistent, just like a fully parametric model. Many different semiparametric regression methods have been proposed and developed. The most popular methods are the partially linear, index and varying coefficient models. A partially linear model is given by {\displaystyle Y_{i}=X'_{i}\beta +g\left(Z_{i}\right)+u_{i},} where {\displaystyle Y_{i}} is the dependent variable, {\displaystyle X_{i}} is a {\displaystyle p\times 1} vector of explanatory variables, {\displaystyle \beta } is a {\displaystyle p\times 1} vector of unknown parameters and {\displaystyle Z_{i}\in \operatorname {R} ^{q}}. The parametric part of the partially linear model is given by the parameter vector {\displaystyle \beta } while the nonparametric part is the unknown function {\displaystyle g\left(Z_{i}\right)}. The data is assumed to be i.i.d. with {\displaystyle E\left(u_{i}|X_{i},Z_{i}\right)=0} and the model allows for a conditionally heteroskedastic error process {\displaystyle E\left(u_{i}^{2}|x,z\right)=\sigma ^{2}\left(x,z\right)} of unknown form. This type of model was proposed by Robinson (1988) and extended to handle categorical covariates by Racine and Li (2007). The method is implemented by obtaining a {\displaystyle {\sqrt {n}}}-consistent estimator of {\displaystyle \beta } and then deriving an estimator of {\displaystyle g\left(Z_{i}\right)} from the nonparametric regression of {\displaystyle Y_{i}-X'_{i}{\hat {\beta }}} on {\displaystyle z} using an appropriate nonparametric regression method.[1] A single index model takes the form {\displaystyle Y=g\left(X'\beta _{0}\right)+u,} where {\displaystyle Y}, {\displaystyle X} and {\displaystyle \beta _{0}} are defined as earlier and the error term {\displaystyle u} satisfies {\displaystyle E\left(u|X\right)=0}. The single index model takes its name from the parametric part of the model, {\displaystyle x'\beta }, which is a scalar single index. The nonparametric part is the unknown function {\displaystyle g\left(\cdot \right)}. The single index model method developed by Ichimura (1993) is as follows. Consider the situation in which {\displaystyle y} is continuous. Given a known form for the function {\displaystyle g\left(\cdot \right)}, {\displaystyle \beta _{0}} could be estimated using the nonlinear least squares method to minimize the function {\displaystyle \sum _{i=1}^{n}\left(Y_{i}-g\left(X'_{i}\beta \right)\right)^{2}.} Since the functional form of {\displaystyle g\left(\cdot \right)} is not known, it must be estimated as well. For a given value of {\displaystyle \beta }, an estimate of the function can be obtained using a kernel method. Ichimura (1993) proposes estimating {\displaystyle g\left(X'_{i}\beta \right)} with the leave-one-out nonparametric kernel estimator of {\displaystyle G\left(X'_{i}\beta \right)}. If the dependent variable {\displaystyle y} is binary and {\displaystyle X_{i}} and {\displaystyle u_{i}} are assumed to be independent, Klein and Spady (1993) propose a technique for estimating {\displaystyle \beta } using maximum likelihood methods.
The log-likelihood function is given by {\displaystyle \sum _{i=1}^{n}y_{i}\ln {\hat {g}}_{-i}\left(X'_{i}\beta \right)+\sum _{i=1}^{n}\left(1-y_{i}\right)\ln \left(1-{\hat {g}}_{-i}\left(X'_{i}\beta \right)\right),} where {\displaystyle {\hat {g}}_{-i}\left(X'_{i}\beta \right)} is the leave-one-out estimator. Hastie and Tibshirani (1993) propose a smooth coefficient model given by {\displaystyle Y_{i}=\alpha \left(Z_{i}\right)+X'_{i}\beta \left(Z_{i}\right)+u_{i}=W'_{i}\gamma \left(Z_{i}\right)+u_{i},} where {\displaystyle X_{i}} is a {\displaystyle k\times 1} vector, {\displaystyle \beta \left(z\right)} is a vector of unspecified smooth functions of {\displaystyle z}, {\displaystyle W_{i}=\left(1,X'_{i}\right)'} and {\displaystyle \gamma \left(Z_{i}\right)=\left(\alpha \left(Z_{i}\right),\beta \left(Z_{i}\right)'\right)'}. Under the assumption {\displaystyle E\left(u_{i}|X_{i},Z_{i}\right)=0}, {\displaystyle \gamma \left(\cdot \right)} may be expressed as {\displaystyle \gamma \left(z\right)=\left(E\left[W_{i}W'_{i}\mid Z_{i}=z\right]\right)^{-1}E\left[W_{i}Y_{i}\mid Z_{i}=z\right].}
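As an illustration of the partially linear model discussed above, the following sketch implements a Robinson-style double-residual estimator with a plain Nadaraya–Watson smoother (not the leave-one-out variant) for univariate X and Z. All names, the bandwidth, and the simulated data are assumptions made for the example, not part of the source.

```python
import numpy as np

def nw(z_eval, z, v, h):
    """Nadaraya-Watson estimate of E[v | Z = z_eval] with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((z_eval[:, None] - z[None, :]) / h) ** 2)
    return (w @ v) / w.sum(axis=1)

def robinson_plm(y, x, z, h=0.1):
    """Double-residual estimator for y = x*beta + g(z) + u (x univariate for simplicity).
    Returns beta_hat and a function estimating g."""
    ey = nw(z, z, y, h)                          # estimate of E[Y | Z]
    ex = nw(z, z, x, h)                          # estimate of E[X | Z]
    beta = np.sum((x - ex) * (y - ey)) / np.sum((x - ex) ** 2)
    g_hat = lambda z_new: nw(np.atleast_1d(z_new), z, y - x * beta, h)
    return beta, g_hat

rng = np.random.default_rng(0)
n = 500
z = rng.uniform(0, 1, n)
x = z + rng.normal(0, 1, n)                      # regressor correlated with z
g = np.sin(2 * np.pi * z)                        # unknown smooth component
y = 2.0 * x + g + rng.normal(0, 0.2, n)

beta_hat, g_hat = robinson_plm(y, x, z)
print(beta_hat)                                  # approximately recovers the true coefficient 2.0
```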
https://en.wikipedia.org/wiki/Semiparametric_regression
In statistics and numerical analysis, isotonic regression or monotonic regression is the technique of fitting a free-form line to a sequence of observations such that the fitted line is non-decreasing (or non-increasing) everywhere, and lies as close to the observations as possible. Isotonic regression has applications in statistical inference. For example, one might use it to fit an isotonic curve to the means of some set of experimental results when an increase in those means according to some particular ordering is expected. A benefit of isotonic regression is that it is not constrained by any functional form, such as the linearity imposed by linear regression, as long as the function is monotonically increasing. Another application is nonmetric multidimensional scaling,[1] where a low-dimensional embedding for data points is sought such that the order of distances between points in the embedding matches the order of dissimilarity between points. Isotonic regression is used iteratively to fit ideal distances to preserve relative dissimilarity order. Isotonic regression is also used in probabilistic classification to calibrate the predicted probabilities of supervised machine learning models.[2] Isotonic regression for the simply ordered case with univariate {\displaystyle x,y} has been applied to estimating continuous dose-response relationships in fields such as anesthesiology and toxicology. Narrowly speaking, isotonic regression only provides point estimates at observed values of {\displaystyle x}. Estimation of the complete dose-response curve without any additional assumptions is usually done via linear interpolation between the point estimates.[3] Software for computing isotone (monotonic) regression has been developed for R,[4][5][6] Stata, and Python.[7] Let {\displaystyle (x_{1},y_{1}),\ldots ,(x_{n},y_{n})} be a given set of observations, where the {\displaystyle y_{i}\in \mathbb {R} } and the {\displaystyle x_{i}} fall in some partially ordered set. For generality, each observation {\displaystyle (x_{i},y_{i})} may be given a weight {\displaystyle w_{i}\geq 0}, although commonly {\displaystyle w_{i}=1} for all {\displaystyle i}. Isotonic regression seeks a weighted least-squares fit {\displaystyle {\hat {y}}_{i}\approx y_{i}} for all {\displaystyle i}, subject to the constraint that {\displaystyle {\hat {y}}_{i}\leq {\hat {y}}_{j}} whenever {\displaystyle x_{i}\leq x_{j}}. This gives the following quadratic program (QP) in the variables {\displaystyle {\hat {y}}_{1},\ldots ,{\hat {y}}_{n}}: {\displaystyle \min \sum _{i=1}^{n}w_{i}\left({\hat {y}}_{i}-y_{i}\right)^{2}\quad {\text{subject to }}{\hat {y}}_{i}\leq {\hat {y}}_{j}{\text{ for all }}(i,j)\in E,} where {\displaystyle E=\{(i,j):x_{i}\leq x_{j}\}} specifies the partial ordering of the observed inputs {\displaystyle x_{i}} (and may be regarded as the set of edges of some directed acyclic graph (dag) with vertices {\displaystyle 1,2,\ldots n}). Problems of this form may be solved by generic quadratic programming techniques. In the usual setting where the {\displaystyle x_{i}} values fall in a totally ordered set such as {\displaystyle \mathbb {R} }, we may assume WLOG that the observations have been sorted so that {\displaystyle x_{1}\leq x_{2}\leq \cdots \leq x_{n}}, and take {\displaystyle E=\{(i,i+1):1\leq i<n\}}. In this case, a simple iterative algorithm for solving the quadratic program is the pool adjacent violators algorithm. Best and Chakravarti[8] studied the problem as an active set identification problem, and proposed a primal algorithm.
These two algorithms can be seen as each other's dual, and both have a computational complexity of {\displaystyle O(n)} on already sorted data.[8] To complete the isotonic regression task, we may then choose any non-decreasing function {\displaystyle f(x)} such that {\displaystyle f(x_{i})={\hat {y}}_{i}} for all i. Any such function solves the isotonic regression problem and can be used to predict the {\displaystyle y} values for new values of {\displaystyle x}. A common choice when {\displaystyle x_{i}\in \mathbb {R} } would be to interpolate linearly between the points {\displaystyle (x_{i},{\hat {y}}_{i})}, yielding a continuous piecewise linear function. In the presence of monotonicity violations, the resulting interpolated curve will have flat (constant) intervals. In dose-response applications it is usually known that {\displaystyle f(x)} is not only monotone but also smooth. The flat intervals are incompatible with {\displaystyle f(x)}'s assumed shape, and can be shown to be biased. A simple improvement for such applications, named centered isotonic regression (CIR), was developed by Oron and Flournoy and shown to substantially reduce estimation error for both dose-response and dose-finding applications.[9] Both CIR and the standard isotonic regression for the univariate, simply ordered case are implemented in the R package "cir".[4] This package also provides analytical confidence-interval estimates.
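For the totally ordered case, the pool adjacent violators algorithm mentioned above can be sketched in a few lines. This is an illustrative implementation, not the reference one; production use would more likely rely on an existing library such as scikit-learn's IsotonicRegression.

```python
def pava(y, w=None):
    """Pool Adjacent Violators Algorithm for isotonic (non-decreasing) least-squares
    regression on data already sorted by x. Returns the fitted values y_hat."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    blocks = []                                   # each block: [weighted mean, total weight, count]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)                       # each pooled block becomes a flat segment
    return fit

y = [1.0, 3.0, 2.0, 4.0, 3.5, 5.0]
print(pava(y))   # [1.0, 2.5, 2.5, 3.75, 3.75, 5.0]
```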
https://en.wikipedia.org/wiki/Isotonic_regression
In statistics and econometrics, the multinomial probit model is a generalization of the probit model used when there are several possible categories that the dependent variable can fall into. As such, it is an alternative to the multinomial logit model as one method of multiclass classification. It is not to be confused with the multivariate probit model, which is used to model correlated binary outcomes for more than one independent variable. It is assumed that we have a series of observations Yi, for i = 1...n, of the outcomes of multi-way choices from a categorical distribution of size m (there are m possible choices). Along with each observation Yi is a set of k observed values x1,i, ..., xk,i of explanatory variables (also known as independent variables, predictor variables, features, etc.). The multinomial probit model is a statistical model that can be used to predict the likely outcome of an unobserved multi-way trial given the associated explanatory variables. In the process, the model attempts to explain the relative effect of differing explanatory variables on the different outcomes. Formally, the outcomes Yi are described as being categorically-distributed data, where each outcome value h for observation i occurs with an unobserved probability pi,h that is specific to the observation i at hand because it is determined by the values of the explanatory variables associated with that observation. That is, {\displaystyle Y_{i}\mid \mathbf {X} _{i}\sim \operatorname {Categorical} (p_{i,1},\ldots ,p_{i,m})}, or equivalently {\displaystyle \Pr \left[Y_{i}=h\mid \mathbf {X} _{i}\right]=p_{i,h}} for each of m possible values of h. Multinomial probit is often written in terms of a latent variable model: {\displaystyle Y_{i}^{1\ast }={\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}+\varepsilon _{1},\;\ldots ,\;Y_{i}^{m\ast }={\boldsymbol {\beta }}_{m}\cdot \mathbf {X} _{i}+\varepsilon _{m},} where the error vector {\displaystyle {\boldsymbol {\varepsilon }}=\left(\varepsilon _{1},\ldots ,\varepsilon _{m}\right)\sim {\mathcal {N}}(0,{\boldsymbol {\Sigma }})}. Then the observed choice is the category with the largest latent value, {\displaystyle Y_{i}=\operatorname {arg\,max} _{h}\,Y_{i}^{h\ast }.} That is, {\displaystyle \Pr \left[Y_{i}=h\right]=\Pr \left[Y_{i}^{h\ast }>Y_{i}^{l\ast }{\text{ for all }}l\neq h\right].} Note that this model allows for arbitrary correlation between the error variables, so that it doesn't necessarily respect independence of irrelevant alternatives. When {\displaystyle {\boldsymbol {\Sigma }}} is the identity matrix (such that there is no correlation or heteroscedasticity), the model is called independent probit. For details on how the equations are estimated, see the article Probit model.
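Because the choice probabilities involve an m-dimensional normal integral with a general Σ, they are typically approximated by simulation. The sketch below is illustrative only; the coefficient vectors, Σ, and the function name are assumptions. It estimates the probabilities by drawing correlated errors and counting which latent utility is largest.

```python
import numpy as np

def mnp_choice_probs(x, betas, Sigma, n_draws=200_000, seed=0):
    """Monte Carlo approximation of multinomial probit choice probabilities:
    category h wins when its latent utility beta_h . x + eps_h is the largest,
    with eps ~ N(0, Sigma) allowing correlated errors."""
    rng = np.random.default_rng(seed)
    means = np.array([b @ x for b in betas])               # (m,) latent means
    eps = rng.multivariate_normal(np.zeros(len(betas)), Sigma, size=n_draws)
    winners = np.argmax(means + eps, axis=1)
    return np.bincount(winners, minlength=len(betas)) / n_draws

x = np.array([1.0, 0.5])                   # includes an intercept term
betas = [np.array([0.2, 1.0]),             # one coefficient vector per category
         np.array([0.0, 0.0]),
         np.array([-0.3, 0.8])]
Sigma = np.array([[1.0, 0.5, 0.0],         # correlated errors: no IIA restriction
                  [0.5, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
print(mnp_choice_probs(x, betas, Sigma))   # estimated probabilities, summing to 1
```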
https://en.wikipedia.org/wiki/Multinomial_probit
In statistics, the generalized Dirichlet distribution (GD) is a generalization of the Dirichlet distribution with a more general covariance structure and almost twice the number of parameters. Random vectors with a GD distribution are completely neutral.[1] The density function of {\displaystyle p_{1},\ldots ,p_{k-1}} is {\displaystyle \left[\prod _{i=1}^{k-1}B(a_{i},b_{i})\right]^{-1}p_{k}^{b_{k-1}-1}\prod _{i=1}^{k-1}\left[p_{i}^{a_{i}-1}\left(\sum _{j=i}^{k}p_{j}\right)^{b_{i-1}-a_{i}-b_{i}}\right],} where we define {\displaystyle p_{k}=1-\sum _{i=1}^{k-1}p_{i}}. Here {\displaystyle B(x,y)} denotes the Beta function. This reduces to the standard Dirichlet distribution if {\displaystyle b_{i-1}=a_{i}+b_{i}} for {\displaystyle 2\leqslant i\leqslant k-1} ({\displaystyle b_{0}} is arbitrary). For example, if k = 4, then the density function of {\displaystyle p_{1},p_{2},p_{3}} is {\displaystyle \left[B(a_{1},b_{1})B(a_{2},b_{2})B(a_{3},b_{3})\right]^{-1}p_{1}^{a_{1}-1}p_{2}^{a_{2}-1}p_{3}^{a_{3}-1}p_{4}^{b_{3}-1}\left(p_{2}+p_{3}+p_{4}\right)^{b_{1}-a_{2}-b_{2}}\left(p_{3}+p_{4}\right)^{b_{2}-a_{3}-b_{3}},} where {\displaystyle p_{1}+p_{2}+p_{3}<1} and {\displaystyle p_{4}=1-p_{1}-p_{2}-p_{3}}. Connor and Mosimann define the PDF as they did for the following reason. Define random variables {\displaystyle z_{1},\ldots ,z_{k-1}} with {\displaystyle z_{1}=p_{1},\;z_{2}=p_{2}/\left(1-p_{1}\right),\;z_{3}=p_{3}/\left(1-(p_{1}+p_{2})\right),\ldots ,z_{i}=p_{i}/\left(1-\left(p_{1}+\cdots +p_{i-1}\right)\right)}. Then {\displaystyle p_{1},\ldots ,p_{k}} have the generalized Dirichlet distribution as parametrized above, if the {\displaystyle z_{i}} are independent beta with parameters {\displaystyle a_{i},b_{i}}, {\displaystyle i=1,\ldots ,k-1}. Wong[2] gives the slightly more concise form for {\displaystyle x_{1}+\cdots +x_{k}\leq 1}: {\displaystyle \prod _{j=1}^{k}{\frac {x_{j}^{\alpha _{j}-1}\left(1-x_{1}-\cdots -x_{j}\right)^{\gamma _{j}}}{B(\alpha _{j},\beta _{j})}},} where {\displaystyle \gamma _{j}=\beta _{j}-\alpha _{j+1}-\beta _{j+1}} for {\displaystyle 1\leq j\leq k-1} and {\displaystyle \gamma _{k}=\beta _{k}-1}. Note that Wong defines a distribution over a {\displaystyle k}-dimensional space (implicitly defining {\displaystyle x_{k+1}=1-\sum _{i=1}^{k}x_{i}}) while Connor and Mosimann use a {\displaystyle k-1}-dimensional space with {\displaystyle x_{k}=1-\sum _{i=1}^{k-1}x_{i}}. If {\displaystyle X=\left(X_{1},\ldots ,X_{k}\right)\sim GD_{k}\left(\alpha _{1},\ldots ,\alpha _{k};\beta _{1},\ldots ,\beta _{k}\right)}, then the general moment function is {\displaystyle E\left[X_{1}^{r_{1}}X_{2}^{r_{2}}\cdots X_{k}^{r_{k}}\right]=\prod _{j=1}^{k}{\frac {B\left(\alpha _{j}+r_{j},\beta _{j}+\delta _{j}\right)}{B\left(\alpha _{j},\beta _{j}\right)}},} where {\displaystyle \delta _{j}=r_{j+1}+r_{j+2}+\cdots +r_{k}} for {\displaystyle j=1,2,\cdots ,k-1} and {\displaystyle \delta _{k}=0}. Thus, for example, {\displaystyle E\left(X_{j}\right)={\frac {\alpha _{j}}{\alpha _{j}+\beta _{j}}}\prod _{m=1}^{j-1}{\frac {\beta _{m}}{\alpha _{m}+\beta _{m}}}.} As stated above, if {\displaystyle b_{i-1}=a_{i}+b_{i}} for {\displaystyle 2\leq i\leq k} then the distribution reduces to a standard Dirichlet. This condition is different from the usual case, in which setting the additional parameters of the generalized distribution to zero results in the original distribution. However, in the case of the GDD, this results in a very complicated density function. Suppose {\displaystyle X=\left(X_{1},\ldots ,X_{k}\right)\sim GD_{k}\left(\alpha _{1},\ldots ,\alpha _{k};\beta _{1},\ldots ,\beta _{k}\right)} is generalized Dirichlet, and that {\displaystyle Y\mid X} is multinomial with {\displaystyle n} trials (here {\displaystyle Y=\left(Y_{1},\ldots ,Y_{k}\right)}).
Writing {\displaystyle Y_{j}=y_{j}} for {\displaystyle 1\leq j\leq k} and {\displaystyle y_{k+1}=n-\sum _{i=1}^{k}y_{i}}, the joint posterior of {\displaystyle X|Y} is a generalized Dirichlet distribution {\displaystyle GD_{k}\left(\alpha '_{1},\ldots ,\alpha '_{k};\beta '_{1},\ldots ,\beta '_{k}\right),} where {\displaystyle {\alpha '}_{j}=\alpha _{j}+y_{j}} and {\displaystyle {\beta '}_{j}=\beta _{j}+\sum _{i=j+1}^{k+1}y_{i}} for {\displaystyle 1\leqslant j\leqslant k.} Wong gives the following system as an example of how the Dirichlet and generalized Dirichlet distributions differ. He posits that a large urn contains balls of {\displaystyle k+1} different colours. The proportion of each colour is unknown. Write {\displaystyle X=(X_{1},\ldots ,X_{k})}, where {\displaystyle X_{j}} is the proportion of the balls with colour {\displaystyle j} in the urn. Experiment 1. Analyst 1 believes that {\displaystyle X\sim D(\alpha _{1},\ldots ,\alpha _{k},\alpha _{k+1})} (i.e., {\displaystyle X} is Dirichlet with parameters {\displaystyle \alpha _{i}}). The analyst then makes {\displaystyle k+1} glass boxes and puts {\displaystyle \alpha _{i}} marbles of colour {\displaystyle i} in box {\displaystyle i} (it is assumed that the {\displaystyle \alpha _{i}} are integers {\displaystyle \geq 1}). Then analyst 1 draws a ball from the urn, observes its colour (say colour {\displaystyle j}) and puts it in box {\displaystyle j}. He can identify the correct box because they are transparent and the colours of the marbles within are visible. The process continues until {\displaystyle n} balls have been drawn. The posterior distribution is then Dirichlet with parameters being the number of marbles in each box. Experiment 2. Analyst 2 believes that {\displaystyle X} follows a generalized Dirichlet distribution: {\displaystyle X\sim GD(\alpha _{1},\ldots ,\alpha _{k};\beta _{1},\ldots ,\beta _{k})}. All parameters are again assumed to be positive integers. The analyst makes {\displaystyle k+1} wooden boxes. The boxes have two areas: one for balls and one for marbles. The balls are coloured but the marbles are not coloured. Then for {\displaystyle j=1,\ldots ,k}, he puts {\displaystyle \alpha _{j}} balls of colour {\displaystyle j}, and {\displaystyle \beta _{j}} marbles, into box {\displaystyle j}. He then puts a ball of colour {\displaystyle k+1} in box {\displaystyle k+1}. The analyst then draws a ball from the urn. Because the boxes are wooden, the analyst cannot tell which box to put the ball in (as he could in experiment 1 above); he also has a poor memory and cannot remember which box contains which colour balls. He has to discover which box is the correct one to put the ball in. He does this by opening box 1 and comparing the balls in it to the drawn ball. If the colours differ, the box is the wrong one. The analyst places a marble in box 1 and proceeds to box 2. He repeats the process until the balls in the box match the drawn ball, at which point he places the ball in the box with the other balls of matching colour. The analyst then draws another ball from the urn and repeats until {\displaystyle n} balls are drawn. The posterior is then generalized Dirichlet with parameters {\displaystyle \alpha } being the number of balls, and {\displaystyle \beta } the number of marbles, in each box. Note that in experiment 2, changing the order of the boxes has a non-trivial effect, unlike in experiment 1.
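The Connor–Mosimann construction described above gives a direct way to sample from the distribution: draw independent beta variables and apply the stick-breaking recursion. The following sketch is illustrative only (NumPy assumed, names hypothetical); it draws samples and checks the first moment of the first component, which equals a₁/(a₁ + b₁).

```python
import numpy as np

def sample_gd(a, b, size=1, rng=None):
    """Draw from the generalized Dirichlet distribution GD_k(a; b) via the
    Connor-Mosimann construction: z_i ~ Beta(a_i, b_i) independently and
    p_i = z_i * (1 - p_1 - ... - p_{i-1})."""
    rng = np.random.default_rng() if rng is None else rng
    a, b = np.asarray(a, float), np.asarray(b, float)
    k = len(a)
    z = rng.beta(a, b, size=(size, k))
    p = np.empty((size, k))
    remaining = np.ones(size)
    for i in range(k):
        p[:, i] = z[:, i] * remaining
        remaining -= p[:, i]
    return p                        # the implicit last component is 1 - p.sum(axis=1)

rng = np.random.default_rng(1)
a, b = [2.0, 3.0, 1.5], [4.0, 2.0, 2.5]
p = sample_gd(a, b, size=200_000, rng=rng)

print(p.sum(axis=1).max() <= 1.0)               # components stay inside the simplex
print(p[:, 0].mean(), a[0] / (a[0] + b[0]))     # empirical vs exact E(X_1)
```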
https://en.wikipedia.org/wiki/Generalized_Dirichlet_distribution