A very large database (originally written very large data base), or VLDB,[1] is a database that contains a very large amount of data, so much that it can require specialized architectural, management, processing and maintenance methodologies.[2][3][4][5]
The vague adjectives of very and large allow for a broad and subjective interpretation, but attempts at defining a metric and threshold have been made. Early metrics were the size of the database in a canonical form via database normalization or the time for a full database operation like a backup. Technology improvements have continually changed what is considered very large.[6][7]
One definition has suggested that a database has become a VLDB when it is "too large to be maintained within the window of opportunity… the time when the database is quiet".[8]
There is no absolute amount of data that can be cited. For example, one cannot say that any database with more than 1 TB of data is considered a VLDB. This absolute amount of data has varied over time as computer processing, storage and backup methods have become better able to handle larger amounts of data.[5] That said, VLDB issues may start to appear when 1 TB is approached,[8][9] and are more than likely to have appeared as 30 TB or so is exceeded.[10]
Key areas where a VLDB may present challenges include configuration, storage, performance, maintenance, administration, availability and server resources.[11]: 11
Careful configuration of databases in the VLDB realm is necessary to alleviate or reduce the issues they raise.[11]: 36–53[12]
The complexities of managing a VLDB can increase exponentially for thedatabase administratoras database size increases.[13]
VLDB operations relating to maintenance and recovery, such as database reorganizations and file copies, which were quite practical on a non-VLDB database, can take very significant amounts of time and resources for a VLDB.[14] In particular, it is typically infeasible to meet a typical recovery time objective (RTO), the maximum time a database is expected to be unavailable due to an interruption, by methods that involve copying files from disk or other storage archives.[13] To overcome these issues, techniques such as clustering, cloned/replicated/standby databases, file snapshots, storage snapshots or a backup manager may help achieve the RTO and availability, although individual methods may have limitations, caveats, license and infrastructure requirements, while some may risk data loss and fail to meet the recovery point objective (RPO).[15][16][13][17][18] For many systems only geographically remote solutions may be acceptable.[19]
Best practice is for backup and recovery to be architected as part of the overall availability and business continuity solution.[20][21]
Given the same infrastructure, there may typically be a decrease in performance, that is, an increase in response time, as database size increases. Some accesses will simply have more data to process (scan), which will take proportionally longer (linear time), while the indexes used to access data may grow slightly in height, requiring perhaps an extra storage access to reach the data (sub-linear time).[22] Other effects can be caching becoming less efficient, because proportionally less data can be cached, and while some indexes such as the B+ tree automatically cope well with growth, others such as a hash table may need to be rebuilt.
Should an increase in database size cause the number of accessors of the database to increase, then more server and network resources may be consumed, and the risk of contention will increase. Some solutions to regaining performance include partitioning, clustering, possibly with sharding, or use of a database machine.[23]: 390[24]
Partitioning may also assist the performance of bulk operations on a VLDB, including backup and recovery,[25] bulk movements due to information lifecycle management (ILM),[26]: 3[27]: 105–118 reducing contention,[27]: 327–329 as well as allowing optimization of some query processing.[27]: 215–230
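As a rough illustration of the sharding idea mentioned above, the following Python sketch routes rows to shards by hashing a partition key, so bulk operations (backup, purge, rebuild) can be run per shard instead of against one monolithic store. The shard count and key names are hypothetical and not taken from any particular product.

```python
# Minimal sketch of hash-based sharding: rows are routed to one of N shards
# by hashing the partition key. Shard count and key name are illustrative
# assumptions, not details from the article.
import hashlib

NUM_SHARDS = 8  # hypothetical number of physical partitions

def shard_for(key: str) -> int:
    """Map a partition key to a shard index deterministically."""
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

if __name__ == "__main__":
    for customer_id in ("cust-001", "cust-002", "cust-42"):
        print(customer_id, "->", shard_for(customer_id))
```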
In order to satisfy the needs of a VLDB, the database storage needs to have low access latency and contention, high throughput, and high availability.
The increasing size of a VLDB may put pressure on server and network resources and a bottleneck may appear that may require infrastructure investment to resolve.[13][28]
VLDB is not the same as big data, but the storage aspect of big data may involve a VLDB.[2] That said, some of the storage solutions supporting big data were designed from the start to support large volumes of data, so database administrators may not encounter VLDB issues that older versions of traditional RDBMSs might encounter.[29]
https://en.wikipedia.org/wiki/Very_large_database
In applied mathematics, topological data analysis (TDA) is an approach to the analysis of datasets using techniques from topology. Extraction of information from datasets that are high-dimensional, incomplete and noisy is generally challenging. TDA provides a general framework to analyze such data in a manner that is insensitive to the particular metric chosen and provides dimensionality reduction and robustness to noise. Beyond this, it inherits functoriality, a fundamental concept of modern mathematics, from its topological nature, which allows it to adapt to new mathematical tools.[citation needed]
The initial motivation is to study the shape of data. TDA has combined algebraic topology and other tools from pure mathematics to allow mathematically rigorous study of "shape". The main tool is persistent homology, an adaptation of homology to point cloud data. Persistent homology has been applied to many types of data across many fields. Moreover, its mathematical foundation is also of theoretical importance. The unique features of TDA make it a promising bridge between topology and geometry.[citation needed]
TDA is premised on the idea that the shape of data sets contains relevant information. Real high-dimensional data is typically sparse and tends to have relevant low-dimensional features. One task of TDA is to provide a precise characterization of this fact. For example, the trajectory of a simple predator–prey system governed by the Lotka–Volterra equations[1] forms a closed circle in state space. TDA provides tools to detect and quantify such recurrent motion.[2]
Many algorithms for data analysis, including those used in TDA, require setting various parameters. Without prior domain knowledge, the correct collection of parameters for a data set is difficult to choose. The main insight of persistent homology is to use the information obtained from all parameter values by encoding this huge amount of information into an understandable and easy-to-represent form. With TDA, there is a mathematical interpretation when the information is a homology group. In general, the assumption is that features that persist for a wide range of parameters are "true" features. Features persisting for only a narrow range of parameters are presumed to be noise, although the theoretical justification for this is unclear.[3]
Precursors to the full concept of persistent homology appeared gradually over time.[4] In 1990, Patrizio Frosini introduced a pseudo-distance between submanifolds, and later the size function, which on 1-dimensional curves is equivalent to the 0th persistent homology.[5][6] Nearly a decade later, Vanessa Robins studied the images of homomorphisms induced by inclusion.[7] Finally, shortly thereafter, Herbert Edelsbrunner et al. introduced the concept of persistent homology together with an efficient algorithm and its visualization as a persistence diagram.[8] Gunnar Carlsson et al. reformulated the initial definition and gave an equivalent visualization method called persistence barcodes,[9] interpreting persistence in the language of commutative algebra.[10]
In algebraic topology, persistent homology emerged through the work of Sergey Barannikov on Morse theory. The set of critical values of a smooth Morse function was canonically partitioned into pairs "birth–death", filtered complexes were classified, and their invariants, equivalent to the persistence diagram and persistence barcodes, together with an efficient algorithm for their calculation, were described under the name of canonical forms in 1994 by Barannikov.[11][12]
Some widely used concepts are introduced below. Note that some definitions may vary from author to author.
A point cloud is often defined as a finite set of points in some Euclidean space, but may be taken to be any finite metric space.
The Čech complex of a point cloud is the nerve of the cover of balls of a fixed radius around each point in the cloud.
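As a concrete illustration, the following Python sketch builds the closely related Vietoris–Rips complex (discussed later as a computationally simpler relative of the Čech complex) of a small Euclidean point cloud. The convention used here, that a set of points spans a simplex when all pairwise distances are at most 2r (so that balls of radius r intersect pairwise), is an assumption of this sketch.

```python
# Sketch: build the Vietoris-Rips complex of a small Euclidean point cloud.
# A subset of points spans a simplex when all pairwise distances are <= 2*r.
from itertools import combinations
from math import dist

def rips_complex(points, r, max_dim=2):
    """Return simplices (as tuples of point indices) of the Rips complex."""
    n = len(points)
    close = [[dist(points[i], points[j]) <= 2 * r for j in range(n)] for i in range(n)]
    simplices = [(i,) for i in range(n)]   # vertices
    for k in range(2, max_dim + 2):        # edges, triangles, ...
        for combo in combinations(range(n), k):
            if all(close[i][j] for i, j in combinations(combo, 2)):
                simplices.append(combo)
    return simplices

if __name__ == "__main__":
    pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8), (3.0, 3.0)]
    for s in rips_complex(pts, r=0.6):
        print(s)
```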
A persistence module $\mathbb{U}$ indexed by $\mathbb{Z}$ is a vector space $U_t$ for each $t\in\mathbb{Z}$ and a linear map $u_t^s\colon U_s\to U_t$ whenever $s\leq t$, such that $u_t^t=1$ for all $t$ and $u_t^s u_s^r=u_t^r$ whenever $r\leq s\leq t$.[13] An equivalent definition is a functor from $\mathbb{Z}$, considered as a partially ordered set, to the category of vector spaces.
The persistent homology group $PH$ of a point cloud is the persistence module defined as $PH_k(X)=\prod_r H_k(X_r)$, where $X_r$ is the Čech complex of radius $r$ of the point cloud $X$ and $H_k$ is the homology group.
A persistence barcode is a multiset of intervals in $\mathbb{R}$, and a persistence diagram is a multiset of points in $\Delta := \{(u,v)\in\mathbb{R}^2 \mid u,v\geq 0,\ u\leq v\}$.
The Wasserstein distance between two persistence diagrams $X$ and $Y$ is defined as
$$W_p[L_q](X,Y):=\inf_{\varphi\colon X\to Y}\left[\sum_{x\in X}\big(\lVert x-\varphi(x)\rVert_q\big)^p\right]^{1/p},$$
where $1\leq p,q\leq\infty$ and $\varphi$ ranges over bijections between $X$ and $Y$. Please refer to figure 3.1 in Munch[14] for an illustration.
The bottleneck distance between $X$ and $Y$ is
$$W_\infty[L_q](X,Y):=\inf_{\varphi\colon X\to Y}\sup_{x\in X}\lVert x-\varphi(x)\rVert_q.$$
This is a special case of the Wasserstein distance, letting $p=\infty$.
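For small diagrams these distances can be computed with a minimum-cost assignment after augmenting each diagram with "diagonal" slots, since unmatched points may be matched to the diagonal. The sketch below is a simplified illustration, not a reference implementation: it computes the p-Wasserstein distance with the L-infinity ground metric and assumes numpy and scipy are available.

```python
# Sketch: p-Wasserstein distance between two small persistence diagrams,
# with the L-infinity ground metric. Each diagram is a list of (birth, death)
# pairs; the cost matrix is augmented with diagonal slots so that points may
# be matched to the diagonal, then a minimum-cost assignment is solved.
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein(dgm_x, dgm_y, p=2):
    X, Y = np.asarray(dgm_x, float), np.asarray(dgm_y, float)
    n, m = len(X), len(Y)
    size = n + m
    C = np.zeros((size, size))
    # point-to-point costs (L-infinity ground metric)
    for i in range(n):
        for j in range(m):
            C[i, j] = np.max(np.abs(X[i] - Y[j])) ** p
    # point-to-diagonal costs: L-infinity distance to the diagonal is (death-birth)/2
    for i in range(n):
        C[i, m:] = ((X[i, 1] - X[i, 0]) / 2) ** p
    for j in range(m):
        C[n:, j] = ((Y[j, 1] - Y[j, 0]) / 2) ** p
    # diagonal-to-diagonal slots cost nothing (already zero)
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].sum() ** (1.0 / p)

if __name__ == "__main__":
    d1 = [(0.0, 1.0), (0.2, 0.5)]
    d2 = [(0.0, 1.1)]
    print(wasserstein(d1, d2, p=2))
```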
The first classification theorem for persistent homology appeared in 1994[11] via Barannikov's canonical forms. The classification theorem interpreting persistence in the language of commutative algebra appeared in 2005:[10] for a finitely generated persistence module $C$ with field $F$ coefficients,
$$H(C;F)\simeq\bigoplus_i x^{t_i}\cdot F[x]\oplus\left(\bigoplus_j x^{r_j}\cdot\big(F[x]/(x^{s_j}\cdot F[x])\big)\right).$$
Intuitively, the free parts correspond to the homology generators that appear at filtration level $t_i$ and never disappear, while the torsion parts correspond to those that appear at filtration level $r_j$ and last for $s_j$ steps of the filtration (or equivalently, disappear at filtration level $s_j+r_j$).[11]
Persistent homology is visualized through a barcode or persistence diagram. The barcode has its root in abstract mathematics. Namely, the category of finite filtered complexes over a field is semi-simple. Any filtered complex is isomorphic to its canonical form, a direct sum of one- and two-dimensional simple filtered complexes.
Stability is desirable because it provides robustness against noise. If $X$ is any space which is homeomorphic to a simplicial complex, and $f,g\colon X\to\mathbb{R}$ are continuous tame[15] functions, then the persistence vector spaces $\{H_k(f^{-1}([0,r]))\}$ and $\{H_k(g^{-1}([0,r]))\}$ are finitely presented, and $W_\infty(D(f),D(g))\leq\lVert f-g\rVert_\infty$, where $W_\infty$ refers to the bottleneck distance[16] and $D$ is the map taking a continuous tame function to the persistence diagram of its $k$-th homology.
The basic workflow in TDA is to replace a set of data points with a family of simplicial complexes indexed by a proximity parameter, analyze these complexes via persistent homology, and encode the result as a barcode or persistence diagram.[17]
The first algorithm over all fields for persistent homology in the algebraic topology setting was described by Barannikov[11] through reduction to canonical form by upper-triangular matrices. The algorithm for persistent homology over $F_2$ was given by Edelsbrunner et al.[8] Afra Zomorodian and Carlsson gave the practical algorithm to compute persistent homology over all fields.[10] Edelsbrunner and Harer's book gives general guidance on computational topology.[19]
One issue that arises in computation is the choice of complex. The Čech complex and the Vietoris–Rips complex are most natural at first glance; however, their size grows rapidly with the number of data points. The Vietoris–Rips complex is preferred over the Čech complex because its definition is simpler and the Čech complex requires extra effort to define in a general finite metric space. Efficient ways to lower the computational cost of homology have been studied. For example, the α-complex and witness complex are used to reduce the dimension and size of complexes.[20]
Recently, discrete Morse theory has shown promise for computational homology because it can reduce a given simplicial complex to a much smaller cellular complex which is homotopy equivalent to the original one.[21] This reduction can in fact be performed as the complex is constructed, by using matroid theory, leading to further performance increases.[22] Another recent algorithm saves time by ignoring the homology classes with low persistence.[23]
Various software packages are available, such as javaPlex, Dionysus, Perseus, PHAT, DIPHA, GUDHI, Ripser, and TDAstats. A comparison between these tools is given by Otter et al.[24] Giotto-tda is a Python package dedicated to integrating TDA into the machine learning workflow by means of a scikit-learn API. An R package, TDA, is capable of calculating recently invented concepts like the landscape and the kernel distance estimator.[25] The Topology ToolKit is specialized for continuous data defined on manifolds of low dimension (1, 2 or 3), as typically found in scientific visualization. Cubicle is optimized for large (gigabyte-scale) grayscale image data in dimension 1, 2 or 3, using cubical complexes and discrete Morse theory. Another R package, TDAstats, uses the Ripser library to calculate persistent homology.[26]
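As a hedged usage sketch, the Python bindings of Ripser can be driven roughly as follows; the function name and return structure below follow the package's documented interface but should be checked against the installed version.

```python
# Sketch: computing persistent homology of a noisy circle with the Python
# bindings of Ripser (package "ripser"; its API is assumed here).
import numpy as np
from ripser import ripser

# Sample points from a circle with a little noise.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 100)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((100, 2))

result = ripser(X, maxdim=1)       # persistence in dimensions 0 and 1
h0, h1 = result["dgms"]            # persistence diagrams as (birth, death) arrays
print("H1 features (birth, death):")
print(h1)                          # one long-lived interval is expected: the circle
```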
High-dimensional data is impossible to visualize directly. Many methods have been invented to extract a low-dimensional structure from the data set, such asprincipal component analysisandmultidimensional scaling.[27]However, it is important to note that the problem itself is ill-posed, since many different topological features can be found in the same data set. Thus, the study of visualization of high-dimensional spaces is of central importance to TDA, although it does not necessarily involve the use of persistent homology. However, recent attempts have been made to use persistent homology in data visualization.[28]
Carlsson et al. have proposed a general method called MAPPER.[29] It inherits the idea of Jean-Pierre Serre that a covering preserves homotopy.[30] A generalized formulation of MAPPER is as follows:
Let $X$ and $Z$ be topological spaces and let $f\colon X\to Z$ be a continuous map. Let $\mathbb{U}=\{U_\alpha\}_{\alpha\in A}$ be a finite open covering of $Z$. The output of MAPPER is the nerve of the pullback cover $M(\mathbb{U},f):=N(f^{-1}(\mathbb{U}))$, where each preimage is split into its connected components.[28] This is a very general concept, of which the Reeb graph[31] and merge trees are special cases.
This is not quite the original definition.[29] Carlsson et al. choose $Z$ to be $\mathbb{R}$ or $\mathbb{R}^2$ and cover it with open sets such that at most two intersect.[3] This restriction means that the output is in the form of a complex network. Because the topology of a finite point cloud is trivial, clustering methods (such as single linkage) are used to produce the analogue of connected sets in the preimage $f^{-1}(U)$ when MAPPER is applied to actual data.
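A minimal, illustrative sketch of this construction (overlapping interval cover of the filter range, a crude fixed-scale connected-components step standing in for single-linkage clustering, and nerve edges between clusters that share points) might look as follows; all parameter names and defaults are hypothetical, not part of any reference implementation.

```python
# Minimal MAPPER-style sketch (illustrative only):
# 1) cover the range of a filter f with overlapping intervals,
# 2) split each preimage into "clusters" (here: connected components of a
#    fixed-scale neighborhood graph, a crude stand-in for single linkage),
# 3) add an edge between clusters that share data points.
from itertools import combinations
from math import dist

def components(idx, points, eps):
    """Connected components of the eps-neighborhood graph on the given indices."""
    parent = {i: i for i in idx}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(idx, 2):
        if dist(points[i], points[j]) <= eps:
            parent[find(i)] = find(j)
    groups = {}
    for i in idx:
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())

def mapper(points, f, n_intervals=5, overlap=0.3, eps=0.5):
    values = [f(p) for p in points]
    lo, hi = min(values), max(values)
    length = (hi - lo) / n_intervals
    nodes, edges = [], set()
    for k in range(n_intervals):
        a = lo + k * length - overlap * length
        b = lo + (k + 1) * length + overlap * length
        idx = [i for i, v in enumerate(values) if a <= v <= b]
        for cluster in components(idx, points, eps):
            nodes.append(cluster)
    for (i, c1), (j, c2) in combinations(enumerate(nodes), 2):
        if c1 & c2:                       # clusters sharing points get an edge
            edges.add((i, j))
    return nodes, edges

if __name__ == "__main__":
    pts = [(x / 10.0, ((x / 10.0) - 1.5) ** 2) for x in range(31)]  # a parabola
    nodes, edges = mapper(pts, f=lambda p: p[0])
    print(len(nodes), "nodes,", len(edges), "edges")
```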
Mathematically speaking, MAPPER is a variation of the Reeb graph. If $M(\mathbb{U},f)$ is at most one-dimensional, then for each $i\geq 0$,
$$H_i(X)\simeq H_0(N(\mathbb{U});{\hat F}_i)\oplus H_1(N(\mathbb{U});{\hat F}_{i-1}).$$[32]
The added flexibility also has disadvantages. One problem is instability, in that some change of the choice of the cover can lead to a major change of the output of the algorithm.[33] Work has been done to overcome this problem.[28]
Three successful applications of MAPPER can be found in Carlsson et al.[34]A comment on the applications in this paper by J. Curry is that "a common feature of interest in applications is the presence of flares or tendrils".[35]
A free implementation of MAPPER written by Daniel Müllner and Aravindakshan Babu is available online. MAPPER also forms the basis of Ayasdi's AI platform.
Multidimensional persistence is important to TDA. The concept arises in both theory and practice. The first investigation of multidimensional persistence was early in the development of TDA.[36] Carlsson and Zomorodian introduced the theory of multidimensional persistence[37] and, in collaboration with Singh,[38] introduced the use of tools from symbolic algebra (Gröbner basis methods) to compute MPH modules. Their definition presents multidimensional persistence with $n$ parameters as a $\mathbb{Z}^n$-graded module over a polynomial ring in $n$ variables. Tools from commutative and homological algebra are applied to the study of multidimensional persistence in the work of Harrington, Otter, Schenck and Tillman.[39] The first application to appear in the literature is a method for shape comparison, similar to the invention of TDA.[40]
An n-dimensional persistence module in $\mathbb{R}^n$ is defined, analogously to the one-parameter case above, as a vector space $V_s$ for each $s=(s_1,\ldots,s_n)\in\mathbb{R}^n$ together with linear maps $\rho_s^t\colon V_s\to V_t$ whenever $s\leq t$ componentwise, satisfying $\rho_s^t\circ\rho_r^s=\rho_r^t$ for all $r\leq s\leq t$.[35]
It is worth noting that there are controversies over the definition of multidimensional persistence.[35]
One of the advantages of one-dimensional persistence is its representability by a diagram or barcode. However, discrete complete invariants of multidimensional persistence modules do not exist.[41] The main reason for this is that the structure of the collection of indecomposables is extremely complicated by Gabriel's theorem in the theory of quiver representations,[42] although a finitely generated n-dimensional persistence module can be uniquely decomposed into a direct sum of indecomposables due to the Krull–Schmidt theorem.[43]
Nonetheless, many results have been established. Carlsson and Zomorodian introduced the rank invariant $\rho_M(u,v)$, defined as $\rho_M(u,v)=\mathrm{rank}(x^{u-v}\colon M_u\to M_v)$, in which $M$ is a finitely generated n-graded module. In one dimension, it is equivalent to the barcode. In the literature, the rank invariant is often referred to as the persistent Betti numbers (PBNs).[19] In many theoretical works, authors have used a more restricted definition, an analogue from sublevel set persistence. Specifically, the persistence Betti numbers of a function $f\colon X\to\mathbb{R}^k$ are given by the function $\beta_f\colon\Delta^+\to\mathbb{N}$, taking each $(u,v)\in\Delta^+$ to $\beta_f(u,v):=\mathrm{rank}\big(H(X(f\leq u))\to H(X(f\leq v))\big)$, where $\Delta^+:=\{(u,v)\in\mathbb{R}^k\times\mathbb{R}^k : u\leq v\}$ and $X(f\leq u):=\{x\in X : f(x)\leq u\}$.
Some basic properties include monotonicity and diagonal jump.[44] Persistent Betti numbers will be finite if $X$ is a compact and locally contractible subspace of $\mathbb{R}^n$.[45]
Using a foliation method, the k-dimensional PBNs can be decomposed into a family of 1-dimensional PBNs by dimensionality reduction.[46] This method has also led to a proof that multidimensional PBNs are stable.[47] The discontinuities of PBNs only occur at points $(u,v)$ (with $u\leq v$) where either $u$ is a discontinuity point of $\rho_M(\star,v)$ or $v$ is a discontinuity point of $\rho(u,\star)$, under the assumption that $f\in C^0(X,\mathbb{R}^k)$ and $X$ is a compact, triangulable topological space.[48]
The persistence space, a generalization of the persistence diagram, is defined as the multiset of all points with multiplicity larger than 0, together with the diagonal.[49] It provides a stable and complete representation of PBNs. Ongoing work by Carlsson et al. is trying to give a geometric interpretation of persistent homology, which might provide insights on how to combine machine learning theory with topological data analysis.[50]
The first practical algorithm to compute multidimensional persistence was invented very early.[51] Since then, many other algorithms have been proposed, based on such concepts as discrete Morse theory[52] and finite sample estimation.[53]
The standard paradigm in TDA is often referred to as sublevel persistence. Apart from multidimensional persistence, much work has been done to extend this special case.
The nonzero maps in a persistence module are restricted by the preorder relationship in the category. However, mathematicians have found that unanimity of direction is not essential to many results. "The philosophical point is that the decomposition theory of graph representations is somewhat independent of the orientation of the graph edges".[54] Zigzag persistence is important to the theoretical side. The examples given in Carlsson's review paper to illustrate the importance of functoriality all share some of its features.[3]
There are some attempts to loosen the stricter restriction of the function.[55] Please refer to the Categorification and cosheaves and Impact on mathematics sections for more information.
It is natural to extend persistent homology to other basic concepts in algebraic topology, such as cohomology and relative homology/cohomology.[56] An interesting application is the computation of circular coordinates for a data set via the first persistent cohomology group.[57]
Normal persistent homology studies real-valued functions. The circle-valued map might be useful: "persistence theory for circle-valued maps promises to play the role for some vector fields as does the standard persistence theory for scalar fields", as commented in Dan Burghelea et al.[58] The main difference is that Jordan cells (very similar in format to the Jordan blocks in linear algebra), which would be zero in the real-valued case, are nontrivial for circle-valued functions; combined with barcodes, they give the invariants of a tame map under moderate conditions.[58]
Two techniques they use are Morse–Novikov theory[59] and graph representation theory.[60] More recent results can be found in D. Burghelea et al.[61] For example, the tameness requirement can be replaced by the much weaker condition of continuity.
The proof of the structure theorem relies on the base domain being a field, so not many attempts have been made on persistent homology with torsion. Frosini defined a pseudometric on this specific module and proved its stability.[62] One of its novelties is that it does not depend on a classification theory to define the metric.[63]
One advantage of category theory is its ability to lift concrete results to a higher level, showing relationships between seemingly unconnected objects. Bubenik et al.[64] offer a short introduction to category theory fitted for TDA.
Category theory is the language of modern algebra, and has been widely used in the study of algebraic geometry and topology. It has been noted that "the key observation of [10] is that the persistence diagram produced by [8] depends only on the algebraic structure carried by this diagram."[65] The use of category theory in TDA has proved to be fruitful.[64][65]
Following the notation of Bubenik et al.,[65] the indexing category $P$ is any preordered set (not necessarily $\mathbb{N}$ or $\mathbb{R}$), the target category $D$ is any category (instead of the commonly used $\mathrm{Vect}_{\mathbb{F}}$), and functors $P\to D$ are called generalized persistence modules in $D$ over $P$.
One advantage of using category theory in TDA is a clearer understanding of concepts and the discovery of new relationships between proofs. Take two examples for illustration. The understanding of the correspondence between interleaving and matching is of huge importance, since matching has been the method used in the beginning (modified from Morse theory). A summary of works can be found in Vin de Silva et al.[66] Many theorems can be proved much more easily in a more intuitive setting.[63] Another example is the relationship between the construction of different complexes from point clouds. It has long been noticed that Čech and Vietoris–Rips complexes are related. Specifically, $V_r(X)\subset C_{\sqrt{2}r}(X)\subset V_{2r}(X)$.[67] The essential relationship between Čech and Rips complexes can be seen much more clearly in categorical language.[66]
The language of category theory also helps cast results in terms recognizable to the broader mathematical community. The bottleneck distance is widely used in TDA because of the results on stability with respect to the bottleneck distance.[13][16] In fact, the interleaving distance is the terminal object in a poset category of stable metrics on multidimensional persistence modules in a prime field.[63][68]
Sheaves, a central concept in modern algebraic geometry, are intrinsically related to category theory. Roughly speaking, sheaves are the mathematical tool for understanding how local information determines global information. Justin Curry regards level set persistence as the study of fibers of continuous functions. The objects that he studies are very similar to those produced by MAPPER, but with sheaf theory as the theoretical foundation.[35] Although no breakthrough in the theory of TDA has yet used sheaf theory, it is promising since there are many beautiful theorems in algebraic geometry relating to sheaf theory. For example, a natural theoretical question is whether different filtration methods result in the same output.[69]
Stability is of central importance to data analysis, since real data carry noise. By usage of category theory, Bubenik et al. have distinguished between soft and hard stability theorems, and proved that soft cases are formal.[65] Specifically, the general workflow of TDA is: data → (F) → topological persistence module → (H) → algebraic persistence module → (J) → discrete invariant.
The soft stability theorem asserts that $HF$ is Lipschitz continuous, and the hard stability theorem asserts that $J$ is Lipschitz continuous.
The bottleneck distance is widely used in TDA. The isometry theorem asserts that the interleaving distance $d_I$ is equal to the bottleneck distance.[63] Bubenik et al. have abstracted the definition to one between functors $F,G\colon P\to D$ when $P$ is equipped with a sublinear projection or superlinear family, in which case it still remains a pseudometric.[65] Given the notable properties of the interleaving distance,[70] we introduce here its general definition (instead of the one first introduced):[13] let $\Gamma,K\in\mathrm{Trans}_P$ (a translation is a function from $P$ to $P$ which is monotone and satisfies $x\leq\Gamma(x)$ for all $x\in P$). A $(\Gamma,K)$-interleaving between $F$ and $G$ consists of natural transformations $\varphi\colon F\Rightarrow G\Gamma$ and $\psi\colon G\Rightarrow FK$, such that $(\psi\Gamma)\varphi=F\eta_{K\Gamma}$ and $(\varphi K)\psi=G\eta_{\Gamma K}$.
The two main results are[65]
These two results summarize many results on stability of different models of persistence.
For the stability theorem of multidimensional persistence, please refer to the subsection of persistence.
The structure theorem is of central importance to TDA; as commented by G. Carlsson, "what makes homology useful as a discriminator between topological spaces is the fact that there is a classification theorem for finitely generated abelian groups"[3] (see the fundamental theorem of finitely generated abelian groups).
The main argument used in the proof of the original structure theorem is the standard structure theorem for finitely generated modules over a principal ideal domain.[10] However, this argument fails if the indexing set is $(\mathbb{R},\leq)$.[3]
In general, not every persistence module can be decomposed into intervals.[71] Many attempts have been made at relaxing the restrictions of the original structure theorem.[clarification needed] The case of pointwise finite-dimensional persistence modules indexed by a locally finite subset of $\mathbb{R}$ is solved based on the work of Webb.[72] The most notable result is due to Crawley-Boevey, who solved the case of $\mathbb{R}$. Crawley-Boevey's theorem states that any pointwise finite-dimensional persistence module is a direct sum of interval modules.[73]
To understand the definition of his theorem, some concepts need introducing. An interval in $(\mathbb{R},\leq)$ is defined as a subset $I\subset\mathbb{R}$ having the property that if $r,t\in I$ and there is an $s\in\mathbb{R}$ such that $r\leq s\leq t$, then $s\in I$ as well. An interval module $k_I$ assigns to each element $s\in I$ the vector space $k$ and assigns the zero vector space to elements in $\mathbb{R}\setminus I$. All maps $\rho_s^t$ are the zero map, unless $s,t\in I$ and $s\leq t$, in which case $\rho_s^t$ is the identity map.[35] Interval modules are indecomposable.[74]
Although the result of Crawley-Boevey is a very powerful theorem, it still does not extend to the q-tame case.[71] A persistence module is q-tame if the rank of $\rho_s^t$ is finite for all $s<t$. There are examples of q-tame persistence modules that fail to be pointwise finite.[75] However, it turns out that a similar structure theorem still holds if the features that exist only at one index value are removed.[74] This holds because the infinite-dimensional parts at each index value do not persist, due to the finite-rank condition.[76] Formally, the observable category $\mathrm{Ob}$ is defined as $\mathrm{Pers}/\mathrm{Eph}$, in which $\mathrm{Eph}$ denotes the full subcategory of $\mathrm{Pers}$ whose objects are the ephemeral modules ($\rho_s^t=0$ whenever $s<t$).[74]
Note that the extended results listed here do not apply to zigzag persistence, since the analogue of a zigzag persistence module over $\mathbb{R}$ is not immediately obvious.
Real data is always finite, and so its study requires us to take stochasticity into account. Statistical analysis gives us the ability to separate true features of the data from artifacts introduced by random noise. Persistent homology has no inherent mechanism to distinguish between low-probability features and high-probability features.
One way to apply statistics to topological data analysis is to study the statistical properties of topological features of point clouds. The study of random simplicial complexes offers some insight into statistical topology. Katharine Turner et al.[77] offer a summary of work in this vein.
A second way is to study probability distributions on the persistence space. The persistence space $B_\infty$ is $\coprod_n B_n/{\sim}$, where $B_n$ is the space of all barcodes containing exactly $n$ intervals and the equivalences are $\{[x_1,y_1],[x_2,y_2],\ldots,[x_n,y_n]\}\sim\{[x_1,y_1],[x_2,y_2],\ldots,[x_{n-1},y_{n-1}]\}$ if $x_n=y_n$.[78] This space is fairly complicated; for example, it is not complete under the bottleneck metric. The first attempt to study it was made by Yuriy Mileyko et al.[79] The space of persistence diagrams $D_p$ in their paper is defined as $D_p:=\left\{d\;\middle|\;\sum_{x\in d}\left(2\inf_{y\in\Delta}\lVert x-y\rVert\right)^p<\infty\right\}$, where $\Delta$ is the diagonal line in $\mathbb{R}^2$. A nice property is that $D_p$ is complete and separable in the Wasserstein metric $W_p(u,v)=\left(\inf_{\gamma\in\Gamma(u,v)}\int_{\mathbb{X}\times\mathbb{X}}\rho^p(x,y)\,\mathrm{d}\gamma(x,y)\right)^{1/p}$. Expectation, variance, and conditional probability can be defined in the Fréchet sense. This allows many statistical tools to be ported to TDA. Works on null hypothesis significance tests,[80] confidence intervals,[81] and robust estimates[82] are notable steps.
A third way is to consider the cohomology of probabilistic spaces or statistical systems directly; these are called information structures and basically consist of the triple ($\Omega,\Pi,P$) of sample space, random variables and probability laws.[83][84] Random variables are considered as partitions of the n atomic probabilities (seen as a probability (n−1)-simplex, $|\Omega|=n$) on the lattice of partitions ($\Pi_n$). The random variables, or modules of measurable functions, provide the cochain complexes, while the coboundary is considered as the general homological algebra first discovered by Gerhard Hochschild, with a left action implementing the action of conditioning. The first cocycle condition corresponds to the chain rule of entropy, allowing one to derive, uniquely up to a multiplicative constant, Shannon entropy as the first cohomology class. Consideration of a deformed left action generalizes the framework to Tsallis entropies. Information cohomology is an example of a ringed topos. Multivariate k-mutual information appears in coboundary expressions, and its vanishing, related to the cocycle condition, gives equivalent conditions for statistical independence.[85] Minima of mutual information, also called synergy, give rise to interesting independence configurations analogous to homotopical links. Because of its combinatorial complexity, only the simplicial subcase of the cohomology and of the information structure has been investigated on data. Applied to data, these cohomological tools quantify statistical dependences and independences, including Markov chains and conditional independence, in the multivariate case.[86] Notably, mutual information generalizes correlation coefficients and covariance to non-linear statistical dependences. These approaches were developed independently and are only indirectly related to persistence methods, but they may be roughly understood in the simplicial case using the Hu Kuo Ting theorem, which establishes a one-to-one correspondence between mutual information functions and finite measurable functions of a set with the intersection operator, to construct the Čech complex skeleton. Information cohomology offers some direct interpretation and application in terms of neuroscience (neural assembly theory and qualitative cognition[87]), statistical physics, and deep neural networks, for which the structure and learning algorithm are imposed by the complex of random variables and the information chain rule.[88]
Persistence landscapes, introduced by Peter Bubenik, are a different way to represent barcodes, more amenable to statistical analysis.[89] The persistence landscape of a persistence module $M$ is defined as a function $\lambda\colon\mathbb{N}\times\mathbb{R}\to\bar{\mathbb{R}}$, $\lambda(k,t):=\sup\,(m\geq 0\mid\beta^{t-m,t+m}\geq k)$, where $\bar{\mathbb{R}}$ denotes the extended real line and $\beta^{a,b}=\dim(\mathrm{im}(M(a\leq b)))$. The space of persistence landscapes is very nice: it inherits all the good properties of the barcode representation (stability, easy representation, etc.), but statistical quantities can be readily defined, and some problems in Y. Mileyko et al.'s work, such as the non-uniqueness of expectations,[79] can be overcome. Effective algorithms for computation with persistence landscapes are available.[90] Another approach is to use revised persistence, which is image, kernel and cokernel persistence.[91]
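Persistence landscapes are straightforward to evaluate from a barcode using the equivalent "tent function" description: λ(k, t) is the k-th largest value of max(0, min(t − b, d − t)) over the bars (b, d). A minimal sketch:

```python
# Sketch: evaluate persistence landscape functions lambda(k, t) from a barcode,
# using the equivalent "tent function" description: lambda(k, t) is the k-th
# largest value of max(0, min(t - b, d - t)) over the bars (b, d).
def landscape(barcode, k, t):
    """k-th persistence landscape (k = 1, 2, ...) evaluated at t."""
    tents = sorted((max(0.0, min(t - b, d - t)) for b, d in barcode), reverse=True)
    return tents[k - 1] if k <= len(tents) else 0.0

if __name__ == "__main__":
    bars = [(0.0, 4.0), (1.0, 3.0)]
    for t in (0.0, 1.0, 2.0, 3.0):
        print(t, landscape(bars, 1, t), landscape(bars, 2, t))
```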
More than one way exists to classify the applications of TDA. Perhaps the most natural way is by field. A very incomplete list of successful applications includes[92] data skeletonization,[93] shape study,[94] graph reconstruction,[95][96][97][98][99] image analysis,[100][101] materials,[102][103] progression analysis of disease,[104][105] sensor networks,[67] signal analysis,[106] the cosmic web,[107] complex networks,[108][109][110][111] fractal geometry,[112] viral evolution,[113] propagation of contagions on networks,[114] bacteria classification using molecular spectroscopy,[115] super-resolution microscopy,[116] hyperspectral imaging in physical chemistry,[117] remote sensing,[118] feature selection,[119] and early warning signs of financial crashes.[120]
Another way is by distinguishing the techniques by G. Carlsson,[78]
one being the study of homological invariants of data on individual data sets, and the other is the use of homological invariants in the study of databases where the data points themselves have geometric structure.
Topological data analysis and persistent homology have had impacts on Morse theory.[121] Morse theory has played a very important role in the theory of TDA, including on computation. Some work in persistent homology has extended results about Morse functions to tame functions or even to continuous functions.[citation needed] A forgotten result of R. Deheuvels, long before the invention of persistent homology, extends Morse theory to all continuous functions.[122]
One recent result is that the category of Reeb graphs is equivalent to a particular class of cosheaf.[123] This is motivated by theoretical work in TDA, since the Reeb graph is related to Morse theory and MAPPER is derived from it. The proof of this theorem relies on the interleaving distance.
Persistent homology is closely related to spectral sequences.[124][125] In particular, the algorithm bringing a filtered complex to its canonical form[11] permits much faster calculation of spectral sequences than the standard procedure of calculating $E_{p,q}^r$ groups page by page. Zigzag persistence may turn out to be of theoretical importance to spectral sequences.
The Database of Original & Non-Theoretical Uses of Topology (DONUT) is a database of scholarly articles featuring practical applications of topological data analysis to various areas of science. DONUT was started in 2017 by Barbara Giunti, Janis Lazovskis, and Bastian Rieck,[126] and as of October 2023 contains 447 articles.[127] DONUT was featured in the November 2023 issue of the Notices of the American Mathematical Society.[128]
The stability property of topological features under small perturbations has been applied to make graph neural networks robust against adversaries. Arafat et al.[129] proposed a robustness framework which systematically integrates both local and global topological graph feature representations, the impact of which is controlled by a robust regularized topological loss. Given the attacker's budget, they derived stability guarantees on the node representations, establishing an important connection between topological stability and adversarial machine learning.
https://en.wikipedia.org/wiki/Topological_data_analysis
XLDB (eXtremely Large DataBases) was a yearly conference about databases, data management and analytics, held from 2007 to 2019. The definition of extremely large refers to data sets that are too big in terms of volume (too much), velocity (too fast), and/or variety (too many places, too many formats) to be handled using conventional solutions. The conference dealt with the high end of very large databases (VLDB). It was conceived and chaired by Jacek Becla.
In October 2007, data experts gathered at SLAC National Accelerator Laboratory for the First Workshop on Extremely Large Databases. As a result, the XLDB research community was formed to meet the rapidly growing demands of the largest data systems. In addition to the original invitational workshop, an open conference, tutorials, and annual satellite events on different continents were added. The main event, held annually at Stanford University, gathered over 300 attendees. XLDB was one of the data systems events catering to both academic and industry communities. For 2009, the workshop was co-located with VLDB 2009 in France to reach out to non-US research communities.[1] XLDB 2019 followed Stanford's Conference on Systems and Machine Learning (SysML).[2]
The main goals of this community include:[3]
As of 2013, the community consisted of over one thousand members including:
The community met annually at Stanford University through 2019. Occasional satellite events were held in Asia and Europe.
A detailed report or videos were produced after each workshop.
XLDB events led to an effort to build a new open-source science database called SciDB.[4]
The XLDB organizers started defining a science benchmark for scientific data management systems, called SS-DB.
At XLDB 2012 the XLDB organizers announced that two major databases that support arrays as first-class objects (MonetDB SciQL and SciDB) had formed a working group in conjunction with XLDB. This working group is proposing a common syntax (provisionally named "ArrayQL") for manipulating arrays, including array creation and query.
https://en.wikipedia.org/wiki/XLDB
The Data Analysis and Real World Interrogation Network (DARWIN EU) is a European Union (EU) initiative coordinated by the European Medicines Agency (EMA) to generate and utilize real-world evidence (RWE) to support the evaluation and supervision of medicines across the EU. The project aims to enhance decision-making in regulatory processes by drawing on anonymized data from routine healthcare settings.[1][2][3]
DARWIN EU was officially launched in 2022 as part of the EMA's broader strategy to harness big data for public health benefits. The network facilitates access to real-world data from a wide array of sources, including electronic health records, disease registries, hospital databases, and biobanks. These data are standardized using the OMOP (Observational Medical Outcomes Partnership) common data model to ensure interoperability and comparability across datasets.[4][5][6]
The key goals of DARWIN EU include:
DARWIN EU is managed by a coordination center based at Erasmus University Medical Center in Rotterdam, Netherlands. The center is responsible for expanding the network of data partners, managing study requests, and ensuring the scientific quality of outputs.[1]
As of early 2024, DARWIN EU had completed 14 studies and had 11 more underway. The EMA plans to scale up DARWIN EU's capacity to deliver over 140 studies annually by 2025.[1][4]
As part of the DARWIN EU project, scientists at Honeywell's Brno branch have developed an AI-powered monitoring system designed to detect early signs of pilot fatigue, inattention, or health issues. Using a camera equipped with artificial intelligence, the system continuously observes the pilot's condition and responds with alerts or wake-up calls if necessary. Although designed for aviation safety, these technologies could in the future contribute valuable physiological data to the DARWIN EU network, supporting proactive health interventions and contributing to the long-term goals of the European Health Data Space.[7][8]
DARWIN EU plays a crucial role in the EU's regulatory ecosystem by integrating real-world data into evidence-based healthcare policymaking. It is instrumental in advancing personalized medicine, pharmacovigilance, and pandemic preparedness through timely, data-driven insights.[1]
https://en.wikipedia.org/wiki/DARWIN_EU
In mathematics, a differentiable function of one real variable is a function whose derivative exists at each point in its domain. In other words, the graph of a differentiable function has a non-vertical tangent line at each interior point in its domain. A differentiable function is smooth (the function is locally well approximated as a linear function at each interior point) and does not contain any break, angle, or cusp.
If $x_0$ is an interior point in the domain of a function $f$, then $f$ is said to be differentiable at $x_0$ if the derivative $f'(x_0)$ exists. In other words, the graph of $f$ has a non-vertical tangent line at the point $(x_0, f(x_0))$. $f$ is said to be differentiable on $U$ if it is differentiable at every point of $U$. $f$ is said to be continuously differentiable if its derivative is also a continuous function over the domain of $f$. Generally speaking, $f$ is said to be of class $C^k$ if its first $k$ derivatives $f'(x), f''(x), \ldots, f^{(k)}(x)$ exist and are continuous over the domain of $f$.
For a multivariable function, as discussed below, differentiability is more subtle than the existence of its partial derivatives.
A function $f\colon U\to\mathbb{R}$, defined on an open set $U\subset\mathbb{R}$, is said to be differentiable at $a\in U$ if the derivative
$$f'(a)=\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}$$
exists. This implies that the function is continuous at $a$.
This function $f$ is said to be differentiable on $U$ if it is differentiable at every point of $U$. In this case, the derivative of $f$ is thus a function from $U$ into $\mathbb{R}$.
A continuous function is not necessarily differentiable, but a differentiable function is necessarilycontinuous(at every point where it is differentiable) as is shown below (in the sectionDifferentiability and continuity). A function is said to becontinuously differentiableif its derivative is also a continuous function; there exist functions that are differentiable but not continuously differentiable (an example is given in the sectionDifferentiability classes).
The above definition can be extended to define the derivative at boundary points. The derivative of a function $f\colon A\to\mathbb{R}$ defined on a closed subset $A\subsetneq\mathbb{R}$ of the real numbers, evaluated at a boundary point $c$, can be defined as the following one-sided limit, where the argument $x$ approaches $c$ such that it is always within $A$:
$$f'(c)=\lim_{\substack{x\to c\\ x\in A}}\frac{f(x)-f(c)}{x-c}.$$
For $x$ to remain within $A$, which is a subset of the reals, it follows that this limit will be defined as either the left-hand limit ($x\to c^-$) or the right-hand limit ($x\to c^+$), depending on whether $c$ is a right or left endpoint of $A$, respectively.
If $f$ is differentiable at a point $x_0$, then $f$ must also be continuous at $x_0$. In particular, any differentiable function must be continuous at every point in its domain. The converse does not hold: a continuous function need not be differentiable. For example, a function with a bend, cusp, or vertical tangent may be continuous, but fails to be differentiable at the location of the anomaly.
Most functions that occur in practice have derivatives at all points or at almost every point. However, a result of Stefan Banach states that the set of functions that have a derivative at some point is a meagre set in the space of all continuous functions.[1] Informally, this means that differentiable functions are very atypical among continuous functions. The first known example of a function that is continuous everywhere but differentiable nowhere is the Weierstrass function.
A function $f$ is said to be continuously differentiable if the derivative $f'(x)$ exists and is itself a continuous function. Although the derivative of a differentiable function never has a jump discontinuity, it is possible for the derivative to have an essential discontinuity. For example, the function
$$f(x)=\begin{cases}x^2\sin(1/x)&\text{if }x\neq 0\\0&\text{if }x=0\end{cases}$$
is differentiable at 0, since
$$f'(0)=\lim_{\varepsilon\to 0}\left(\frac{\varepsilon^2\sin(1/\varepsilon)-0}{\varepsilon}\right)=0$$
exists. However, for $x\neq 0$, the differentiation rules imply
$$f'(x)=2x\sin(1/x)-\cos(1/x),$$
which has no limit as $x\to 0$. Thus, this example shows the existence of a function that is differentiable but not continuously differentiable (i.e., the derivative is not a continuous function). Nevertheless, Darboux's theorem implies that the derivative of any function satisfies the conclusion of the intermediate value theorem.
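A quick numerical illustration of this example (a sketch, with arbitrarily chosen step sizes): the difference quotients at 0 shrink to 0, while f′ keeps oscillating near 0.

```python
# Numeric illustration: the difference quotients of f(x) = x^2 sin(1/x)
# (with f(0) = 0) at 0 shrink to 0, so f'(0) = 0 exists, while
# f'(x) = 2x sin(1/x) - cos(1/x) keeps oscillating as x -> 0.
import math

def f(x):
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

def fprime(x):  # valid only for x != 0
    return 2 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

for n in range(1, 6):
    h = 10.0 ** (-n)
    print(f"h=1e-{n}:  (f(h)-f(0))/h = {f(h)/h:+.6f}   f'(h) = {fprime(h):+.6f}")
```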
Similarly to how continuous functions are said to be of class $C^0$, continuously differentiable functions are sometimes said to be of class $C^1$. A function is of class $C^2$ if the first and second derivatives of the function both exist and are continuous. More generally, a function is said to be of class $C^k$ if the first $k$ derivatives $f'(x), f''(x), \ldots, f^{(k)}(x)$ all exist and are continuous. If derivatives $f^{(n)}$ exist for all positive integers $n$, the function is smooth or, equivalently, of class $C^\infty$.
A function of several real variables $f\colon\mathbb{R}^m\to\mathbb{R}^n$ is said to be differentiable at a point $x_0$ if there exists a linear map $J\colon\mathbb{R}^m\to\mathbb{R}^n$ such that
$$\lim_{h\to 0}\frac{\lVert f(x_0+h)-f(x_0)-J(h)\rVert_{\mathbb{R}^n}}{\lVert h\rVert_{\mathbb{R}^m}}=0.$$
If a function is differentiable at $x_0$, then all of the partial derivatives exist at $x_0$, and the linear map $J$ is given by the Jacobian matrix, an $n\times m$ matrix in this case. A similar formulation of the higher-dimensional derivative is provided by the fundamental increment lemma found in single-variable calculus.
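A small numerical check of this definition, using the hypothetical example f(x, y) = (xy, x + y) at x0 = (1, 2), whose Jacobian is [[2, 1], [1, 1]]:

```python
# Numeric check of the definition: for f(x, y) = (x*y, x + y) at x0 = (1, 2),
# the Jacobian is J = [[y, x], [1, 1]] = [[2, 1], [1, 1]], and the residual
# ||f(x0 + h) - f(x0) - J h|| / ||h|| goes to 0 as h -> 0.
import numpy as np

def f(v):
    x, y = v
    return np.array([x * y, x + y])

x0 = np.array([1.0, 2.0])
J = np.array([[2.0, 1.0], [1.0, 1.0]])

for scale in (1e-1, 1e-2, 1e-3, 1e-4):
    h = scale * np.array([0.6, -0.8])       # fixed direction, shrinking length
    residual = np.linalg.norm(f(x0 + h) - f(x0) - J @ h) / np.linalg.norm(h)
    print(f"|h| = {np.linalg.norm(h):.0e}   residual/|h| = {residual:.2e}")
```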
If all the partial derivatives of a function exist in a neighborhood of a point $x_0$ and are continuous at the point $x_0$, then the function is differentiable at that point.
However, the existence of the partial derivatives (or even of all the directional derivatives) does not guarantee that a function is differentiable at a point. For example, the function $f\colon\mathbb{R}^2\to\mathbb{R}$ defined by
is not differentiable at (0, 0), but all of the partial derivatives and directional derivatives exist at this point. For a continuous example, the function
is not differentiable at (0, 0), but again all of the partial derivatives and directional derivatives exist.
In complex analysis, complex-differentiability is defined using the same definition as for single-variable real functions. This is allowed by the possibility of dividing complex numbers. So, a function $f\colon\mathbb{C}\to\mathbb{C}$ is said to be differentiable at $x=a$ when
$$f'(a)=\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}$$
exists.
Although this definition looks similar to the differentiability of single-variable real functions, it is however a more restrictive condition. A function $f\colon\mathbb{C}\to\mathbb{C}$ that is complex-differentiable at a point $x=a$ is automatically differentiable at that point, when viewed as a function $f\colon\mathbb{R}^2\to\mathbb{R}^2$. This is because complex-differentiability implies that
$$\lim_{h\to 0}\frac{|f(a+h)-f(a)-f'(a)h|}{|h|}=0.$$
However, a function $f\colon\mathbb{C}\to\mathbb{C}$ can be differentiable as a multi-variable function while not being complex-differentiable. For example, $f(z)=\frac{z+\bar{z}}{2}$ is differentiable at every point, viewed as the 2-variable real function $f(x,y)=x$, but it is not complex-differentiable at any point because the limit $\lim_{h\to 0}\frac{h+\bar{h}}{2h}$ gives different values for different approaches to 0.
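This direction dependence is easy to observe numerically (a small sketch with arbitrarily chosen step directions):

```python
# Numeric illustration: for f(z) = (z + conj(z)) / 2 = Re(z), the difference
# quotient (f(a + h) - f(a)) / h depends on the direction of h, so the complex
# derivative does not exist (compare h real vs. h imaginary).
def f(z: complex) -> complex:
    return (z + z.conjugate()) / 2

a = 1 + 2j
for h in (1e-6 + 0j, 1e-6j, (1 + 1j) * 1e-6):
    print(h, (f(a + h) - f(a)) / h)
```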
Any function that is complex-differentiable in a neighborhood of a point is calledholomorphicat that point. Such a function is necessarily infinitely differentiable, and in factanalytic.
If $M$ is a differentiable manifold, a real- or complex-valued function $f$ on $M$ is said to be differentiable at a point $p$ if it is differentiable with respect to some (or any) coordinate chart defined around $p$. If $M$ and $N$ are differentiable manifolds, a function $f\colon M\to N$ is said to be differentiable at a point $p$ if it is differentiable with respect to some (or any) coordinate charts defined around $p$ and $f(p)$.
https://en.wikipedia.org/wiki/Differentiable_function
Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and information of computably generated objects (as opposed to stochastically generated ones), such as strings or any other data structure. In other words, it is shown within algorithmic information theory that computational incompressibility "mimics" (except for a constant that only depends on the chosen universal programming language) the relations or inequalities found in information theory.[1] According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."[2]
Besides the formalization of a universal measure for the irreducible information content of computably generated objects, some main achievements of AIT were to show that: in fact, algorithmic complexity follows (in the self-delimited case) the same inequalities (except for a constant[3]) that entropy does, as in classical information theory;[1] randomness is incompressibility;[4] and, within the realm of randomly generated software, the probability of occurrence of any data structure is of the order of the shortest program that generates it when running on a universal machine.[5]
AIT principally studies measures of irreducible information content of strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers. One of the main motivations behind AIT is the very study of the information carried by mathematical objects, as in the field of metamathematics, e.g., as shown by the incompleteness results mentioned below. Other main motivations came from surpassing the limitations of classical information theory for single and fixed objects, formalizing the concept of randomness, and finding a meaningful probabilistic inference without prior knowledge of the probability distribution (e.g., whether it is independent and identically distributed, Markovian, or even stationary). In this way, AIT is known to be basically founded upon three main mathematical concepts and the relations between them: algorithmic complexity, algorithmic randomness, and algorithmic probability.[6][4]
Informally, from the point of view of algorithmic information theory, the information content of a string is equivalent to the length of the most-compressed possible self-contained representation of that string. A self-contained representation is essentially a program, in some fixed but otherwise irrelevant universal programming language, that, when run, outputs the original string.
From this point of view, a 3000-page encyclopedia actually contains less information than 3000 pages of completely random letters, despite the fact that the encyclopedia is much more useful. This is because to reconstruct the entire sequence of random letters, one must know what every single letter is. On the other hand, if every vowel were removed from the encyclopedia, someone with reasonable knowledge of the English language could reconstruct it, just as one could likely reconstruct the sentence "Ths sntnc hs lw nfrmtn cntnt" from the context and consonants present.
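A computable, if crude, illustration of this point: true algorithmic information content is uncomputable, but an off-the-shelf compressor gives an upper bound on how short a self-contained description can be, and structured text compresses far better than random letters. The sketch below is only such a proxy; the particular strings, the compressor (zlib) and its settings are arbitrary choices, not anything prescribed by the theory.

```python
# Rough proxy for the encyclopedia-versus-random-letters comparison above:
# compressed size as an upper bound on description length (Kolmogorov
# complexity itself is uncomputable).
import random
import string
import zlib

random.seed(0)

structured = ("the quick brown fox jumps over the lazy dog. " * 200).encode()
random_text = "".join(
    random.choice(string.ascii_lowercase + " ") for _ in range(len(structured))
).encode()

print(len(structured), len(zlib.compress(structured, 9)))    # structured: compresses heavily
print(len(random_text), len(zlib.compress(random_text, 9)))  # random: barely compresses
```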
Unlike classical information theory, algorithmic information theory givesformal,rigorousdefinitions of arandom stringand arandom infinite sequencethat do not depend on physical or philosophicalintuitionsaboutnondeterminismorlikelihood. (The set of random strings depends on the choice of theuniversal Turing machineused to defineKolmogorov complexity, but any choice gives identical asymptotic results because the Kolmogorov complexity of a string is invariant up to an additive constant depending only on the choice of universal Turing machine. For this reason the set of random infinite sequences is independent of the choice of universal machine.)
Some of the results of algorithmic information theory, such asChaitin's incompleteness theorem, appear to challenge common mathematical and philosophical intuitions. Most notable among these is the construction ofChaitin's constantΩ, a real number that expresses the probability that a self-delimiting universal Turing machine willhaltwhen its input is supplied by flips of a fair coin (sometimes thought of as the probability that a random computer program will eventually halt). AlthoughΩis easily defined, in anyconsistentaxiomatizabletheoryone can only compute finitely many digits ofΩ, so it is in some senseunknowable, providing an absolute limit on knowledge that is reminiscent ofGödel's incompleteness theorems. Although the digits ofΩcannot be determined, many properties ofΩare known; for example, it is analgorithmically random sequenceand thus its binary digits are evenly distributed (in fact it isnormal).
Algorithmic information theory was founded byRay Solomonoff,[7]who published the basic ideas on which the field is based as part of his invention ofalgorithmic probability—a way to overcome serious problems associated with the application ofBayes' rulein statistics. He first described his results at a conference atCaltechin 1960,[8]and in a report, February 1960, "A Preliminary Report on a General Theory of Inductive Inference."[9]Algorithmic information theory was later developed independently byAndrey Kolmogorovin 1965 and byGregory Chaitinaround 1966.
There are several variants of Kolmogorov complexity or algorithmic information; the most widely used one is based onself-delimiting programsand is mainly due toLeonid Levin(1974).Per Martin-Löfalso contributed significantly to the information theory of infinite sequences. An axiomatic approach to algorithmic information theory based on theBlum axioms(Blum 1967) was introduced by Mark Burgin in a paper presented for publication byAndrey Kolmogorov(Burgin 1982). The axiomatic approach encompasses other approaches in the algorithmic information theory. It is possible to treat different measures of algorithmic information as particular cases of axiomatically defined measures of algorithmic information. Instead of proving similar theorems, such as the basic invariance theorem, for each particular measure, it is possible to easily deduce all such results from one corresponding theorem proved in the axiomatic setting. This is a general advantage of the axiomatic approach in mathematics. The axiomatic approach to algorithmic information theory was further developed in the book (Burgin 2005) and applied to software metrics (Burgin and Debnath, 2003; Debnath and Burgin, 2003).
A binary string is said to be random if theKolmogorov complexityof the string is at least the length of the string. A simple counting argument shows that some strings of any given length are random, and almost all strings are very close to being random. Since Kolmogorov complexity depends on a fixed choice of universal Turing machine (informally, a fixed "description language" in which the "descriptions" are given), the collection of random strings does depend on the choice of fixed universal machine. Nevertheless, the collection of random strings, as a whole, has similar properties regardless of the fixed machine, so one can (and often does) talk about the properties of random strings as a group without having to first specify a universal machine.
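The counting argument alluded to here can be made explicit; writing K(x) for the Kolmogorov complexity of x, the following back-of-the-envelope derivation is standard and does not depend on which universal machine is fixed.

```latex
% There are fewer short descriptions than strings of length n:
\#\{\,p : |p| < n\,\} \;=\; \sum_{k=0}^{n-1} 2^{k} \;=\; 2^{n}-1 \;<\; 2^{n} \;=\; \#\{0,1\}^{n},
\qquad\text{so some } x \in \{0,1\}^{n} \text{ has } K(x) \ge n,\ \text{i.e.\ is random.}
```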
An infinite binary sequence is said to be random if, for some constantc, for alln, theKolmogorov complexityof the initial segment of lengthnof the sequence is at leastn−c. It can be shown that almost every sequence (from the point of view of the standardmeasure—"fair coin" orLebesgue measure—on the space of infinite binary sequences) is random. Also, since it can be shown that the Kolmogorov complexity relative to two different universal machines differs by at most a constant, the collection of random infinite sequences does not depend on the choice of universal machine (in contrast to finite strings). This definition of randomness is usually calledMartin-Löfrandomness, afterPer Martin-Löf, to distinguish it from other similar notions of randomness. It is also sometimes called1-randomnessto distinguish it from other stronger notions of randomness (2-randomness, 3-randomness, etc.). In addition to Martin-Löf randomness concepts, there are also recursive randomness, Schnorr randomness, and Kurtz randomness etc.Yongge Wangshowed[10]that all of these randomness concepts are different.
(Related definitions can be made for alphabets other than the set{0,1}{\displaystyle \{0,1\}}.)
Algorithmic information theory (AIT) is the information theory of individual objects, using computer science, and concerns itself with the relationship between computation, information, and randomness.
The information content or complexity of an object can be measured by the length of its shortest description. For instance the string
"0101010101010101010101010101010101010101010101010101010101010101"
has the short description "32 repetitions of '01'", while
"1100100001100001110111101110110011111010010000100101011110010110"
presumably has no simple description other than writing down the string itself.
More formally, thealgorithmic complexity (AC)of a stringxis defined as the length of the shortest program that computes or outputsx, where the program is run on some fixed reference universal computer.
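As a toy illustration only, one can fix a tiny description language, say literal strings plus "repeat this block k times" descriptions, and search for the shortest description within it. The result is merely an upper bound on the true algorithmic complexity, which ranges over all programs of a fixed universal machine and cannot be computed in general; the function below is an invented stand-in, not part of the theory.

```python
# Toy "description language": a string is described either literally or as
# (block, count), meaning the block repeated count times.  The shortest
# description found here is only an upper bound on algorithmic complexity.
def shortest_description(s: str):
    best, best_len = ("literal", s), len(s)
    for block_len in range(1, len(s) // 2 + 1):
        if len(s) % block_len == 0:
            block = s[:block_len]
            if block * (len(s) // block_len) == s:
                desc_len = block_len + len(str(len(s) // block_len))
                if desc_len < best_len:
                    best, best_len = ("repeat", block, len(s) // block_len), desc_len
    return best, best_len

print(shortest_description("01" * 32))   # a four-symbol description suffices
print(shortest_description("1100100001100001110111101110110011111010010000100101011110010110"))
```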
A closely related notion is the probability that a universal computer outputs some stringxwhen fed with a program chosen at random. Thisalgorithmic "Solomonoff" probability (AP)is key in addressing the old philosophical problem of induction in a formal way.
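In symbols, for a prefix-free universal machine U, the algorithmic probability of a string x is usually written as follows; this is the standard formulation, with m, U and K being conventional notation rather than symbols introduced in the text above.

```latex
m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|},
\qquad\text{dominated by its largest term } 2^{-K(x)} .
```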
The major drawback of AC and AP is their incomputability. Time-bounded "Levin" complexity penalizes a slow program by adding the logarithm of its running time to its length. This leads to computable variants of AC and AP, and universal "Levin" search (US) solves all inversion problems in optimal time (apart from some unrealistically large multiplicative constant).
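The time-bounded variant mentioned here is usually written as below, minimizing over programs p of the reference machine U that output x in time t(p); this is the standard definition rather than anything specific to this article.

```latex
Kt(x) \;=\; \min_{p \,:\, U(p) = x} \bigl\{\, |p| + \log t(p) \,\bigr\}
```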
AC and AP also allow a formal and rigorous definition of the randomness of individual strings that does not depend on physical or philosophical intuitions about non-determinism or likelihood. Roughly, a string is algorithmic "Martin-Löf" random (AR) if it is incompressible in the sense that its algorithmic complexity is equal to its length.
AC, AP, and AR are the core sub-disciplines of AIT, but AIT spawns into many other areas. It serves as the foundation of the Minimum Description Length (MDL) principle, can simplify proofs incomputational complexity theory, has been used to define a universal similarity metric between objects, solves theMaxwell daemonproblem, and many others.
|
https://en.wikipedia.org/wiki/Algorithmic_information_theory
|
Inductive reasoningrefers to a variety ofmethods of reasoningin which the conclusion of an argument is supported not with deductive certainty, but with some degree of probability.[1]Unlikedeductivereasoning(such asmathematical induction), where the conclusion iscertain, given the premises are correct, inductive reasoning produces conclusions that are at bestprobable, given the evidence provided.[2][3]
The types of inductive reasoning include generalization, prediction,statistical syllogism, argument from analogy, and causal inference. There are also differences in how their results are regarded.
A generalization (more accurately, aninductive generalization) proceeds from premises about asampleto a conclusion about thepopulation.[4]The observation obtained from this sample is projected onto the broader population.[4]
For example, if there are 20 balls—either black or white—in an urn: to estimate their respective numbers, asampleof four balls is drawn, three are black and one is white. An inductive generalization may be that there are 15 black and five white balls in the urn. However, this is only one of 17 possibilities as to theactualnumber of balls of each color in the urn (thepopulation): there may, of course, have been 19 black balls and just one white, or only 3 black and 17 white, or any mix in between. The probability of each possible distribution being the actual numbers of black and white balls can be estimated using techniques such asBayesian inference, where prior assumptions about the distribution are updated with the observed sample, ormaximum likelihood estimation(MLE), which identifies the distribution most likely given the observed sample.
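A minimal sketch of the two estimation approaches just mentioned, using only the numbers given above (20 balls, a sample of four containing three black and one white) plus one added assumption: a uniform prior over the number of black balls for the Bayesian version.

```python
# Urn example: posterior over the number of black balls (hypergeometric likelihood,
# uniform prior -- the prior is an added assumption) and the maximum-likelihood count.
from math import comb

N, n, k = 20, 4, 3                     # urn size, sample size, black balls in the sample

def likelihood(b):                     # P(3 black, 1 white in the sample | b black in the urn)
    return comb(b, k) * comb(N - b, n - k) / comb(N, n)

prior = {b: 1 / (N + 1) for b in range(N + 1)}
unnormalized = {b: prior[b] * likelihood(b) for b in prior}
Z = sum(unnormalized.values())
posterior = {b: p / Z for b, p in unnormalized.items()}

mle = max(range(N + 1), key=likelihood)
print("maximum-likelihood count of black balls:", mle)   # 15, matching the generalization above
print("posterior probability of exactly 15 black:", round(posterior[15], 3))
```

The maximum-likelihood answer reproduces the "15 black and five white" generalization, while the posterior spreads probability over all 17 feasible compositions.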
How much the premises support the conclusion depends upon the number in the sample group, the number in the population, and the degree to which the sample represents the population (which, for a static population, may be achieved by taking a random sample). The greater the sample size relative to the population and the more closely the sample represents the population, the stronger the generalization is. Thehasty generalizationand thebiased sampleare generalization fallacies.
A statistical generalization is a type of inductive argument in which a conclusion about a population is inferred using astatistically representative sample. For example:
The measure is highly reliable within a well-defined margin of error provided that the selection process was genuinely random and that the numbers of items in the sample having the properties considered are large. It is readily quantifiable. Compare the preceding argument with the following. "Six of the ten people in my book club are Libertarians. Therefore, about 60% of people are Libertarians." The argument is weak because the sample is non-random and the sample size is very small.
Statistical generalizations are also calledstatistical projections[5]andsample projections.[6]
An anecdotal generalization is a type of inductive argument in which a conclusion about a population is inferred using a non-statistical sample.[7]In other words, the generalization is based onanecdotal evidence. For example:
This inference is less reliable (and thus more likely to commit the fallacy of hasty generalization) than a statistical generalization, first, because the sample events are non-random, and second because it is not reducible to a mathematical expression. Statistically speaking, there is simply no way to know, measure and calculate the circumstances affecting performance that will occur in the future. On a philosophical level, the argument relies on the presupposition that the operation of future events will mirror the past. In other words, it takes for granted a uniformity of nature, an unproven principle that cannot be derived from the empirical data itself. Arguments that tacitly presuppose this uniformity are sometimes calledHumeanafter the philosopher who was first to subject them to philosophical scrutiny.[8]
An inductive prediction draws a conclusion about a future, current, or past instance from a sample of other instances. Like an inductive generalization, an inductive prediction relies on a data set consisting of specific instances of a phenomenon. But rather than conclude with a general statement, the inductive prediction concludes with a specific statement about theprobabilitythat a single instance will (or will not) have an attribute shared (or not shared) by the other instances.[9]
A statisticalsyllogismproceeds from a generalization about a group to a conclusion about an individual.
For example:
This is astatistical syllogism.[10]Even though one cannot be sure Bob will attend university, we can be fully assured of the exact probability of this outcome (given no further information). Twodicto simpliciterfallacies can occur in statistical syllogisms: "accident" and "converse accident".
The process of analogical inference involves noting the shared properties of two or more things and from this basis inferring that they also share some further property:[11]
Analogical reasoning is very frequent incommon sense,science,philosophy,law, and thehumanities, but sometimes it is accepted only as an auxiliary method. A refined approach iscase-based reasoning.[12]
This isanalogical induction, according to which things alike in certain ways are more prone to be alike in other ways. This form of induction was explored in detail by philosopher John Stuart Mill in hisSystem of Logic, where he states, "[t]here can be no doubt that every resemblance [not known to be irrelevant] affords some degree of probability, beyond what would otherwise exist, in favor of the conclusion."[13]SeeMill's Methods.
Some thinkers contend that analogical induction is a subcategory of inductive generalization because it assumes a pre-established uniformity governing events.[citation needed]Analogical induction requires an auxiliary examination of therelevancyof the characteristics cited as common to the pair. In the preceding example, if a premise were added stating that both stones were mentioned in the records of early Spanish explorers, this common attribute is extraneous to the stones and does not contribute to their probable affinity.
A pitfall of analogy is that features can becherry-picked: while objects may show striking similarities, two things juxtaposed may respectively possess other characteristics not identified in the analogy that are characteristics sharplydissimilar. Thus, analogy can mislead if not all relevant comparisons are made.
A causal inference draws a conclusion about a possible or probable causal connection based on the conditions of the occurrence of an effect. Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship.[citation needed]
The two principal methods used to reach inductive generalizations areenumerative inductionandeliminative induction.[14][15]
Enumerative induction is an inductive method in which a generalization is constructed based on thenumberof instances that support it. The more supporting instances, the stronger the conclusion.[14][15]
The most basic form of enumerative induction reasons from particular instances to all instances and is thus an unrestricted generalization.[16]If one observes 100 swans, and all 100 were white, one might infer a probable universalcategorical propositionof the formAll swans are white. As thisreasoning form's premises, even if true, do not entail the conclusion's truth, this is a form of inductive inference. The conclusion might be true, and might be thought probably true, yet it can be false. Questions regarding the justification and form of enumerative inductions have been central inphilosophy of science, as enumerative induction has a pivotal role in the traditional model of thescientific method.
This isenumerative induction, also known assimple inductionorsimple predictive induction. It is a subcategory of inductive generalization. In everyday practice, this is perhaps the most common form of induction. For the preceding argument, the conclusion is tempting but makes a prediction well in excess of the evidence. First, it assumes that life forms observed until now can tell us how future cases will be: an appeal to uniformity. Second, the conclusionAllis a bold assertion. A single contrary instance foils the argument. And last, quantifying the level of probability in any mathematical form is problematic.[17]By what standard do we measure our Earthly sample of known life against all (possible) life? Suppose we do discover some new organism—such as some microorganism floating in the mesosphere or an asteroid—and it is cellular. Does the addition of this corroborating evidence oblige us to raise our probability assessment for the subject proposition? It is generally deemed reasonable to answer this question "yes", and for a good many this "yes" is not only reasonable but incontrovertible. So then justhow muchshould this new data change our probability assessment? Here, consensus melts away, and in its place arises a question about whether we can talk of probability coherently at all with or without numerical quantification.
This is enumerative induction in itsweak form. It truncates "all" to a mere single instance and, by making a far weaker claim, considerably strengthens the probability of its conclusion. Otherwise, it has the same shortcomings as the strong form: its sample population is non-random, and quantification methods are elusive.
Eliminative induction, also called variative induction, is an inductive method first put forth byFrancis Bacon;[18]in it a generalization is constructed based on thevarietyof instances that support it. Unlike enumerative induction, eliminative induction reasons based on the various kinds of instances that support a conclusion, rather than the number of instances that support it. As the variety of instances increases, more of the possible conclusions based on those instances can be identified as incompatible and eliminated. This, in turn, increases the strength of any conclusion that remains consistent with the various instances. In this context, confidence is a function of how many instances have been identified as incompatible and eliminated. This confidence is expressed as the Baconian probability i|n (read as "i out of n") where n reasons for finding a claim incompatible have been identified and i of these have been eliminated by evidence or argument.[18]
There are three ways of attacking an argument; these ways, known as defeaters in thedefeasible reasoningliterature, are rebutting, undermining, and undercutting. Rebutting defeats by offering a counter-example, undermining defeats by questioning the validity of the evidence, and undercutting defeats by pointing out conditions in which the conclusion does not hold even though the inference does. This approach builds confidence by identifying defeaters and proving them wrong.[18]
This type of induction may use different methodologies such as quasi-experimentation, which tests and, where possible, eliminates rival hypotheses.[19]Different evidential tests may also be employed to eliminate possibilities that are entertained.[20]
Eliminative induction is crucial to the scientific method and is used to eliminate hypotheses that are inconsistent with observations and experiments.[14][15]It focuses on possible causes instead of observed actual instances of causal connections.[21]
For a move from particular to universal,Aristotlein the 300s BCE used the Greek wordepagogé, whichCicerotranslated into the Latin wordinductio.[22]
Aristotle'sPosterior Analyticscovers the methods of inductive proof in natural philosophy and in the social sciences. The first book ofPosterior Analyticsdescribes the nature and science of demonstration and its elements: including definition, division, intuitive reason of first principles, particular and universal demonstration, affirmative and negative demonstration, the difference between science and opinion, etc.
The ancientPyrrhonistswere the first Western philosophers to point out theProblem of induction: that induction cannot, according to them, justify the acceptance of universal statements as true.[22]
TheEmpiric schoolof ancient Greek medicine employedepilogismas a method of inference. 'Epilogism' is a theory-free method that looks at history through the accumulation of facts without major generalization and with consideration of the consequences of making causal claims.[23]Epilogism is an inference which moves entirely within the domain of visible and evident things, it tries not to invokeunobservables.
TheDogmatic schoolof ancient Greek medicine employedanalogismosas a method of inference.[24]This method used analogy to reason from what was observed to unobservable forces.
In 1620,early modern philosopherFrancis Baconrepudiated the value of mere experience and enumerative induction alone.His methodofinductivismrequired that minute and many-varied observations that uncovered the natural world's structure and causal relations needed to be coupled with enumerative induction in order to have knowledge beyond the present scope of experience. Inductivism therefore required enumerative induction as a component.
The empiricistDavid Hume's 1740 stance found enumerative induction to have no rational, let alone logical, basis; instead, induction was the product of instinct rather than reason, a custom of the mind and an everyday requirement to live. While observations, such as the motion of the sun, could be coupled with the principle of theuniformity of natureto produce conclusions that seemed to be certain, theproblem of inductionarose from the fact that the uniformity of nature was not a logically valid principle. It therefore could not be defended as deductively rational, but neither could it be defended as inductively rational by appealing to the fact that the uniformity of nature has accurately described the past and will therefore likely describe the future, because that appeal is itself an inductive argument and therefore circular: induction is precisely what needs to be justified.
Since Hume first wrote about the dilemma between the invalidity of deductive arguments and the circularity of inductive arguments in support of the uniformity of nature, this supposed dichotomy between merely two modes of inference, deduction and induction, has been contested with the discovery of a third mode of inference known as abduction, orabductive reasoning, which was first formulated and advanced byCharles Sanders Peirce, in 1886, where he referred to it as "reasoning by hypothesis."[25]Inference to the best explanation is often, yet arguably, treated as synonymous with abduction as it was first identified by Gilbert Harman in 1965 where he referred to it as "abductive reasoning," yet his definition of abduction slightly differs from Peirce's definition.[26]Regardless, if abduction is in fact a third mode of inference rationally independent from the other two, then either the uniformity of nature can be rationally justified through abduction, or Hume's dilemma is more of a trilemma. Hume was also skeptical of the application of enumerative induction and reason to reach certainty about unobservables and especially the inference of causality from the fact that modifying an aspect of a relationship prevents or produces a particular outcome.
Awakened from "dogmatic slumber" by a German translation of Hume's work,Kantsought to explain the possibility ofmetaphysics. In 1781, Kant'sCritique of Pure Reasonintroducedrationalismas a path toward knowledge distinct fromempiricism. Kant sorted statements into two types.Analyticstatements are true by virtue of thearrangementof their terms andmeanings, thus analytic statements aretautologies, merely logical truths, true bynecessity. Whereassyntheticstatements hold meanings to refer to states of facts,contingencies. Against both rationalist philosophers likeDescartesandLeibnizas well as against empiricist philosophers likeLockeandHume, Kant'sCritique of Pure Reasonis a sustained argument that in order to have knowledge we need both a contribution of our mind (concepts) as well as a contribution of our senses (intuitions). Knowledge proper is for Kant thus restricted to what we can possibly perceive (phenomena), whereas objects of mere thought ("things in themselves") are in principle unknowable due to the impossibility of ever perceiving them.
Reasoning that the mind must contain its own categories for organizingsense data, making experience of objects inspaceandtime (phenomena)possible, Kant concluded that theuniformity of naturewas ana prioritruth.[27]A class of synthetic statements that was notcontingentbut true by necessity, was thensynthetica priori. Kant thus saved bothmetaphysicsandNewton's law of universal gravitation. On the basis of the argument that what goes beyond our knowledge is "nothing to us,"[28]he discardedscientific realism. Kant's position that knowledge comes about by a cooperation of perception and our capacity to think (transcendental idealism) gave birth to the movement ofGerman idealism.Hegel'sabsolute idealismsubsequently flourished across continental Europe and England.
Positivism, developed byHenri de Saint-Simonand promulgated in the 1830s by his former studentAuguste Comte, was the firstlate modernphilosophy of science. In the aftermath of theFrench Revolution, fearing society's ruin, Comte opposedmetaphysics. Human knowledge had evolved from religion to metaphysics to science, said Comte, which had flowed frommathematicstoastronomytophysicstochemistrytobiologytosociology—in that order—describing increasingly intricate domains. All of society's knowledge had become scientific, with questions oftheologyand ofmetaphysicsbeing unanswerable. Comte found enumerative induction reliable as a consequence of its grounding in available experience. He asserted the use of science, rather than metaphysical truth, as the correct method for the improvement of human society.
According to Comte,scientific methodframes predictions, confirms them, and states laws—positive statements—irrefutable bytheologyor bymetaphysics. Regarding experience as justifying enumerative induction by demonstrating theuniformity of nature,[27]the British philosopherJohn Stuart Millwelcomed Comte's positivism, but thoughtscientific lawssusceptible to recall or revision and Mill also withheld from Comte'sReligion of Humanity. Comte was confident in treatingscientific lawas anirrefutable foundation for all knowledge, and believed that churches, honouring eminent scientists, ought to focus public mindset onaltruism—a term Comte coined—to apply science for humankind's social welfare viasociology, Comte's leading science.
During the 1830s and 1840s, while Comte and Mill were the leading philosophers of science,William Whewellfound enumerative induction not nearly as convincing, and, despite the dominance of inductivism, formulated "superinduction".[29]Whewell argued that "the peculiar import of the termInduction" should be recognised: "there is some Conceptionsuperinducedupon the facts", that is, "the Invention of a new Conception in every inductive inference". The creation of Conceptions is easily overlooked and prior to Whewell was rarely recognised.[29]Whewell explained:
"Although we bind together facts by superinducing upon them a new Conception, this Conception, once introduced and applied, is looked upon as inseparably connected with the facts, and necessarily implied in them. Having once had the phenomena bound together in their minds in virtue of the Conception, men can no longer easily restore them back to detached and incoherent condition in which they were before they were thus combined."[29]
These "superinduced" explanations may well be flawed, but their accuracy is suggested when they exhibit what Whewell termedconsilience—that is, simultaneously predicting the inductive generalizations in multiple areas—a feat that, according to Whewell, can establish their truth. Perhaps to accommodate the prevailing view of science as inductivist method, Whewell devoted several chapters to "methods of induction" and sometimes used the phrase "logic of induction", despite the fact that induction lacks rules and cannot be trained.[29]
In the 1870s, the originator ofpragmatism,C S Peirceperformed vast investigations that clarified the basis ofdeductive inferenceas a mathematical proof (as, independently, didGottlob Frege). Peirce recognized induction but always insisted on a third type of inference that Peirce variously termedabductionorretroductionorhypothesisorpresumption.[30]Later philosophers termed Peirce's abduction, etc.,Inference to the Best Explanation(IBE).[31]
Having highlighted Hume'sproblem of induction,John Maynard Keynesposedlogical probabilityas its answer, or as near a solution as he could arrive at.[32]Bertrand Russellfound Keynes'sTreatise on Probabilitythe best examination of induction, and believed that if read withJean Nicod'sLe Probleme logique de l'inductionas well asR B Braithwaite's review of Keynes's work in the October 1925 issue ofMind, that would cover "most of what is known about induction", although the "subject is technical and difficult, involving a good deal of mathematics".[33]Two decades later,Russellfollowed Keynes in regarding enumerative induction as an "independent logical principle".[34][35][36]Russell found:
"Hume's skepticism rests entirely upon his rejection of the principle of induction. The principle of induction, as applied to causation, says that, ifAhas been found very often accompanied or followed byB, then it is probable that on the next occasion on whichAis observed, it will be accompanied or followed byB. If the principle is to be adequate, a sufficient number of instances must make the probability not far short of certainty. If this principle, or any other from which it can be deduced, is true, then the casual inferences which Hume rejects are valid, not indeed as giving certainty, but as giving a sufficient probability for practical purposes. If this principle is not true, every attempt to arrive at general scientific laws from particular observations is fallacious, and Hume's skepticism is inescapable for an empiricist. The principle itself cannot, of course, without circularity, be inferred from observed uniformities, since it is required to justify any such inference. It must, therefore, be, or be deduced from, an independent principle not based on experience. To this extent, Hume has proved that pure empiricism is not a sufficient basis for science. But if this one principle is admitted, everything else can proceed in accordance with the theory that all our knowledge is based on experience. It must be granted that this is a serious departure from pure empiricism, and that those who are not empiricists may ask why, if one departure is allowed, others are forbidden. These, however, are not questions directly raised by Hume's arguments. What these arguments prove—and I do not think the proof can be controverted—is that induction is an independent logical principle, incapable of being inferred either from experience or from other logical principles, and that without this principle, science is impossible."[36]
In a 1965 paper,Gilbert Harmanexplained that enumerative induction is not an autonomous phenomenon, but is simply a disguised consequence of Inference to the Best Explanation (IBE).[31]IBE is otherwise synonymous withC S Peirce'sabduction.[31]Many philosophers of science espousingscientific realismhave maintained that IBE is the way that scientists develop approximately true scientific theories about nature.[37]
Inductive reasoning is a form of argument that—in contrast to deductive reasoning—allows for the possibility that a conclusion can be false, even if all of thepremisesare true.[38]This difference between deductive and inductive reasoning is reflected in the terminology used to describe deductive and inductive arguments. In deductive reasoning, an argument is "valid" when, assuming the argument's premises are true, the conclusionmust betrue. If the argument is valid and the premisesaretrue, then the argument is"sound". In contrast, in inductive reasoning, an argument's premises can never guarantee that the conclusionmust betrue. Instead, an argument is "strong" when, assuming the argument's premises are true, the conclusion isprobablytrue. If the argument is strong and the premises are thought to be true, then the argument is said to be "cogent".[39]Less formally, the conclusion of an inductive argument may be called "probable", "plausible", "likely", "reasonable", or "justified", but never "certain" or "necessary". Logic affords no bridge from the probable to the certain.
The futility of attaining certainty through some critical mass of probability can be illustrated with a coin-toss exercise. Suppose someone tests whether a coin is either a fair one or two-headed. They flip the coin ten times, and ten times it comes up heads. At this point, there is a strong reason to believe it is two-headed. After all, the chance of ten heads in a row is .000976: less than one in one thousand. Then, after 100 flips, every toss has come up heads. Now there is "virtual" certainty that the coin is two-headed, and one can regard it as "true" that the coin is probably two-headed. Still, one can neither logically nor empirically rule out that the next toss will produce tails. No matter how many times in a row it comes up heads, this remains the case. If one programmed a machine to flip a coin over and over continuously, at some point the result would be a string of 100 heads. In the fullness of time, all combinations will appear.
As for the slim prospect of getting ten out of ten heads from a fair coin—the outcome that made the coin appear biased—many may be surprised to learn that the chance of any sequence of heads or tails is equally unlikely (e.g., H-H-T-T-H-T-H-H-H-T) and yet it occurs ineverytrial of ten tosses. That meansallresults for ten tosses have the same probability as getting ten out of ten heads, which is 0.000976. If one records the heads-tails sequences, for whatever result, that exact sequence had a chance of 0.000976.
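The arithmetic in the two preceding paragraphs, plus one added gloss: a simple Bayesian comparison of the "fair" and "two-headed" hypotheses under an assumed 50/50 prior, showing belief that approaches but never reaches certainty.

```python
# Coin-toss arithmetic from the passage, with an added Bayesian comparison of the
# "fair coin" and "two-headed coin" hypotheses (equal priors assumed).
p_ten_heads_fair = 0.5 ** 10
print(p_ten_heads_fair)                          # 0.0009765625 -- "less than one in one thousand"

# Any specific sequence of ten tosses of a fair coin is exactly as unlikely:
print(0.5 ** 10 == p_ten_heads_fair)             # True

def posterior_two_headed(n_heads, prior=0.5):
    like_fair, like_two = 0.5 ** n_heads, 1.0    # P(data | hypothesis)
    return like_two * prior / (like_two * prior + like_fair * (1 - prior))

print(posterior_two_headed(10))                  # ~0.999: strong belief, not certainty
print(posterior_two_headed(100))                 # rounds to 1.0 in floating point, though mathematically still short of 1
```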
An argument is deductive when the conclusion is necessary given the premises. That is, the conclusion must be true if the premises are true. For example, after getting 10 heads in a row one might deduce that the coin had met some statistical criterion to be regarded as probably two-headed, a conclusion that would not be falsified even if the next toss yielded tails.
If a deductive conclusion follows duly from its premises, then it is valid; otherwise, it is invalid (that an argument is invalid is not to say its conclusions are false; it may have a true conclusion, just not on account of the premises). An examination of the following examples will show that the relationship between premises and conclusion is such that the truth of the conclusion is already implicit in the premises. Bachelors are unmarried because wesaythey are; we have defined them so. Socrates is mortal because we have included him in a set of beings that are mortal. The conclusion for a valid deductive argument is already contained in the premises since its truth is strictly a matter of logical relations. It cannot say more than its premises. Inductive premises, on the other hand, draw their substance from fact and evidence, and the conclusion accordingly makes a factual claim or prediction. Its reliability varies proportionally with the evidence. Induction wants to reveal somethingnewabout the world. One could say that induction wants to saymorethan is contained in the premises.
To better see the difference between inductive and deductive arguments, consider that it would not make sense to say: "all rectangles so far examined have four right angles, so the next one I see will have four right angles." This would treat logical relations as something factual and discoverable, and thus variable and uncertain. Likewise, speaking deductively we may permissibly say: "All unicorns can fly; I have a unicorn named Charlie; thus Charlie can fly." This deductive argument is valid because the logical relations hold; we are not interested in their factual soundness.
The conclusions of inductive reasoning are inherentlyuncertain. It only deals with the extent to which, given the premises, the conclusion is "credible" according to some theory of evidence. Examples include amany-valued logic,Dempster–Shafer theory, orprobability theorywith rules for inference such asBayes' rule. Unlike deductive reasoning, it does not rely on universals holding over aclosed domain of discourseto draw conclusions, so it can be applicable even in cases ofepistemic uncertainty(technical issues with this may arise however; for example, thesecond axiom of probabilityis a closed-world assumption).[40]
Another crucial difference between these two types of argument is that deductive certainty is impossible in non-axiomatic or empirical systems such asreality, leaving inductive reasoning as the primary route to (probabilistic) knowledge of such systems.[41]
Given that "ifAis true then that would causeB,C, andDto be true", an example of deduction would be "Ais true therefore we can deduce thatB,C, andDare true". An example of induction would be "B,C, andDare observed to be true thereforeAmight be true".Ais areasonableexplanation forB,C, andDbeing true.
For example:
Note, however, that the asteroid explanation for the mass extinction is not necessarily correct. Other events with the potential to affect global climate also coincide with theextinction of the non-avian dinosaurs. For example, the release ofvolcanic gases(particularlysulfur dioxide) during the formation of theDeccan TrapsinIndia.
Another example of an inductive argument:
This argument could have been made every time a new biological life form was found, and would have had a correct conclusion every time; however, it is still possible that in the future a biological life form not requiring liquid water could be discovered.
As a result, the argument may be stated as:
A classical example of an "incorrect" statistical syllogism was presented by John Vickers:
The conclusion fails because the population of swans then known was not actually representative of all swans. A more reasonable conclusion would be: we might reasonablyexpectall swans in England to be white, at least in the short term.
Succinctly put: deduction is aboutcertainty/necessity; induction is aboutprobability.[10]Any single assertion will answer to one of these two criteria. Another approach to the analysis of reasoning is that ofmodal logic, which deals with the distinction between the necessary and thepossiblein a way not concerned with probabilities among things deemed possible.
The philosophical definition of inductive reasoning is more nuanced than a simple progression from particular/individual instances to broader generalizations. Rather, the premises of an inductivelogical argumentindicate some degree of support (inductive probability) for the conclusion but do notentailit; that is, they suggest truth but do not ensure it. In this manner, there is the possibility of moving from general statements to individual instances (for example, statistical syllogisms).
Note that the definition ofinductivereasoning described here differs frommathematical induction, which, in fact, is a form ofdeductivereasoning. Mathematical induction is used to provide strict proofs of the properties of recursively defined sets.[42]The deductive nature of mathematical induction derives from its basis in a non-finite number of cases, in contrast with the finite number of cases involved in an enumerative induction procedure likeproof by exhaustion. Both mathematical induction and proof by exhaustion are examples ofcomplete induction. Complete induction is a masked type of deductive reasoning.
Although philosophers at least as far back as thePyrrhonistphilosopherSextus Empiricushave pointed out the unsoundness of inductive reasoning,[43]the classic philosophical critique of theproblem of inductionwas given by the Scottish philosopherDavid Hume.[44]Although the use of inductive reasoning demonstrates considerable success, the justification for its application has been questionable. Recognizing this, Hume highlighted the fact that our mind often draws conclusions from relatively limited experiences that appear correct but which are actually far from certain. In deduction, the truth value of the conclusion is based on the truth of the premise. In induction, however, the dependence of the conclusion on the premise is always uncertain. For example, let us assume that all ravens are black. The fact that there are numerous black ravens supports the assumption. Our assumption, however, becomes invalid once it is discovered that there are white ravens. Therefore, the general rule "all ravens are black" is not the kind of statement that can ever be certain. Hume further argued that it is impossible to justify inductive reasoning: this is because it cannot be justified deductively, so our only option is to justify it inductively. Since this argument is circular, with the help ofHume's forkhe concluded that our use of induction is not logically justifiable .[45]
Hume nevertheless stated that even if induction were proved unreliable, we would still have to rely on it. So instead of a position ofsevere skepticism, Hume advocated apractical skepticismbased oncommon sense, where the inevitability of induction is accepted.[46]Bertrand Russellillustrated Hume's skepticism in a story about a chicken who, fed every morning without fail and following the laws of induction, concluded that this feeding would always continue, until his throat was eventually cut by the farmer.[47]
In 1963,Karl Popperwrote, "Induction,i.e.inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure."[48][49]Popper's 1972 bookObjective Knowledge—whose first chapter is devoted to the problem of induction—opens, "I think I have solved a major philosophical problem: theproblem of induction".[49]In Popper's schema, enumerative induction is "a kind of optical illusion" cast by the steps of conjecture and refutation during aproblem shift.[49]An imaginative leap, thetentative solutionis improvised, lacking inductive rules to guide it.[49]The resulting, unrestricted generalization is deductive, an entailed consequence of all explanatory considerations.[49]Controversy continued, however, with Popper's putative solution not generally accepted.[50]
Donald A. Gilliesargues thatrules of inferencesrelated to inductive reasoning are overwhelmingly absent from science, and describes most scientific inferences as "involv[ing] conjectures thought up by human ingenuity and creativity, and by no means inferred in any mechanical fashion, or according to precisely specified rules."[51]Gillies also provides a rare counterexample "in the machine learning programs ofAI."[51]
Inductive reasoning is also known as hypothesis construction because any conclusions made are based on current knowledge and predictions.[citation needed]As with deductive arguments, biases can distort the proper application of inductive argument, thereby preventing the reasoner from forming the mostlogical conclusionbased on the clues. Examples of these biases include theavailability heuristic,confirmation bias, and thepredictable-world bias.
The availability heuristic is regarded as causing the reasoner to depend primarily upon information that is readily available. People have a tendency to rely on information that is easily accessible in the world around them. For example, in surveys, when people are asked to estimate the percentage of people who died from various causes, most respondents choose the causes that have been most prevalent in the media such as terrorism, murders, and airplane accidents, rather than causes such as disease and traffic accidents, which have been technically "less accessible" to the individual since they are not emphasized as heavily in the world around them.
Confirmation bias is based on the natural tendency to confirm rather than deny a hypothesis. Research has demonstrated that people are inclined to seek solutions to problems that are more consistent with known hypotheses rather than attempt to refute those hypotheses. Often, in experiments, subjects will ask questions that seek answers that fit established hypotheses, thus confirming these hypotheses. For example, if it is hypothesized that Sally is a sociable individual, subjects will naturally seek to confirm the premise by asking questions that would produce answers confirming that Sally is, in fact, a sociable individual.
The predictable-world bias revolves around the inclination to perceive order where it has not been proved to exist, either at all or at a particular level of abstraction. Gambling, for example, is one of the most popular examples of predictable-world bias. Gamblers often begin to think that they see simple and obvious patterns in the outcomes and therefore believe that they are able to predict outcomes based on what they have witnessed. In reality, however, the outcomes of these games are difficult to predict and highly complex in nature. In general, people tend to seek some type of simplistic order to explain or justify their beliefs and experiences, and it is often difficult for them to realise that their perceptions of order may be entirely different from the truth.[52]
As a logic of induction rather than a theory of belief,Bayesian inferencedoes not determine which beliefs area priorirational, but rather determines how we should rationally change the beliefs we have when presented with evidence. We begin by considering an exhaustive list of possibilities, a definite probabilistic characterisation of each of them (in terms of likelihoods) and preciseprior probabilitiesfor them (e.g. based on logic or induction from previous experience) and, when faced with evidence, we adjust the strength of our belief in the given hypotheses in a precise manner usingBayesian logicto yield candidate 'a posteriori probabilities', taking no account of the extent to which the new evidence may happen to give us specific reasons to doubt our assumptions. Otherwise it is advisable to review and repeat as necessary the consideration of possibilities and their characterisation until, perhaps, a stable situation is reached.[53]
Around 1960,Ray Solomonofffounded the theory of universalinductive inference, a theory of prediction based on observations, for example, predicting the next symbol based upon a given series of symbols. This is a formal inductive framework that combinesalgorithmic information theorywith the Bayesian framework. Universal inductive inference is based on solid philosophical foundations,[54]and can be considered as a mathematically formalizedOccam's razor. Fundamental ingredients of the theory are the concepts ofalgorithmic probabilityandKolmogorov complexity.
Inductive inference typically considers hypothesis classes with a countable size. A recent advance[55]established a sufficient and necessary condition for inductive inference: a finite error bound is guaranteed if and only if the hypothesis class is a countable union of online learnable classes. Notably, this condition allows the hypothesis class to have an uncountable size while remaining learnable within this framework.
|
https://en.wikipedia.org/wiki/Inductive_inference
|
Mill's methodsare five methods ofinductiondescribed byphilosopherJohn Stuart Millin his 1843 bookA System of Logic.[1][2]They are intended to establish acausal relationshipbetween two or more groups of data, analyzing their respective differences and similarities.
If two or more instances of the phenomenon under investigation have only one circumstance in common, the circumstance in which alone all the instances agree, is the cause (or effect) of the given phenomenon.
For a property to be anecessarycondition it must always be present if the effect is present. Since this is so, then we are interested in looking at cases where the effect is present and taking note of which properties, among those considered to be 'possible necessary conditions' are present and which are absent. Obviously, any properties which are absent when the effect is present cannot be necessary conditions for the effect. This method is also referred to more generally within comparative politics as the most different systems design.
Symbolically, the method of agreement can be represented as:
A B C D occur together with w x y z
A E F G occur together with w t u v
Therefore A is the cause, or the effect, of w.
To further illustrate this concept, consider two structurally different countries. Country A is a former colony, has a centre-left government, and has a federal system with two levels of government. Country B has never been a colony, has a centre-left government and is a unitary state. One factor that both countries have in common, thedependent variablein this case, is that they have a system ofuniversal health care. Comparing the factors known about the countries above, a comparative political scientist would conclude that the government sitting on the centre-left of the spectrum would be the independent variable which causes a system of universal health care, since it is the only one of the factors examined which holds constant between the two countries, and the theoretical backing for that relationship is sound; social democratic (centre-left) policies often include universal health care.
If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance save one in common, that one occurring only in the former; the circumstance in which alone the two instances differ, is the effect, or cause, or an indispensable part of the cause, of the phenomenon.
This method is also known more generally as the most similar systems design within comparative politics.
As an example of the method of difference, consider two similar countries. Country A has a centre-right government, a unitary system and was a former colony. Country B has a centre-right government, a unitary system but was never a colony. The difference between the countries is that Country A readily supports anti-colonial initiatives, whereas Country B does not. The method of difference would identify the independent variable to be the status of each country as a former colony or not, with the dependent variable being support for anti-colonial initiatives. This is because, out of the two similar countries compared, the difference between the two is whether or not they were formerly a colony. This then explains the difference in the values of the dependent variable, with the former colony being more likely to support decolonization than the country with no history of being a colony.
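A minimal mechanisation of the first two methods over the article's own country examples, with instances encoded as sets of circumstance labels; the labels and the encoding are illustrative choices, not part of Mill's formulation.

```python
# Toy versions of the method of agreement and the method of difference.
# Instances are sets of circumstance labels mirroring the country examples above.

def method_of_agreement(positive_instances):
    """Circumstances common to every instance in which the phenomenon occurs."""
    common = set(positive_instances[0])
    for circumstances in positive_instances[1:]:
        common &= set(circumstances)
    return common

def method_of_difference(with_phenomenon, without_phenomenon):
    """Circumstances present only in the instance where the phenomenon occurs."""
    return set(with_phenomenon) - set(without_phenomenon)

# Universal health care example (method of agreement):
print(method_of_agreement([{"former colony", "centre-left", "federal"},
                           {"centre-left", "unitary"}]))            # {'centre-left'}

# Anti-colonial support example (method of difference):
print(method_of_difference({"centre-right", "unitary", "former colony"},
                           {"centre-right", "unitary"}))            # {'former colony'}
```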
If two or more instances in which the phenomenon occurs have only one circumstance in common, while two or more instances in which it does not occur have nothing in common save the absence of that circumstance; the circumstance in which alone the two sets of instances differ, is the effect, or cause, or a necessary part of the cause, of the phenomenon.
Also called the "Joint Method of Agreement and Difference", this principle is a combination of two methods of agreement. Despite the name, it is weaker than the direct method of difference and does not include it.
Symbolically, the Joint method of agreement and difference can be represented as:
A B C occur together with x y z
A D E occur together with x v w
Also B C occur with y z
Therefore A is the cause, or the effect, or a part of the cause, of x.
Subduct[3]from any phenomenon such part as is known by previous inductions to be the effect of certain antecedents, and the residue of the phenomenon is the effect of the remaining antecedents.
If a range of factors are believed to cause a range of phenomena, and we have matched all the factors, except one, with all the phenomena, except one, then the remaining phenomenon can be attributed to the remaining factor.
Symbolically, the Method of Residue can be represented as:
A B C occur together with x y z
B is known to be the cause of y
C is known to be the cause of z
Therefore A is the cause, or the effect, of x.
Whatever phenomenon varies in any manner whenever another phenomenon varies in some particular manner, is either a cause or an effect of that phenomenon, or is connected with it through some fact of causation.
If across a range of circumstances leading to a phenomenon, some property of the phenomenon varies in tandem with some factor existing in the circumstances, then the phenomenon can be associated with that factor. For instance, suppose that various samples of water, each containing bothsaltandlead, were found to be toxic. If the level of toxicity varied in tandem with the level of lead, one could attribute the toxicity to the presence of lead.
Symbolically, the method of concomitant variation can be represented as (with ± representing a shift):
A B C occur together with x y z
A± B C results in x± y z
Therefore A and x are causally connected.
Unlike the preceding four inductive methods, the method of concomitant variation doesn't involve theelimination of any circumstance. Changing the magnitude of one factor results in the change in the magnitude of another factor.
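A small numerical sketch in the spirit of the lead-and-toxicity example; the measurements are invented, and a Pearson correlation is used only as a crude stand-in for "varying in tandem".

```python
# Hypothetical water samples: toxicity rises in tandem with the lead level.
from statistics import correlation   # available in Python 3.10+

lead_level = [0.1, 0.4, 0.9, 1.5, 2.2]   # invented lead concentrations
toxicity   = [1.0, 2.1, 4.2, 6.8, 9.9]   # invented toxicity scores

print(round(correlation(lead_level, toxicity), 3))   # close to 1: concomitant variation
```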
|
https://en.wikipedia.org/wiki/Mill%27s_methods
|
Minimum Description Length(MDL) is amodel selectionprinciple where the shortest description of the data is the best model. MDL methods learn through a data compression perspective and are sometimes described as mathematical applications ofOccam's razor. The MDL principle can be extended to other forms of inductive inference and learning, for example to estimation and sequential prediction, without explicitly identifying a single model of the data.
MDL has its origins mostly ininformation theoryand has been further developed within the general fields of statistics, theoretical computer science and machine learning, and more narrowlycomputational learning theory.
Historically, there are different, yet interrelated, usages of the definite noun phrase "theminimum description lengthprinciple" that vary in what is meant bydescription:
Selecting the minimum length description of the available data as the best model observes theprincipleidentified as Occam's razor. Prior to the advent of computer programming, generating such descriptions was the intellectual labor of scientific theorists. It was far less formal than it has become in the computer age. If two scientists had a theoretic disagreement, they rarely couldformallyapply Occam's razor to choose between their theories. They would have different data sets and possibly different descriptive languages. Nevertheless, science advanced as Occam's razor was an informal guide in deciding which model was best.
With the advent of formal languages and computer programming Occam's razor was mathematically defined. Models of a given set of observations, encoded as bits of data, could be created in the form of computer programs that output that data. Occam's razor could thenformallyselect the shortest program, measured in bits of thisalgorithmic information, as the best model.
To avoid confusion, note that there is nothing in the MDL principle that implies the model must be produced by a machine. It can be entirely the product of humans. The MDL principle applies regardless of whether the description to be run on a computer is the product of humans, machines or any combination thereof. The MDL principle requiresonlythat the shortest description, when executed, produce the original data set without error.
The distinction in computer programs between programs and literal data applies to all formal descriptions and is sometimes referred to as "two parts" of a description. In statistical MDL learning, such a description is frequently called atwo-part code.
MDL applies in machine learning when algorithms (machines) generate descriptions. Learning occurs when an algorithm generates a shorter description of the same data set.
The theoretic minimum description length of a data set, called itsKolmogorov complexity, cannot, however, be computed. That is to say, even if by random chance an algorithm generates the shortest program of all that outputs the data set, anautomated theorem provercannot prove there is no shorter such program. Nevertheless, given two programs that output the dataset, the MDL principle selects the shorter of the two as embodying the best model.
Recent machine learning of algorithmic, as opposed to statistical, data models under the MDL principle has received increasing attention with the increasing availability of data, computation resources and theoretic advances.[2][3]Approaches are informed by the burgeoning field ofartificial general intelligence. Shortly before his death,Marvin Minskycame out strongly in favor of this line of research, saying:[4]
It seems to me that the most important discovery since Gödel was the discovery by Chaitin, Solomonoff and Kolmogorov of the concept called Algorithmic Probability which is a fundamental new theory of how to make predictions given a collection of experiences and this is a beautiful theory, everybody should learn it, but it’s got one problem, that is, that you cannot actually calculate what this theory predicts because it is too hard, it requires an infinite amount of work. However, it should be possible to make practical approximations to the Chaitin, Kolmogorov, Solomonoff theory that would make better predictions than anything we have today. Everybody should learn all about that and spend the rest of their lives working on it.
Any set of data can be represented by a string of symbols from a finite (say, binary) alphabet.
[The MDL Principle] is based on the following insight: any regularity in a given set of data can be used to compress the data, i.e. to describe it using fewer symbols than needed to describe the data literally. (Grünwald, 2004)[5]
Based on this, in 1978, Jorma Rissanen published an MDL learning algorithm using the statistical notion of information rather than algorithmic information. Over the past 40 years this has developed into a rich theory of statistical and machine learning procedures with connections to Bayesian model selection and averaging, penalization methods such as Lasso and Ridge, and so on—Grünwald and Roos (2020) give an introduction including all modern developments.[6] Rissanen started out with this idea: all statistical learning is about finding regularities in data, and the best hypothesis to describe the regularities in data is also the one that is able to statistically compress the data most. Like other statistical methods, it can be used for learning the parameters of a model using some data. Usually though, standard statistical methods assume that the general form of a model is fixed. MDL's main strength is that it can also be used for selecting the general form of a model and its parameters. The quantity of interest (sometimes just a model, sometimes just parameters, sometimes both at the same time) is called a hypothesis. The basic idea is then to consider the (lossless) two-stage code that encodes data D with length L(D) by first encoding a hypothesis H in the set 𝓗 of considered hypotheses and then coding D "with the help of" H; in the simplest context this just means "encoding the deviations of the data from the predictions made by H":
L(D) = min_{H ∈ 𝓗} ( L(H) + L(D | H) )
The H achieving this minimum is then viewed as the best explanation of data D. As a simple example, take a regression problem: the data D could consist of a sequence of points D = (x₁, y₁), …, (xₙ, yₙ), and the set 𝓗 could be the set of all polynomials from X to Y. To describe a polynomial H of degree (say) k, one would first have to discretize the parameters to some precision; one would then have to describe this precision (a natural number); next, one would have to describe the degree k (another natural number), and in the final step, one would have to describe k + 1 parameters; the total length would be L(H). One would then describe the points in D using some fixed code for the x-values and then using a code for the n deviations yᵢ − H(xᵢ).
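As an illustration of this two-stage scheme, the following sketch (not from the article; the 16-bit parameter precision and the Gaussian code for the deviations are simplifying assumptions) scores candidate polynomial degrees by a crude L(H) + L(D|H) in bits and picks the degree with the shortest total description:

```python
import numpy as np

def two_part_codelength(x, y, degree, precision_bits=16):
    """Crude two-part MDL score L(H) + L(D|H), in bits."""
    # L(H): a simple code for the degree plus (degree + 1) coefficients,
    # each quantized to a fixed precision.
    model_bits = np.log2(degree + 2) + (degree + 1) * precision_bits

    # Fit the polynomial and look at the deviations y_i - H(x_i).
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    sigma2 = max(float(np.mean(residuals ** 2)), 1e-12)

    # L(D|H): negative log-likelihood of the deviations under a Gaussian,
    # converted from nats to bits (x-values are assumed to use a fixed code).
    n = len(y)
    data_bits = 0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0) / np.log(2)
    return model_bits + data_bits

# Noisy quadratic data: the two-part code should prefer a low degree,
# since higher degrees pay for extra parameters without compressing much more.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 3 * x ** 2 - x + rng.normal(scale=0.1, size=x.size)
best = min(range(9), key=lambda d: two_part_codelength(x, y, d))
print("degree chosen by the two-part code:", best)
```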
In practice, one often (but not always) uses a probabilistic model. For example, one associates each polynomial H with the corresponding conditional distribution expressing that given X, Y is normally distributed with mean H(X) and some variance σ², which could either be fixed or added as a free parameter. Then the set of hypotheses 𝓗 reduces to the assumption of a linear model (linear in the parameters), Y = H(X) + ε, with H a polynomial.
Furthermore, one is often not directly interested in specific parameter values, but just, for example, the degree of the polynomial. In that case, one sets 𝓗 to be 𝓗 = {𝓗₀, 𝓗₁, …}, where each 𝓗ⱼ represents the hypothesis that the data is best described as a j-th degree polynomial. One then codes data D given hypothesis 𝓗ⱼ using a one-part code designed such that, whenever some hypothesis H ∈ 𝓗ⱼ fits the data well, the codelength L(D|H) is short. The design of such codes is called universal coding. There are various types of universal codes one could use, often giving similar lengths for long data sequences but differing for short ones. The 'best' (in the sense that it has a minimax optimality property) are the normalized maximum likelihood (NML) or Shtarkov codes. A quite useful class of codes are the Bayesian marginal likelihood codes. For exponential families of distributions, when Jeffreys prior is used and the parameter space is suitably restricted, these asymptotically coincide with the NML codes; this brings MDL theory in close contact with objective Bayes model selection, in which one also sometimes adopts Jeffreys' prior, albeit for different reasons. The MDL approach to model selection "gives a selection criterion formally identical to the BIC approach"[7] for a large number of samples.
A coin is flipped 1000 times, and the numbers of heads and tails are recorded. Consider two model classes: the first consists of a single code representing the hypothesis of a fair coin, which assigns every outcome sequence a codelength of exactly 1000 bits; the second consists of the codes that are optimal for a coin with some fixed bias, representing the hypothesis that the coin is not fair. If the observed counts deviate at all from 500 heads and 500 tails, the best code in the second class assigns the observed sequence slightly fewer than 1000 bits.
For this reason, a naive statistical method might choose the second model as a better explanation for the data. However, an MDL approach would construct a single code based on the hypothesis, instead of just using the best one. This code could be the normalized maximum likelihood code or a Bayesian code. If such a code is used, then the total codelength based on the second model class would be larger than 1000 bits. Therefore, the conclusion when following an MDL approach is inevitably that there is not enough evidence to support the hypothesis of the biased coin, even though the best element of the second model class provides a better fit to the data.
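A hedged numerical sketch of this example (using a plain two-part code as a stand-in for the NML or Bayesian code the article mentions): the "biased coin" class is charged log2(n+1) bits to state the number of heads plus log2 C(n,k) bits to identify which sequence occurred, and for counts close to 500/500 this total exceeds the flat 1000 bits of the fair-coin code.

```python
import math

def fair_codelength(n):
    # The fair-coin model: one bit per flip, always exactly n bits.
    return float(n)

def biased_codelength(n, k):
    # Two-part stand-in for the biased-coin class: first state the number of
    # heads k (uniform code over 0..n), then say which of the C(n, k)
    # sequences with k heads actually occurred.
    return math.log2(n + 1) + math.log2(math.comb(n, k))

n = 1000
for k in (500, 510, 550, 600):
    print(f"heads={k}: fair={fair_codelength(n):.1f} bits, "
          f"biased={biased_codelength(n, k):.1f} bits")
# For mild imbalances the biased class costs more than 1000 bits in total,
# so the MDL conclusion is to keep the fair-coin hypothesis.
```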
Central to MDL theory is the one-to-one correspondence between code length functions and probability distributions (this follows from the Kraft–McMillan inequality). For any probability distribution P, it is possible to construct a code C such that the length (in bits) of C(x) is equal to −log₂ P(x); this code minimizes the expected code length. Conversely, given a code C, one can construct a probability distribution P such that the same holds. (Rounding issues are ignored here.) In other words, searching for an efficient code is equivalent to searching for a good probability distribution.
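A minimal sketch of this correspondence (the example distribution is invented, and rounding is ignored as in the text): Shannon code lengths are read off a distribution as −log₂ P(x), and any set of lengths satisfying Kraft's inequality can be turned back into a distribution.

```python
import math

# Code lengths implied by a distribution (Shannon lengths, ignoring rounding).
p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
lengths = {s: -math.log2(q) for s, q in p.items()}
print(lengths)  # a: 1 bit, b: 2 bits, c and d: 3 bits each

# Conversely, lengths satisfying Kraft's inequality define a distribution.
kraft_sum = sum(2 ** -l for l in lengths.values())   # <= 1 for a valid code
q = {s: (2 ** -l) / kraft_sum for s, l in lengths.items()}
print(kraft_sum, q)
```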
The description language of statistical MDL is not computationally universal. Therefore it cannot, even in principle, learn models of recursive natural processes.
Statistical MDL learning is very strongly connected to probability theory and statistics through the correspondence between codes and probability distributions mentioned above. This has led some researchers to view MDL as equivalent to Bayesian inference: code length of model and data together in MDL correspond respectively to prior probability and marginal likelihood in the Bayesian framework.[8]
While Bayesian machinery is often useful in constructing efficient MDL codes, the MDL framework also accommodates other codes that are not Bayesian. An example is the Shtarkov normalized maximum likelihood code, which plays a central role in current MDL theory, but has no equivalent in Bayesian inference. Furthermore, Rissanen stresses that we should make no assumptions about the true data-generating process: in practice, a model class is typically a simplification of reality and thus does not contain any code or probability distribution that is true in any objective sense.[9][10] In the last-mentioned reference Rissanen bases the mathematical underpinning of MDL on the Kolmogorov structure function.
According to the MDL philosophy, Bayesian methods should be dismissed if they are based on unsafe priors that would lead to poor results. The priors that are acceptable from an MDL point of view also tend to be favored in so-called objective Bayesian analysis; there, however, the motivation is usually different.[11]
Rissanen's was not the first information-theoretic approach to learning; as early as 1968 Wallace and Boulton pioneered a related concept called minimum message length (MML). The difference between MDL and MML is a source of ongoing confusion. Superficially, the methods appear mostly equivalent, but there are some significant differences, especially in interpretation.
|
https://en.wikipedia.org/wiki/Minimum_description_length
|
Minimum message length (MML) is a Bayesian information-theoretic method for statistical model comparison and selection.[1] It provides a formal information theory restatement of Occam's Razor: even when models are equal in their measure of fit-accuracy to the observed data, the one generating the most concise explanation of data is more likely to be correct (where the explanation consists of the statement of the model, followed by the lossless encoding of the data using the stated model). MML was invented by Chris Wallace, first appearing in the seminal paper "An information measure for classification".[2] MML is intended not just as a theoretical construct, but as a technique that may be deployed in practice.[3] It differs from the related concept of Kolmogorov complexity in that it does not require use of a Turing-complete language to model data.[4]
Shannon's A Mathematical Theory of Communication (1948) states that in an optimal code, the message length (in binary) of an event E with probability P(E) is given by length(E) = −log₂(P(E)).
Bayes's theorem states that the probability of a (variable) hypothesis H given fixed evidence E is proportional to P(E|H) P(H), which, by the definition of conditional probability, is equal to P(H ∧ E). We want the model (hypothesis) with the highest such posterior probability. Suppose we encode a message which represents (describes) both model and data jointly. Since length(H ∧ E) = −log₂(P(H ∧ E)), the most probable model will have the shortest such message. The message breaks into two parts: −log₂(P(H ∧ E)) = −log₂(P(H)) + (−log₂(P(E|H))). The first part encodes the model itself. The second part contains information (e.g., values of parameters, or initial conditions, etc.) that, when processed by the model, outputs the observed data.
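A minimal sketch of this two-part message length (the hypotheses, priors and observed counts below are invented purely for illustration): the total cost −log₂ P(H) − log₂ P(E|H) is computed for two candidate coin models, and the hypothesis with the shorter message is preferred.

```python
import math

def message_length_bits(prior, likelihood):
    # First part: -log2 P(H); second part: -log2 P(E|H).
    return -math.log2(prior) - math.log2(likelihood)

# Hypothetical evidence: 10 heads in 20 flips; two candidate hypotheses.
n, k = 20, 10
binom = math.comb(n, k)
hypotheses = {
    "fair coin (p=0.5)":   (0.7, binom * 0.5 ** n),
    "biased coin (p=0.7)": (0.3, binom * 0.7 ** k * 0.3 ** (n - k)),
}
for name, (prior, likelihood) in hypotheses.items():
    print(name, round(message_length_bits(prior, likelihood), 2), "bits")
```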
MML naturally and precisely trades model complexity for goodness of fit. A more complicated model takes longer to state (longer first part) but probably fits the data better (shorter second part). So, an MML metric won't choose a complicated model unless that model pays for itself.
One reason why a model might be longer would be simply because its various parameters are stated to greater precision, thus requiring transmission of more digits. Much of the power of MML derives from its handling of how accurately to state parameters in a model, and a variety of approximations that make this feasible in practice. This makes it possible to usefully compare, say, a model with many parameters imprecisely stated against a model with fewer parameters more accurately stated.
|
https://en.wikipedia.org/wiki/Minimum_message_length
|
The problem of induction is a philosophical problem that questions the rationality of predictions about unobserved things based on previous observations. These inferences from the observed to the unobserved are known as "inductive inferences". David Hume, who first formulated the problem in 1739,[1] argued that there is no non-circular way to justify inductive inferences, while he acknowledged that everyone does and must make such inferences.[2]
The traditional inductivist view is that all claimed empirical laws, either in everyday life or through the scientific method, can be justified through some form of reasoning. The problem is that many philosophers tried to find such a justification but their proposals were not accepted by others. Identifying the inductivist view as the scientific view, C. D. Broad once said that induction is "the glory of science and the scandal of philosophy".[3] In contrast, Karl Popper's critical rationalism claimed that inductive justifications are never used in science and proposed instead that science is based on the procedure of conjecturing hypotheses, deductively calculating consequences, and then empirically attempting to falsify them.
In inductive reasoning, one makes a series of observations and infers a claim based on them. For instance, from a series of observations that a woman walks her dog by the market at 8 am on Monday, it seems valid to infer that next Monday she will do the same, or that, in general, the woman walks her dog by the market every Monday. That next Monday the woman walks by the market merely adds to the series of observations, but it does not prove she will walk by the market every Monday. First of all, it is not certain, regardless of the number of observations, that the woman always walks by the market at 8 am on Monday. In fact, David Hume even argued that we cannot claim it is "more probable", since this still requires the assumption that the past predicts the future.
Second, the observations themselves do not establish the validity of inductive reasoning, except inductively. Bertrand Russell illustrated this point in The Problems of Philosophy:
Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.
The works of the Pyrrhonist philosopher Sextus Empiricus contain the oldest surviving questioning of the validity of inductive reasoning. He wrote:[4]
It is also easy, I consider, to set aside the method of induction. For, when they propose to establish the universal from the particulars by means of induction, they will effect this by a review either of all or of some of the particular instances. But if they review some, the induction will be insecure, since some of the particulars omitted in the induction may contravene the universal; while if they are to review all, they will be toiling at the impossible, since the particulars are infinite and indefinite. Thus on both grounds, as I think, the consequence is that induction is invalidated.
The focus upon the gap between the premises and conclusion present in the above passage appears different from Hume's focus upon the circular reasoning of induction. However, Weintraub claims in The Philosophical Quarterly[5] that although Sextus's approach to the problem appears different, Hume's approach was actually an application of another argument raised by Sextus:[6]
Those who claim for themselves to judge the truth are bound to possess a criterion of truth. This criterion, then, either is without a judge's approval or has been approved. But if it is without approval, whence comes it that it is truthworthy? For no matter of dispute is to be trusted without judging. And, if it has been approved, that which approves it, in turn, either has been approved or has not been approved, and so on ad infinitum.
Although the criterion argument applies to both deduction and induction, Weintraub believes that Sextus's argument "is precisely the strategy Hume invokes against induction: it cannot be justified, because the purported justification, being inductive, is circular." She concludes that "Hume's most important legacy is the supposition that the justification of induction is not analogous to that of deduction." She ends with a discussion of Hume's implicit sanction of the validity of deduction, which Hume describes as intuitive in a manner analogous to modern foundationalism.
The Cārvāka, a materialist and skeptic school of Indian philosophy, used the problem of induction to point out the flaws in using inference as a way to gain valid knowledge. They held that, since inference needed an invariable connection between the middle term and the predicate, and since there was no way to establish this invariable connection, the efficacy of inference as a means of valid knowledge could never be stated.[7][8]
The 9th-century Indian skeptic Jayarasi Bhatta also made an attack on inference, along with all means of knowledge, and showed by a type of reductio argument that there was no way to conclude universal relations from the observation of particular instances.[9][10]
Medieval writers such as al-Ghazali and William of Ockham connected the problem with God's absolute power, asking how we can be certain that the world will continue behaving as expected when God could at any moment miraculously cause the opposite.[11] Duns Scotus, however, argued that inductive inference from a finite number of particulars to a universal generalization was justified by "a proposition reposing in the soul, 'Whatever occurs in a great many instances by a cause that is not free, is the natural effect of that cause.'"[12] Some 17th-century Jesuits argued that although God could create the end of the world at any moment, it was necessarily a rare event and hence our confidence that it would not happen very soon was largely justified.[13]
David Hume, a Scottish thinker of the Enlightenment era, is the philosopher most often associated with induction. His formulation of the problem of induction can be found in An Enquiry concerning Human Understanding, §4. Here, Hume introduces his famous distinction between "relations of ideas" and "matters of fact". Relations of ideas are propositions which can be derived from deductive logic, which can be found in fields such as geometry and algebra. Matters of fact, meanwhile, are not verified through the workings of deductive logic but by experience. Specifically, matters of fact are established by making an inference about causes and effects from repeatedly observed experience. While relations of ideas are supported by reason alone, matters of fact must rely on the connection of a cause and effect through experience. Causes of effects cannot be linked through a priori reasoning, but by positing a "necessary connection" that depends on the "uniformity of nature".
Hume situates his introduction to the problem of induction in A Treatise of Human Nature within his larger discussion on the nature of causes and effects (Book I, Part III, Section VI). He writes that reasoning alone cannot establish the grounds of causation. Instead, the human mind imputes causation to phenomena after repeatedly observing a connection between two objects. For Hume, establishing the link between causes and effects relies not on reasoning alone, but on the observation of "constant conjunction" throughout one's sensory experience. From this discussion, Hume goes on to present his formulation of the problem of induction in A Treatise of Human Nature, writing "there can be no demonstrative arguments to prove, that those instances, of which we have had no experience, resemble those, of which we have had experience."
In other words, the problem of induction can be framed in the following way: we cannot apply a conclusion about a particular set of observations to a more general set of observations. While deductive logic allows one to arrive at a conclusion with certainty, inductive logic can only provide a conclusion that is probably true. It is mistaken to frame the difference between deductive and inductive logic as one between general-to-specific reasoning and specific-to-general reasoning, though this is a common misperception. According to the literal standards of logic, deductive reasoning arrives at certain conclusions while inductive reasoning arrives at probable conclusions. Hume's treatment of induction helps to establish the grounds for probability, as he writes in A Treatise of Human Nature that "probability is founded on the presumption of a resemblance betwixt those objects, of which we have had experience, and those, of which we have had none" (Book I, Part III, Section VI).
Therefore, Hume establishes induction as the very grounds for attributing causation. There might be many effects which stem from a single cause. Over repeated observation, one establishes that a certain set of effects are linked to a certain set of causes. However, the future resemblance of these connections to connections observed in the past depends on induction. Induction allows one to conclude that "Effect A2" was caused by "Cause A2" because a connection between "Effect A1" and "Cause A1" was observed repeatedly in the past. Given that reason alone cannot be sufficient to establish the grounds of induction, Hume implies that induction must be accomplished through imagination. One does not make an inductive inference through a priori reasoning, but through an imaginative step automatically taken by the mind.
Hume does not challenge that induction is performed by the human mind automatically, but rather hopes to show more clearly how much human inference depends on inductive, not a priori, reasoning. He does not deny future uses of induction, but shows that it is distinct from deductive reasoning and that it helps to ground causation, and he wants to inquire more deeply into its validity. Hume offers no solution to the problem of induction himself. He prompts other thinkers and logicians to argue for the validity of induction as an ongoing dilemma for philosophy. A key issue with establishing the validity of induction is that one is tempted to use an inductive inference as a form of justification itself. This is because people commonly justify the validity of induction by pointing to the many instances in the past when induction proved to be accurate. For example, one might argue that it is valid to use inductive inference in the future because this type of reasoning has yielded accurate results in the past. However, this argument relies on an inductive premise itself—that past observations of induction being valid will mean that future observations of induction will also be valid. Thus, many solutions to the problem of induction tend to be circular.
Nelson Goodman's Fact, Fiction, and Forecast (1955) presented a different description of the problem of induction in the chapter entitled "The New Riddle of Induction". Goodman proposed the new predicate "grue". Something is grue if and only if it has been (or will be, according to a scientific, general hypothesis[14][15]) observed to be green before a certain time t, and blue if observed after that time. The "new" problem of induction is, since all emeralds we have ever seen are both green and grue, why do we suppose that after time t we will find green but not grue emeralds? The problem here raised is that two different inductions will be true and false under the same conditions: the same observations equally support "all emeralds are green" and "all emeralds are grue", yet the two generalizations make conflicting predictions about emeralds observed after time t.
One could argue, using Occam's razor, that greenness is more likely than grueness because the concept of grueness is more complex than that of greenness. Goodman, however, points out that the predicate "grue" only appears more complex than the predicate "green" because we have defined grue in terms of blue and green. If we had always been brought up to think in terms of "grue" and "bleen" (where bleen is blue before time t, and green thereafter), we would intuitively consider "green" to be a crazy and complicated predicate. Goodman believed that which scientific hypotheses we favour depend on which predicates are "entrenched" in our language.
Willard Van Orman Quine offers a practical solution to this problem[16] by making the metaphysical claim that only predicates that identify a "natural kind" (i.e. a real property of real things) can be legitimately used in a scientific hypothesis. R. Bhaskar also offers a practical solution to the problem. He argues that the problem of induction only arises if we deny the possibility of a reason for the predicate, located in the enduring nature of something.[17] For example, we know that all emeralds are green, not because we have only ever seen green emeralds, but because the chemical make-up of emeralds insists that they must be green. If we were to change that structure, they would not be green. For instance, emeralds are a kind of green beryl, made green by trace amounts of chromium and sometimes vanadium. Without these trace elements, the gems would be colourless.
Although induction is not made by reason, Hume observes that we nonetheless perform it and improve from it. He proposes a descriptive explanation for the nature of induction in §5 of the Enquiry, titled "Skeptical solution of these doubts". It is by custom or habit that one draws the inductive connection described above, and "without the influence of custom we would be entirely ignorant of every matter of fact beyond what is immediately present to the memory and senses".[18] The result of custom is belief, which is instinctual and much stronger than imagination alone.[19]
In his Treatise on Probability, John Maynard Keynes notes:
An inductive argument affirms, not that a certain matter of fact is so, but that relative to certain evidence there is a probability in its favour. The validity of the induction, relative to the original evidence, is not upset, therefore, if, as a fact, the truth turns out to be otherwise.[20]
This approach was endorsed by Bertrand Russell.[21]
David Stove's argument for induction, based on the statistical syllogism, was presented in The Rationality of Induction and was developed from an argument put forward by one of Stove's heroes, the late Donald Cary Williams (formerly Professor at Harvard) in his book The Ground of Induction.[22] Stove argued that it is a statistical truth that the great majority of the possible subsets of specified size (as long as this size is not too small) are similar to the larger population to which they belong. For example, the majority of the subsets which contain 3000 ravens which you can form from the raven population are similar to the population itself (and this applies no matter how large the raven population is, as long as it is not infinite). Consequently, Stove argued that if you find yourself with such a subset then the chances are that this subset is one of the ones that are similar to the population, and so you are justified in concluding that it is likely that this subset "matches" the population reasonably closely. The situation would be analogous to drawing a ball out of a barrel of balls, 99% of which are red. In such a case you have a 99% chance of drawing a red ball. Similarly, when getting a sample of ravens the probability is very high that the sample is one of the matching or "representative" ones. So as long as you have no reason to think that your sample is an unrepresentative one, you are justified in thinking that it probably (although not certainly) is representative.[23]
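The statistical claim behind Stove's argument can be checked empirically; the sketch below (population size, proportion and tolerance are arbitrary choices made for illustration, not figures from Stove) estimates how often a random sample of 3000 closely matches its population.

```python
import random

# A population of 100,000 "ravens", 64% of which are black (arbitrary figures).
population = [1] * 64_000 + [0] * 36_000
true_share = sum(population) / len(population)

trials, sample_size, tolerance = 2_000, 3_000, 0.03
close = 0
for _ in range(trials):
    sample = random.sample(population, sample_size)
    if abs(sum(sample) / sample_size - true_share) <= tolerance:
        close += 1

# The overwhelming majority of samples match the population within ±3%.
print(f"{close / trials:.1%} of samples are representative to within 3%")
```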
An intuitive answer to Hume would be to say that a world inaccessible to any inductive procedure would simply not be conceivable. This intuition was taken into account by Keith Campbell by considering that, to be built, a concept must be reapplied, which demands a certain continuity in its object of application and consequently some openness to induction.[24] Claudio Costa has noted that a future can only be a future of its own past if it holds some identity with it. Moreover, the nearer a future is to the point of junction with its past, the greater are the similarities tendentially involved. Consequently – contra Hume – some form of principle of homogeneity (causal or structural) between future and past must be warranted, which would make some inductive procedure always possible.[25]
Karl Popper, a philosopher of science, sought to solve the problem of induction.[26][27] He argued that science does not use induction, and induction is in fact a myth.[28] Instead, knowledge is created by conjecture and criticism.[29] The main role of observations and experiments in science, he argued, is in attempts to criticize and refute existing theories.[30]
According to Popper, the problem of induction as usually conceived is asking the wrong question: it is asking how to justify theories given they cannot be justified by induction. Popper argued that justification is not needed at all, and seeking justification "begs for an authoritarian answer". Instead, Popper said, what should be done is to look to find and correct errors.[31] Popper regarded theories that have survived criticism as better corroborated in proportion to the amount and stringency of the criticism, but, in sharp contrast to the inductivist theories of knowledge, emphatically as less likely to be true.[32] Popper held that seeking for theories with a high probability of being true was a false goal that is in conflict with the search for knowledge. Science should seek for theories that are most probably false on the one hand (which is the same as saying that they are highly falsifiable and so there are many ways that they could turn out to be wrong), but still all actual attempts to falsify them have failed so far (that they are highly corroborated).
Wesley C. Salmon criticizes Popper on the grounds that predictions need to be made both for practical purposes and in order to test theories. That means Popperians need to make a selection from the number of unfalsified theories available to them, which is generally more than one. Popperians would wish to choose well-corroborated theories, in their sense of corroboration, but face a dilemma: either they are making the essentially inductive claim that a theory's having survived criticism in the past means it will be a reliable predictor in the future; or Popperian corroboration is no indicator of predictive power at all, so there is no rational motivation for their preferred selection principle.[33]
David Miller has criticized this kind of criticism by Salmon and others because it makes inductivist assumptions.[34] Popper does not say that corroboration is an indicator of predictive power. The predictive power is in the theory itself, not in its corroboration. The rational motivation for choosing a well-corroborated theory is that it is simply easier to falsify: well-corroborated means that at least one kind of experiment (already conducted at least once) could have falsified (but did not actually falsify) the one theory, while the same kind of experiment, regardless of its outcome, could not have falsified the other. So it is rational to choose the well-corroborated theory: it may not be more likely to be true, but if it is actually false, it is easier to get rid of when confronted with the conflicting evidence that will eventually turn up. Accordingly, it is wrong to consider corroboration as a reason, a justification for believing in a theory or as an argument in favor of a theory to convince someone who objects to it.[35]
|
https://en.wikipedia.org/wiki/Problem_of_induction
|
The new riddle of induction was presented by Nelson Goodman in Fact, Fiction, and Forecast as a successor to Hume's original problem. It presents the logical predicates grue and bleen, which are unusual due to their time-dependence. Many have tried to solve the new riddle on those terms, but Hilary Putnam and others have argued such time-dependency depends on the language adopted, and in some languages it is equally true for natural-sounding predicates such as "green". For Goodman they illustrate the problem of projectible predicates and ultimately, which empirical generalizations are law-like and which are not.[1][2] Goodman's construction and use of grue and bleen illustrates how philosophers use simple examples in conceptual analysis.
Goodman defined "grue" relative to an arbitrary but fixed time t:[a] an object is grue if and only if it is observed before t and is green, or else is not so observed and is blue. An object is "bleen" if and only if it is observed before t and is blue, or else is not so observed and is green.[3]
For some arbitrary future time t, say January 1, 2035, for all green things observed prior to t, such as emeralds and well-watered grass, both the predicates green and grue apply. Likewise for all blue things observed prior to t, such as bluebirds or blue flowers, both the predicates blue and bleen apply. On January 2, 2035, however, emeralds and well-watered grass are bleen, and bluebirds or blue flowers are grue. The predicates grue and bleen are not the kinds of predicates used in everyday life or in science, but they apply in just the same way as the predicates green and blue up until some future time t. From the perspective of observers before time t it is indeterminate which predicates are future projectible (green and blue or grue and bleen).
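Goodman's definitions are mechanical enough to state directly; the following sketch (the 2035 date simply reuses the article's example for t) encodes grue and bleen as predicates of a colour and a first-observation date:

```python
from datetime import date

T = date(2035, 1, 1)  # the arbitrary but fixed time t from Goodman's definition

def grue(colour, first_observed):
    # Observed before t and green, or else not so observed and blue.
    return (first_observed < T and colour == "green") or \
           (first_observed >= T and colour == "blue")

def bleen(colour, first_observed):
    # Observed before t and blue, or else not so observed and green.
    return (first_observed < T and colour == "blue") or \
           (first_observed >= T and colour == "green")

# Every emerald observed so far is both green and grue...
print(grue("green", date(2024, 6, 1)))   # True
# ...but a green emerald first observed after t is bleen, not grue.
print(grue("green", date(2036, 1, 1)))   # False
print(bleen("green", date(2036, 1, 1)))  # True
```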
In this section, Goodman's new riddle of induction is outlined in order to set the context for his introduction of the predicates grue and bleen and thereby illustrate their philosophical importance.[2][4]
Goodman poses Hume's problem of induction as a problem of the validity of the predictions we make. Since predictions are about what has yet to be observed and because there is no necessary connection between what has been observed and what will be observed, there is no objective justification for these predictions. Deductive logic cannot be used to infer predictions about future observations based on past observations because there are no valid rules of deductive logic for such inferences. Hume's answer was that observations of one kind of event following another kind of event result in habits of regularity (i.e., associating one kind of event with another kind). Predictions are then based on these regularities or habits of mind.
Goodman takes Hume's answer to be a serious one. He rejects other philosophers' objection that Hume is merely explaining the origin of our predictions and not their justification. His view is that Hume has identified something deeper. To illustrate this, Goodman turns to the problem of justifying a system of rules of deduction. For Goodman, the validity of a deductive system is justified by its conformity to good deductive practice. The justification of rules of a deductive system depends on our judgements about whether to reject or accept specific deductive inferences. Thus, for Goodman, the problem of induction dissolves into the same problem as justifying a deductive system and while, according to Goodman, Hume was on the right track with habits of mind, the problem is more complex than Hume realized.
In the context of justifying rules of induction, this becomes the problem of confirmation of generalizations for Goodman. However, the confirmation is not a problem of justification but instead it is a problem of precisely defining how evidence confirms generalizations. It is with this turn that grue and bleen have their philosophical role in Goodman's view of induction.
The new riddle of induction, for Goodman, rests on our ability to distinguish lawlike from non-lawlike generalizations. Lawlike generalizations are capable of confirmation while non-lawlike generalizations are not. Lawlike generalizations are required for making predictions. Using examples from Goodman, the generalization that all copper conducts electricity is capable of confirmation by a particular piece of copper whereas the generalization that all men in a given room are third sons is not lawlike but accidental. The generalization that all copper conducts electricity is a basis for predicting that this piece of copper will conduct electricity. The generalization that all men in a given room are third sons, however, is not a basis for predicting that a given man in that room is a third son.
The question, therefore, is what makes some generalizations lawlike and others accidental. This, for Goodman, becomes a problem of determining which predicates are projectible (i.e., can be used in lawlike generalizations that serve as predictions) and which are not. Goodman argues that this is where the fundamental problem lies. This problem is known as Goodman's paradox: from the apparently strong evidence that all emeralds examined thus far have been green, one may inductively conclude that all future emeralds will be green. However, whether this prediction is lawlike or not depends on the predicates used in this prediction. Goodman observed that (assuming t has yet to pass) it is equally true that every emerald that has been observed is grue. Thus, by the same evidence we can conclude that all future emeralds will be grue. The new problem of induction becomes one of distinguishing projectible predicates such as green and blue from non-projectible predicates such as grue and bleen.
Hume, Goodman argues, missed this problem. We do not, by habit, form generalizations from all associations of events we have observed but only some of them. All past observed emeralds were green, and we formed a habit of thinking the next emerald will be green, but they were equally grue, and we do not form habits concerning grueness. Lawlike predictions (or projections) ultimately are distinguishable by the predicates we use. Goodman's solution is to argue that lawlike predictions are based on projectible predicates such as green and blue and not on non-projectible predicates such as grue and bleen, and what makes predicates projectible is their entrenchment, which depends on their successful past projections. Thus, grue and bleen function in Goodman's arguments both to illustrate the new riddle of induction and to illustrate the distinction between projectible and non-projectible predicates via their relative entrenchment.
One response is to appeal to the artificially disjunctive definition of grue. The notion of predicate entrenchment is not required. Goodman said that this does not succeed. If we take grue and bleen as primitive predicates, we can define green as "grue if first observed before t and bleen otherwise", and likewise for blue. To deny the acceptability of this disjunctive definition of green would be to beg the question.
Another proposed resolution that does not require predicate entrenchment is that "x is grue" is not solely a predicate of x, but of x and a time t—we can know that an object is green without knowing the time t, but we cannot know that it is grue. If this is the case, we should not expect "x is grue" to remain true when the time changes. However, one might ask why "x is green" is not considered a predicate of a particular time t—the more common definition of green does not require any mention of a time t, but the definition of grue does. Goodman also addresses and rejects this proposed solution as question begging because blue can be defined in terms of grue and bleen, which explicitly refer to time.[5]
Richard Swinburne gets past the objection that green may be redefined in terms of grue and bleen by making a distinction based on how we test for the applicability of a predicate in a particular case. He distinguishes between qualitative and locational predicates. Qualitative predicates, like green, can be assessed without knowing the spatial or temporal relation of x to a particular time, place or event. Locational predicates, like grue, cannot be assessed without knowing the spatial or temporal relation of x to a particular time, place or event, in this case whether x is being observed before or after time t. Although green can be given a definition in terms of the locational predicates grue and bleen, this is irrelevant to the fact that green meets the criterion for being a qualitative predicate whereas grue is merely locational. He concludes that if some x's under examination—like emeralds—satisfy both a qualitative and a locational predicate, but projecting these two predicates yields conflicting predictions, namely, whether emeralds examined after time t shall appear grue or green, we should project the qualitative predicate, in this case green.[6]
Rudolf Carnap responded[7] to Goodman's 1946 article. Carnap's approach to inductive logic is based on the notion of degree of confirmation c(h, e) of a given hypothesis h by a given evidence e.[b] Both h and e are logical formulas expressed in a simple language L.
The universe of discourse consists of denumerably many individuals, each of which is designated by its own constant symbol; such individuals are meant to be regarded as positions ("like space-time points in our actual world") rather than extended physical bodies.[9] A state description is a (usually infinite) conjunction containing every possible ground atomic sentence, either negated or unnegated; such a conjunction describes a possible state of the whole universe.[10] Carnap imposes several further semantic requirements on these descriptions.
Carnap distinguishes three kinds of properties: (1) purely qualitative properties, which can be expressed without using individual constants; (2) purely positional properties, which can be expressed without qualitative predicates; and (3) mixed properties, i.e. all remaining expressible properties.
To illuminate this taxonomy, let x be a variable and a a constant symbol; then an example of 1. could be "x is blue or x is non-warm", an example of 2. "x = a", and an example of 3. "x is red and not x = a".
Based on his theory of inductive logic sketched above, Carnap formalizes Goodman's notion of projectibility of a property W as follows: the higher the relative frequency of W in an observed sample, the higher is the probability that a non-observed individual has the property W. Carnap suggests "as a tentative answer" to Goodman, that all purely qualitative properties are projectible, all purely positional properties are non-projectible, and mixed properties require further investigation.[16]
Willard Van Orman Quine discusses an approach to consider only "natural kinds" as projectible predicates.[17] He first relates Goodman's grue paradox to Hempel's raven paradox by defining two predicates F and G to be (simultaneously) projectible if all their shared instances count toward confirmation of the claim "each F is a G".[18] Then Hempel's paradox just shows that the complements of projectible predicates (such as "is a raven", and "is black") need not be projectible,[g] while Goodman's paradox shows that "is green" is projectible, but "is grue" is not.
Next, Quine reduces projectibility to the subjective notion of similarity. Two green emeralds are usually considered more similar than two grue ones if only one of them is green. Observing a green emerald makes us expect a similar observation (i.e., a green emerald) next time. Green emeralds are a natural kind, but grue emeralds are not. Quine investigates "the dubious scientific standing of a general notion of similarity, or of kind".[19] Both are basic to thought and language, like the logical notions of e.g. identity, negation, disjunction. However, it remains unclear how to relate the logical notions to similarity or kind;[h] Quine therefore tries to relate at least the latter two notions to each other.
Relation between similarity and kind
Assuming finitely many kinds only, the notion of similarity can be defined by that of kind: an object A is more similar to B than to C if A and B belong jointly to more kinds[i] than A and C do.[21][j]
Vice versa, it remains again unclear how to define kind by similarity. Defining e.g. the kind of red things as the set of all things that are more similar to a fixed "paradigmatical" red object than this object is to another fixed "foil" non-red object isn't satisfactory, since the degree of overall similarity, including e.g. shape and weight, will afford little evidence of degree of redness.[21] (A yellow paprika, for example, might be considered more similar to the red one than the orange.)
An alternative approach inspired by Carnap defines a natural kind to be a set whose members are more similar to each other than each non-member is to at least one member.[22][k] However, Goodman[23] argued that this definition would make the set of all red round things, red wooden things, and round wooden things meet the proposed definition of a natural kind,[l] while "surely it is not what anyone means by a kind".[m][24]
While neither of the notions of similarity and kind can be defined by the other, they at least vary together: if A is reassessed to be more similar to C than to B rather than the other way around, the assignment of A, B, C to kinds will be permuted correspondingly; and conversely.[24]
Basic importance of similarity and kind
In language, every general term owes its generality to some resemblance of the things referred to. Learning to use a word depends on a double resemblance, viz. between the present and past circumstances in which the word was used, and between the present and past phonetic utterances of the word.[25]
Every reasonable expectation depends on resemblance of circumstances, together with our tendency to expect similar causes to have similar effects.[19] This includes any scientific experiment, since it can be reproduced only under similar, but not under completely identical, circumstances. Heraclitus' famous saying "No man ever steps in the same river twice" already highlighted the distinction between similar and identical circumstances.
Genesis of similarity and kind
In a behavioral sense, humans and other animals have an innate standard of similarity. It is part of our animal birthright, and characteristically animal in its lack of intellectual status, e.g. its alienness to mathematics and logic.[29]
Induction itself is essentially animal expectation or habit formation. Ostensive learning[30] is a case of induction, and a curiously comfortable one, since each man's spacing of qualities and kind is enough like his neighbor's.[31] In contrast, the "brute irrationality of our sense of similarity" offers little reason to expect it to be somehow in tune with the unanimated nature, which we never made.[n] Why inductively obtained theories about it should be trusted is the perennial philosophical problem of induction. Quine, following Watanabe,[32] suggests Darwin's theory as an explanation: if people's innate spacing of qualities is a gene-linked trait, then the spacing that has made for the most successful inductions will have tended to predominate through natural selection.[33] However, this cannot account for the human ability to dynamically refine one's spacing of qualities in the course of getting acquainted with a new area.[o]
In his book Wittgenstein on Rules and Private Language, Saul Kripke proposed a related argument that leads to skepticism about meaning rather than skepticism about induction, as part of his personal interpretation (nicknamed "Kripkenstein" by some[34]) of the private language argument. He proposed a new form of addition, which he called quus, which is identical with "+" in all cases except those in which either of the numbers added is equal to or greater than 57, in which case the answer would be 5; i.e., x quus y equals x + y if both x and y are less than 57, and equals 5 otherwise.
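Stated as code (a toy sketch; the threshold of 57 is Kripke's own), quus is trivially definable and agrees with ordinary addition on every sum involving small numbers:

```python
def quus(x, y):
    """Kripke's 'quus': agrees with + when both arguments are below 57,
    and returns 5 otherwise."""
    return x + y if x < 57 and y < 57 else 5

print(quus(10, 20))   # 30, just like ordinary addition
print(quus(68, 57))   # 5, where quus and + come apart
```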
He then asks how, given certain obvious circumstances, anyone could know that previously when I thought I had meant "+", I had not actually meant quus. Kripke then argues for an interpretation of Wittgenstein as holding that the meanings of words are not individually contained mental entities.
|
https://en.wikipedia.org/wiki/New_riddle_of_induction
|
A hybrid system is a dynamical system that exhibits both continuous and discrete dynamic behavior – a system that can both flow (described by a differential equation) and jump (described by a state machine, automaton, or a difference equation).[1] Often, the term "hybrid dynamical system" is used instead of "hybrid system", to distinguish it from other usages of "hybrid system", such as the combination of neural nets and fuzzy logic, or of electrical and mechanical drivelines. A hybrid system has the benefit of encompassing a larger class of systems within its structure, allowing for more flexibility in modeling dynamic phenomena.
In general, the state of a hybrid system is defined by the values of the continuous variables and a discrete mode. The state changes either continuously, according to a flow condition, or discretely according to a control graph. Continuous flow is permitted as long as so-called invariants hold, while discrete transitions can occur as soon as given jump conditions are satisfied. Discrete transitions may be associated with events.
Hybrid systems have been used to model several cyber-physical systems, including physical systems with impact, logic-dynamic controllers, and even Internet congestion.
A canonical example of a hybrid system is the bouncing ball, a physical system with impact. Here, the ball (thought of as a point-mass) is dropped from an initial height and bounces off the ground, dissipating its energy with each bounce. The ball exhibits continuous dynamics between each bounce; however, as the ball impacts the ground, its velocity undergoes a discrete change modeled after an inelastic collision. A mathematical description of the bouncing ball follows. Let x₁ be the height of the ball and x₂ be the velocity of the ball. A hybrid system describing the ball is as follows:
When x ∈ C = {x₁ > 0}, flow is governed by ẋ₁ = x₂, ẋ₂ = −g,
where g is the acceleration due to gravity. These equations state that when the ball is above ground, it is being drawn to the ground by gravity.
When x ∈ D = {x₁ = 0}, jumps are governed by x₁⁺ = x₁, x₂⁺ = −γx₂,
where 0 < γ < 1 is a dissipation factor. This is saying that when the height of the ball is zero (it has impacted the ground), its velocity is reversed and decreased by a factor of γ. Effectively, this describes the nature of the inelastic collision.
The bouncing ball is an especially interesting hybrid system, as it exhibits Zeno behavior. Zeno behavior has a strict mathematical definition, but can be described informally as the system making an infinite number of jumps in a finite amount of time. In this example, each time the ball bounces it loses energy, making the subsequent jumps (impacts with the ground) closer and closer together in time.
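The flow-and-jump structure above can be simulated directly; the sketch below (a naive fixed-step Euler integration with assumed values g = 9.81 and γ = 0.8, which are not part of the original description) shows the bounce times bunching together, which is the Zeno behavior just described:

```python
def simulate_bouncing_ball(x1=1.0, x2=0.0, g=9.81, gamma=0.8,
                           dt=1e-4, t_end=5.0):
    """Fixed-step sketch of the hybrid bouncing-ball model:
    continuous flow while x1 > 0, discrete jump x2 <- -gamma*x2 at x1 = 0."""
    t, bounces = 0.0, []
    while t < t_end:
        # Flow: x1' = x2, x2' = -g
        x1 += x2 * dt
        x2 += -g * dt
        t += dt
        # Jump: the ball reaches the ground moving downward
        if x1 <= 0.0 and x2 < 0.0:
            x1 = 0.0
            x2 = -gamma * x2
            bounces.append(t)
    return bounces

bounces = simulate_bouncing_ball()
# Successive bounce times get closer together, hinting at Zeno behavior.
print([round(b, 3) for b in bounces[:6]])
print([round(b2 - b1, 3) for b1, b2 in zip(bounces, bounces[1:6])])
```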
It is noteworthy that the dynamical model is complete if and only if one adds the contact force between the ground and the ball. Indeed, without forces, one cannot properly define the bouncing ball and the model is, from a mechanical point of view, meaningless. The simplest contact model that represents the interactions between the ball and the ground is the complementarity relation between the force and the distance (the gap) between the ball and the ground. This is written as 0 ≤ λ ⊥ x₁ ≥ 0. Such a contact model does not incorporate magnetic forces, nor gluing effects. When the complementarity relations are included, one can continue to integrate the system after the impacts have accumulated and vanished: the equilibrium of the system is well-defined as the static equilibrium of the ball on the ground, under the action of gravity compensated by the contact force λ. One also notices from basic convex analysis that the complementarity relation can equivalently be rewritten as the inclusion into a normal cone, so that the bouncing ball dynamics is a differential inclusion into a normal cone to a convex set. See Chapters 1, 2 and 3 in Acary-Brogliato's book cited below (Springer LNACM 35, 2008). See also the other references on non-smooth mechanics.
There are approaches to automatically proving properties of hybrid systems (e.g., some of the tools mentioned below). Common techniques for proving safety of hybrid systems are computation of reachable sets, abstraction refinement, and barrier certificates.
Most verification tasks are undecidable,[2] making general verification algorithms impossible. Instead, tools are analyzed for their capabilities on benchmark problems. A possible theoretical characterization of this is the existence of algorithms that succeed at hybrid systems verification in all robust cases,[3] implying that many problems for hybrid systems, while undecidable, are at least quasi-decidable.[4]
Two basic hybrid system modeling approaches can be classified: an implicit and an explicit one. The explicit approach is often represented by a hybrid automaton, a hybrid program or a hybrid Petri net. The implicit approach is often represented by guarded equations that result in systems of differential algebraic equations (DAEs) where the active equations may change, for example by means of a hybrid bond graph.
As a unified simulation approach for hybrid system analysis, there is a method based on the DEVS formalism, in which integrators for differential equations are quantized into atomic DEVS models. These methods generate traces of system behaviors in a discrete-event manner, which differs from discrete-time systems. Details of this approach can be found in the references [Kofman2004] [CF2006] [Nutaro2010] and the software tool PowerDEVS.
|
https://en.wikipedia.org/wiki/Hybrid_system
|
Subsumption architecture is a reactive robotic architecture heavily associated with behavior-based robotics, which was very popular in the 1980s and 90s. The term was introduced by Rodney Brooks and colleagues in 1986.[1][2][3] Subsumption has been widely influential in autonomous robotics and elsewhere in real-time AI.
Subsumption architecture is a control architecture that was proposed in opposition to traditional symbolic AI. Instead of guiding behavior by symbolic mental representations of the world, subsumption architecture couples sensory information to action selection in an intimate and bottom-up fashion.[4]: 130
It does this by decomposing the complete behavior into sub-behaviors. These sub-behaviors are organized into a hierarchy of layers. Each layer implements a particular level of behavioral competence, and higher levels are able to subsume lower levels (that is, to integrate and combine them into a more comprehensive whole) in order to create viable behavior. For example, a robot's lowest layer could be "avoid an object". The second layer would be "wander around", which runs beneath the third layer "explore the world". Because a robot must have the ability to "avoid objects" in order to "wander around" effectively, the subsumption architecture creates a system in which the higher layers utilize the lower-level competencies. The layers, which all receive sensor information, work in parallel and generate outputs. These outputs can be commands to actuators, or signals that suppress or inhibit other layers.[5]: 8–12, 15–16
Subsumption architecture attacks the problem of intelligence from a significantly different perspective than traditional AI. Disappointed with the performance of Shakey the robot and similar projects inspired by conscious-mind representation, Rodney Brooks started creating robots based on a different notion of intelligence, resembling unconscious mind processes. Instead of modelling aspects of human intelligence via symbol manipulation, this approach is aimed at real-time interaction and viable responses to a dynamic lab or office environment.[4]: 130–131
The goal was informed by four key ideas: situatedness (robots are embedded in the real world and do not operate on abstracted descriptions of it), embodiment (robots have physical bodies and experience the world directly through them), intelligence (intelligence is shaped by the dynamics of interaction with the world as much as by internal computation), and emergence (intelligent behavior arises from the interplay of simple behaviors).
The ideas outlined above are still a part of an ongoing debate regarding the nature of intelligence and how the progress of robotics and AI should be fostered.
Each layer is made up by a set of processors that are augmentedfinite-state machines(AFSM), the augmentation being addedinstance variablesto hold programmable data-structures. A layer is amoduleand is responsible for a single behavioral goal, such as "wander around." There is no central control within or between these behavioral modules. All AFSMs continuously and asynchronously receive input from the relevant sensors and send output to actuators (or other AFSMs). Input signals that are not read by the time a new one is delivered end up getting discarded. These discarded signals are common, and is useful for performance because it allows the system to work in real time by dealing with the most immediate information.
Because there is no central control, AFSMs communicate with each other via inhibition and suppression signals. Inhibition signals block signals from reaching actuators or AFSMs, and suppression signals block or replace the inputs to layers or their AFSMs. This system of AFSM communication is how higher layers subsume lower ones, as well as how the architecture handles priority andaction selectionarbitration in general.[5]: 12–16
The development of layers follows an intuitive progression. First, the lowest layer is created, tested, and debugged. Once that lowest level is running, one creates and attaches the second layer with the proper suppression and inhibition connections to the first layer. After testing and debugging the combined behavior, this process can be repeated for (theoretically) any number of behavioral modules.[5]: 16–20
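To make the layering and suppression mechanism concrete, the following is a minimal Python sketch of a two-layer, subsumption-style controller. It uses a much-simplified fixed-priority arbitration loop rather than Brooks' asynchronous AFSM network, and the names (Sensors, avoid_obstacle, wander), fields and thresholds are hypothetical.

```python
# Minimal sketch of a two-layer subsumption-style controller (illustrative only).
# Layer names, sensor fields and the arbitration rule are hypothetical simplifications.
from dataclasses import dataclass

@dataclass
class Sensors:
    obstacle_distance: float  # metres to the nearest obstacle
    heading: float            # current heading in degrees

def avoid_obstacle(s: Sensors):
    """Lowest layer: emit a turn command only when an obstacle is close."""
    if s.obstacle_distance < 0.5:
        return {"turn": 90.0, "speed": 0.0}
    return None  # abstain, letting a lower-priority layer's output through

def wander(s: Sensors):
    """Higher layer: drift forward with a small heading change."""
    return {"turn": 5.0, "speed": 0.3}

# Layers ordered from highest priority (lowest level) to lowest priority.
LAYERS = [avoid_obstacle, wander]

def arbitrate(s: Sensors):
    """A layer's output suppresses the outputs of all layers below it in the list."""
    for layer in LAYERS:
        command = layer(s)
        if command is not None:
            return command
    return {"turn": 0.0, "speed": 0.0}

if __name__ == "__main__":
    print(arbitrate(Sensors(obstacle_distance=0.3, heading=0.0)))  # avoidance wins
    print(arbitrate(Sensors(obstacle_distance=5.0, heading=0.0)))  # wandering wins
```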
The following is a small list of robots that utilize the subsumption architecture.
The above are described in detail along with other robots inElephants Don't Play Chess.[6]
The main advantages of the architecture are:
The main disadvantages of the architecture are:
When subsumption architecture was developed, its novel setup and approach allowed it to succeed in many important domains where traditional AI had failed, namelyreal-timeinteraction with a dynamic environment. The lack of large memory storage, symbolic representations, and central control, however, places it at a disadvantage for learning complex actions, in-depthmapping, andunderstanding language.
Key papers include:
|
https://en.wikipedia.org/wiki/Subsumption_architecture
|
Cascadingis a particular case ofensemble learningbased on the concatenation of severalclassifiers, using all information collected from the output from a given classifier as additional information for the next classifier in the cascade. Unlike voting or stacking ensembles, which are multiexpert systems, cascading is a multistage one.
Cascading classifiers are trained with several hundred "positive" sample views of a particular object and arbitrary "negative" images of the same size. After the classifier is trained, it can be applied to a region of an image to detect the object in question. To search for the object in the entire frame, the search window is moved across the image and every location is checked with the classifier. This process is most commonly used inimage processingfor object detection and tracking, primarilyfacial detectionand recognition.
The first cascading classifier was the face detector ofViola and Jones (2001). The requirement for this classifier was to be fast in order to be implemented on low-powerCPUs, such as cameras and phones.
A classifier built this way will not accept faces that are upside down (the eyebrows are not in the correct position) or faces seen from the side (the nose is no longer in the center, and shadows on the side of the nose might be missing). Separate cascade classifiers have to be trained for every rotation that is not in the image plane (side of the face) and will have to be retrained, or run on rotated features, for every rotation that is in the image plane (face upside down or tilted to the side). Scaling is not a problem, since the features can be scaled (the center pixel, left pixels and right pixels have a dimension only relative to the rectangle examined). In recent cascades, comparing the pixel values of one part of a rectangle with another has been replaced byHaar wavelets.
To have good overall performance, the following criteria must be met:
The training procedure for one stage is therefore to have many weak learners (simple pixel difference operators), train them as a group (raise their weight if they give correct result), but be mindful of having only a few active weak learners so the computation time remains low.
The first detector of Viola & Jones had 38 stages, with 1 feature in the first stage, then 10, 25, 25 and 50 features in the next four stages, for a total of about 6000 features. The early stages reject unpromising rectangles rapidly, avoiding the computational cost of the later stages, so that computation time is concentrated on the parts of the image that have a high probability of containing the object.
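The staged, early-rejection evaluation described above can be sketched as follows; the stage functions and thresholds here are toy placeholders rather than actual Viola–Jones features.

```python
# Illustrative sketch of cascade evaluation with early rejection (not the exact
# Viola-Jones code); the stage score functions and thresholds are hypothetical.

def evaluate_cascade(window, stages):
    """Each stage is a (score_fn, threshold) pair. A window is rejected as soon
    as any stage's score falls below its threshold, so cheap early stages
    discard most negatives before the expensive later stages ever run."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False  # rejected: no further stages are evaluated
    return True  # survived every stage: report a detection

# Toy stages: a "window" here is just a number standing in for a sub-image.
stages = [
    (lambda w: w, 0.2),        # very cheap first stage
    (lambda w: w * 0.9, 0.5),  # later, more selective stage
]
print(evaluate_cascade(0.1, stages))  # rejected by the first stage
print(evaluate_cascade(0.8, stages))  # passes both stages
```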
Cascades are usually trained with cost-aware AdaBoost. The sensitivity threshold can be adjusted so that each stage accepts close to 100% of true positives while letting through some false positives. The procedure can then be repeated for stage 2, and so on, until the desired accuracy/computation time is reached.
After the initial algorithm, it was understood that training the cascade as a whole can be optimized to achieve a desired true detection rate with minimal complexity. Examples of such algorithms are RCBoost, ECBoost and RCECBoost. In their most basic versions, they can be understood as choosing, at each step, between adding a stage or adding a weak learner to a previous stage, whichever is less costly, until the desired accuracy has been reached. No stage of the classifier may have a detection rate (sensitivity) below the desired rate, so this is aconstrained optimizationproblem. To be precise, the total sensitivity will be the product of the stage sensitivities.
Cascade classifiers are available inOpenCV, with pre-trained cascades for frontal faces and upper body. Training a new cascade in OpenCV is also possible with either haar_training or train_cascades methods. This can be used for rapid object detection of more specific targets, including non-human objects withHaar-like features. The process requires two sets of samples: negative and positive, where the negative samples correspond to arbitrary non-object images. The time constraint in training a cascade classifier can be circumvented usingcloud-computingmethods.
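As a usage illustration, the following assumes the opencv-python package and one of its bundled pre-trained frontal-face cascades; the image path is a placeholder, and the detectMultiScale parameters are typical starting values rather than tuned ones.

```python
# Sketch of applying one of OpenCV's pre-trained frontal-face cascades.
# "image.jpg" is a placeholder path; tune the detectMultiScale parameters as needed.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("image.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# The search window is slid across the image at multiple scales.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", img)
```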
The term is also used in statistics to describe a model that is staged. For example, a classifier (for examplek-means), takes a vector of features (decision variables) and outputs for each possible classification result the probability that the vector belongs to the class. This is usually used to take a decision (classify into the class with highest probability), but cascading classifiers use this output as the input to another model (another stage). This is particularly useful for models that have highly combinatorial or counting rules (for example, class1 if exactly two features are negative, class2 otherwise), which cannot be fitted without looking at all the interaction terms. Having cascading classifiers enables the successive stage to gradually approximate the combinatorial nature of the classification, or to add interaction terms in classification algorithms that cannot express them in one stage.
As a simple example, if we try to match the rule (class1 if exactly 2 features out of 3 are negative, class2 otherwise), a decision tree would be:
The tree has all the combinations of possible leaves to express the full ruleset, whereas (feature1 positive, feature2 negative) and (feature1 negative, feature2 positive) should actually join to the same rule. This leads to a tree with too few samples on the leaves. A two-stage algorithm can effectively merge these two cases by giving a medium-high probability to class1 if feature1 or (exclusive) feature2 is negative. The second classifier can pick up this higher probability and make a decision on the sign of feature3.
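The staging described here can be sketched as follows, assuming scikit-learn and small decision trees as the stage models. It is a toy illustration of the wiring (the stage-1 probability is fed to stage 2 alongside feature 3), not a full reconstruction of the rule, which this particular first stage cannot resolve completely.

```python
# Two-stage cascade in the statistical sense: stage 1's predicted probability
# becomes an extra input feature for stage 2. Data and models are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Target rule: class 1 if exactly two of the three features are negative.
y = ((X < 0).sum(axis=1) == 2).astype(int)

# Stage 1 sees only features 1 and 2 and outputs a class-1 probability.
stage1 = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X[:, :2], y)
p1 = stage1.predict_proba(X[:, :2])[:, [1]]

# Stage 2 combines feature 3 with the stage-1 probability.
X_stage2 = np.hstack([X[:, [2]], p1])
stage2 = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_stage2, y)
print("cascade training accuracy:", stage2.score(X_stage2, y))
```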
In abias-variancedecomposition, cascaded models are usually seen as lowering bias while raising variance.
|
https://en.wikipedia.org/wiki/Cascading_classifiers
|
CoBoostis asemi-supervisedtrainingalgorithmproposed by Collins and Singer in 1999.[1]The original application for the algorithm was the task ofnamed-entity recognitionusing very weak learners, but it can be used for performing semi-supervised learning in cases wheredata featuresmay be redundant.[1]
It may be seen as a combination ofco-trainingandboosting. Each example is available in two views (subsections of the feature set), and boosting is applied iteratively in alternation with each view using predicted labels produced in the alternate view on the previous iteration. CoBoosting is not a valid boosting algorithm in thePAC learningsense.
CoBoosting was an attempt by Collins and Singer to improve on previous attempts to leverage redundancy in features for training classifiers in a semi-supervised fashion. CoTraining, a seminal work by Blum and Mitchell, was shown to be a powerful framework for learning classifiers given a small number of seed examples by iteratively inducing rules in a decision list. The advantage of CoBoosting over CoTraining is that it generalizes the CoTraining pattern so that it can be used with any classifier. CoBoosting accomplishes this by borrowing concepts fromAdaBoost.
In both CoTraining and CoBoosting the training and testing example sets must satisfy two properties. The first is that the feature space of the examples can be separated into two feature spaces (or views) such that each view is sufficiently expressive for classification.
Formally, there exist two functionsf1(x1){\displaystyle f_{1}(x_{1})}andf2(x2){\displaystyle f_{2}(x_{2})}such that for all examplesx=(x1,x2){\displaystyle x=(x_{1},x_{2})},f1(x1)=f2(x2)=f(x){\displaystyle f_{1}(x_{1})=f_{2}(x_{2})=f(x)}. While ideal, this constraint is in fact too strong due to noise and other factors, and both algorithms instead seek to maximize the agreement between the two functions. The second property is that the two views must not be highly correlated.
Input:{(x1,i,x2,i)}i=1n{\displaystyle \{(x_{1,i},x_{2,i})\}_{i=1}^{n}},{yi}i=1m{\displaystyle \{y_{i}\}_{i=1}^{m}}
Initialize:∀i,j:gj0(xi)=0{\displaystyle \forall i,j:g_{j}^{0}({\boldsymbol {x_{i}}})=0}.
Fort=1,...,T{\displaystyle t=1,...,T}and forj=1,2{\displaystyle j=1,2}:
Set pseudo-labels:
yi^={yi,1≤i≤msign(g3−jt−1(x3−j,i)),m<i≤n{\displaystyle {\hat {y_{i}}}={\begin{cases}y_{i},&1\leq i\leq m\\\operatorname {sign} (g_{3-j}^{t-1}({\boldsymbol {x_{3-j,i}}})),&m<i\leq n\end{cases}}}
Set virtual distribution:Dtj(i)=1Ztje−yi^gjt−1(xj,i){\displaystyle D_{t}^{j}(i)={\frac {1}{Z_{t}^{j}}}e^{-{\hat {y_{i}}}g_{j}^{t-1}({\boldsymbol {x_{j,i}}})}}
whereZtj=∑i=1ne−yi^gjt−1(xj,i){\displaystyle Z_{t}^{j}=\sum _{i=1}^{n}e^{-{\hat {y_{i}}}g_{j}^{t-1}({\boldsymbol {x_{j,i}}})}}
Find the weak hypothesishtj{\displaystyle h_{t}^{j}}that minimizes expanded training error.
Choose value forαt{\displaystyle \alpha _{t}}that minimizes expanded training error.
Update the value for current strong non-thresholded classifier:
∀i:gjt(xj,i)=gjt−1(xj,i)+αthtj(xj,i){\displaystyle \forall i:g_{j}^{t}({\boldsymbol {x_{j,i}}})=g_{j}^{t-1}({\boldsymbol {x_{j,i}}})+\alpha _{t}h_{t}^{j}({\boldsymbol {x_{j,i}}})}
The final strong classifier output is
f(x)=sign(∑j=12gjT(xj)){\displaystyle f({\boldsymbol {x}})=sign\left(\sum _{j=1}^{2}g_{j}^{T}({\boldsymbol {x_{j}}})\right)}
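A compact Python sketch of the loop above is given below. It follows the pseudocode's structure (pseudo-labels, virtual distribution, alpha update, alternating views) on synthetic two-view data, using single-feature threshold stumps as weak learners; it is an illustration under these assumptions, not Collins and Singer's implementation.

```python
# Illustrative CoBoost-style loop on synthetic two-view data (not the original code).
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 20                                  # n examples in total, first m labeled
true_y = np.sign(rng.normal(size=n))
true_y[true_y == 0] = 1.0
X1 = true_y[:, None] + 0.5 * rng.normal(size=(n, 3))   # view 1
X2 = true_y[:, None] + 0.5 * rng.normal(size=(n, 3))   # view 2
y_labeled = true_y[:m]

def best_stump_predictions(X, y_hat, D):
    """Predictions of the single-feature threshold stump with lowest weighted error."""
    best_err, best_pred = np.inf, None
    for f in range(X.shape[1]):
        for thr in np.quantile(X[:, f], [0.25, 0.5, 0.75]):
            for s in (1.0, -1.0):
                pred = np.where(X[:, f] > thr, s, -s)
                err = np.sum(D * (pred != y_hat))
                if err < best_err:
                    best_err, best_pred = err, pred
    return best_pred

g = [np.zeros(n), np.zeros(n)]                  # strong non-thresholded classifiers g_1, g_2
for t in range(10):
    for j in (0, 1):
        Xj = X1 if j == 0 else X2
        # Pseudo-labels: true labels for the first m examples, the other view's sign otherwise.
        other = np.sign(g[1 - j][m:])
        other[other == 0] = 1.0                 # break ties arbitrarily on the first pass
        y_hat = np.concatenate([y_labeled, other])
        D = np.exp(-y_hat * g[j])
        D /= D.sum()                            # virtual distribution over all n examples
        pred = best_stump_predictions(Xj, y_hat, D)
        w_plus = D[pred == y_hat].sum()
        w_minus = D[pred != y_hat].sum()
        alpha = 0.5 * np.log((w_plus + 1e-10) / (w_minus + 1e-10))
        g[j] = g[j] + alpha * pred              # update the strong classifier for view j

f_final = np.sign(g[0] + g[1])                  # final classifier: sign of the summed views
print("agreement with true labels:", float(np.mean(f_final == true_y)))
```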
CoBoosting builds on theAdaBoostalgorithm, which gives CoBoosting its generalization ability, since AdaBoost can be used in conjunction with many other learning algorithms. This presentation assumes a two-class classification task, although it can be adapted to multi-class classification. In the AdaBoost framework, weak classifiers are generated in series, together with a distribution over the examples in the training set. Each weak classifier is given a weight, and the final strong classifier is defined as the sign of the sum of the weak classifiers weighted by their assigned weights (see theAdaBoostpage for notation). In the AdaBoost framework, Schapire and Singer have shown that the training error is bounded by the following equation:
1m∑i=1me(−yi(∑t=1Tαtht(xi)))=∏tZt{\displaystyle {\frac {1}{m}}\sum _{i=1}^{m}e^{\left(-y_{i}\left(\sum _{t=1}^{T}\alpha _{t}h_{t}({\boldsymbol {x_{i}}})\right)\right)}=\prod _{t}Z_{t}}
WhereZt{\displaystyle Z_{t}}is the normalizing factor for the distributionDt+1{\displaystyle D_{t+1}}. Solving forZt{\displaystyle Z_{t}}in the equation forDt(i){\displaystyle D_{t}(i)}we get:
Zt=∑i:xt∉xiDt(i)+∑i:xt∈xiDt(i)e−yiαiht(xi){\displaystyle Z_{t}=\sum _{i:x_{t}\notin x_{i}}D_{t}(i)+\sum _{i:x_{t}\in x_{i}}D_{t}(i)e^{-y_{i}\alpha _{i}h_{t}({\boldsymbol {x_{i}}})}}
Wherext{\displaystyle x_{t}}is the feature selected in the current weak hypothesis. Three quantities are defined, giving the total weight under the distribution of the examples for which the current hypothesis abstains, selects the correct label, or selects the incorrect label. Note that the classifier may abstain from selecting a label for an example, in which case the label provided is 0. The two labels are taken to be either -1 or 1.
W0=∑i:ht(xi)=0Dt(i){\displaystyle W_{0}=\sum _{i:h_{t}(x_{i})=0}D_{t}(i)}
W+=∑i:ht(xi)=yiDt(i){\displaystyle W_{+}=\sum _{i:h_{t}(x_{i})=y_{i}}D_{t}(i)}
W−=∑i:ht(xi)=−yiDt(i){\displaystyle W_{-}=\sum _{i:h_{t}(x_{i})=-y_{i}}D_{t}(i)}
Schapire and Singer have shown that the valueZt{\displaystyle Z_{t}}can be minimized (and thus the training error) by selectingαt{\displaystyle \alpha _{t}}to be as follows:
αt=12ln(W+W−){\displaystyle \alpha _{t}={\frac {1}{2}}\ln \left({\frac {W_{+}}{W_{-}}}\right)}
This provides a confidence value for the current hypothesized classifier, based on the number of correctly versus incorrectly classified examples, weighted by the distribution over examples. The equation can be smoothed to compensate for cases in whichW−{\displaystyle W_{-}}is too small. DerivingZt{\displaystyle Z_{t}}from this equation we get:
Zt=W0+2W+W−{\displaystyle Z_{t}=W_{0}+2{\sqrt {W_{+}W_{-}}}}
The training error thus is minimized by selecting the weak hypothesis at every iteration that minimizes the previous equation.
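As a small numeric illustration of the expressions above, the following computes the weight and normalizer from hypothetical values of W_0, W_+ and W_-, with the usual epsilon-smoothing for a small W_-.

```python
# Numerical illustration of the alpha_t and Z_t expressions above, with
# epsilon-smoothing for small W_minus. The W values here are made up.
import math

W0, W_plus, W_minus = 0.10, 0.75, 0.15   # hypothetical distribution masses
eps = 1e-6                                # smoothing term

alpha_t = 0.5 * math.log((W_plus + eps) / (W_minus + eps))
Z_t = W0 + 2.0 * math.sqrt(W_plus * W_minus)

print(f"alpha_t = {alpha_t:.3f}, Z_t = {Z_t:.3f}")  # smaller Z_t means a lower error bound
```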
CoBoosting extends this framework to the case where one has a labeled training set (examples1...m{\displaystyle 1...m}) and an unlabeled training set (examplesm+1...n{\displaystyle m+1...n}), both of which satisfy the conditions of redundancy in features in the formxi=(x1,i,x2,i){\displaystyle x_{i}=(x_{1,i},x_{2,i})}. The algorithm trains two classifiers, in the same fashion asAdaBoost, that agree on the correct labels of the labeled training set and maximize the agreement between the two classifiers on the unlabeled training set. The final classifier is the sign of the sum of the two strong classifiers. The bounded training error of CoBoost is extended as follows, whereZCO{\displaystyle Z_{CO}}is the extension ofZt{\displaystyle Z_{t}}:
ZCO=∑i=1me−yig1(x1,i)+∑i=1me−yig2(x2,i)+∑i=m+1ne−f2(x2,i)g1(x1,i)+∑i=m+1ne−f1(x1,i)g2(x2,i){\displaystyle Z_{CO}=\sum _{i=1}^{m}e^{-y_{i}g_{1}({\boldsymbol {x_{1,i}}})}+\sum _{i=1}^{m}e^{-y_{i}g_{2}({\boldsymbol {x_{2,i}}})}+\sum _{i=m+1}^{n}e^{-f_{2}({\boldsymbol {x_{2,i}}})g_{1}({\boldsymbol {x_{1,i}}})}+\sum _{i=m+1}^{n}e^{-f_{1}({\boldsymbol {x_{1,i}}})g_{2}({\boldsymbol {x_{2,i}}})}}
Wheregj{\displaystyle g_{j}}is the sum of the hypotheses weighted by their confidence values for thejth{\displaystyle j^{th}}view (j = 1 or 2), andfj{\displaystyle f_{j}}is the sign ofgj{\displaystyle g_{j}}. At each iteration of CoBoost both classifiers are updated iteratively. Ifgjt−1{\displaystyle g_{j}^{t-1}}is the strong classifier output for thejth{\displaystyle j^{th}}view up to thet−1{\displaystyle t-1}iteration, we can set the pseudo-labels for thejth update to be:
yi^={yi,1≤i≤msign(g3−jt−1(x3−j,i)),m<i≤n{\displaystyle {\hat {y_{i}}}={\begin{cases}y_{i},&1\leq i\leq m\\\operatorname {sign} (g_{3-j}^{t-1}({\boldsymbol {x_{3-j,i}}})),&m<i\leq n\end{cases}}}
In which3−j{\displaystyle 3-j}selects the other view to the one currently being updated.ZCO{\displaystyle Z_{CO}}is split into two such thatZCO=ZCO1+ZCO2{\displaystyle Z_{CO}=Z_{CO}^{1}+Z_{CO}^{2}}. Where
ZCOj=∑i=1ne−yi^(gjt−1(xi)+αtjgtj(xj,i)){\displaystyle Z_{CO}^{j}=\sum _{i=1}^{n}e^{-{\hat {y_{i}}}(g_{j}^{t-1}({\boldsymbol {x_{i}}})+\alpha _{t}^{j}g_{t}^{j}({\boldsymbol {x_{j,i}}}))}}
The distribution over examples for each viewj{\displaystyle j}at iterationt{\displaystyle t}is defined as follows:
Dtj(i)=1Ztje−yi^gjt−1(xj,i){\displaystyle D_{t}^{j}(i)={\frac {1}{Z_{t}^{j}}}e^{-{\hat {y_{i}}}g_{j}^{t-1}({\boldsymbol {x_{j,i}}})}}
At which pointZCOj{\displaystyle Z_{CO}^{j}}can be rewritten as
ZCOj=∑i=1nDtje−yi^αtjgtj(xj,i){\displaystyle Z_{CO}^{j}=\sum _{i=1}^{n}D_{t}^{j}e^{-{\hat {y_{i}}}\alpha _{t}^{j}g_{t}^{j}({\boldsymbol {x_{j,i}}})}}
Which is identical to the equation in AdaBoost. Thus the same process can be used to update the values ofαtj{\displaystyle \alpha _{t}^{j}}as in AdaBoost, usingyi^{\displaystyle {\hat {y_{i}}}}andDtj{\displaystyle D_{t}^{j}}. By alternating the minimization ofZCO1{\displaystyle Z_{CO}^{1}}andZCO2{\displaystyle Z_{CO}^{2}}in this fashion,ZCO{\displaystyle Z_{CO}}is minimized greedily.
|
https://en.wikipedia.org/wiki/CoBoosting
|
Inmachine learning(ML), amargin classifieris a type ofclassificationmodel which is able to give an associated distance from thedecision boundaryfor each data sample. For instance, if alinear classifieris used, the distance (typicallyEuclidean, though others may be used) of a sample from the separatinghyperplaneis the margin of that sample.
The notion ofmarginsis important in several ML classification algorithms, as it can be used to bound thegeneralization errorof these classifiers. These bounds are frequently shown using theVC dimension. The generalizationerror boundinboostingalgorithms andsupport vector machinesis particularly prominent.
The margin for an iterativeboostingalgorithm given adatasetwith two classes can be defined as follows: the classifier is given a sample pair(x,y){\displaystyle (x,y)}, wherex∈X{\displaystyle x\in X}is a domain space andy∈Y={−1,+1}{\displaystyle y\in Y=\{-1,+1\}}is the sample's label. The algorithm then selects a classifierhj∈C{\displaystyle h_{j}\in C}at each iterationj{\displaystyle j}whereC{\displaystyle C}is a space of possible classifiers that predict real values. This hypothesis is then weighted byαj∈R{\displaystyle \alpha _{j}\in R}as selected by the boosting algorithm. At iterationt{\displaystyle t}, the margin of a samplex{\displaystyle x}can thus be defined asy∑j=1tαjhj(x){\displaystyle y\sum _{j=1}^{t}\alpha _{j}h_{j}(x)}(often normalized by∑j=1t|αj|{\displaystyle \sum _{j=1}^{t}|\alpha _{j}|}).
By this definition, the margin is positive if the sample is labeled correctly, or negative if the sample is labeled incorrectly.
This definition may be modified and is not the only way to define the margin for boosting algorithms. However, there are reasons why this definition may be appealing.[1]
Many classifiers can give an associated margin for each sample. However, only some classifiers utilize information of the margin while learning from a dataset.
Many boosting algorithms rely on the notion of a margin to assign weight to samples. If a convex loss is utilized (as inAdaBoostorLogitBoost, for instance) then a sample with a higher margin will receive less (or equal) weight than a sample with a lower margin. This leads the boosting algorithm to focus weight on low-margin samples. In non-convex algorithms (e.g.,BrownBoost), the margin still dictates the weighting of a sample, though the weighting is non-monotonewith respect to the margin.
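The following sketch illustrates how, under the exponential loss used by AdaBoost, a sample's (unnormalized) weight decreases with its margin; the margin values are made up.

```python
# Under AdaBoost's exponential loss, a sample's unnormalized weight is exp(-margin),
# so low-margin (or misclassified, negative-margin) samples receive more weight.
# The margins below are made-up values of y_i * f(x_i) for five samples.
import numpy as np

margins = np.array([-0.8, -0.1, 0.0, 0.4, 1.5])
weights = np.exp(-margins)
weights /= weights.sum()                  # normalized distribution over the samples
for mgn, w in zip(margins, weights):
    print(f"margin {mgn:+.1f} -> weight {w:.3f}")
```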
One theoretical motivation behind margin classifiers is that theirgeneralization errormay be bound by the algorithm parameters and a margin term. An example of such a bound is for the AdaBoost algorithm.[1]LetS{\displaystyle S}be a set ofm{\displaystyle m}data points, sampled independently at random from a distributionD{\displaystyle D}. Assume the VC-dimension of the underlying base classifier isd{\displaystyle d}andm≥d≥1{\displaystyle m\geq d\geq 1}. Then, with probability1−δ{\displaystyle 1-\delta }, we have the bound:[citation needed]
for allθ>0{\displaystyle \theta >0}.
|
https://en.wikipedia.org/wiki/Margin_classifier
|
The followingoutlineis provided as an overview of and topical guide to corporate finance:
Corporate financeis the area offinancethat deals with the sources of funding, and thecapital structureofcorporations, the actions that managers take to increase thevalueof the firm to theshareholders, and the tools andanalysisused to allocate financial resources.
Forfinancein general, seeOutline of finance.
|
https://en.wikipedia.org/wiki/Outline_of_corporate_finance
|
Financial economicsis the branch ofeconomicscharacterized by a "concentration on monetary activities", in which "money of one type or another is likely to appear onboth sidesof a trade".[1]Its concern is thus the interrelation of financial variables, such asshare prices,interest ratesandexchange rates, as opposed to those concerning thereal economy.
It has two main areas of focus:[2]asset pricingandcorporate finance; the first being the perspective of providers ofcapital, i.e. investors, and the second of users of capital.
It thus provides the theoretical underpinning for much offinance.
The subject is concerned with "the allocation and deployment of economic resources, both spatially and across time, in an uncertain environment".[3][4]It therefore centers on decision making under uncertainty in the context of the financial markets, and the resultanteconomicandfinancial modelsand principles, and is concerned with deriving testable or policy implications from acceptable assumptions.
It thus also includes a formal study of thefinancial marketsthemselves, especiallymarket microstructureandmarket regulation.
It is built on the foundations ofmicroeconomicsanddecision theory.
Financial econometricsis the branch of financial economics that useseconometrictechniques to parameterise the relationships identified.Mathematical financeis related in that it will derive and extend the mathematical or numerical models suggested by financial economics.
Whereas financial economics has a primarily microeconomic focus,monetary economicsis primarilymacroeconomicin nature.
Four equivalent formulations,[6]where:
Financial economics studies howrational investorswould applydecision theorytoinvestment management. The subject is thus built on the foundations ofmicroeconomicsand derives several key results for the application ofdecision makingunder uncertainty to thefinancial markets. The underlying economic logic yields thefundamental theorem of asset pricing, which gives the conditions forarbitrage-free asset pricing.[6][5]The various "fundamental" valuation formulae result directly.
Underlying all of financial economics are the concepts ofpresent valueandexpectation.[6]
Calculating their present value,Xsj/r{\displaystyle X_{sj}/r}in the first formula, allows the decision maker to aggregate thecashflows(or other returns) to be produced by the asset in the future to a single value at the date in question, and to thus more readily compare two opportunities; this concept is then the starting point for financial decision making.[note 1](Note that here, "r{\displaystyle r}" represents a generic (or arbitrary)discount rateapplied to the cash flows, whereas in the valuation formulae, therisk-free rateis applied once these have been "adjusted" for their riskiness; see below.)
An immediate extension is to combine probabilities with present value, leading to theexpected value criterionwhich sets asset value as a function of the sizes of the expected payouts and the probabilities of their occurrence,Xs{\displaystyle X_{s}}andps{\displaystyle p_{s}}respectively.[note 2]
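As a toy illustration of the present value and expected value criteria, the following discounts a one-period expected payout at a generic rate; the states, probabilities, cash flows and rate are made-up numbers, and a single-period discount is used rather than the perpetuity form above.

```python
# Toy illustration of the present value and expected value criteria described above.
# Cash flows, probabilities and the discount rate are made-up numbers.
states = {"boom": (0.3, 140.0), "normal": (0.5, 110.0), "bust": (0.2, 60.0)}  # p_s, X_s
r = 0.05                                            # generic one-period discount rate

expected_payout = sum(p * x for p, x in states.values())
present_value = expected_payout / (1 + r)           # one-period discounting

print(f"expected payout = {expected_payout:.2f}")
print(f"present value   = {present_value:.2f}")
```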
This decision method, however, fails to considerrisk aversion. In other words, since individuals receive greaterutilityfrom an extra dollar when they are poor and less utility when comparatively rich, the approach is therefore to "adjust" the weight assigned to the various outcomes, i.e. "states", correspondingly:Ys{\displaystyle Y_{s}}. Seeindifference price. (Some investors may in fact berisk seekingas opposed torisk averse, but the same logic would apply.)
Choice under uncertainty here may then be defined as the maximization ofexpected utility. More formally, the resultingexpected utility hypothesisstates that, if certain axioms are satisfied, thesubjectivevalue associated with a gamble by an individual isthat individual'sstatistical expectationof the valuations of the outcomes of that gamble.
The impetus for these ideas arises from various inconsistencies observed under the expected value framework, such as theSt. Petersburg paradoxand theEllsberg paradox.[note 3]
The concepts ofarbitrage-free, "rational", pricing and equilibrium are then coupled[10]with the above to derive various of the "classical"[11](or"neo-classical"[12]) financial economics models.
Rational pricingis the assumption that asset prices (and hence asset pricing models) will reflect thearbitrage-free priceof the asset, as any deviation from this price will bearbitraged away: the"law of one price". This assumption is useful in pricing fixed income securities, particularly bonds, and is fundamental to the pricing of derivative instruments.
Economic equilibriumis a state in which economic forces such as supply and demand are balanced, and in the absence of external influences these equilibrium values of economic variables will not change.General equilibriumdeals with the behavior of supply, demand, and prices in a whole economy with several or many interacting markets, by seeking to prove that a set of prices exists that will result in an overall equilibrium. (This is in contrast to partial equilibrium, which only analyzes single markets.)
The two concepts are linked as follows: where market prices arecompleteand do not allow profitable arbitrage, i.e. they comprise an arbitrage-free market, then these prices are also said to constitute an "arbitrage equilibrium". Intuitively, this may be seen by considering that where an arbitrage opportunity does exist, then prices can be expected to change, and they are therefore not in equilibrium.[13]An arbitrage equilibrium is thus a precondition for a general economic equilibrium.
"Complete" here means that there is a price for every asset in every possible state of the world,s{\displaystyle s}, and that the complete set of possible bets on future states-of-the-world can therefore be constructed with existing assets (assumingno friction): essentiallysolving simultaneouslyforn(risk-neutral) probabilities,qs{\displaystyle q_{s}}, givennprices. For a simplified example seeRational pricing § Risk neutral valuation, where the economy has only two possible states – up and down – and wherequp{\displaystyle q_{up}}andqdown{\displaystyle q_{down}}(=1−qup{\displaystyle 1-q_{up}}) are the two corresponding probabilities, and in turn, the derived distribution, or"measure".
The formal derivation will proceed by arbitrage arguments.[6][13][10]The analysis here is often undertaken to assume arepresentative agent,[14]essentially treating all market participants, "agents", as identical (or, at least, assuming that theyact in such a way thatthe sum of their choices is equivalent to the decision of one individual) with the effect thatthe problems are thenmathematically tractable.
With this measure in place, the expected,i.e. required, return of any security (or portfolio) will then equal the risk-free return, plus an "adjustment for risk",[6]i.e. a security-specificrisk premium, compensating for the extent to which its cashflows are unpredictable. All pricing models are then essentially variants of this, given specific assumptions or conditions.[6][5][15]This approach is consistent withthe above, but with the expectation based on "the market" (i.e. arbitrage-free, and, per the theorem, therefore in equilibrium) as opposed to individual preferences.
Continuing the example, in pricing aderivative instrument, its forecasted cashflows in the abovementioned up- and down-statesXup{\displaystyle X_{up}}andXdown{\displaystyle X_{down}}, are multiplied through byqup{\displaystyle q_{up}}andqdown{\displaystyle q_{down}}, and are thendiscountedat the risk-free interest rate; per the second equation above. In pricing a "fundamental", underlying, instrument (in equilibrium), on the other hand, a risk-appropriate premium over risk-free is required in the discounting, essentially employing the first equation withY{\displaystyle Y}andr{\displaystyle r}combined. This premium may be derived by theCAPM(or extensions) as will be seen under§ Uncertainty.
The difference is explained as follows: By construction, the value of the derivative will (must) grow at the risk free rate, and, by arbitrage arguments, its value must then be discounted correspondingly; in the case of an option, this is achieved by "manufacturing" the instrument as a combination of theunderlyingand a risk free "bond"; seeRational pricing § Delta hedging(and§ Uncertaintybelow). Where the underlying is itself being priced, such "manufacturing" is of course not possible – the instrument being "fundamental", i.e. as opposed to "derivative" – and a premium is then required for risk.
(Correspondingly, mathematical finance separates intotwo analytic regimes:
risk and portfolio management (generally) usephysical-(or actual or actuarial) probability, denoted by "P"; while derivatives pricing uses risk-neutral probability (or arbitrage-pricing probability), denoted by "Q".
In specific applications the lower case is used, as in the above equations.)
With the above relationship established, the further specializedArrow–Debreu modelmay be derived.[note 4]This result suggests that, under certain economic conditions, there must be a set of prices such that aggregate supplies will equal aggregate demands for every commodity in the economy.
The Arrow–Debreu model applies to economies with maximallycomplete markets, in which there exists a market for every time period and forward prices for every commodity at all time periods.
A direct extension, then, is the concept of astate pricesecurity, also called an Arrow–Debreu security, a contract that agrees to pay one unit of anumeraire(a currency or a commodity) if a particular state occurs ("up" and "down" in the simplified example above) at a particular time in the future and pays zero numeraire in all the other states. The price of this security is thestate priceπs{\displaystyle \pi _{s}}of this particular state of the world; the collection of these is also referred to as a "Risk Neutral Density".[19]
In the above example, the state prices,πup{\displaystyle \pi _{up}},πdown{\displaystyle \pi _{down}}would equate to the present values of$qup{\displaystyle \$q_{up}}and$qdown{\displaystyle \$q_{down}}: i.e. what one would pay today, respectively, for the up- and down-state securities; thestate price vectoris the vector of state prices for all states. Applied to derivative valuation, the price today would simply be[πup{\displaystyle \pi _{up}}×Xup{\displaystyle X_{up}}+πdown{\displaystyle \pi _{down}}×Xdown{\displaystyle X_{down}}]: the fourth formula (see above regarding the absence of a risk premium here). For acontinuous random variableindicating a continuum of possible states, the value is found byintegratingover the state price "density".
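A two-state numerical sketch of the above follows: given assumed risk-neutral probabilities and a one-period risk-free rate, the state prices are the discounted probabilities, and the derivative price is the state-price-weighted sum of its payoffs. The payoffs, probability and rate are made-up inputs.

```python
# Two-state illustration of state prices and risk-neutral valuation, following the
# up/down example above; here r is treated as a one-period risk-free rate.
r = 0.02                      # one-period risk-free rate (assumed)
q_up = 0.6                    # risk-neutral probability of the up state (assumed)
q_down = 1.0 - q_up

pi_up = q_up / (1 + r)        # state price of the up state
pi_down = q_down / (1 + r)    # state price of the down state

X_up, X_down = 12.0, 3.0      # derivative payoff in each state (assumed)
price = pi_up * X_up + pi_down * X_down      # = (q_up*X_up + q_down*X_down)/(1+r)
print(f"state prices: {pi_up:.4f}, {pi_down:.4f}; derivative price: {price:.4f}")
print(f"sum of state prices = {pi_up + pi_down:.4f} = 1/(1+r)")
```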
State prices find immediate application as a conceptual tool ("contingent claim analysis");[6]but can also be applied to valuation problems.[20]Given the pricing mechanism described, one can decompose the derivative value – true in fact for "every security"[2]– as a linear combination of its state-prices; i.e. back-solve for the state-prices corresponding to observed derivative prices.[21][20][19]These recovered state-prices can then be used for valuation of other instruments with exposure to the underlyer, or for other decision making relating to the underlyer itself.
Using the relatedstochastic discount factor- SDF; also called the pricing kernel - the asset price is computed by "discounting" the future cash flow by the stochastic factorm~{\displaystyle {\tilde {m}}}, and then taking the expectation;[15][22]the third equation above. Essentially, this factor divides expectedutilityat the relevant future period - a function of the possible asset values realized under each state - by the utility due to today's wealth, and is then also referred to as "the intertemporalmarginal rate of substitution".
Correspondingly, the SDF,m~s{\displaystyle {\tilde {m}}_{s}}, may be thought of as the discounted value of Risk Aversion,Ys.{\displaystyle Y_{s}.}(The latter may be inferred via the ratio of risk neutral- to physical-probabilities,qs/ps.{\displaystyle q_{s}/p_{s}.}SeeGirsanov theoremandRadon-Nikodym derivative.)
Applying the above economic concepts, we may then derive variouseconomic-and financial models and principles. As above, the two usual areas of focus are Asset Pricing and Corporate Finance, the first being the perspective of providers of capital, the second of users of capital. Here, and for (almost) all other financial economics models, the questions addressed are typically framed in terms of "time, uncertainty, options, and information",[1][14]as will be seen below.
Applying this framework, with the above concepts, leads to the required models. This derivation begins with the assumption of "no uncertainty" and is then expanded to incorporate the other considerations.[4](This division sometimes denoted "deterministic" and "random",[23]or "stochastic".)
Bond valuation formulawhere Coupons and Face value are discounted at the appropriate rate, "i": typically reflecting a spread over the risk free rateas a function of credit risk; often quoted as a "yield to maturity". See body for discussion re the relationship with the above pricing formulae.
DCF valuation formula, where thevalue of the firm, is its forecastedfree cash flowsdiscounted to the present using theweighted average cost of capital, i.e.cost of equityandcost of debt, with the former (often) derived using the below CAPM.
Forshare valuationinvestors use the relateddividend discount model.
The starting point here is "Investment under certainty", and usually framed in the context of a corporation.
TheFisher separation theorem, asserts that the objective of the corporation will be the maximization of its present value, regardless of the preferences of its shareholders.
Related is theModigliani–Miller theorem, which shows that, under certain conditions, the value of a firm is unaffected byhow that firm is financed, and depends neither on itsdividend policynorits decisionto raise capital by issuing stock or selling debt. The proof here proceeds using arbitrage arguments, and acts as a benchmark[10]for evaluating the effects of factors outside the model that do affect value.[note 5]
The mechanism for determining (corporate) value is provided by[26][27]John Burr Williams'The Theory of Investment Value, which proposes that the value of an asset should be calculated using "evaluation by the rule of present worth". Thus, for a common stock, the"intrinsic", long-term worth is the present value of its future net cashflows, in the form ofdividends; inthe corporate context, "free cash flow" as aside. What remains to be determined is the appropriate discount rate. Later developments show that, "rationally", i.e. in the formal sense, the appropriate discount rate here will (should) depend on the asset's riskiness relative to the overall market, as opposed to its owners' preferences; see below.Net present value(NPV) is the direct extension of these ideas typically applied to Corporate Finance decisioning. For other results, as well as specific models developed here, see the list of "Equity valuation" topics underOutline of finance § Discounted cash flow valuation.[note 6]
Bond valuation, in that cashflows (couponsand return of principal, or "Face value") are deterministic, may proceed in the same fashion.[23]An immediate extension,Arbitrage-free bond pricing, discounts each cashflow at the market derived rate – i.e. at each coupon's correspondingzero rate, and of equivalent credit worthiness – as opposed to an overall rate.
In many treatments bond valuation precedesequity valuation, under which cashflows (dividends) are not "known"per se. Williams and onward allow for forecasting as to these – based onhistoric ratiosor publisheddividend policy– and cashflows are then treated as essentially deterministic; see below under§ Corporate finance theory.
For both stocks and bonds, "under certainty, with the focus on cash flows from securities over time," valuation based on aterm structure of interest ratesis in fact consistent with arbitrage-free pricing.[28]Indeed, a corollary ofthe aboveis that "the law of one priceimplies the existence of a discount factor";[29]correspondingly, as formulated,∑sπs=1/r{\textstyle \sum _{s}\pi _{s}=1/r}.
Whereas these "certainty" results are all commonly employed under corporate finance, uncertainty is the focus of "asset pricing models" as follows.Fisher's formulationof the theory here - developingan intertemporal equilibrium model- underpins also[26]the below applications to uncertainty;[note 7]see[30]for the development.
Theexpected returnused when discounting cashflows on an asseti{\displaystyle i}, is the risk-free rate plus themarket premiummultiplied bybeta(ρi,mσiσm{\displaystyle \rho _{i,m}{\frac {\sigma _{i}}{\sigma _{m}}}}), the asset's correlated volatility relative to the overall marketm{\displaystyle m}.
For"choice under uncertainty"the twin assumptions of rationality andmarket efficiency, as more closely defined, lead tomodern portfolio theory(MPT) with itscapital asset pricing model(CAPM) – anequilibrium-basedresult – and to theBlack–Scholes–Merton theory(BSM; often, simply Black–Scholes) foroption pricing– anarbitrage-freeresult. As above, the (intuitive) link between these, is that the latter derivative prices are calculated such that they are arbitrage-free with respect to the more fundamental, equilibrium determined, securities prices; seeAsset pricing § Interrelationship.
Briefly, and intuitively – and consistent with§ Arbitrage-free pricing and equilibriumabove – the relationship between rationality and efficiency is as follows.[31]Given the ability to profit fromprivate information, self-interested traders are motivated to acquire and act on their private information. In doing so, traders contribute to more and more "correct", i.e.efficient, prices: theefficient-market hypothesis, or EMH. Thus, if prices of financial assets are (broadly) efficient, then deviations from these (equilibrium) values could not last for long. (Seeearnings response coefficient.)
The EMH (implicitly) assumes that average expectations constitute an "optimal forecast", i.e. prices using all available information are identical to thebest guess of the future: the assumption ofrational expectations.
The EMH does allow that when faced with new information, some investors may overreact and some may underreact,[32]but what is required, however, is that investors' reactions follow anormal distribution– so that the net effect on market prices cannot be reliably exploited[32]to make an abnormal profit.
In the competitive limit, then, market prices will reflect all available information and prices can only move in response to news:[33]therandom walk hypothesis.
This news, of course, could be "good" or "bad", minor or, less common, major; and these moves are then, correspondingly, normally distributed; with the price therefore following a log-normal distribution.[note 8]
Under these conditions, investors can then be assumed to act rationally: their investment decision must be calculated or a loss is sure to follow;[32]correspondingly, where an arbitrage opportunity presents itself, then arbitrageurs will exploit it, reinforcing this equilibrium.
Here, as under the certainty-case above, the specific assumption as to pricing is that prices are calculated as the present value of expected future dividends,[5][33][14]as based on currently available information.
What is required though, is a theory for determining the appropriate discount rate, i.e. "required return", given this uncertainty: this is provided by the MPT and its CAPM. Relatedly, rationality – in the sense of arbitrage-exploitation – gives rise to Black–Scholes; option values here ultimately consistent with the CAPM.
In general, then, while portfolio theory studies how investors should balance risk and return when investing in many assets or securities, the CAPM is more focused, describing how, in equilibrium, markets set the prices of assets in relation to how risky they are.[note 9]This result will be independent of the investor's level of risk aversion and assumedutility function, thus providing a readily determined discount rate for corporate finance decision makersas above,[36]and for other investors.
The argumentproceeds as follows:[37]If one can construct anefficient frontier– i.e. each combination of assets offering the best possible expected level of return for its level of risk, see diagram – then mean-variance efficient portfolios can be formed simply as a combination of holdings of therisk-free assetand the "market portfolio" (theMutual fund separation theorem), with the combinations here plotting as thecapital market line, or CML.
Then, given this CML, the required return on a risky security will be independent of the investor'sutility function, and solely determined by itscovariance("beta") with aggregate, i.e. market, risk.
This is because investors here can then maximize utility through leverage as opposed to stock selection; seeSeparation property (finance),Markowitz model § Choosing the best portfolioand CML diagram aside.
As can be seen in the formula aside, this result is consistent withthe preceding, equaling the riskless return plus an adjustment for risk.[5]A more modern, direct, derivation is as described at the bottom of this section; which can be generalized to deriveother equilibrium-pricing models.
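The CAPM required return described here can be computed directly, as in the following sketch; the risk-free rate, market premium, correlation and volatilities are made-up inputs.

```python
# Sketch of the CAPM required-return calculation; all inputs are made-up numbers.
risk_free = 0.03
market_premium = 0.06          # E[r_m] - risk_free
rho_im = 0.8                   # correlation of asset i with the market
sigma_i, sigma_m = 0.25, 0.18  # volatilities of asset i and of the market

beta = rho_im * sigma_i / sigma_m
required_return = risk_free + market_premium * beta
print(f"beta = {beta:.3f}, required return = {required_return:.3%}")
```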
Black–Scholes provides a mathematical model of a financial market containingderivativeinstruments, and the resultant formula for the price ofEuropean-styled options.[note 10]The model is expressed as the Black–Scholes equation, apartial differential equationdescribing the changing price of the option over time; it is derived assuming log-normal,geometric Brownian motion(seeBrownian model of financial markets).
The key financial insight behind the model is that one can perfectly hedge the option by buying and selling the underlying asset in just the right way and consequently "eliminate risk", absenting the risk adjustment from the pricing (V{\displaystyle V}, the value, or price, of the option, grows atr{\displaystyle r}, the risk-free rate).[6][5]This hedge, in turn, implies that there is only one right price – in an arbitrage-free sense – for the option. And this price is returned by the Black–Scholes option pricing formula. (The formula, and hence the price, is consistent with the equation, as the formula is thesolutionto the equation.)
Since the formula is without reference to the share's expected return, Black–Scholes inheres risk neutrality; intuitively consistent with the "elimination of risk" here, and mathematically consistent with§ Arbitrage-free pricing and equilibriumabove. Relatedly, therefore, the pricing formulamay also be deriveddirectly via risk neutral expectation.Itô's lemmaprovidesthe underlying mathematics, and, withItô calculusmore generally, remains fundamental in quantitative finance.[note 11]
As implied by the Fundamental Theorem,the two major results are consistent.
Here, the Black-Scholes equation can alternatively be derived from the CAPM, and the price obtained from the Black–Scholes model is thus consistent with the assumptions of the CAPM.[46][12]The Black–Scholes theory, although built on Arbitrage-free pricing, is therefore consistent with the equilibrium based capital asset pricing.
Both models, in turn, are ultimately consistent with the Arrow–Debreu theory, and can be derived via state-pricing – essentially, by expanding the above fundamental equations – further explaining, and if required demonstrating, this consistency.[6]Here, the CAPM is derived[15]by linkingY{\displaystyle Y}, risk aversion, to overall market return, and setting the return on securityj{\displaystyle j}asXj/Pricej{\displaystyle X_{j}/Price_{j}}; seeStochastic discount factor § Properties.
The Black–Scholes formula is found,in the limit,[47]by attaching abinomial probability[10]to each of numerous possiblespot-prices(i.e. states) and then rearranging for the terms corresponding toN(d1){\displaystyle N(d_{1})}andN(d2){\displaystyle N(d_{2})}, per the boxed description; seeBinomial options pricing model § Relationship with Black–Scholes.
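For concreteness, a minimal sketch of the Black–Scholes price of a European call, in terms of the N(d1) and N(d2) terms referred to above, is given below; the inputs are made up and no dividends are assumed.

```python
# Sketch of the Black-Scholes price of a European call (no dividends assumed).
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """S: spot, K: strike, T: time to expiry in years, r: risk-free rate, sigma: volatility."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

print(f"call price: {black_scholes_call(S=100, K=105, T=0.5, r=0.02, sigma=0.25):.4f}")
```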
More recent work further generalizes and extends these models. As regardsasset pricing, developments in equilibrium-based pricing are discussed under "Portfolio theory" below, while "Derivative pricing" relates to risk-neutral, i.e. arbitrage-free, pricing. As regards the use of capital, "Corporate finance theory" relates, mainly, to the application of these models.
The majority of developments here relate to required return, i.e. pricing, extending the basic CAPM. Multi-factor models such as theFama–French three-factor modeland theCarhart four-factor model, propose factors other than market return as relevant in pricing. Theintertemporal CAPMandconsumption-based CAPMsimilarly extend the model. Withintertemporal portfolio choice, the investor now repeatedly optimizes her portfolio; while the inclusion ofconsumption (in the economic sense)then incorporates all sources of wealth, and not just market-based investments, into the investor's calculation of required return.
Whereas the above extend the CAPM, thesingle-index modelis a more simple model. It assumes, only, a correlation between security and market returns, without (numerous) other economic assumptions. It is useful in that it simplifies the estimation of correlation between securities, significantly reducing the inputs for building the correlation matrix required for portfolio optimization. Thearbitrage pricing theory(APT) similarly differs as regards its assumptions. APT "gives up the notion that there is one right portfolio for everyone in the world, and ...replaces it with an explanatory model of what drives asset returns."[48]It returns the required (expected) return of a financial asset as a linear function of various macro-economic factors, and assumes that arbitrage should bring incorrectly priced assets back into line.[note 12]The linear factor model structure of the APT is used as the basis for many of the commercial risk systems employed by asset managers.
As regardsportfolio optimization, theBlack–Litterman model[51]departs from the originalMarkowitz modelapproach to constructingefficient portfolios. Black–Litterman starts with an equilibrium assumption, as for the latter, but this is then modified to take into account the "views" (i.e., the specific opinions about asset returns) of the investor in question to arrive at a bespoke[52]asset allocation. Where factors additional to volatility are considered (kurtosis, skew...) thenmultiple-criteria decision analysiscan be applied; here deriving aPareto efficientportfolio. Theuniversal portfolio algorithmappliesinformation theoryto asset selection, learning adaptively from historical data.Behavioral portfolio theoryrecognizes that investors have varied aims and create an investment portfolio that meets a broad range of goals. Copulas havelately been applied here; recently this is the case alsofor genetic algorithmsandMachine learning, more generally[53](seebelow).
Interpretation:Analogous to Black–Scholes,[54]arbitrage arguments describe the instantaneous change in the bond priceP{\displaystyle P}for changes in the (risk-free) short rater{\displaystyle r}; the analyst selects the specificshort-rate modelto be employed.
In pricing derivatives, thebinomial options pricing modelprovides a discretized version of Black–Scholes, useful for the valuation ofAmerican styled options. Discretized models of this type are built – at least implicitly – using state-prices (as above); relatedly, a large number of researchershave used optionsto extract state-prices for a variety of other applications in financial economics.[6][46][21]Forpath dependent derivatives,Monte Carlo methods for option pricingare employed; here the modelling is in continuous time, but similarly uses risk neutral expected value. Variousother numeric techniqueshave also been developed. The theoretical framework too has been extended such thatmartingale pricingis now the standard approach.[note 13]
Drawing on these techniques, models for various other underlyings and applications have also been developed, all based on the same logic (using "contingent claim analysis").Real options valuationallows that option holders can influence the option's underlying; models foremployee stock option valuationexplicitly assume non-rationality on the part of option holders;Credit derivativesallow that payment obligations or delivery requirements might not be honored.Exotic derivativesare now routinely valued. Multi-asset underlyers are handled via simulation orcopula based analysis.
Similarly, the variousshort-rate modelsallow for an extension of these techniques tofixed income-andinterest rate derivatives. (TheVasicekandCIRmodels are equilibrium-based, whileHo–Leeand subsequent models are based on arbitrage-free pricing.) The more generalHJM Frameworkdescribes the dynamics of the fullforward-ratecurve – as opposed to working with short rates – and is then more widely applied. The valuation of the underlying instrument – additional to its derivatives – is relatedly extended, particularly forhybrid securities, where credit risk is combined with uncertainty re future rates; seeBond valuation § Stochastic calculus approachandLattice model (finance) § Hybrid securities.[note 14]
Following theCrash of 1987, equity options traded in American markets began to exhibit what is known as a "volatility smile"; that is, for a given expiration, options whose strike price differs substantially from the underlying asset's price command higher prices, and thusimplied volatilities, than what is suggested by BSM. (The pattern differs across various markets.) Modelling the volatility smile is an active area of research, and developments here – as well as implications re the standard theory – are discussedin the next section.
After the2008 financial crisis, a further development:[63]as outlined, (over the counter) derivative pricing had relied on the BSM risk neutral pricing framework, under the assumptions of funding at the risk free rate and the ability to perfectly replicate cashflows so as to fully hedge. This, in turn, is built on the assumption of a credit-risk-free environment – called into question during the crisis.
Addressing this, therefore, issues such ascounterparty credit risk, funding costs and costs of capital are now additionally considered when pricing,[64]and acredit valuation adjustment, or CVA – and potentially othervaluation adjustments, collectivelyxVA– is generally added to the risk-neutral derivative value.
The standard economic arguments can be extended to incorporate these various adjustments.[65]
A related, and perhaps more fundamental change, is that discounting is now on theOvernight Index Swap(OIS) curve, as opposed toLIBORas used previously.[63]This is because post-crisis, theovernight rateis considered a better proxy for the "risk-free rate".[66](Also, practically, the interest paid on cashcollateralis usually the overnight rate; OIS discounting is then, sometimes, referred to as "CSAdiscounting".)Swap pricing– and, therefore,yield curveconstruction – is further modified: previously, swaps were valued off a single "self discounting" interest rate curve; whereas post crisis, to accommodate OIS discounting, valuation is now under a "multi-curve framework" where "forecast curves" are constructed for each floating-legLIBOR tenor, with discounting on thecommonOIS curve.
Mirroring theabovedevelopments, corporate finance valuations and decisioning no longer need assume "certainty".Monte Carlo methods in financeallow financial analysts to construct "stochastic" orprobabilisticcorporate finance models, as opposed to the traditional static anddeterministicmodels;[67]seeCorporate finance § Quantifying uncertainty.
Relatedly,Real Options theoryallows for owner – i.e. managerial – actions that impact underlying value: by incorporating option pricing logic, these actions are then applied to a distribution of future outcomes, changing with time, which then determine the "project's" valuation today.[68]More traditionally,decision trees– which are complementary – have been used to evaluate projects, by incorporating in the valuation (all)possible events(or states) and consequentmanagement decisions;[69][67]the correct discount rate here reflecting each decision-point's "non-diversifiable risk looking forward."[67][note 15]
Related to this, is the treatment of forecasted cashflows inequity valuation. In many cases, following Williamsabove, the average (or most likely) cash-flows were discounted,[71]as opposed to a theoretically correct state-by-state treatment under uncertainty; see comments underFinancial modeling § Accounting.
In more modern treatments, then, it is theexpectedcashflows (in themathematical sense:∑spsXsj{\textstyle \sum _{s}p_{s}X_{sj}}) combined into an overall value per forecast period which are discounted.[72][73][74][67]And using the CAPM – or extensions – the discounting here is at the risk-free rate plus a premium linked to the uncertainty of the entity or project cash flows[67](essentially,Y{\displaystyle Y}andr{\displaystyle r}combined).
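A small sketch of this per-period expected-cash-flow treatment follows; the state probabilities, cash flows and the risk-adjusted discount rate are made-up numbers.

```python
# Valuing expected cash flows per forecast period at a risk-adjusted rate,
# as described above. All numbers are illustrative.
periods = [
    {"good": (0.6, 120.0), "bad": (0.4, 70.0)},   # year 1: (p_s, X_s) per state
    {"good": (0.6, 130.0), "bad": (0.4, 60.0)},   # year 2
]
discount_rate = 0.09        # e.g. risk-free rate plus a CAPM-style premium (assumed)

value = 0.0
for t, states in enumerate(periods, start=1):
    expected_cf = sum(p * x for p, x in states.values())
    value += expected_cf / (1 + discount_rate) ** t
print(f"present value of expected cash flows: {value:.2f}")
```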
Other developments here include[75]agency theory, which analyses the difficulties in motivating corporate management (the "agent"; in a different sense to the above) to act in the best interests of shareholders (the "principal"), rather than in their own interests; here emphasizing the issues interrelated with capital structure.[76]Clean surplus accountingand the relatedresidual income valuationprovide a model that returns price as a function of earnings, expected returns, and change inbook value, as opposed to dividends. This approach, to some extent, arises due to the implicit contradiction of seeing value as a function of dividends, while also holding that dividend policy cannot influence value per Modigliani and Miller's "Irrelevance principle"; seeDividend policy § Relevance of dividend policy.
"Corporate finance" as a discipline more generally, building on Fisherabove, relates to the long term objective of maximizing thevalue of the firm- and itsreturn to shareholders- and thus also incorporates the areas ofcapital structureanddividend policy.[77]Extensions of the theory here then also consider these latter, as follows:
(i)optimization re capitalization structure, and theories here as to corporate choices and behavior:Capital structure substitution theory,Pecking order theory,Market timing hypothesis,Trade-off theory;
(ii)considerations and analysis re dividend policy, additional to - and sometimes contrasting with - Modigliani-Miller, include:
theWalter model,Lintner model,Residuals theoryandsignaling hypothesis, as well as discussion re the observedclientele effectanddividend puzzle.
As described, the typical application of real options is tocapital budgetingtype problems.
However, here, they arealso appliedto problems of capital structure and dividend policy, and to the related design of corporate securities;[78]and since stockholder and bondholders have different objective functions, in the analysis of therelated agency problems.[68]In all of these cases, state-prices can provide the market-implied information relating to the corporate,as above, which is then applied to the analysis. For example,convertible bondscan (must) be priced consistent with the (recovered) state-prices of the corporate's equity.[20][72]
The discipline, as outlined, also includes a formal study offinancial markets. Of interest especially are market regulation andmarket microstructure, and their relationship toprice efficiency.
Regulatory economicsstudies, in general, the economics of regulation. In the context of finance, it will address the impact offinancial regulationon the functioning of markets and the efficiency of prices, while also weighing the corresponding increases in market confidence andfinancial stability.
Research here considers how, and to what extent, regulations relating to disclosure (earnings guidance,annual reports),insider trading, andshort-sellingwill impact price efficiency, thecost of equity, andmarket liquidity.[79]
Market microstructure is concerned with the details of how exchange occurs in markets
(withWalrasian-,matching-,Fisher-, andArrow-Debreu marketsas prototypes),
and "analyzes how specific trading mechanisms affect theprice formationprocess",[80]examining the ways in which the processes of a market affect determinants oftransaction costs, prices, quotes, volume, and trading behavior.
It has been used, for example, in providing explanations for long-standing exchange rate puzzles,[81] and for the equity premium puzzle.[82] In contrast to the above classical approach, models here explicitly allow for (testing the impact of) market frictions and other imperfections; see also market design.
For both regulation[83] and microstructure,[84] and generally,[85] agent-based models can be developed[86] to examine any impact due to a change in structure or policy – or to make inferences re market dynamics – by testing these in an artificial financial market, or AFM.[note 16] This approach, essentially simulated trade between numerous agents, "typically uses artificial intelligence technologies [often genetic algorithms and neural nets] to represent the adaptive behaviour of market participants".[86]
These 'bottom-up' models "start from first principles of agent behavior",[87] with participants modifying their trading strategies having learned over time, and "are able to describe macro features [i.e. stylized facts] emerging from a soup of individual interacting strategies".[87] Agent-based models depart further from the classical approach – the representative agent, as outlined – in that they introduce heterogeneity into the environment (thereby addressing, also, the aggregation problem).
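A toy sketch of such a market, with heterogeneous fundamentalist and chartist agents and a simple price-impact rule; the agent rules and all parameters are illustrative assumptions, not any published model.

```python
import random

# A toy artificial financial market: fundamentalists trade toward an assumed fundamental
# value, chartists follow the last price move, and price adjusts to excess demand.
random.seed(1)
fundamental_value, price, prev_price = 100.0, 100.0, 100.0
n_fund, n_chart, impact = 50, 50, 0.001   # assumed agent counts and price-impact coefficient

history = []
for t in range(500):
    demand_fund = n_fund * (fundamental_value - price)   # buy below value, sell above
    demand_chart = n_chart * (price - prev_price)        # extrapolate the last move
    noise = random.gauss(0, 20)                          # residual "noise trader" demand
    excess_demand = demand_fund + demand_chart + noise
    prev_price, price = price, price + impact * excess_demand
    history.append(price)

print(f"Final price: {history[-1]:.2f}, min/max: {min(history):.2f}/{max(history):.2f}")
```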
More recent research focuses on the potential impact of Machine Learning on market functioning and efficiency.
As these methods become more prevalent in financial markets, economists would expect greater information acquisition and improved price efficiency.[88] In fact, an apparent rejection of market efficiency (see below) might simply represent "the unsurprising consequence of investors not having precise knowledge of the parameters of a data-generating process that involves thousands of predictor variables".[89] At the same time, it is acknowledged that a potential downside of these methods, in this context, is their lack of interpretability, "which translates into difficulties in attaching economic meaning to the results found."[53]
As above, there is a very close link between:
the random walk hypothesis, with the associated belief that price changes should follow a normal distribution, on the one hand;
and market efficiency and rational expectations, on the other.
Wide departures from these are commonly observed, and there are thus, respectively, two main sets of challenges.
As discussed, the assumptions that market prices follow a random walk and that asset returns are normally distributed are fundamental. Empirical evidence, however, suggests that these assumptions may not hold, and that in practice, traders, analysts and risk managers frequently modify the "standard models" (see kurtosis risk, skewness risk, long tail, model risk).
In fact, Benoit Mandelbrot had already discovered in the 1960s[90] that changes in financial prices do not follow a normal distribution, the basis for much option pricing theory, although this observation was slow to find its way into mainstream financial economics.[91]
Financial models with long-tailed distributions and volatility clustering have been introduced to overcome problems with the realism of the above "classical" financial models, while jump diffusion models allow for (option) pricing incorporating "jumps" in the spot price.[92] Risk managers, similarly, complement (or substitute) the standard value at risk models with historical simulations, mixture models, principal component analysis, extreme value theory, as well as models for volatility clustering.[93] For further discussion see Fat-tailed distribution § Applications in economics, and Value at risk § Criticism. Portfolio managers, likewise, have modified their optimization criteria and algorithms; see § Portfolio theory above.
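As an illustration of why such modifications matter, a small Python sketch with simulated heavy-tailed returns standing in for market data: a 99% value at risk computed from a fitted normal distribution is compared with a historical-simulation estimate, which will typically be larger when the tails are fat.

```python
import numpy as np
from scipy.stats import norm

# Heavy-tailed daily returns, simulated here only to illustrate the effect of fat tails.
rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_t(df=3, size=5000)

confidence = 0.99
mu, sigma = returns.mean(), returns.std()

# Parametric ("normal") VaR: quantile of a fitted normal distribution.
var_normal = -(mu + sigma * norm.ppf(1 - confidence))

# Historical-simulation VaR: empirical quantile of the observed returns.
var_hist = -np.quantile(returns, 1 - confidence)

print(f"99% VaR, normal approximation:  {var_normal:.4f}")
print(f"99% VaR, historical simulation: {var_hist:.4f}")  # typically the larger of the two
```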
Closely related is the volatility smile, where, as above, implied volatility – the volatility corresponding to the BSM price – is observed to differ as a function of strike price (i.e. moneyness), which is possible only if the price-change distribution is non-normal, unlike that assumed by BSM (i.e. {\displaystyle N(d_{1})} and {\displaystyle N(d_{2})} above). The term structure of volatility describes how (implied) volatility differs for related options with different maturities. An implied volatility surface is then a three-dimensional surface plot of volatility smile and term structure. These empirical phenomena negate the assumption of constant volatility – and log-normality – upon which Black–Scholes is built.[40][92] Within institutions, the function of Black–Scholes is now, largely, to communicate prices via implied volatilities, much like bond prices are communicated via YTM; see Black–Scholes model § The volatility smile.
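A brief sketch of how implied volatilities are extracted in practice, here with hypothetical call quotes (all figures illustrative): each observed price is inverted through the Black–Scholes formula by root-finding, and a non-constant result across strikes is exactly the smile/skew.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    """Black-Scholes-Merton price of a European call (no dividends)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Back out the volatility that reproduces an observed option price."""
    return brentq(lambda sigma: bs_call(S, K, T, r, sigma) - price, 1e-6, 5.0)

# Hypothetical market quotes across strikes (purely illustrative numbers).
S, T, r = 100.0, 0.5, 0.02
quotes = {80: 21.50, 90: 13.00, 100: 6.30, 110: 2.60, 120: 1.10}
for K, price in quotes.items():
    print(f"K={K}: implied vol = {implied_vol(price, S, K, T, r):.2%}")
```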
In consequence, traders (and risk managers) now, instead, use "smile-consistent" models: firstly, when valuing derivatives not directly mapped to the surface, facilitating the pricing of other, i.e. non-quoted, strike/maturity combinations, or of non-European derivatives; and, more generally, for hedging purposes.
The two main approaches are local volatility and stochastic volatility. The first returns the volatility which is "local" to each spot-time point of the finite difference- or simulation-based valuation; i.e. as opposed to implied volatility, which holds overall. In this way calculated prices – and numeric structures – are market-consistent in an arbitrage-free sense. The second approach assumes that the volatility of the underlying price is a stochastic process rather than a constant. Models here are first calibrated to observed prices, and are then applied to the valuation or hedging in question; the most common are Heston, SABR and CEV. This approach addresses certain problems identified with hedging under local volatility.[94]
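As a sketch of the second approach, a Monte Carlo valuation of a European call under Heston dynamics, using a simple Euler scheme with full truncation; the parameters are illustrative placeholders for values that would, in practice, be calibrated to the observed surface.

```python
import numpy as np

# Monte Carlo pricing of a European call under the Heston stochastic-volatility model.
# All parameters are illustrative assumptions, not calibrated values.
rng = np.random.default_rng(42)
S0, K, T, r = 100.0, 100.0, 1.0, 0.02
v0, kappa, theta, xi, rho = 0.04, 1.5, 0.04, 0.5, -0.7

n_paths, n_steps = 50_000, 200
dt = T / n_steps
S = np.full(n_paths, S0)
v = np.full(n_paths, v0)

for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)  # correlated shocks
    v_pos = np.maximum(v, 0.0)                                          # full truncation
    S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2

price = np.exp(-r * T) * np.maximum(S - K, 0.0).mean()
print(f"Heston Monte Carlo call price: {price:.3f}")
```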
Related to local volatility are the lattice-based implied-binomial and -trinomial trees – essentially a discretization of the approach – which are similarly, but less commonly,[19] used for pricing; these are built on state-prices recovered from the surface. Edgeworth binomial trees allow for a specified (i.e. non-Gaussian) skew and kurtosis in the spot price; priced here, options with differing strikes will return differing implied volatilities, and the tree can be calibrated to the smile as required.[95] Similarly purposed (and derived) closed-form models were also developed.[96]
As discussed, in addition to assuming log-normality in returns, "classical" BSM-type models also (implicitly) assume the existence of a credit-risk-free environment, where one can perfectly replicate cashflows so as to fully hedge, and then discount at "the" risk-free rate.
Therefore, post-crisis, the various x-value adjustments must be employed, effectively correcting the risk-neutral value for counterparty- and funding-related risk.
These xVA are additional to any smile or surface effect: with the surface built on price data for fully-collateralized positions, there is therefore no "double counting" of credit risk (etc.) when appending xVA. (Were this not the case, then each counterparty would have its own surface...)
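A stylized example of the most common such adjustment, a unilateral CVA: loss-given-default times the sum, over time buckets, of discounted expected exposure weighted by the default probability in each bucket. The exposure profile, hazard rate and recovery below are assumed figures, not market data.

```python
import numpy as np

# Stylized unilateral CVA: (1 - R) * sum_i DF(t_i) * EE(t_i) * PD(t_{i-1}, t_i).
recovery = 0.4                        # assumed recovery rate
hazard = 0.02                         # flat hazard rate lambda (assumed)
r = 0.03                              # flat discount rate (assumed)

times = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])              # bucket ends t_i, in years
expected_exposure = np.array([4.0, 5.0, 5.5, 5.0, 4.0, 2.5])  # EE(t_i), e.g. from simulation

survival = np.exp(-hazard * np.concatenate(([0.0], times)))
default_prob = survival[:-1] - survival[1:]                   # default probability per bucket
discount = np.exp(-r * times)

cva = (1 - recovery) * np.sum(discount * expected_exposure * default_prob)
print(f"CVA: {cva:.4f}")
```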
As mentioned at the outset, mathematical finance (and particularly financial engineering) is more concerned with mathematical consistency (and market realities) than compatibility with economic theory, and the above "extreme event" approaches, smile-consistent modeling, and valuation adjustments should then be seen in this light. Recognizing this, critics of financial economics – especially vocal since the 2008 financial crisis – suggest that instead, the theory needs revisiting almost entirely:[note 17]
The current system, based on the idea that risk is distributed in the shape of a bell curve, is flawed... The problem is [that economists and practitioners] never abandon the bell curve. They are like medieval astronomers who believe the sun revolves around the earth and are furiously tweaking their geo-centric math in the face of contrary evidence. They will never get this right; they need their Copernicus.[97]
As seen, a common assumption is that financial decision makers act rationally; see Homo economicus. Recently, however, researchers in experimental economics and experimental finance have challenged this assumption empirically. The assumption is also challenged theoretically, by behavioral finance, a discipline primarily concerned with the limits to rationality of economic agents.[note 18] For related criticisms of corporate finance theory versus its practice, see:[98]
Various persistent market anomalies have also been documented as consistent with and complementary to price or return distortions – e.g. size premiums – which appear to contradict the efficient-market hypothesis. Within these market anomalies, calendar effects are the most commonly referenced group.
Related to these are various of the economic puzzles, concerning phenomena similarly contradicting the theory. The equity premium puzzle, as one example, arises in that the difference between the observed returns on stocks as compared to government bonds is consistently higher than the risk premium rational equity investors should demand, an "abnormal return". For further context see Random walk hypothesis § A non-random walk hypothesis.
More generally, and, again, particularly following the 2008 financial crisis, financial economics (and mathematical finance) has been subjected to deeper criticism.
Notable here is Nassim Taleb, whose critique overlaps the above, but extends[99] also to the institutional[100][101] aspects of finance – including the academic.[102][40] His Black swan theory posits that although events of large magnitude and consequence play a major role in finance, they are "ignored" by economists and traders since these are (statistically) unexpected.
Thus, although a "Taleb distribution" - which normally provides a payoff of small positive returns, while carrying a small but significant risk of catastrophic losses - more realistically describes markets than current models, the latter continue to be preferred (even withprofessionals hereacknowledging that it only "generally works" or only "works on average").[103]
Here,[100] financial crises have been a topic of interest,[104] and, in particular, the failure[101] of (financial) economists – as well as[100] bankers and regulators – to model and predict these; see Financial crisis § Theories.
The related problem of systemic risk has also received attention. Where companies hold securities in each other, this interconnectedness may entail a "valuation chain" – and the performance of one company, or security, here will impact all – a phenomenon not easily modeled, regardless of whether the individual models are correct. See: Systemic risk § Inadequacy of classic valuation models; Cascades in financial networks; Flight-to-quality.
Areas of research attempting to explain (or at least model) these phenomena, and crises, include[14] market microstructure and heterogeneous agent models, as above. The latter is extended to agent-based computational models; here,[85] as mentioned, price is treated as an emergent phenomenon, resulting from the interaction of the various market participants (agents). The noisy market hypothesis argues that prices can be influenced by speculators and momentum traders, as well as by insiders and institutions that often buy and sell stocks for reasons unrelated to fundamental value; see Noise (economic) and Noise trader. The adaptive market hypothesis is an attempt to reconcile the efficient market hypothesis with behavioral economics, by applying the principles of evolution to financial interactions. An information cascade, alternatively, shows market participants engaging in the same acts as others ("herd behavior"), despite contradictions with their private information. Copula-based modelling has similarly been applied. See also Hyman Minsky's "financial instability hypothesis", as well as George Soros' application of "reflexivity".
Alternatively, institutionally inherent limits to arbitrage – i.e. as opposed to factors directly contradicting the theory – are sometimes referenced.
Note, however, that despite the above inefficiencies, asset prices do effectively[32] follow a random walk – i.e. (at least) in the sense that "changes in the stock market are unpredictable, lacking any pattern that can be used by an investor to beat the overall market".[105] Thus, after fund costs – and given other considerations – it is difficult to consistently outperform market averages[106] and achieve "alpha".
The practical implication[107] is that passive investing, i.e. via low-cost index funds, should, on average, serve better than any active strategy – and, in fact, this practice is now widely adopted.[note 19] Here, however, the following concern is posited: although in concept it is "the research undertaken by active managers [that] keeps prices closer to value... [and] thus there is a fragile equilibrium in which some investors choose to index while the rest continue to search for mispriced securities",[107] in practice, as more investors "pour money into index funds tracking the same stocks, valuations for those companies become inflated",[108] potentially leading to asset bubbles.
In more modern treatments, then, it is theexpectedcashflows (in themathematical sense:∑spsXsj{\textstyle \sum _{s}p_{s}X_{sj}}) combined into an overall value per forecast period which are discounted.[72][73][74][67]And using the CAPM – or extensions – the discounting here is at the risk-free rate plus a premium linked to the uncertainty of the entity or project cash flows[67](essentially,Y{\displaystyle Y}andr{\displaystyle r}combined).
Other developments here include[75]agency theory, which analyses the difficulties in motivating corporate management (the "agent"; in a different sense to the above) to act in the best interests of shareholders (the "principal"), rather than in their own interests; here emphasizing the issues interrelated with capital structure.[76]Clean surplus accountingand the relatedresidual income valuationprovide a model that returns price as a function of earnings, expected returns, and change inbook value, as opposed to dividends. This approach, to some extent, arises due to the implicit contradiction of seeing value as a function of dividends, while also holding that dividend policy cannot influence value per Modigliani and Miller's "Irrelevance principle"; seeDividend policy § Relevance of dividend policy.
"Corporate finance" as a discipline more generally, building on Fisherabove, relates to the long term objective of maximizing thevalue of the firm- and itsreturn to shareholders- and thus also incorporates the areas ofcapital structureanddividend policy.[77]Extensions of the theory here then also consider these latter, as follows:
(i)optimization re capitalization structure, and theories here as to corporate choices and behavior:Capital structure substitution theory,Pecking order theory,Market timing hypothesis,Trade-off theory;
(ii)considerations and analysis re dividend policy, additional to - and sometimes contrasting with - Modigliani-Miller, include:
theWalter model,Lintner model,Residuals theoryandsignaling hypothesis, as well as discussion re the observedclientele effectanddividend puzzle.
As described, the typical application of real options is tocapital budgetingtype problems.
However, here, they arealso appliedto problems of capital structure and dividend policy, and to the related design of corporate securities;[78]and since stockholder and bondholders have different objective functions, in the analysis of therelated agency problems.[68]In all of these cases, state-prices can provide the market-implied information relating to the corporate,as above, which is then applied to the analysis. For example,convertible bondscan (must) be priced consistent with the (recovered) state-prices of the corporate's equity.[20][72]
The discipline, as outlined, also includes a formal study offinancial markets. Of interest especially are market regulation andmarket microstructure, and their relationship toprice efficiency.
Regulatory economicsstudies, in general, the economics of regulation. In the context of finance, it will address the impact offinancial regulationon the functioning of markets and the efficiency of prices, while also weighing the corresponding increases in market confidence andfinancial stability.
Research here considers how, and to what extent, regulations relating to disclosure (earnings guidance,annual reports),insider trading, andshort-sellingwill impact price efficiency, thecost of equity, andmarket liquidity.[79]
Market microstructure is concerned with the details of how exchange occurs in markets
(withWalrasian-,matching-,Fisher-, andArrow-Debreu marketsas prototypes),
and "analyzes how specific trading mechanisms affect theprice formationprocess",[80]examining the ways in which the processes of a market affect determinants oftransaction costs, prices, quotes, volume, and trading behavior.
It has been used, for example, in providing explanations forlong-standing exchange rate puzzles,[81]and for theequity premium puzzle.[82]In contrast to the above classical approach, models here explicitly allow for (testing the impact of)market frictionsand otherimperfections;
see alsomarket design.
For both regulation[83]and microstructure,[84]and generally,[85]agent-based modelscan be developed[86]toexamine any impactdue to a change in structure or policy - orto make inferencesre market dynamics -by testing thesein an artificial financial market, or AFM.[note 16]This approach, essentiallysimulatedtrade between numerousagents, "typically usesartificial intelligencetechnologies [oftengenetic algorithmsandneural nets] to represent theadaptive behaviourof market participants".[86]
These'bottom-up' models"start from first principals of agent behavior",[87]with participants modifying their trading strategies having learned over time, and "are able to describe macro features [i.e.stylized facts]emergingfrom a soup of individual interacting strategies".[87]Agent-based models depart further from the classical approach — therepresentative agent, as outlined — in that they introduceheterogeneityinto the environment (thereby addressing, also, theaggregation problem).
More recent research focuses on the potential impact ofMachine Learningon market functioning and efficiency.
As these methods become more prevalent in financial markets, economists would expect greaterinformation acquisitionand improved price efficiency.[88]In fact, an apparent rejection of market efficiency (seebelow) might simply represent "the unsurprising consequence of investors not having precise knowledge of the parameters of a data-generating process that involves thousands of predictor variables".[89]At the same time, it is acknowledged that a potential downside of these methods, in this context, is their lack ofinterpretability"which translates into difficulties in attaching economic meaning to the results found."[53]
As above, there is a very close link between:
therandom walk hypothesis, with the associated belief that price changes should follow anormal distribution, on the one hand;
and market efficiency andrational expectations, on the other.
Wide departures from these are commonly observed, and there are thus, respectively, two main sets of challenges.
As discussed, the assumptions that market prices follow arandom walkand that asset returns are normally distributed are fundamental. Empirical evidence, however, suggests that these assumptions may not hold, and that in practice, traders, analystsand risk managersfrequently modify the "standard models" (seekurtosis risk,skewness risk,long tail,model risk).
In fact,Benoit Mandelbrothad discovered already in the 1960s[90]that changes in financial prices do not follow anormal distribution, the basis for much option pricing theory, although this observation was slow to find its way into mainstream financial economics.[91]
Financial models with long-tailed distributions and volatility clusteringhave been introduced to overcome problems with the realism of the above "classical" financial models; whilejump diffusion modelsallow for (option) pricing incorporating"jumps"in thespot price.[92]Risk managers, similarly, complement (or substitute) the standardvalue at riskmodels withhistorical simulations,mixture models,principal component analysis,extreme value theory, as well as models forvolatility clustering.[93]For further discussion seeFat-tailed distribution § Applications in economics, andValue at risk § Criticism. Portfolio managers, likewise, have modified their optimization criteria and algorithms; see§ Portfolio theoryabove.
Closely related is thevolatility smile, where, as above,implied volatility– the volatility corresponding to the BSM price – is observed todifferas a function ofstrike price(i.e.moneyness), true only if the price-change distribution is non-normal, unlike that assumed by BSM (i.e.N(d1){\displaystyle N(d_{1})}andN(d2){\displaystyle N(d_{2})}above). The term structure of volatility describes how (implied) volatility differs for related options with different maturities. An implied volatility surface is then a three-dimensional surface plot of volatility smile and term structure. These empirical phenomena negate the assumption of constant volatility – andlog-normality– upon which Black–Scholes is built.[40][92]Within institutions, the function of Black–Scholes is now, largely, tocommunicateprices via implied volatilities, much like bond prices are communicated viaYTM; seeBlack–Scholes model § The volatility smile.
In consequence traders (and risk managers) now, instead, use "smile-consistent" models, firstly, when valuing derivatives not directly mapped to the surface, facilitating the pricing of other, i.e. non-quoted, strike/maturity combinations, or of non-European derivatives, and generally for hedging purposes.
The two main approaches arelocal volatilityandstochastic volatility. The first returns the volatility which is "local" to each spot-time point of thefinite difference-orsimulation-based valuation; i.e. as opposed to implied volatility, which holds overall. In this way calculated prices – and numeric structures – are market-consistent in an arbitrage-free sense. The second approach assumes that the volatility of the underlying price is a stochastic process rather than a constant. Models here are firstcalibrated to observed prices, and are then applied to the valuation or hedging in question; the most common areHeston,SABRandCEV. This approach addresses certain problems identified with hedging under local volatility.[94]
Related to local volatility are thelattice-basedimplied-binomialand-trinomial trees– essentially a discretization of the approach – which are similarly, but less commonly,[19]used for pricing; these are built on state-prices recovered from the surface.Edgeworth binomial treesallow for a specified (i.e. non-Gaussian)skewandkurtosisin the spot price; priced here, options with differing strikes will return differing implied volatilities, and the tree can be calibrated to the smile as required.[95]Similarly purposed (and derived)closed-form modelswere also developed.[96]
As discussed, additional to assuming log-normality in returns, "classical" BSM-type models also (implicitly) assume the existence of a credit-risk-free environment, where one can perfectly replicate cashflows so as to fully hedge, and then discount at "the" risk-free-rate.
And therefore, post crisis, the various x-value adjustments must be employed, effectively correcting the risk-neutral value forcounterparty-andfunding-relatedrisk.
These xVA areadditionalto any smile or surface effect: with the surface built on price data for fully-collateralized positions, there is therefore no "double counting" of credit risk (etc.) when appending xVA. (Were this not the case, then each counterparty would have its own surface...)
As mentioned at top, mathematical finance (and particularlyfinancial engineering) is more concerned with mathematical consistency (and market realities) than compatibility with economic theory, and the above "extreme event" approaches, smile-consistent modeling, and valuation adjustments should then be seen in this light. Recognizing this, critics of financial economics - especially vocal since the2008 financial crisis- suggest that instead, the theory needs revisiting almost entirely:[note 17]
The current system, based on the idea that risk is distributed in the shape of a bell curve, is flawed... The problem is [that economists and practitioners] never abandon the bell curve. They are like medieval astronomers who believe the sun revolves around the earth and arefuriously tweaking their geo-centric mathin the face of contrary evidence. They will never get this right;they need their Copernicus.[97]
As seen, a common assumption is that financial decision makers act rationally; see Homo economicus. Recently, however, researchers in experimental economics and experimental finance have challenged this assumption empirically. These assumptions are also challenged theoretically, by behavioral finance, a discipline primarily concerned with the limits to rationality of economic agents.[note 18] For related criticism of corporate finance theory versus its practice, see [98].
Various persistentmarket anomalieshave also been documented as consistent with and complementary to price or return distortions – e.g.size premiums– which appear to contradict theefficient-market hypothesis. Within these market anomalies,calendar effectsare the most commonly referenced group.
Related to these are various of theeconomic puzzles, concerning phenomena similarly contradicting the theory. Theequity premium puzzle, as one example, arises in that the difference between the observed returns on stocks as compared to government bonds is consistently higher than therisk premiumrational equity investors should demand, an "abnormal return". For further context seeRandom walk hypothesis § A non-random walk hypothesis, and sidebar for specific instances.
More generally, and, again, particularly following the2008 financial crisis, financial economics (andmathematical finance) has been subjected to deeper criticism.
Notable here is Nassim Taleb, whose critique overlaps the above, but extends[99] also to the institutional[100][101] aspects of finance - including academic.[102][40] His Black swan theory posits that although events of large magnitude and consequence play a major role in finance, since these are (statistically) unexpected, they are "ignored" by economists and traders.
Thus, although a "Taleb distribution" - which normally provides a payoff of small positive returns, while carrying a small but significant risk of catastrophic losses - more realistically describes markets than current models, the latter continue to be preferred (even withprofessionals hereacknowledging that it only "generally works" or only "works on average").[103]
Here,[100]financial criseshave been a topic of interest[104]and, in particular,the failure[101]of (financial) economists - as well as[100]bankersandregulators- to model and predict these.
SeeFinancial crisis § Theories.
The related problem of systemic risk has also received attention. Where companies hold securities in each other, then this interconnectedness may entail a "valuation chain" – and the performance of one company, or security, here will impact all, a phenomenon not easily modeled, regardless of whether the individual models are correct. See: Systemic risk § Inadequacy of classic valuation models; Cascades in financial networks; Flight-to-quality.
Areas of research attempting to explain (or at least model) these phenomena, and crises, include[14]market microstructureandHeterogeneous agent models, as above. The latter is extended toagent-based computational models; here,[85]as mentioned, price is treated as anemergent phenomenon, resulting from the interaction of the various market participants (agents). Thenoisy market hypothesisargues that prices can be influenced by speculators andmomentum traders, as well as byinsidersand institutions that often buy and sell stocks for reasons unrelated tofundamental value; seeNoise (economic)andNoise trader. Theadaptive market hypothesisis an attempt to reconcile the efficient market hypothesis with behavioral economics, by applying the principles ofevolutionto financial interactions. Aninformation cascade, alternatively, shows market participants engaging in the same acts as others ("herd behavior"), despite contradictions with their private information.Copula-based modellinghas similarly been applied. See alsoHyman Minsky's"financial instability hypothesis", as well asGeorge Soros' applicationof"reflexivity".
In the alternative, institutionally inherentlimits to arbitrage- i.e. as opposed to factors directly contradictory to the theory - are sometimes referenced.
Note however, that despite the above inefficiencies, asset prices doeffectively[32]follow a random walk - i.e. (at least) in the sense that "changes in the stock market are unpredictable, lacking any pattern that can be used by an investor to beat the overall market".[105]Thus afterfund costs- and givenother considerations- it is difficult to consistently outperform market averages[106]and achieve"alpha".
The practical implication[107]is thatpassive investing, i.e. via low-costindex funds, should, on average, serve better thanany otheractive strategy-
and, in fact, this practice is now widely adopted.[note 19] Here, however, the following concern is posited:
although in concept, it is "the research undertaken by active managers [that] keeps prices closer to value... [and] thus there is a fragile equilibrium in which some investors choose to index while the rest continue to search for mispriced securities";[107]in practice, as more investors "pour money into index funds tracking the same stocks, valuationsfor those companiesbecome inflated",[108]potentially leading toasset bubbles.
|
https://en.wikipedia.org/wiki/Financial_economics#Corporate_finance_theory
|
The followingoutlineis provided as an overview of and topical guide to finance:
Finance– addresses the ways in which individuals and organizations raise and allocate monetaryresourcesover time, taking into account therisksentailed in their projects.
The termfinancemay incorporate any of the following:
Financial institutions
|
https://en.wikipedia.org/wiki/Outline_of_finance#Corporate_finance_theory
|
Capital managementrefers to the area offinancial managementthat deals withcapital assets, which are assets that have value as a function of economic production, or otherwise are of utility to other economicassets. Capital management can broadly be divided into two classes:
The discipline exists because assets that are ofcapitalvalue to business entities or otherlegal personsrequire management to aim to achieve optimal, adequate or otherwise sufficient capital performance of the assets at hand. Underperforming capital assets pose a liability to the finances and continued existence of any legal entity, regardless of whether it is positioned in thepublic sectoror in theprivate sector.[2]
|
https://en.wikipedia.org/wiki/Capital_management
|
Abudgetis acalculationplan, usually but not alwaysfinancial, for a definedperiod, often one year or a month. A budget may include anticipatedsalesvolumes andrevenues, resource quantities including time,costsandexpenses, environmental impacts such as greenhouse gas emissions, other impacts,assets,liabilitiesandcash flows. Companies, governments, families, and other organizations use budgets to expressstrategic plansof activities in measurable terms.[1]
Preparing a budget allows companies, authorities, private entities or families to establish priorities and evaluate the achievement of their objectives. To achieve these goals it may be necessary to incur a deficit (expenses exceed income) or, on the contrary, it may be possible to save, in which case the budget will present a surplus (income exceeds expenses).
In the field of commerce, a budget is also a financial document or report that details the cost that a service will have if performed; once the client accepts the service, whoever prepared the budget must adhere to it and cannot change it.
A budget expresses intended expenditures along with proposals for how to meet them with resources. A budget may express asurplus, providing resources for use at afuturetime, or a deficit in which expenditures exceedincomeor other resources.
The budget of agovernmentis a summary or plan of the anticipated resources (often but not always from taxes) and expenditures of that government. There are three types of government budgets: the operating or current budget, the capital or investment budget, and the cash or cash flow budget.[2]
In the United States, the federal budget is prepared by the Office of Management and Budget and submitted to Congress for consideration. Invariably, Congress makes many and substantial changes. Nearly all American states are required to have balanced budgets, but the federal government is allowed to run deficits.[3]
In India, the budget is prepared annually by the Budget Division of the Department of Economic Affairs in the Ministry of Finance. The Finance Minister is the head of the budget-making committee; the present Indian Finance Minister is Nirmala Sitharaman. The Budget includes supplementary and excess grants and, when a proclamation by the President as to failure of constitutional machinery is in operation in relation to a State or a Union Territory, the preparation of the Budget of that State.[citation needed] The first budget of India was submitted on 18 February 1860 by James Wilson. P. C. Mahalanobis is known as the father of the Indian budget.
The Philippine budget is considered the most complicated in the world, incorporating multiple approaches in one single budget system: line-item (budget execution), performance (budget accountability), andzero-based budgeting. TheDepartment of Budget and Management(DBM) prepares the National Expenditure Program and forwards it to the Committee on Appropriations of the House of Representatives to come up with a General Appropriations Bill (GAB). The GAB will go through budget deliberations and voting; the same process occurs when the GAB is transmitted to thePhilippine Senate.
After both houses of Congress approve the GAB, the President signs the bill into a General Appropriations Act (GAA); alternatively, the President may opt to veto the GAB and have it returned to the legislative branch, or leave the bill unsigned for 30 days so that it lapses into law. There are two types of budget bill veto: the line-item veto and the veto of the whole budget.[4]
A personal budget or home budget is afinance planthat allocates future personalincometowardsexpenses,savingsanddebtrepayment. Past spending andpersonal debtare considered when creating a personal budget. There are several methods and tools available for creating, using, and adjusting a personal budget. For example, jobs are an income source, while bills and rent payments are expenses. A third category (other than income and expenses) may be assets (such as property, investments, or other savings or value) representing a potential reserve for funds in case of budget shortfalls.
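A minimal sketch, with hypothetical figures, of the three categories a personal budget tracks: income, expenses, and an asset reserve available to cover any shortfall.

# Hypothetical monthly personal budget: net income against expenses and fall
# back on an asset reserve if there is a shortfall.
monthly_income = {"salary": 3_200}
monthly_expenses = {"rent": 1_100, "utilities": 180, "groceries": 450,
                    "transport": 120, "debt_repayment": 300, "savings": 400}
asset_reserve = 5_000   # savings/investments available as a buffer

balance = sum(monthly_income.values()) - sum(monthly_expenses.values())
if balance >= 0:
    print(f"Surplus of {balance} this month")
else:
    print(f"Shortfall of {-balance}; reserve covers {min(-balance, asset_reserve)}")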
The budget of abusiness,division, orcorporation[5][6][1][7]is afinancial forecastfor the near-term future, usually the nextaccounting period, aggregating theexpected revenuesand expenses of the various departments – operations,human resources, IT, etc.
It is thus a key element inintegrated business planning, with measurable targetscorrespondingly devolvedto departmental managers (and becomingKPIs[1]);
budgets may then also specify non-cash resources, such as staff or time.[1]
The budgeting processrequires considerable effort,[5]often involving dozens of staff;
final sign-off resides with both thefinancial directorandoperations director.
The responsibility usually sits within the company'sfinancial managementarea in general, sometimes, specifically in "FP&A".
Professionals employed in this role are often designated "Budget Analyst",[8]a specializedfinancial analystfunction.
Organisations may produce[7]functional budgets, relating to activities, and / or cash budgets, focused on receipts and payments.
Incremental budgeting starts with the budget from the previous period, while underzero-based budgetingactivities/costs are included only if justified.
Under all approaches, expected sales or revenue is typically the starting point;[7] this will be based on the business' planning for the period in question.
Directly related elements and costs are typically linked to these (activity based costing may be employed). Support and management functions may be revisited, and the resultant "fixed" costs, such as rent and payroll, will be adjusted, at a minimum, for inflation. Capital expenditure, both new investments and maintenance, may be budgeted separately;
debt servicing and repayments likewise.
The master budget[7]aggregates these all.
SeeFinancial forecast,Cash flow forecast,Financial modeling § Accounting.
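A minimal sketch, with hypothetical figures, of how a master budget might aggregate these elements: expected sales drive the variable costs, "fixed" costs are adjusted for inflation, and capital expenditure and debt service are kept as separate lines.

# All figures below are illustrative assumptions, not taken from the source.
expected_sales = 1_000_000          # starting point: planned revenue for the period
variable_cost_ratio = 0.55          # costs linked directly to activity
inflation = 0.03

fixed_costs_last_year = {"rent": 60_000, "payroll": 250_000, "admin": 40_000}
capex_budget = {"new_investment": 80_000, "maintenance": 25_000}
debt_service = 30_000

variable_costs = expected_sales * variable_cost_ratio
fixed_costs = {k: v * (1 + inflation) for k, v in fixed_costs_last_year.items()}

master_budget = {
    "revenue": expected_sales,
    "variable_costs": variable_costs,
    "fixed_costs": sum(fixed_costs.values()),
    "capital_expenditure": sum(capex_budget.values()),
    "debt_service": debt_service,
}
master_budget["budgeted_operating_surplus"] = (
    master_budget["revenue"]
    - master_budget["variable_costs"]
    - master_budget["fixed_costs"]
    - master_budget["debt_service"]
)
print(master_budget)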
Whereas the budget is typically compiled on an annual basis
- although, e.g. inmining,[9]this may be quarterly -the monitoringis ongoing, with financialand operationaladjustments (or interventions) made as warranted; seeFinancial risk management § Corporate financefor further discussion.
Here,[7]if the actual figures delivered come close to those budgeted, this suggests that managers understand their business and have been successful in delivering.
On the other hand, if the figures diverge, this sends an "out of control" signal;
additionally, theshare price could sufferwhere these figures have beencommunicated to analysts.
Criticism is sometimes directed at the nature of budgeting, and its impact on the organization.[10][11]Additional to the cost in time and resources, two phenomena are identified as problematic:
First, it is suggested that managers will often"game the system"in specifying targets that are easily attainable, and / or in asking for more resources than required,[7]such that the required resources will be budgeted as a compromise.
A second observation is that managers' thinking may emphasize short term, operational thinking at the expense of a long term andstrategic perspective, particularly when[12]bonus paymentsare linked to budget.
SeeStrategic planning § Strategic planning vs. financial planning.
|
https://en.wikipedia.org/wiki/Corporate_budget
|
Corporate governancerefers to the mechanisms, processes, practices, and relations by whichcorporationsare controlled and operated by their boards of directors, managers, shareholders, and stakeholders.
"Corporate governance" may be defined, described or delineated in diverse ways, depending on the writer's purpose. Writers focused on a disciplinary interest or context (such asaccounting,finance,law, ormanagement) often adopt narrow definitions that appear purpose-specific. Writers concerned with regulatory policy in relation to corporate governance practices often use broader structural descriptions. A broad (meta) definition that encompasses many adopted definitions is "Corporate governance describes the processes, structures, and mechanisms that influence the control and direction of corporations."[1]
This meta definition accommodates both the narrow definitions used in specific contexts and the broader descriptions that are often presented as authoritative. The latter include the structural definition from theCadbury Report, which identifies corporate governance as "the system by which companies are directed and controlled" (Cadbury 1992, p. 15); and the relational-structural view adopted by the Organisation for Economic Cooperation and Development (OECD) of "Corporate governance involves a set of relationships between a company's management, board, shareholders and stakeholders. Corporate governance also provides the structure and systems through which the company is directed and its objectives are set, and the means of attaining those objectives and monitoring performance are determined" (OECD 2023, p. 6).[2]
Examples of narrower definitions in particular contexts include:
The firm itself is modelled as a governance structure acting through the mechanisms of contract.[5][6][7][8]Here corporate governance may includeits relationtocorporate finance.[9][10][11]
Contemporary discussions of corporate governance tend to refer to principles raised in three documents released since 1990: TheCadbury Report(UK, 1992), the Principles of Corporate Governance (OECD, 1999, 2004, 2015 and 2023), and theSarbanes–Oxley Actof 2002 (US, 2002). The Cadbury andOrganisation for Economic Co-operation and Development(OECD) reports present general principles around which businesses are expected to operate to assure proper governance. The Sarbanes–Oxley Act, informally referred to as Sarbox or Sox, is an attempt by the federal government in the United States to legislate several of the principles recommended in the Cadbury and OECD reports.
Some concerns regarding governance follow from the potential for conflicts of interest that are a consequence of the non-alignment of preferences between: shareholders and upper management (principal–agent problems); and among shareholders (principal–principal problems),[22] although other stakeholder relations are also affected and coordinated through corporate governance.
In large firms where there is a separation of ownership and management, the principal–agent problem[23] can arise between upper management (the "agent") and the shareholder(s) (the "principals"). The shareholders and upper management may have different interests. The shareholders typically desire returns on their investments through profits and dividends, while upper management may also be influenced by other motives, such as management remuneration or wealth interests, working conditions and perquisites, or relationships with other parties within (e.g., management-worker relations) or outside the corporation, to the extent that these are not necessary for profits. Those pertaining to self-interest are usually emphasized in relation to principal-agent problems. The effectiveness of corporate governance practices from a shareholder perspective might be judged by how well those practices align and coordinate the interests of the upper management with those of the shareholders. However, corporations sometimes undertake initiatives, such as climate activism and voluntary emission reduction, that seem to contradict the idea that rational self-interest drives shareholders' governance goals.[24]: 3
An example of a possible conflict between shareholders and upper management materializes through stock repurchases (treasury stock). Executives may have incentive to divert cash surpluses to buying treasury stock to support or increase the share price. However, that reduces the financial resources available to maintain or enhance profitable operations. As a result, executives can sacrifice long-term profits for short-term personal gain. Shareholders may have different perspectives in this regard, depending on their owntime preferences, but it can also be viewed as a conflict with broader corporate interests (including preferences of other stakeholders and the long-term health of the corporation).
The principal–agent problem can be intensified when upper management acts on behalf of multiple shareholders—which is often the case in large firms (seeMultiple principal problem).[22]Specifically, when upper management acts on behalf of multiple shareholders, the multiple shareholders face acollective action problemin corporate governance, as individual shareholders may lobby upper management or otherwise have incentives to act in their individual interests rather than in the collective interest of all shareholders.[25]As a result, there may be free-riding in steering and monitoring of upper management,[26]or conversely, high costs may arise from duplicate steering and monitoring of upper management.[27]Conflict may break out between principals,[28]and this all leads to increased autonomy for upper management.[22]
Ways of mitigating or preventing these conflicts of interests include the processes, customs, policies, laws, and institutions which affect the way a company is controlled—and this is the challenge of corporate governance.[29][30]To solve the problem of governing upper management under multiple shareholders, corporate governance scholars have figured out that the straightforward solution of appointing one or more shareholders for governance is likely to lead to problems because of the information asymmetry it creates.[31][32][33]Shareholders' meetings are necessary to arrange governance under multiple shareholders, and it has been proposed that this is the solution to the problem of multiple principals due to median voter theorem: shareholders' meetings lead power to be devolved to an actor that approximately holds the median interest of all shareholders, thus causing governance to best represent the aggregated interest of all shareholders.[22]
An important theme of governance is the nature and extent ofcorporate accountability. A related discussion at the macro level focuses on the effect of a corporate governance system oneconomic efficiency, with a strong emphasis on shareholders' welfare.[8]This has resulted in a literature focused on economic analysis.[34][35][36]A comparative assessment of corporate governance principles and practices across countries was published by Aguilera and Jackson in 2011.[37]
Different models of corporate governance differ according to the variety of capitalism in which they are embedded. The Anglo-American "model" tends to emphasize the interests of shareholders. The coordinated ormultistakeholder modelassociated with Continental Europe and Japan also recognizes the interests of workers, managers, suppliers, customers, and the community. A related distinction is between market-oriented and network-oriented models of corporate governance.[38]
Some continental European countries, including Germany, Austria, and the Netherlands, require a two-tiered board of directors as a means of improving corporate governance.[39]In the two-tiered board, the executive board, made up of company executives, generally runs day-to-day operations while the supervisory board, made up entirely of non-executive directors who represent shareholders and employees, hires and fires the members of the executive board, determines their compensation, and reviews major business decisions.[40]
Germany, in particular, is known for its practice ofco-determination, founded on the German Codetermination Act of 1976, in which workers are granted seats on the board as stakeholders, separate from the seats accruing to shareholder equity.
The so-called "Anglo-American model" of corporate governance emphasizes the interests of shareholders. It relies on a single-tiered board of directors that is normally dominated by non-executive directors elected by shareholders. Because of this, it is also known as "the unitary system".[41][42]Within this system, many boards include some executives from the company (who areex officiomembers of the board). Non-executive directors are expected to outnumber executive directors and hold key posts, including audit and compensation committees. In the United Kingdom, theCEOgenerally does not also serve as chairman of the board, whereas in the US having the dual role has been the norm, despite major misgivings regarding the effect on corporate governance.[43]The number of US firms combining both roles is declining, however.[44]
In the United States, corporations are directly governed by state laws, while the exchange (offering and trading) of securities in corporations (including shares) is governed by federal legislation. Many US states have adopted theModel Business Corporation Act, but the dominant state law for publicly traded corporations isDelaware General Corporation Law, which continues to be the place of incorporation for the majority of publicly traded corporations.[45]Individual rules for corporations are based upon thecorporate charterand, less authoritatively, the corporatebylaws.[45]Shareholders cannot initiate changes in the corporate charter although they can initiate changes to the corporate bylaws.[45]
It is sometimes colloquially stated that in the US and the UK "the shareholders own the company." This is, however, a misconception, as argued by Eccles and Youmans (2015) and Kay (2015).[46] The American system has long been based on a belief in the potential of shareholder democracy to efficiently allocate capital.
The Japanese model of corporate governance has traditionally held a broad view that firms should account for the interests of a range of stakeholders. For instance, managers do not have a fiduciary responsibility to shareholders. This framework is rooted in the belief that a balance among stakeholder interests can lead to a superior allocation of resources for society. The Japanese model includes several key principles:[47]
An article published by theAustralian Institute of Company Directorscalled "Do Boards Need to become more Entrepreneurial?" considered the need for founder centrism behaviour at board level to appropriately manage disruption.[48]
Corporations are created aslegal personsby the laws and regulations of a particular jurisdiction. These may vary in many respects between countries, but a corporation's legal person status is fundamental to all jurisdictions and is conferred by statute. This allows the entity to hold property in its own right without reference to any real person. It also results in the perpetual existence that characterizes the modern corporation. The statutory granting of corporate existence may arise from general purpose legislation (which is the general case) or from a statute to create a specific corporation. Now, the formation of business corporations in most jurisdictions requires government legislation that facilitatesincorporation. This legislation is often in the form ofCompanies ActorCorporations Act, or similar. Country-specific regulatory devices are summarized below.
It is generally perceived that regulatory attention on the corporate governance practices of publicly listed corporations, particularly in relation to transparency and accountability, increased in many jurisdictions following the high-profile corporate scandals in 2001–2002, many of which involved accounting fraud; and then again after the 2008 financial crisis. For example, in the U.S., these included scandals surrounding Enron and MCI Inc. (formerly WorldCom). Their demise led to the enactment of the Sarbanes–Oxley Act in 2002, a U.S. federal law intended to improve corporate governance in the United States. Comparable failures in Australia (HIH, One.Tel) are linked with the eventual passage of the CLERP 9 reforms there (2004), which similarly aimed to improve corporate governance.[49] Similar corporate failures in other countries stimulated increased regulatory interest (e.g., Parmalat in Italy).
In addition to legislation that facilitates incorporation, many jurisdictions have some major regulatory devices that impact on corporate governance. These include statutory laws concerned with the functioning of stock or securities markets (also see Security (finance)), consumer and competition (antitrust) laws, labour or employment laws, and environmental protection laws, which may also entail disclosure requirements. In addition to the statutory laws of the relevant jurisdiction, corporations are subject to common law in some countries.
In most jurisdictions, corporations also have some form of a corporate constitution that provides individual rules that govern the corporation and authorize or constrain its decision-makers. This constitution is identified by a variety of terms; in English-speaking jurisdictions, it is sometimes known as the corporate charter or articles of association (which may also be accompanied by a memorandum of association).
Incorporation in Australia originated under state legislation but has been underfederal legislationsince 2001. Also seeAustralian corporate law.
Other significant legislation includes:
Incorporation in Canada can be done under either federal or provincial legislation. See Canadian corporate law.
Dutch corporate law is embedded in theondernemingsrechtand, specifically for limited liability companies, in thevennootschapsrecht.
In addition, the Netherlands adopted a Corporate Governance Code in 2016, which has been updated twice since.
In the latest version (2022),[50]theExecutive Boardof the company is held responsible for the continuity of the company and itssustainable long-term value creation.
The executive board considers the impact of corporate actions on People and Planet and takes the effects on corporate stakeholders into account.[51]In the Dutch two-tier system, theSupervisory Boardmonitors and supervises the executive board in this respect.
Polish Corporate Law is regulated in Code of Commercial Companies.[52]The code regulates most of the aspects of corporate governance, incl. rules of incorporation and liquidation, it defines rights, obligations and rules of operations of corporate bodies (Management Board, Supervisory Board, Shareholders Meeting).[53]
The UK has a single jurisdiction for incorporation; also see United Kingdom company law. Other significant legislation includes:
The UK passed theBribery Actin 2010. This law made it illegal to bribe either government or private citizens or make facilitating payments (i.e., payment to a government official to perform their routine duties more quickly). It also required corporations to establish controls to prevent bribery.
Incorporation in the US is under state-level legislation, but there are important federal acts; in particular, see Securities Act of 1933, Securities Exchange Act of 1934, and Uniform Securities Act.
TheSarbanes–Oxley Actof 2002 (SOX) was enacted in the wake of a series of high-profile corporate scandals, which cost investors billions of dollars.[54]It established a series of requirements that affect corporate governance in the US and influenced similar laws in many other countries. SOX contained many other elements, but provided for several changes that are important to corporate governance practices:
The U.S. passed theForeign Corrupt Practices Act(FCPA) in 1977, with subsequent modifications. This law made it illegal to bribe government officials and required corporations to maintain adequate accounting controls. It is enforced by theU.S. Department of Justiceand theSecurities and Exchange Commission(SEC). Substantial civil and criminal penalties have been levied on corporations and executives convicted of bribery.[56]
Corporate governance principles and codes have been developed in different countries and issued from stock exchanges, corporations, institutional investors, or associations (institutes) of directors and managers with the support of governments and international organizations. As a rule, compliance with these governance recommendations is not mandated by law, although the codes linked to stock exchangelisting requirementsmay have a coercive effect.
One of the most influential guidelines on corporate governance are theG20/OECDPrinciples of Corporate Governance, first published as the OECD Principles in 1999, revised in 2004, in 2015 when endorsed by the G20, and in 2023.[57]The Principles are often referenced by countries developing local codes or guidelines. Building on the work of the OECD, other international organizations, private sector associations and more than 20 national corporate governance codes formed theUnited NationsIntergovernmental Working Group of Experts on International Standards of Accounting and Reporting(ISAR) to produce their Guidance on Good Practices in Corporate Governance Disclosure.[58]This internationally agreed[59]benchmark consists of more than fifty distinct disclosure items across five broad categories:[60]
TheOECDGuidelines on Corporate Governance of State-Owned Enterprises[61]complement the G20/OECD Principles of Corporate Governance,[62]providing guidance tailored to the corporate governance challenges ofstate-owned enterprises.
Companies listed on theNew York Stock Exchange(NYSE) and other stock exchanges are required to meet certain governance standards. For example, the NYSE Listed Company Manual requires, among many other elements:
The investor-led organisation International Corporate Governance Network (ICGN) was set up by individuals centred around the ten largest pension funds in the world in 1995. The aim is to promote global corporate governance standards. The network is led by investors that manage US$77 trillion, and members are located in fifty different countries. ICGN has developed a suite of global guidelines ranging from shareholder rights to business ethics.[63]
TheWorld Business Council for Sustainable Development(WBCSD) has done work on corporate governance, particularly on accounting and reporting.[64]In 2009, theInternational Finance Corporationand theUN Global Compactreleased a report, "Corporate Governance: the Foundation for Corporate Citizenship and Sustainable Business",[65]linking the environmental, social and governance responsibilities of a company to its financial performance and long-term sustainability.
Most codes are largely voluntary. An issue raised in the U.S. since the 2005Disney decision[66]is the degree to which companies manage their governance responsibilities; in other words, do they merely try to supersede the legal threshold, or should they create governance guidelines that ascend to the level of best practice. For example, the guidelines issued by associations of directors, corporate managers and individual companies tend to be wholly voluntary, but such documents may have a wider effect by prompting other companies to adopt similar practices.[citation needed]
In 2021, the first everinternational standard, ISO 37000, was published as guidance for good governance.[67]The guidance places emphasis on purpose which is at the heart of all organizations, i.e. a meaningful reason to exist. Values inform both the purpose and the way the purpose is achieved.[68]
Robert E. Wrightargued inCorporation Nation(2014) that the governance of early U.S. corporations, of which over 20,000 existed by theCivil Warof 1861–1865, was superior to that of corporations in the late 19th and early 20th centuries because early corporations governed themselves like "republics", replete with numerous "checks and balances" against fraud and against usurpation of power by managers or by large shareholders.[69](The term"robber baron"became particularly associated with US corporate figures in theGilded Age—the late 19th century.)
In the immediate aftermath of theWall Street crash of 1929legal scholars such asAdolf Augustus Berle, Edwin Dodd, andGardiner C. Meanspondered on the changing role of the modern corporation in society.[70]From theChicago school of economics,Ronald Coase[71]introduced the notion of transaction costs into the understanding of why firms are founded and how they continue to behave.[72]
US economic expansion through the emergence of multinational corporations afterWorld War II(1939–1945) saw the establishment of themanagerial class. SeveralHarvard Business Schoolmanagement professors studied and wrote about the new class:Myles Mace(entrepreneurship),Alfred D. Chandler, Jr.(business history),Jay Lorsch(organizational behavior) and Elizabeth MacIver (organizational behavior). According to Lorsch and MacIver "many large corporations have dominant control over business affairs without sufficient accountability or monitoring by their board of directors".[citation needed]
In the 1980s,Eugene FamaandMichael Jensen[73]established theprincipal–agent problemas a way of understanding corporate governance: the firm is seen as a series of contracts.[74]
In the period from 1977 to 1997, corporate directors' duties in the U.S. expanded beyond their traditional legal responsibility of duty of loyalty to the corporation and to its shareholders.[75][vague]
In the first half of the 1990s, the issue of corporate governance in the U.S. received considerable press attention due to a spate of CEO dismissals (for example, atIBM,Kodak, andHoneywell) by their boards. The California Public Employees' Retirement System (CalPERS) led a wave ofinstitutionalshareholder activism (something only very rarely seen before), as a way of ensuring that corporate value would not be destroyed by the now traditionally cozy relationships between the CEO and the board of directors (for example, by the unrestrained issuance of stock options, not infrequentlyback-dated).
In the early 2000s, the massive bankruptcies (and criminal malfeasance) ofEnronandWorldcom, as well as lessercorporate scandals(such as those involvingAdelphia Communications,AOL,Arthur Andersen,Global Crossing, andTyco) led to increased political interest in corporate governance. This was reflected in the passage of theSarbanes–Oxley Actof 2002. Other triggers for continued interest in the corporate governance of organizations included the2008 financial crisisand the level of CEO pay.[76]
Some corporations have tried to burnish their ethical image by creating whistle-blower protections, such as anonymity. This varies significantly by jurisdiction, company and sector.
The1997 Asian financial crisisseverely affected the economies ofThailand,Indonesia,South Korea,Malaysia, and thePhilippinesthrough the exit of foreign capital after property assets collapsed. The lack of corporate governance mechanisms in these countries highlighted the weaknesses of the institutions in their economies.[citation needed]
In the 1990s, China established the Shanghai and Shenzhen Stock Exchanges and theChina Securities Regulatory Commission(CSRC) to improve corporate governance. Despite these efforts, state ownership concentration and governance issues such as board independence and insider trading persisted.[77]
In November 2006 theCapital Market Authority (Saudi Arabia)(CMA) issued a corporate governance code in the Arabic language.[78]The Kingdom ofSaudi Arabiahas made considerable progress with respect to the implementation of viable and culturally appropriate governance mechanisms (Al-Hussain & Johnson, 2009).[79][need quotation to verify]
Al-Hussain, A. and Johnson, R. (2009) found a strong relationship between the efficiency of corporate governance structure andSaudibank performance when usingreturn on assetsas a performance measure with one exception—that government and local ownership groups were not significant. However, usingrate of returnas a performance measure revealed a weak positive relationship between the efficiency of corporate governance structure and bank performance.[80]
Key parties involved in corporate governance include stakeholders such as the board of directors, management and shareholders. External stakeholders such as creditors, auditors, customers, suppliers, government agencies, and the community at large also exert influence. The agency view of the corporation posits that the shareholder forgoes decision rights (control) and entrusts the manager to act in the shareholders' best (joint) interests. Partly as a result of this separation between investors and managers, corporate governance mechanisms include a system of controls intended to help align managers' incentives with those of shareholders. Agency concerns (risk) are necessarily lower for a controlling shareholder.[81]
In private for-profit corporations, shareholders elect the board of directors to represent their interests. In the case of nonprofits, stakeholders may have some role in recommending or selecting board members, but typically the board itself decides who will serve on the board as a 'self-perpetuating' board.[82]The degree of leadership that the board has over the organization varies; in practice at large organizations, the executive management, principally the CEO, drives major initiatives with the oversight and approval of the board.[83]
Former Chairman of the Board ofGeneral MotorsJohn G. Smalewrote in 1995: "The board is responsible for the successful perpetuation of the corporation. That responsibility cannot be relegated to management."[84]Aboard of directorsis expected to play a key role in corporate governance. The board has responsibility for: CEO selection and succession; providing feedback to management on the organization's strategy; compensating senior executives; monitoring financial health, performance and risk; and ensuring accountability of the organization to its investors and authorities. Boards typically have several committees (e.g., Compensation, Nominating and Audit) to perform their work.[85]
The OECD Principles of Corporate Governance (2025) describe the responsibilities of the board; some of these are summarized below:[57]
All parties, not just shareholders, to corporate governance have an interest, whether direct or indirect, in the financial performance of the corporation.[86]Directors, workers and management receive salaries, benefits and reputation, while investors expect to receive financial returns. For lenders, it is specified interest payments, while returns to equity investors arise from dividend distributions or capital gains on their stock. Customers are concerned with the certainty of the provision of goods and services of an appropriate quality; suppliers are concerned with compensation for their goods or services, and possible continued trading relationships. These parties provide value to the corporation in the form of financial, physical, human and other forms of capital. Many parties may also be concerned withcorporate social performance.[86]
A key factor in a party's decision to participate in or engage with a corporation is their confidence that the corporation will deliver the party's expected outcomes. When categories of parties (stakeholders) do not have sufficient confidence that a corporation is being controlled and directed in a manner consistent with their desired outcomes, they are less likely to engage with the corporation. When this becomes an endemic system feature, the loss of confidence and participation in markets may affect many other stakeholders, and increases the likelihood of political action. There is substantial interest in how external systems and institutions, including markets, influence corporate governance.[87]
In 2016 the director of theWorld Pensions Council(WPC) said that "institutional asset owners now seem more eager to take to task [the] negligent CEOs" of the companies whose shares they own.[88]
This development is part of a broader trend towards more fully exercised asset ownership—notably from the part of theboards of directors('trustees') of large UK, Dutch, Scandinavian and Canadian pension investors:
No longer 'absentee landlords', [pension fund] trustees have started to exercise more forcefully theirgovernanceprerogatives across the boardrooms of Britain,BeneluxandAmerica: coming together through the establishment of engaged pressure groups […] to 'shift the [whole economic] system towards sustainable investment'.[88]
This could eventually put more pressure on theCEOsofpublicly listed companies, as "more than ever before, many [North American,] UK and European Unionpension trusteesspeak enthusiastically about flexing theirfiduciarymuscles for the UN'sSustainable Development Goals", and otherESG-centricinvestment practices.[89]
In Britain, "The widespread social disenchantment that followed the [2008–2012]great recessionhad an impact" on all stakeholders, includingpension fundboard members and investment managers.[90]
Many of the UK's largest pension funds are thus already active stewards of their assets, engaging withcorporate boardsand speaking up when they think it is necessary.[90]
Control and ownership structure refers to the types and composition of shareholders in a corporation. In some countries such as most of Continental Europe, ownership is not necessarily equivalent to control due to the existence of e.g. dual-class shares, ownership pyramids, voting coalitions, proxy votes and clauses in the articles of association that confer additional voting rights to long-term shareholders.[91]Ownership is typically defined as the ownership of cash flow rights whereas control refers to ownership of control or voting rights.[91]Researchers often "measure" control and ownership structures by using some observable measures of control and ownership concentration or the extent of inside control and ownership. Some features or types of control and ownership structure involvingcorporate groupsinclude pyramids,cross-shareholdings, rings, and webs. German "concerns" (Konzern) are legally recognized corporate groups with complex structures. Japanesekeiretsu(系列) and South Koreanchaebol(which tend to be family-controlled) are corporate groups which consist of complex interlocking business relationships and shareholdings. Cross-shareholding is an essential feature of keiretsu and chaebol groups. Corporate engagement with shareholders and other stakeholders can differ substantially across different control and ownership structures.
In smaller companies founder‐owners often play a pivotal role in shaping corporate value systems that influence companies for years to come. In larger companies that separate ownership and control, managers and boards come to play an influential role.[92]This is in part due to the distinction between employees and shareholders in large firms, where labour forms part of the corporate organization to which it belongs whereas shareholders, creditors and investors act outside of the organization of interest.
Family interests dominate ownership and control structures of some corporations, and it has been suggested that the oversight of family-controlled corporations are superior to corporations "controlled" by institutional investors (or with such diverse share ownership that they are controlled by management). A 2003Business Weekstudy said: "Forget the celebrity CEO. Look beyond Six Sigma and the latest technology fad. One of the biggest strategic advantages a company can have, it turns out, is blood lines."[93]A 2007 study byCredit Suissefound that European companies in which "the founding family or manager retains a stake of more than 10 per cent of the company's capital enjoyed a superior performance over their respective sectoral peers", reportedFinancial Times.[94]Since 1996, this superior performance amounted to 8% per year.[94]
The significance of institutional investors varies substantially across countries. In developed Anglo-American countries (Australia, Canada, New Zealand, U.K., U.S.), institutional investors dominate the market for stocks in larger corporations. While the majority of the shares in the Japanese market are held by financial companies and industrial corporations, these are not institutional investors if their holdings are largely within-group.[citation needed]
The largest funds of invested money, or the largest investment management firms for corporations, are designed to maximize the benefits of diversified investment by investing in a very large number of different corporations with sufficient liquidity. The idea is that this strategy will largely eliminate individual firm financial or other risk. A consequence of this approach is that these investors have relatively little interest in the governance of a particular corporation. It is often assumed that, if institutional investors pressing for change decide it will likely be costly because of "golden handshakes" or the effort required, they will simply sell out of their investment.[citation needed]
Particularly in the United States, proxy access allows shareholders to nominate candidates which appear on theproxy statement, as opposed to restricting that power to the nominating committee. The SEC had attempted a proxy access rule for decades,[95]and the United StatesDodd–Frank Wall Street Reform and Consumer Protection Actspecifically allowed the SEC to rule on this issue, however, the rule was struck down in court.[95]Beginning in 2015, proxy access rules began to spread driven by initiatives from major institutional investors, and as of 2018, 71% of S&P 500 companies had a proxy access rule.[95]
Corporate governance mechanisms and controls are designed to reduce the inefficiencies that arise frommoral hazardandadverse selection. There are both internal monitoring systems and external monitoring systems.[96]Internal monitoring can be done, for example, by one (or a few) large shareholder(s) in the case of privately held companies or a firm belonging to abusiness group. Furthermore, the various board mechanisms provide for internal monitoring. External monitoring of managers' behavior occurs when an independent third party (e.g. theexternal auditor) attests the accuracy of information provided by management to investors. Stock analysts and debt holders may also conduct such external monitoring. An ideal monitoring and control system should regulate both motivation and ability, while providing incentive alignment toward corporate goals and objectives. Care should be taken that incentives are not so strong that some individuals are tempted to cross lines of ethical behavior, for example by manipulating revenue and profit figures to drive the share price of the company up.[72]
Internal corporate governance controls monitor activities and then take corrective actions to accomplish organisational goals. Examples include:
In publicly traded U.S. corporations, boards of directors are largelychosenby the president/CEO, and the president/CEO often takes the chair of the board position for him/herself (which makes it much more difficult for the institutional owners to "fire" him/her). The practice of the CEO also being the chair of the Board is fairly common in large American corporations.[99]
While this practice is common in the U.S., it is relatively rare elsewhere. In the U.K., successive codes of best practice have recommended against duality.[citation needed]
External corporate governance controls the external stakeholders' exercise over the organization. Examples include:
The board of directors has primary responsibility for the corporation's internal and externalfinancial reportingfunctions. Thechief executive officerandchief financial officerare crucial participants, and boards usually have a high degree of reliance on them for the integrity and supply of accounting information. They oversee the internal accounting systems, and are dependent on the corporation'saccountantsandinternal auditors.
Current accounting rules underInternational Accounting Standardsand U.S.GAAPallow managers some choice in determining the methods of measurement and criteria for recognition of various financial reporting elements. The potential exercise of this choice to improve apparent performance increases the information risk for users. Financial reporting fraud, including non-disclosure and deliberate falsification of values also contributes to users' information risk. To reduce this risk and to enhance the perceived integrity of financial reports, corporation financial reports must be audited by an independentexternal auditorwho issues a report that accompanies the financial statements.
One area of concern is whether the auditing firm acts as both the independent auditor and management consultant to the firm they are auditing. This may result in a conflict of interest which places the integrity of financial reports in doubt due to client pressure to appease management. The power of the corporate client to initiate and terminate management consulting services and, more fundamentally, to select and dismiss accounting firms contradicts the concept of an independent auditor. Changes enacted in the United States in the form of theSarbanes–Oxley Act(following numerous corporate scandals, culminating with theEnron scandal) prohibit accounting firms from providing both auditing and management consulting services. Similar provisions are in place under clause 49 of Standard Listing Agreement in India.
A basic comprehension of corporate positioning on the market can be found by looking at which market area or areas a corporation acts in, and which stages of the respective value chain for that market area or areas it encompasses.[100][101]
A corporation may from time to time decide to alter or change its market positioning – through M&A activity for example – however it may lose some or all of its market efficiency in the process, because commercial operations depend to a large extent on its ability to account for a specific positioning on the market.[102]
Well-designed corporate governance policies also support the sustainability and resilience of corporations and in turn, may contribute to the sustainability and resilience of the broader economy. Investors have increasingly expanded their focus on companies' financial performance to include the financial risks and opportunities posed by broader economic, environmental and societal challenges, and companies' resilience to and management of those risks. In some jurisdictions, policy makers also focus on how companies' operations may contribute to addressing such challenges. A sound framework for corporate governance with respect to sustainability matters can help companies recognise and respond to the interests of shareholders and different stakeholders, as well as contribute to their own long-term success. Such a framework should include the disclosure of material sustainability-related information that is reliable, consistent and comparable, including related to climate change. In some cases, jurisdictions may interpret concepts of sustainability-related disclosure and materiality in terms of applicable standards articulating information that a reasonable shareholder needs in order to make investment or voting decisions.
Increasing attention and regulation (as under theSwiss referendum "against corporate rip-offs" of 2013) has been brought to executive pay levels since the2008 financial crisis. Research on the relationship between firm performance andexecutive compensationdoes not identify consistent and significant relationships between executives' remuneration and firm performance. Not all firms experience the same levels of agency conflict, and external and internal monitoring devices may be more effective for some than for others.[76][104]Some researchers have found that the largest CEO performance incentives came from ownership of the firm's shares, while other researchers found that the relationship between share ownership and firm performance was dependent on the level of ownership. The results suggest that increases in ownership above 20% cause management to become more entrenched, and less interested in the welfare of their shareholders.[104]
Some argue that firm performance is positively associated with share option plans and that these plans direct managers' energies and extend their decision horizons toward the long-term, rather than the short-term, performance of the company. However, that point of view came under substantial criticism in the wake of various security scandals, including mutual fund timing episodes and, in particular, the backdating of option grants as documented by University of Iowa academic Erik Lie[105] and reported by James Bandler and Charles Forelle of the Wall Street Journal.[104][106]
Even before the negative influence on public opinion caused by the 2006 backdating scandal, use of options faced various criticisms. A particularly forceful and long running argument concerned the interaction of executive options with corporate stock repurchase programs. Numerous authorities (including U.S. Federal Reserve Board economist Weisbenner) determined options may be employed in concert with stock buybacks in a manner contrary to shareholder interests. These authors argued that, in part, corporate stock buybacks for U.S. Standard & Poor's 500 companies surged to a $500 billion annual rate in late 2006 because of the effect of options.[107]
A combination of accounting changes and governance issues led options to become a less popular means of remuneration as 2006 progressed, and various alternative implementations of buybacks surfaced to challenge the dominance of "open market" cash buybacks as the preferred means of implementing ashare repurchaseplan.
Shareholders elect a board of directors, who in turn hire achief executive officer(CEO) tolead management. The primary responsibility of the board relates to the selection and retention of the CEO. However, in many U.S. corporations the CEO and chairman of the board roles are held by the same person. This creates an inherent conflict of interest between management and the board.
Critics of combined roles argue that the two roles should be separated to avoid the conflict of interest and to make it easier to replace a poorly performing CEO.Warren Buffettwrote in 2014: "In my service on the boards of nineteen public companies, however, I've seen how hard it is to replace a mediocre CEO if that person is also Chairman. (The deed usually gets done, but almost always very late.)"[108]
Advocates argue that empirical studies do not indicate that separation of the roles improves stock market performance and that it should be up to shareholders to determine what corporate governance model is appropriate for the firm.[109]
In 2004, 73.4% of U.S. companies had combined roles; this fell to 57.2% by May 2012. Many U.S. companies with combined roles have appointed a "Lead Director" to improve independence of the board from management. German and UK companies have generally split the roles in nearly 100% of listed companies. Empirical evidence does not indicate one model is superior to the other in terms of performance. However, one study indicated that poorly performing firms tend to remove separate CEOs more frequently than when the CEO/Chair roles are combined.[110]
Certain groups of shareholders may become disinterested in the corporate governance process, potentially creating a power vacuum in corporate power. Insiders, other shareholders, and stakeholders may take advantage of these situations to exercise greater influence and extract rents from the corporation. Shareholder apathy may result from the increasing popularity ofpassive investing,diversification, and investment vehicles such asmutual fundsandETFs.
|
https://en.wikipedia.org/wiki/Corporate_governance
|
Acorporate tax, also calledcorporation taxorcompany tax, is a type ofdirect taxlevied on the income or capital ofcorporationsand other similar legal entities. The tax is usually imposed at the national level, but it may also be imposed at state or local levels in some countries. Corporate taxes may be referred to asincome taxorcapital tax, depending on the nature of the tax.
The purpose of corporate tax is to generate revenue for the government by taxing the profits earned by corporations. The tax rate varies from country to country and is usually calculated as a percentage of the corporation's net income or capital. Corporate tax rates may also differ for domestic and foreign corporations.
Many countries have tax laws that require corporations to pay taxes on their worldwide income, regardless of where the income is earned. However, some countries have territorial tax systems, which only require corporations to pay taxes on income earned within the country's borders.
A country's corporate tax may apply to:
Company income subject to tax is often determined much like taxable income for individual taxpayers. Generally, the tax is imposed on net profits. In some jurisdictions, rules for taxing companies may differ significantly from rules for taxing individuals. Certain corporate acts or types of entities may be exempt from tax.
Theincidenceof corporate taxation is a subject of significant debate among economists and policymakers. Evidence suggests that some portion of the corporate tax falls on owners of capital, workers, and shareholders, but the ultimate incidence of the tax is an unresolved question.[1]
Economists disagree as to how much of the burden of the corporate tax falls on owners, workers, consumers, and landowners, and how the corporate tax affects economic growth andeconomic inequality.[2]More of the burden probably falls on capital in large open economies such as the US.[3]Some studies place the burden more on labor.[4][5][6]According to one study: "Regression analysis shows that a one-percentage-point increase in the marginal state corporate tax rate reduces wages 0.14 to 0.36 percent."[7]There have been other studies.[8][9][10][11][12][13]According to theAdam Smith Institute, "Clausing (2012), Gravelle (2010) and Auerbach (2005), the three best reviews we found, basically conclude that most of the tax falls on capital, not labour."[14]
A 2022 meta-analysis found that the impact of corporate taxes on economic growth was exaggerated and that it could not be ruled out that the impact of corporate taxation on economic growth was zero.[15]
Regardless of who bears the burden, corporation tax has been used as a tool of economic policy, with the main goal beingeconomic stabilization. In times of economic downturn, lowering the corporate tax rates is meant to encourage investment, while in cases of an overheating economy adjusting the corporate tax is used to slow investment.[16]
Another use of the corporate tax is to encourage investments in some specific industries. One such case could be the current tax benefits afforded to the oil and gas industry. An earlier example was the effort to restore heavy industries in the US[16]by enacting the 1981Accelerated Cost Recovery System (ACRS), which offered favorable depreciation allowances that would in turn lower taxes and increase cash flow, thus encouraging investment during the recession.
The agriculture industry, for example, could profit from the reassessment of its farming equipment. Under this new system, automobiles and breeding swine obtained a three-year depreciation value; storage facilities, most equipment and breeding cattle and sheep became five-year property; and land improvements were fifteen-year property. The depreciation defined by ACRS was thus considerably larger than under the previous tax system.[17]
A corporate tax is a tax imposed on the net profit of a corporation that is taxed at the entity level in a particular jurisdiction. Net profit for corporate tax is generally the financial statement net profit with modifications, and may be defined in great detail within each country's tax system. Such taxes may include income or other taxes. The tax systems of most countries impose anincome taxat the entity level on certain type(s) of entities (company orcorporation). The rate of tax varies by jurisdiction. The tax may have an alternative base, such as assets, payroll, or income computed in an alternative manner.
Most countries exempt certain types of corporate events or transactions from income tax; for example, events related to the formation or reorganization of the corporation, which are treated as capital costs. In addition, most systems provide specific rules for taxation of the entity and/or its members upon winding up or dissolution of the entity.
In systems where financing costs are allowed as reductions of the tax base (tax deductions), rules may apply that differentiate between classes of member-provided financing. In such systems, items characterized asinterestmay be deductible, perhaps subject to limitations, while items characterized as dividends are not. Some systems limit deductions based on simple formulas, such as adebt-to-equity ratio, while other systems have more complex rules.
Some systems provide a mechanism whereby groups of related corporations may obtain benefit from losses, credits, or other items of all members within the group. Mechanisms include combined or consolidated returns as well as group relief (direct benefit from items of another member).
Many systems additionally tax shareholders of those entities ondividendsor other distributions by the corporation. A few systems provide for partial integration of entity and member taxation. This may be accomplished by "imputation systems" orfranking credits. In the past, mechanisms have existed for advance payment of member tax by corporations, with such payment offsetting entity level tax.
Many systems (particularly sub-country level systems) impose a tax on particular corporate attributes. Such non-income taxes may be based on capital stock issued or authorized (either by number of shares or value), total equity, net capital, or other measures unique to corporations.
Corporations, like other entities, may be subject towithholding taxobligations upon making certain varieties of payments to others. These obligations are generally not the tax of the corporation, but the system may impose penalties on the corporation or its officers or employees for failing to withhold and pay over such taxes. A company has been defined as a juristic person having an independent and separate existence from its shareholders. Income of the company is computed and assessed separately in the hands of the company. In certain cases, distributions from the company to its shareholders as dividends are taxed as income to the shareholders.
Corporations'property tax,payroll tax,withholding tax,excise tax,customs duties,value added tax, and other common taxes, are generally not referred to as "corporate tax".
Characterization as acorporationfor tax purposes is based on the form of organization, with the exception of United States Federal[18]and most states income taxes, under which an entity may elect to be treated as a corporation and taxed at the entity level or taxed only at the member level.[19]SeeLimited liability company,Partnership taxation,S corporation,Sole proprietorship.
Most jurisdictions, including the United Kingdom[20]and the United States,[19]tax corporations on their income. The United States taxes most types of corporate income at 21%.[19]
The United States taxes corporations under the same framework of tax law as individuals, with differences related to the inherent natures of corporations and individuals or unincorporated entities. For example, individuals are not formed, amalgamated, or acquired, and corporations do not incur medical expenses except by way of compensating individuals.[21]
Most systems tax both domestic andforeign corporations. Often, domestic corporations are taxed on worldwide income while foreign corporations are taxed only on income from sources within the jurisdiction.
The United States defines taxable income for a corporation as allgross income, i.e. sales plus other income minus cost of goods sold and tax exempt income less allowabletax deductions, without the allowance of thestandard deductionapplicable to individuals.[22]
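A minimal Python sketch of one reading of that definition, with invented line items and amounts (none of these figures come from an actual tax form):

sales = 1_000_000
other_income = 50_000
cost_of_goods_sold = 600_000
tax_exempt_income = 10_000          # e.g. certain exempt interest
allowable_deductions = 250_000      # wages, rent, depreciation, etc.

# Gross income: sales plus other income, minus cost of goods sold and tax-exempt income.
gross_income = sales + other_income - cost_of_goods_sold - tax_exempt_income
# Taxable income: gross income less allowable deductions (no standard deduction for corporations).
taxable_income = gross_income - allowable_deductions

print(gross_income)    # 440,000 in this illustration
print(taxable_income)  # 190,000 in this illustration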
The United States' system requires that differences from financial accounting principles in recognizing income and deductions, such as the timing of income or deductions,tax exemptionfor certain income, and the disallowance or limitation of certain tax deductions, be disclosed in considerable detail for non-small corporations on Schedule M-3 to Form 1120.[23]
The United States taxes resident corporations, i.e. those organized within the country, on their worldwide income, and nonresident, foreign corporations only on their income from sources within the country.[24]Hong Kong taxes resident and nonresident corporations only on income from sources within the country.[25]
Corporate tax rates generally are the same for differing types of income, yet the US formerly graduated its tax rate system, with corporations at lower levels of income paying a lower rate of tax; rates varied from 15% on the first $50,000 of income to 35% on incomes over $10,000,000, with phase-outs.[27]
The corporate income tax rates differ between US states and range from 2.5% to 11.5%.[28]
The Canadian system imposes tax at different rates for different types of corporations, allowing lower rates for some smaller corporations.[29]
Tax rates vary by jurisdiction and some countries have sub-country level jurisdictions like provinces, cantons, prefectures, cities, or other that also impose corporate income tax like Canada, Germany, Japan, Switzerland, and the United States.[30]Some jurisdictions impose tax at a different rate on an alternative tax base.
Examples of corporate tax rates for a few English-speaking jurisdictions include:
Corporate tax rates vary widely by country, leading some corporations to shield earnings within offshore subsidiaries or to redomicile within countries with lower tax rates.
In comparing national corporate tax rates one should also take into account the taxes on dividends paid to shareholders. For example, the overall U.S. tax on corporate profits of 35% is less than or similar to that of European countries such as Germany, Ireland, Switzerland and the United Kingdom, which have lower corporate tax rates but higher taxes on dividends paid to shareholders.[37]
Corporate tax rates across theOrganisation for Economic Co-operation and Development(OECD) are shown in the table.
The corporate tax rates in other jurisdictions include:
In October 2021 some 136 countries agreed to enforce a corporate tax rate of at least 15% from 2023 after the talks on a minimum rate led by OECD for a decade.[43]
Most systems that tax corporations also impose income tax on shareholders of corporations when earnings are distributed.[44]Such distribution of earnings is generally referred to as adividend. The tax may be at reduced rates. For example, the United States provides for reduced amounts of tax on dividends received by individuals and by corporations.[45]
The company law of some jurisdictions prevents corporations from distributing amounts to shareholders except as distribution of earnings. Such earnings may be determined under company law principles or tax principles. In such jurisdictions, exceptions are usually provided with respect to distribution of shares of the company, for winding up, and in limited other situations.
Other jurisdictions treat distributions as distributions of earnings taxable to shareholders if earnings are available to be distributed, but do not prohibit distributions in excess of earnings. For example, under the United States system each corporation must maintain a calculation of its earnings and profits (a tax concept similar to retained earnings).[46]A distribution to a shareholder is considered to be from earnings and profits to the extent thereof unless an exception applies.[47]The United States provides reduced tax on dividend income of both corporations and individuals.
Other jurisdictions provide corporations a means of designating, within limits, whether a distribution is a distribution of earnings taxable to the shareholder or areturn of capital.
The following illustrates the dual level of tax concept:
Widget Corp earns 100 of profits before tax in each of years 1 and 2. It distributes all the earnings in year 3, when it has no profits. Jim owns all of Widget Corp. The tax rate in the residence jurisdiction of Jim and Widget Corp is 30%.
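A small Python sketch of the arithmetic in the Widget Corp example, assuming for illustration that the 30% rate applies both to corporate profits and to Jim's dividend income, with no integration relief:

corporate_rate = 0.30
shareholder_rate = 0.30

profits = [100, 100, 0]                                   # years 1, 2, 3
corporate_tax = sum(p * corporate_rate for p in profits)  # 60
after_tax_earnings = sum(profits) - corporate_tax         # 140

# All after-tax earnings are distributed to Jim as a dividend in year 3.
dividend = after_tax_earnings
shareholder_tax = dividend * shareholder_rate             # 42

total_tax = corporate_tax + shareholder_tax               # 102
effective_rate = total_tax / sum(profits)                 # 0.51
print(total_tax, effective_rate)   # 102 of tax on 200 of profits, i.e. 51%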
Many systems provide that certain corporate events are not taxable to corporations or shareholders. Significant restrictions and special rules often apply. The rules related to such transactions are often quite complex.
Most systems treat the formation of a corporation by a controlling corporate shareholder as a nontaxable event. Many systems, including the United States and Canada, extend this tax free treatment to the formation of a corporation by any group of shareholders in control of the corporation.[48]Generally, in tax free formations the tax attributes of assets and liabilities are transferred to the new corporation along with such assets and liabilities.
Example: John and Mary are United States residents who operate a business. They decide to incorporate for business reasons. They transfer the assets of the business to Newco, a newly formed Delaware corporation of which they are the sole shareholders, subject to accrued liabilities of the business in exchange solely for common shares of Newco. Under United States principles, this transfer does not cause tax to John, Mary, or Newco. If on the other hand Newco also assumes a bank loan in excess of the basis of the assets transferred less the accrued liabilities, John and Mary will recognize taxable gain for such excess.[49]
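The gain rule in the last sentence can be sketched roughly as follows; the figures and the simple max-of-zero formula are illustrative assumptions only, not a statement of the actual statutory computation:

# Illustrative sketch: gain is recognized to the extent liabilities assumed by the
# new corporation exceed the basis of the assets transferred (numbers invented).
asset_basis = 80_000          # John and Mary's basis in the transferred assets
accrued_liabilities = 20_000  # ordinary business liabilities assumed by Newco
bank_loan_assumed = 75_000    # additional bank loan assumed by Newco

excess = bank_loan_assumed - (asset_basis - accrued_liabilities)
recognized_gain = max(0, excess)
print(recognized_gain)        # 15,000 in this illustration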
Corporations may merge or acquire other corporations in a manner a particular tax system treats as nontaxable to either of the corporations and/or to their shareholders. Generally, significant restrictions apply if tax free treatment is to be obtained.[50]For example, Bigco acquires all of the shares of Smallco from Smallco shareholders in exchange solely for Bigco shares. This acquisition is not taxable to Smallco or its shareholders under U.S. or Canadian tax law if certain requirements are met, even if Smallco is then liquidated into or merged or amalgamated with Bigco.
In addition, corporations may change key aspects of their legal identity, capitalization, or structure in a tax free manner under most systems. Examples of reorganizations that may be tax free include mergers, amalgamations, liquidations of subsidiaries, share for share exchanges, exchanges of shares for assets, changes in form or place of organization, and recapitalizations.[51]
Most jurisdictions allow atax deductionfor interest expense incurred by a corporation in carrying out its trading activities. Where such interest is paid to related parties, such deduction may be limited. Without such limitation, owners could structure financing of the corporation in a manner that would provide for a tax deduction for much of the profits, potentially without changing the tax on shareholders. For example, assume a corporation earns profits of 100 before interest expense and would normally distribute 50 to shareholders. If the corporation is structured so that deductible interest of 50 is payable to the shareholders, it will cut its tax to half the amount due if it merely paid a dividend.
A common form of limitation is to limit the deduction for interest paid to related parties to interest charged at arm's length rates on debt not exceeding a certain portion of the equity of the paying corporation. For example, interest paid on related party debt in excess of three times equity may not be deductible in computing taxable income.
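A hedged Python sketch of the two ideas above: how shareholder-provided debt can shift profit out of the corporate tax base, and how a simple limit tied to equity (here, the three-times-equity figure from the example) caps the deductible interest. All rates and amounts are invented for illustration:

# Sketch 1: replacing part of a dividend with deductible interest.
corporate_rate = 0.30
profit_before_interest = 100

tax_if_dividend_only = profit_before_interest * corporate_rate          # 30
tax_if_interest_of_50 = (profit_before_interest - 50) * corporate_rate  # 15
print(tax_if_dividend_only, tax_if_interest_of_50)  # entity-level tax is halved

# Sketch 2: a simple thin-capitalization limit - interest on related-party
# debt above three times equity is treated as non-deductible.
equity = 200
related_party_debt = 900
interest_rate = 0.05

allowed_debt = min(related_party_debt, 3 * equity)                         # 600
deductible_interest = allowed_debt * interest_rate                         # 30.0
disallowed_interest = (related_party_debt - allowed_debt) * interest_rate  # 15.0
print(deductible_interest, disallowed_interest)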
The United States, United Kingdom, and French tax systems apply a more complex set of tests to limit deductions. Under theU.S. system, related party interest expense in excess of 50% of cash flow is generally not currently deductible, with the excess potentially deductible in future years.[52]
The classification of instruments as debt on which interest is deductible or as equity with respect to which distributions are not deductible can be complex in some systems.[53]TheInternal Revenue Servicehad proposed complex regulations in this area (see TD 7747, 1981-1 CB 141), which were soon withdrawn (TD 7920, 1983-2 CB 69).[citation needed]
Most jurisdictions tax foreign corporations differently from domestic corporations.[54]No international laws limit the ability of a country to tax its nationals and residents (individuals and entities). However, treaties and practicality impose limits on taxation of those outside its borders, even on income from sources within the country.
Most jurisdictions tax foreign corporations on business income within the jurisdiction when earned through a branch orpermanent establishmentin the jurisdiction. This tax may be imposed at the same rate as the tax on business income of a resident corporation or at a different rate.[55]
Upon payment ofdividends, corporations are generally subject towithholding taxonly by their country of incorporation. Many countries impose a branch profits tax on foreign corporations to prevent the advantage the absence of dividend withholding tax would otherwise provide to foreign corporations. This tax may be imposed at the time profits are earned by the branch or at the time they are remitted or deemed remitted outside the country.[56]
Branches of foreign corporations may not be entitled to all of the same deductions as domestic corporations. Some jurisdictions do not recognize inter-branch payments as actual payments, and income or deductions arising from such inter-branch payments are disregarded.[57]Some jurisdictions impose express limits on tax deductions of branches. Commonly limited deductions include management fees and interest.
Nathan M. Jensen argues that low corporate tax rates are a minor determinant for a multinational company when choosing where to set up its headquarters.[58]
Most jurisdictions allow interperiod allocation or deduction of losses in some manner for corporations, even where such deduction is not allowed for individuals. A few jurisdictions allow losses (usually defined as negative taxable income) to be deducted by revising or amending prior year taxable income.[59]Most jurisdictions allow such deductions only in subsequent periods. Some jurisdictions impose time limitations as to when loss deductions may be utilized.
Several jurisdictions provide a mechanism whereby losses ortax creditsof one corporation may be used by another corporation where both corporations are commonly controlled (together, a group). In the United States and Netherlands, among others, this is accomplished by filing a single tax return including the income and loss of each group member. This is referred to as a consolidated return in the United States and as a fiscal unity in the Netherlands. In the United Kingdom, this is accomplished directly on a pairwise basis called group relief. Losses of one group member company may be "surrendered" to another group member company, and the latter company may deduct the loss against profits.
The United States has extensive regulations dealing with consolidated returns.[60]One such rule requires matching of income and deductions on intercompany transactions within the group by use of "deferred intercompany transaction" rules.
In addition, a few systems provide a tax exemption for dividend income received by corporations. The Netherlands system provides a "participation exemption" from taxation for corporations owning more than 25% of the dividend-paying corporation.
A key issue in corporate tax is the setting of prices charged by related parties for goods, services or the use of property. Many jurisdictions have guidelines on transfer pricing which allow tax authorities to adjust transfer prices used. Such adjustments may apply in both an international and a domestic context.
Most income tax systems levy tax on the corporation and, upon distribution of earnings (dividends), on the shareholder. This results in a dual level of tax. Most systems require that income tax be withheld on distribution of dividends to foreign shareholders, and some also require withholding of tax on distributions to domestic shareholders. The rate of suchwithholding taxmay be reduced for a shareholder under atax treaty.
Some systems tax some or all dividend income at lower rates than other income. The United States has historically provided adividends received deductionto corporations with respect to dividends from other corporations in which the recipient owns more than 10% of the shares. For tax years 2004–2010, the United States also imposed a reduced rate of taxation on dividends received by individuals.[61]
Some systems currently attempt or in the past have attempted tointegrate taxationof the corporation with taxation of shareholders to mitigate the dual level of taxation. As a current example, Australia provides for a "franking credit" as a benefit to shareholders. When an Australian company pays a dividend to a domestic shareholder, it reports the dividend as well as a notional tax credit amount. The shareholder utilizes this notional credit to offset shareholder level income tax.[citation needed]
A previous system was utilised in the United Kingdom, called theadvance corporation tax(ACT). When a company paid a dividend, it was required to pay an amount of ACT, which it then used to offset its own taxes. The ACT was included in income by the shareholder resident in the United Kingdom or certain treaty countries, and treated as a payment of tax by the shareholder. To the extent that deemed tax payment exceeded taxes otherwise due, it was refundable to the shareholder.
Many jurisdictions incorporate some sort of alternative tax computation. These computations may be based on assets, capital, wages, or some alternative measure of taxable income. Often the alternative tax functions as a minimum tax.
United States federal income taxincorporates analternative minimum tax. This tax is computed at a lower tax rate (20% for corporations), and imposed based on a modified version of taxable income. Modifications include longer depreciation lives for assets underMACRS, adjustments related to costs of developing natural resources, and an addback of certain tax exempt interest. The U.S. state of Michigan previously taxed businesses on an alternative base that did not allow compensation of employees as a tax deduction and allowed full deduction of the cost of production assets upon acquisition.
Some jurisdictions, such as Swiss cantons and certain states within the United States, impose taxes based on capital. These may be based on total equity per audited financial statements,[62]a computed amount of assets less liabilities[63]or quantity of shares outstanding.[64]In some jurisdictions, capital based taxes are imposed in addition to the income tax.[63]In other jurisdictions, the capital taxes function as alternative taxes.
Mexico imposes an alternative tax on corporations, the IETU.[citation needed]The tax rate is lower than the regular rate, and there are adjustments for salaries and wages, interest and royalties, and depreciable assets.
Most systems require that corporations file an annual income tax return.[65]Some systems (such as theCanadian,United KingdomandUnited Statessystems) require that taxpayers self assess tax on the tax return.[66]Other systems provide that the government must make an assessment for tax to be due.[citation needed]Some systems require certification of tax returns in some manner by accountants licensed to practice in the jurisdiction, often the company's auditors.[67]
Tax returns can be fairly simple or quite complex. The systems requiring simple returns often base taxable income on financial statement profits with few adjustments, and may require that audited financial statements be attached to the return.[68]Returns for such systems generally require that the relevant financial statements be attached to a simple adjustment schedule. By contrast, United States corporate tax returns require both computation of taxable income from components thereof and reconciliation of taxable income to financial statement income.
Many systems require forms or schedules supporting particular items on the main form. Some of these schedules may be incorporated into the main form. For example, the Canadian corporate return,Form T-2, an eight-page form, incorporates some detail schedules but has nearly 50 additional schedules that may be required.
Some systems have different returns for different types of corporations or corporations engaged in specialized businesses. The United States has 13 variations on the basic Form 1120[69]forS corporations, insurance companies,Domestic international sales corporations, foreign corporations, and other entities. The structure of the forms and embedded schedules vary by type of form.
Preparation of non-simple corporate tax returns can be time consuming. For example, the U.S.Internal Revenue Servicestates in theinstructions for Form 1120that the average time needed to complete the form is over 56 hours, not including record keeping time and required attachments.
Tax return due dates vary by jurisdiction, fiscal or tax year, and type of entity.[70]In self-assessment systems, payment of taxes is generally due no later than the normal due date, though advance tax payments may be required.[71]Canadian corporations must pay estimated taxes monthly.[72]In each case, final payment is due with the corporation tax return.
|
https://en.wikipedia.org/wiki/Corporate_tax
|
Financial planning and analysis(FP&A), in accounting and business, refers to the various integratedplanning,analysis, andmodelingactivities aimedat supportingfinancial decisioningand managementin the wider organization.[1][2][3][4][5]SeeFinancial analyst § Financial planning and analysisfor outline, and aside articles for further detail.
In larger companies, "FP&A" will run as a dedicated area or team, under an "FP&A Manager" reporting to theCFO.[6]
FP&A is distinct fromfinancial managementand (management)accountingin that it is oriented, additionally, towardsbusiness performance management, and, further, encompasses bothqualitativeandquantitative analysis.
This positioning allows management—in partnershipwith FP&A—to preemptively address issues relating, e.g., tocustomersandoperations, as well as themore traditional business-finance problems.
Relatedly, althoughBudgetingandForecastingare typically done at specific times in the year—and correspondingly cover specific time periods—FP&A, by contrast, has a wider brief regarding both horizon and content.
"FP&A Analysts" thus play an important role in every (major) decision by the company—ranging in scope from changes in headcount tomergers and acquisitions.[1]
Over the years, FP&A's role has evolved, facilitated by technological advances.[4]During its early years, 1960s to 1980s, FP&A focused on more traditional forecasting andfinancial analysis; relying onspreadsheets, mainlyExcel, but in earlier years,Lotus 1-2-3(andVisiCalc).
From the 1980s to the early 2000s, the scope shifted torisk,scenario, andsensitivity analysis; utilizingbusiness intelligenceandfinancial modelingsoftware, such asCognos,Hyperion, andBusinessObjects.
From the 2000s to the present, the emphasis is increasingly onpredictive analytics; tools includecloud-based platformsandanalytics packages, e.g.Amazon Web ServicesandMicrosoft Azure, andSAS,KNIME,[7]R, andPython.[8]More recently,[9]specialized software— which increasingly[10]employsAI/ML— is availablecommercially. Products here are fromJedox,Anaplan,Workday,Hyperion,Wolters Kluwer,Datarails,Workivaand others.
|
https://en.wikipedia.org/wiki/FP%26A
|
Financial accountingis a branch ofaccountingconcerned with the summary, analysis and reporting of financial transactions related to a business.[1]This involves the preparation offinancial statementsavailable for public use.Stockholders,suppliers,banks,employees,government agencies,business owners, and otherstakeholdersare examples of people interested in receiving such information for decision making purposes.
Financial accountancy is governed by both local and international accounting standards.Generally Accepted Accounting Principles(GAAP) is the standard framework of guidelines for financial accounting used in any given jurisdiction. It includes the standards, conventions and rules that accountants follow in recording and summarizing and in the preparation of financial statements.
On the other hand,International Financial Reporting Standards(IFRS) is a set of accounting standards stating how particular types of transactions and other events should be reported in financial statements. IFRS are issued by theInternational Accounting Standards Board(IASB).[2]With IFRS becoming more widespread on the international scene,consistencyin financial reporting has become more prevalent between global organizations.
While financial accounting is used to prepare accounting information for people outside the organization or not involved in the day-to-day running of the company,managerial accountingprovides accounting information to help managers make decisions to manage the business.
Financial accounting and financial reporting are often used as synonyms.
1. According to International Financial Reporting Standards: the objective of financial reporting is:
To provide financial information that is useful to existing and potential investors, lenders and other creditors in making decisions about providing resources to the reporting entity.[3]
2. According to the European Accounting Association:
Capital maintenance is a competing objective of financial reporting.[4]
Financial accounting is the preparation of financial statements that can be consumed by the public and the relevant stakeholders. Financial information is useful to users if certain qualitative characteristics are present. When producing financial statements, they must comply with the following:Fundamental Qualitative Characteristics:
Enhancing Qualitative Characteristics:
The statement of cash flows considers the inputs and outputs in concrete cash within a stated period. The general template of a cash flow statement is as follows:Cash Inflow - Cash Outflow + Opening Balance = Closing Balance
Example 1: at the beginning of September, Ellen started out with $5 in her bank account. During that same month, Ellen borrowed $20 from Tom. At the end of the month, Ellen bought a pair of shoes for $7. Ellen's cash flow statement for the month of September looks like this:
Example 2: at the beginning of June, WikiTables, a company that buys and resells tables, sold 2 tables. They'd originally bought the tables for $25 each, and sold them at a price of $50 per table. The first table was paid for in cash; the second was sold on credit terms. WikiTables' cash flow statement for the month of June looks like this:
Important: the cash flow statement only considers the exchange ofactualcash, and ignores what the person in question owes or is owed.
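A minimal Python sketch of the template applied to Example 1 (Ellen's September), counting only actual cash movements as the note above emphasises:

# Cash flow sketch: opening balance + cash inflows - cash outflows = closing balance.
opening_balance = 5
cash_inflows = [20]    # loan received in cash from Tom
cash_outflows = [7]    # shoes bought with cash

closing_balance = opening_balance + sum(cash_inflows) - sum(cash_outflows)
print(closing_balance)  # 18 - the $20 Ellen still owes Tom is ignored here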
The statement of profit or income statement represents the changes in value of a company'saccountsover a set period (most commonly onefiscal year), and may compare the changes to changes in the same accounts over the previous period. All changes are summarized on the "bottom line" asnet income, often reported as "net loss" when income is less than zero.
The net profit or loss is determined by:
Sales (revenue)
–cost of goods sold
– selling, general, administrative expenses (SGA)
–depreciation/ amortization
= earnings before interest and taxes (EBIT)
– interest and tax expenses
= profit/loss
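A small Python sketch of the determination above, using invented amounts:

# Net profit determination sketch (amounts are illustrative).
sales = 500_000
cost_of_goods_sold = 300_000
sga_expenses = 80_000
depreciation_amortization = 20_000
interest_expense = 10_000
tax_expense = 18_000

ebit = sales - cost_of_goods_sold - sga_expenses - depreciation_amortization
profit_or_loss = ebit - interest_expense - tax_expense

print(ebit)            # 100,000
print(profit_or_loss)  # 72,000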
The balance sheet is the financial statement showing a firm'sassets,liabilitiesandequity(capital) at a set point in time, usually the end of the fiscal year reported on the accompanying income statement. The total assets always equal the total combined liabilities and equity. This statement best demonstrates the basic accounting equation: Assets = Liabilities + Equity.
The statement can be used to help show the financial position of a company because liability accounts are external claims on the firm's assets while equity accounts are internal claims on the firm's assets.
Accounting standards often set out a general format that companies are expected to follow when presenting their balance sheets.International Financial Reporting Standards(IFRS) normally require that companies reportcurrentassets and liabilities separately from non-current amounts.[5][6]A GAAP-compliant balance sheet must list assets and liabilities based on decreasing liquidity, from most liquid to least liquid. As a result, current assets/liabilities are listed first followed by non-current assets/liabilities. However, an IFRS-compliant balance sheet must list assets/liabilities based on increasing liquidity, from least liquid to most liquid. As a result, non-current assets/liabilities are listed first followed by current assets/liabilities.[7]
Current assets are the most liquid assets of a firm, which are expected to be realized within a 12-month period. Current assets include:
Non-current assets includefixedor long-term assets andintangible assets:
Liabilities include:
Owner's equity, sometimes referred to as net assets, is represented differently depending on the type of business ownership. Business ownership can be in the form of a sole proprietorship, partnership, or acorporation. For a corporation, the owner's equity portion usually shows common stock, and retained earnings (earnings kept in the company). Retained earnings come from the retained earnings statement, prepared prior to the balance sheet.[8]
This statement is additional to the three main statements described above. It shows how the distribution of income and transfer of dividends affects the wealth of shareholders in the company. The concept of retained earnings means profits of previous years that are accumulated till current period. Basic proforma for this statement is as follows:
Retained earnings at the beginning of period
+ Net Income for the period
- Dividends
= Retained earnings at the end of period.[9]
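The pro forma above reduces to a single line of arithmetic; a Python sketch with invented figures:

# Statement of retained earnings sketch (figures are illustrative).
opening_retained_earnings = 120_000
net_income_for_period = 45_000
dividends_declared = 15_000

closing_retained_earnings = (opening_retained_earnings
                             + net_income_for_period
                             - dividends_declared)
print(closing_retained_earnings)  # 150,000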
One of the basic principles in accounting is "The Measuring Unit principle":
"The unit of measure in accounting shall be the base money unit of the most relevant currency. This principle also assumes the unit of measure is stable; that is, changes in its general purchasing power are not considered sufficiently important to require adjustments to the basic financial statements."[10]
Historical Cost Accounting, i.e., financial capital maintenance in nominal monetary units, is based on the stable measuring unit assumption under which accountants simply assume that money, the monetary unit of measure, is perfectly stable in real value for the purpose of measuring (1) monetary items not inflation-indexed daily in terms of the Daily CPI and (2) constant real value non-monetary items not updated daily in terms of the Daily CPI during low and high inflation and deflation.
The stable monetary unit assumption is not applied during hyperinflation. IFRS requires entities to implement capital maintenance in units of constant purchasing power in terms of IAS 29 Financial Reporting in Hyperinflationary Economies.
Financial accountants produce financial statements based on the accounting standards in a given jurisdiction. These standards may be theGenerally Accepted Accounting Principlesof a respective country, which are typically issued by a national standard setter, orInternational Financial Reporting Standards(IFRS), which are issued by theInternational Accounting Standards Board(IASB).
Financial accounting serves the following purposes:
Theaccounting equation(Assets=Liabilities+Owners' Equity) and financial statements are the main topics of financial accounting.
Thetrial balance, which is usually prepared using thedouble-entry accounting system, forms the basis for preparing the financial statements. All the figures in the trial balance are rearranged to prepare aprofit & loss statementandbalance sheet. Accounting standards determine the format for these accounts (SSAP, FRS,IFRS). Financial statements display the income and expenditure for the company and a summary of the assets, liabilities, and shareholders' or owners' equity of the company on the date to which the accounts were prepared.
Asset,expense, anddividendaccounts have normal debit balances (i.e., debiting these types of accounts increases them).
Liability,revenue, andequityaccounts have normal credit balances (i.e., crediting these types of accounts increases them).
When an entry is made on the same side as an account's normal balance, the account increases; when it is made on the opposite side, the account decreases. This works much like signs in math: amounts on the same side add together, while an amount on the opposite side subtracts.
However, there are instances of accounts, known as contra-accounts, which have a normal balance opposite that listed above. Examples include accumulated depreciation (a contra-asset account) and treasury stock (a contra-equity account).
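A hedged Python sketch of the normal-balance rules described above (contra-accounts aside); the account names and figures are arbitrary examples:

# Normal-balance sketch: debits increase asset/expense/dividend accounts,
# credits increase liability/revenue/equity accounts.
NORMAL_DEBIT = {"asset", "expense", "dividend"}

def post(balance, account_type, debit=0.0, credit=0.0):
    """Apply a debit and/or credit to an account and return the new balance."""
    if account_type in NORMAL_DEBIT:
        return balance + debit - credit
    return balance + credit - debit   # liability, revenue, equity

cash = post(1_000, "asset", debit=500)       # 1,500 - a debit increases an asset
revenue = post(0, "revenue", credit=500)     # 500   - a credit increases revenue
print(cash, revenue)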
Many professional accountancy qualifications cover the field of financial accountancy, includingCertified Public Accountant CPA,Chartered Accountant(CA or other national designations,American Institute of Certified Public AccountantsAICPAandChartered Certified Accountant(ACCA).
|
https://en.wikipedia.org/wiki/Financial_accounting
|
Financial analysis(also known asfinancial statement analysis,accounting analysis, oranalysis of finance) refers to an assessment of the viability, stability, and profitability of abusiness, sub-business,projector investment.
It is performed by professionals who prepare reports usingratiosand other techniques, that make use of information taken fromfinancial statementsand other reports. These reports are usually presented to top management as one of their bases in making business decisions.
Financial analysis may determine if a business will:
Financial analysts often assess the following elements of a firm:
Both 2 and 3 are based on the company'sbalance sheet, which indicates the financial condition of a business as of a given point in time.
Financial analysts often comparefinancial ratios(ofsolvency,profitability, growth, etc.):
Comparing financial ratios is merely one way of conducting financial analysis.
Financial analysts can also use percentage analysis which involves reducing a series of figures as a percentage of some base amount.[1]For example, a group of items can be expressed as a percentage of net income. Expressing the proportionate change in the same figure over a given time period as a percentage is known as horizontal analysis.[2]
Vertical or common-size analysis reduces all items on a statement to a "common size" as a percentage of some base value which assists in comparability with other companies of different sizes.[3]As a result, all Income Statement items are divided by Sales, and all Balance Sheet items are divided by Total Assets.[4]
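A minimal Python sketch of common-size (vertical) analysis as described above, dividing income statement items by sales; the line items and numbers are invented:

# Vertical (common-size) analysis sketch: express each income statement item
# as a percentage of sales.
income_statement = {
    "sales": 800_000,
    "cost_of_goods_sold": 480_000,
    "operating_expenses": 200_000,
    "net_income": 120_000,
}

base = income_statement["sales"]
common_size = {item: value / base for item, value in income_statement.items()}

for item, pct in common_size.items():
    print(f"{item:<20} {pct:6.1%}")   # e.g. cost_of_goods_sold  60.0%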
Another method is comparative analysis. This provides a better way to determine trends. Comparative analysis presents the same information for two or more time periods and is presented side-by-side to allow for easy analysis.[5]
Financial ratiosface several theoretical challenges:
|
https://en.wikipedia.org/wiki/Financial_analysis
|
Financial managementis thebusiness functionconcerned with profitability, expenses, cash and credit. These are often grouped together under the rubric of maximizing thevalue of the firmforstockholders. The discipline is then tasked with the "efficient acquisition and deployment" of bothshort-andlong-term financial resources, to ensure the objectives of the enterprise are achieved.[1]
Financial managers[2](FM) are specialized professionals directly reporting tosenior management, often thefinancial director(FD); the function is seen as'staff', and not'line'.
Financial management is generally concerned with short termworking capital management, focusing oncurrent assetsandcurrent liabilities, andmanaging fluctuationsin foreign currency and product cycles, often throughhedging.
The function also entails the efficient and effective day-to-day management of funds, and thus overlapstreasury management.
It is also involved with long termstrategic financial management, focused on i.a.capital structuremanagement, including capital raising,capital budgeting(capital allocation between business units or products), anddividend policy;
these latter, in large corporates, being more the domain of "corporate finance."
Specific tasks:
Two areas of finance directly overlap financial management:
(i)Managerial financeis the (academic) branch of finance concerned with the managerial application of financial techniques;
(ii)Corporate financeis mainly concerned with the longer term capital budgeting, and typically is more relevant to large corporations.
Investment management, also related, is the professionalasset managementof varioussecurities(shares, bonds and other securities/assets).
In the context of financial management, the function sits with treasury; usually the management of the various short-term financiallegal instruments(contractual duties, obligations, or rights) appropriate to the company'scash-andliquidity managementrequirements. SeeTreasury management § Functions.
The term "financial management" refers to a company's financial strategy, whilepersonal financeorfinancial life managementrefers to an individual's management strategy. Afinancial planner, or personal financial planner, is a professional who prepares financial plans here.
Financial management systems are thesoftware and technologyused by organizations to connect, store, and report on assets, income, and expenses.[4]SeeFinancial modeling § AccountingandFinancial planning and analysisfor discussion.
The discipline relies on a range of products, fromspreadsheets(invariably as a starting point, and frequently in total[5]) through commercialEPMandBItools, oftenBusinessObjects(SAP),OBI EE(Oracle),Cognos(IBM), andPower BI(Microsoft).
SpecialisedFP&Aproducts are provided byJedox,Anaplan,Workday,Hyperion,Wolters Kluwer,Datarails, andWorkiva.
|
https://en.wikipedia.org/wiki/Financial_management
|
In general usage, afinancial planis a comprehensive evaluation of an individual's current pay and future financial state by using current known variables to predict future income, asset values and withdrawal plans.[1]This often includes abudgetwhich organizes an individual's finances and sometimes includes a series of steps or specific goals for spending andsavingin the future. This plan allocates future income to various types ofexpenses, such as rent or utilities, and also reserves some income for short-term and long-term savings. A financial plan is sometimes referred to as aninvestmentplan, but inpersonal finance, a financial plan can focus on other specific areas such as risk management, estates, college, or retirement.
In business, "financial forecast" or "financial plan" can also refer to a projection across a time horizon, typically an annual one, of income and expenses for acompany, division, or department;[2]seeBudget § Corporate budget.
More specifically, a financial plan can also refer to the three primaryfinancial statements(balance sheet,income statement, andcash flow statement) created within abusiness plan.
A financial plan can also be anestimation of cash needsand a decision on how to raise the cash, such as through borrowing or issuing additional shares in a company.[3]
Note that the financial plan may then containprospective financial statements, which are similar, but different, to those of abudget.
Financial plans are the entire financial accounting overview of a company. Complete financial plans contain all periods and transaction types. It's a combination of the financial statements which independently only reflect a past, present, or future state of the company. Financial plans are the collection of the historical, present, and future financial statements; for example, a (historical & present) costly expense from an operational issue is normally presented prior to the issuance of the prospective financial statements which propose a solution to said operational issue.
The confusion surrounding the term "financial plans" might stem from the fact that there are many types of financial statement reports. Individually, financial statements show either the past, present, or future financial results. More specifically, financial statements also only reflect the specific categories which are relevant. For instance, investing activities are not adequately displayed in a balance sheet. A financial plan is a combination of the individual financial statements and reflects all categories of transactions (operations & expenses & investing) over time.[4]
Some period-specific financial statement examples includepro formastatements (historical period) andprospective statements(current and future period). Compilations are a type of service which involves "presenting, in the form of financial statements, information that is the representation of management".[5]There are two types ofprospective financial statements:financial forecastsandfinancial projections, both of which relate to the current or future time period. Prospective financial statements may reflect the current or future financial status of a company using three main reports: the cash flow statement, the income statement, and the balance sheet. "Prospective financial statementsare of two types-forecastsandprojections. Forecasts are based on management's expected financial position, results of operations, and cash flows."[6]Pro forma statements take previously recorded results, the historical financial data, and present a "what-if": what if a transaction had happened sooner.[7]
While the common usage of the term "financial plan" often refers to a formal and defined series of steps or goals, there is some technical confusion about what the term "financial plan" actually means in the industry. For example, one of the industry's leading professional organizations, the Certified Financial Planner Board of Standards, lacks any definition for the term "financial plan" in itsStandards of Professional Conductpublication. This publication outlines the professional financial planner's job, and explains the process of financial planning, but the term "financial plan" never appears in the publication's text.[8]
The accounting and finance industries have distinct responsibilities and roles. When the products of their work are combined, it produces a complete picture, a financial plan. A financial analyst studies the data and facts (regulations/standards), which are processed, recorded, and presented by accountants. Normally, finance personnel study the data results - meaning what has happened or what might happen - and propose a solution to an inefficiency. Investors and financial institutions must see both the issue and the solution to make an informed decision. Accountants and financial planners are both involved with presenting issues and resolving inefficiencies, so together, the results and explanation are provided in afinancial plan.
Textbooks used in universities offering financial planning-related courses also generally do not define the term 'financial plan'. For example, Sid Mittra, Anandi P. Sahu, and Robert A Crane, authors ofPracticing Financial Planning for Professionals[9]do not define what a financial plan is, but merely defer to the Certified Financial Planner Board of Standards' definition of 'financial planning'.
When drafting a financial plan, the company should establish the planning horizon,[10]which is the time period of the plan, whether it be on a short-term (usually 12 months) or long-term (two to five years) basis. Also, the individual projects and investment proposals of each operational unit within the company should be totaled and treated as one large project. This process is called aggregation.[11]
|
https://en.wikipedia.org/wiki/Financial_planning
|
Afinancial ratiooraccounting ratiostates the relative magnitude of two selected numerical values taken from an enterprise'sfinancial statements. Often used inaccounting, there are many standardratiosused to try to evaluate the overall financial condition of a corporation or other organization. Financial ratios may be used by managers within a firm, by current and potentialshareholders(owners) of a firm, and by a firm'screditors.Financial analystsuse financial ratios to compare the strengths and weaknesses in various companies.[1]If shares in a company are publicly listed, the market price of the shares is used in certain financial ratios.
Ratios can be expressed as adecimal value, such as 0.10, or given as an equivalentpercentagevalue, such as 10%. Some ratios are usually quoted as percentages, especially ratios that are usually or always less than 1, such asearnings yield, while others are usually quoted as decimal numbers, especially ratios that are usually more than 1, such asP/E ratio; these latter are also calledmultiples. Given any ratio, one can take itsreciprocal; if the ratio was above 1, the reciprocal will be below 1, and conversely. The reciprocal expresses the same information, but may be more understandable: for instance, the earnings yield can be compared with bond yields, while the P/E ratio cannot be: for example, a P/E ratio of 20 corresponds to an earnings yield of 5%.
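The reciprocal relationship in the last sentence is simple to verify; a tiny Python sketch:

# Reciprocal relationship between the P/E multiple and the earnings yield.
pe_ratio = 20
earnings_yield = 1 / pe_ratio
print(f"P/E of {pe_ratio} -> earnings yield of {earnings_yield:.0%}")  # 5%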
Values used in calculating financial ratios are taken from thebalance sheet,income statement,statement of cash flowsor (sometimes) thestatement of changes in equity. These comprise the firm's "accounting statements" orfinancial statements. The statements' data is based on the accounting method and accounting standards used by the organisation.
Financial ratios allow for comparisons
Ratios generally are not useful unless they arebenchmarkedagainst something else, like past performance or another company. Thus, the ratios of firms in different industries, which face different risks, capital requirements, and competition are usually hard to compare.
Financial ratios may not be directly comparable between companies that use differentaccounting methodsor follow variousstandard accounting practices. Mostpublic companiesare required by law to usegenerally accepted accounting principlesfor their home countries, butprivate companies,partnershipsandsole proprietorshipsmay elect to not use accrual basis accounting. Large multi-national corporations may useInternational Financial Reporting Standardsto produce their financial statements, or they may use the generally accepted accounting principles of their home country.
There is no international standard for calculating the summary data presented in all financial statements, and the terminology is not always consistent between companies, industries, countries and time periods.
An important feature of ratio analysis is interpreting ratio values. A meaningful basis for comparison is needed to answer questions such as "Is it too high or too low?" or "Is it good or bad?". Two types of ratio comparisons can be made, cross-sectional and time-series.[7]
Cross-sectional analysis compares the financial ratios of different companies at the same point in time. It allows companies to benchmark from other competitors by comparing their ratio values to similar companies in the industry.
Time-series analysis evaluates a company's performance over time. It compares its current performance against past or historical performance. This can help assess the company's progress by looking into developing trends or year-to-year changes.
Various abbreviations may be used in financial statements, especially financial statements summarized on theInternet.Salesreported by a firm are usuallynet sales, which deduct returns, allowances, and early payment discounts from the charge on aninvoice.Net incomeis always the amountaftertaxes, depreciation, amortization, and interest, unless otherwise stated. Otherwise, the amount would be EBIT, or EBITDA (see below).
Companies that are primarily involved in providing services with labour do not generally report "Sales" based on hours. These companies tend to report "revenue" based on the monetary value of income that the services provide.
Note that Shareholders' Equity and Owner's Equity arenotthe same thing: Shareholders' Equity represents the total number of shares in the company multiplied by each share's book value, while Owner's Equity represents the total number of shares that an individual shareholder owns (usually the owner withcontrolling interest), multiplied by each share's book value. It is important to make this distinction when calculating ratios.
(Note:These are not ratios, but values in currency.)
Profitability ratios measure the company's use of its assets and control of its expenses to generate an acceptable rate of return.
Liquidityratios measure the availability of cash to pay debt.
Efficiency ratios measure the effectiveness of the firm's use of resources.
Debt ratios quantify the firm's ability to repay long-term debt. Debt ratios measure the level of borrowed funds used by the firm to finance its activities.
Market ratios measure investor response to owning a company's stock and also the cost of issuing stock.
These are concerned with the return on investment for shareholders, and with the relationship between return and the value of an investment in company's shares.
In addition to assisting management and owners in diagnosing the financial health of their company, ratios can also help managers make decisions about investments or projects that the company is considering to take, such as acquisitions, or expansion.
Many formal methods are used in capital budgeting, including the techniques such as
|
https://en.wikipedia.org/wiki/Financial_ratio
|
Financial statement analysis(or justfinancial analysis) is the process of reviewing and analyzing a company'sfinancial statementsto make better economic decisions to earn income in future. These statements include theincome statement,balance sheet,statement of cash flows, notes to accounts and astatement of changes in equity(if applicable). Financial statement analysis is a method or process involving specific techniques for evaluating risks, performance, valuation, financial health, and future prospects of an organization.[1]
It is used by a variety of stakeholders, such as credit and equity investors, the government, the public, and decision-makers within the organization. These stakeholders have different interests and apply a variety of different techniques to meet their needs. For example, equity investors are interested in the long-term earnings power of the organization and perhaps the sustainability and growth of dividend payments. Creditors want to ensure that the interest and principal are paid on the organization's debt securities (e.g., bonds) when due.
Common methods of financial statement analysis include horizontal and vertical analysis and the use offinancial ratios. Historical information combined with a series of assumptions and adjustments to the financial information may be used to project future performance. TheChartered Financial Analystdesignation is available for professional financial analysts.
Benjamin GrahamandDavid Doddfirst published their influential book "Security Analysis" in 1934.[2][3]A central premise of their book is that the market's pricing mechanism for financial securities such as stocks and bonds is based upon faulty and irrational analytical processes performed by many market participants. This results in the market price of a security only occasionally coinciding with theintrinsic valuearound which the price tends to fluctuate.[4]InvestorWarren Buffettis a well-known supporter of Graham and Dodd's philosophy.
The Graham and Dodd approach is referred to asFundamental analysisand includes: 1) Economic analysis; 2) Industry analysis; and 3) Company analysis. The latter is the primary realm of financial statement analysis. On the basis of these three analyses the intrinsic value of thesecurityis determined.[4]
Horizontal analysis compares financial information over time, typically from past quarters or years. It is performed by comparing financial data from a past statement, such as the income statement, with the corresponding data for the current period. When comparing this past information, one will want to look for variations such as higher or lower earnings.[5]
Vertical analysis is a percentage analysis of financial statements. Each line item in a financial statement is expressed as a percentage of another line item. For example, on an income statement each line item will be listed as a percentage of gross sales. This technique is also referred to as normalization[6] or common-sizing.[5]
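A minimal sketch of both techniques, assuming a two-year toy income statement with invented line items and figures, might look like this in Python:

```python
# Hedged sketch of horizontal and vertical analysis on a toy income statement.
income_statement = {
    2022: {"sales": 100_000, "cost_of_goods_sold": 60_000, "net_income": 15_000},
    2023: {"sales": 120_000, "cost_of_goods_sold": 78_000, "net_income": 16_000},
}

# Horizontal analysis: percentage change of each line item versus the prior year.
for item in income_statement[2023]:
    prior, current = income_statement[2022][item], income_statement[2023][item]
    print(f"{item}: {(current - prior) / prior:+.1%} year over year")

# Vertical (common-size) analysis: each line item as a percentage of sales.
for year, lines in income_statement.items():
    print(year, {item: f"{value / lines['sales']:.1%}" for item, value in lines.items()})
```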
Financial ratios are powerful tools for performing quick analysis of financial statements. There are four main categories of ratios: liquidity ratios, profitability ratios, activity ratios and leverage ratios. These are typically analyzed over time and across competitors in an industry.
DuPont analysis uses several financial ratios that, multiplied together, equal return on equity, a measure of how much income the firm earns divided by the amount of funds invested (equity).
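A minimal sketch of the classic three-factor DuPont decomposition, using invented figures, is shown below; it simply verifies that profit margin, asset turnover, and the equity multiplier multiply out to net income divided by equity.

```python
# Three-factor DuPont identity, with invented figures:
# ROE = (net income / sales) * (sales / assets) * (assets / equity)
net_income, sales, total_assets, shareholders_equity = 12_000, 150_000, 100_000, 40_000

profit_margin     = net_income / sales
asset_turnover    = sales / total_assets
equity_multiplier = total_assets / shareholders_equity

roe_via_dupont = profit_margin * asset_turnover * equity_multiplier
assert abs(roe_via_dupont - net_income / shareholders_equity) < 1e-12
print(f"ROE = {roe_via_dupont:.1%}")  # 30.0%
```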
A dividend discount model (DDM) may also be used to value a company's stock price, based on the theory that its stock is worth the sum of all of its future dividend payments, discounted back to their present value.[8] In other words, it is used to value stocks based on the net present value of the future dividends.
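As a hedged sketch, the constant-growth (Gordon) form of the DDM prices a stock as D1 / (r - g); the Python fragment below compares that closed form with an explicit discounting of many years of assumed dividends. The inputs are purely illustrative.

```python
# Constant-growth dividend discount model sketch, with invented inputs.
next_dividend = 2.00   # D1, expected dividend next year
discount_rate = 0.08   # r, required rate of return
growth_rate   = 0.03   # g, assumed perpetual dividend growth (must be < r)

gordon_price = next_dividend / (discount_rate - growth_rate)

# Equivalent view: sum of discounted future dividends (truncated after many years).
approx_price = sum(
    next_dividend * (1 + growth_rate) ** (t - 1) / (1 + discount_rate) ** t
    for t in range(1, 1000)
)
print(round(gordon_price, 2), round(approx_price, 2))  # both approach 40.0
```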
Financial statement analyses are typically performed in spreadsheet software or specialized accounting software and summarized in a variety of formats.
An earnings recast is the act of amending and re-releasing a previously released earnings statement, with specified intent.[9]
Investors need to understand the ability of the company to generate profit. This, together with itsrate of profitgrowth, relative to the amount of capital deployed and various other financial ratios, forms an important part of their analysis of the value of the company. Analysts may modify ("recast") the financial statements by adjusting the underlying assumptions to aid in this computation. For example, operating leases (treated like a rental transaction) may be recast as capital leases (indicating ownership), adding assets and liabilities to the balance sheet. This affects the financial statement ratios.[10]
Recasting is also known as normalizing accounts.[11]
Financial analysts typically have finance and accounting education at the undergraduate or graduate level. Persons may earn theChartered Financial Analyst(CFA) designation through a series of challenging examinations. Upon completion of the three-part exam, CFAs are considered experts in areas like fundamentals of investing, the valuation of assets, portfolio management, and wealth planning.
|
https://en.wikipedia.org/wiki/Financial_statement_analysis
|
In finance, a growth stock is a stock of a company that generates substantial and sustainable positive cash flow and whose revenues and earnings are expected to increase at a faster rate than the average company within the same industry.[1] A growth company typically has some sort of competitive advantage (a new product, a breakthrough patent, overseas expansion) that allows it to fend off competitors. Growth stocks usually pay smaller dividends, as the companies typically reinvest most retained earnings in capital-intensive projects.
Analysts compute return on equity (ROE) by dividing a company's net income by its average common equity. To be classified as a growth stock, analysts generally expect companies to achieve a 15 percent or higher return on equity.[2] CAN SLIM is a method for identifying growth stocks created by William O'Neil, a stockbroker and publisher of Investor's Business Daily.[3] In academic finance, the Fama–French three-factor model relies on book-to-market ratios (B/M ratios) to identify growth vs. value stocks.[4] Some advisors suggest investing half the portfolio using the value approach and the other half using the growth approach.[5]
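A small illustrative screen based on the 15 percent threshold mentioned above might look like the following; the company names and figures are hypothetical.

```python
# Hypothetical ROE screen using the 15% threshold described in the text.
companies = {
    "Alpha": {"net_income": 30, "avg_common_equity": 150},   # ROE 20%
    "Beta":  {"net_income": 10, "avg_common_equity": 120},   # ROE ~8%
}

for name, data in companies.items():
    roe = data["net_income"] / data["avg_common_equity"]
    label = "growth candidate" if roe >= 0.15 else "below threshold"
    print(f"{name}: ROE {roe:.1%} -> {label}")
```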
The definition of a "growth stock" differs among some well-known investors. For example,Warren Buffettdoes not differentiate between value and growth investing. In his 1992 letter to shareholders, he stated that many analysts consider growth and value investing to be opposites which he characterized "fuzzy thinking."[6]Furthermore, Buffett cautions investors against overpaying for growth stocks, noting that growth projections are often overly optimistic. Instead, he prioritizes companies with a durable competitive advantage and a highreturn on capital, rather than focusing solely on revenue or earnings growth.[7]
Peter Lynch classifies stocks into four categories: "Slow Growers," "Stalwarts," "Fast Growers," and "Turnarounds."[8] He is known for focusing on what he calls "Fast Growers," referring to companies that grow at rates of 20% or higher. However, like Buffett, Lynch also believes in not overpaying for stocks, emphasizing that investors should use their "edge" to find companies with high earnings potential that are not yet overvalued.[9] He recommends investing in companies with P/E ratios equal to or lower than their growth rates and suggests holding these investments for three to five years.[8] He is often credited with popularizing the PEG ratio to analyze growth stocks.[10]
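A minimal sketch of Lynch's rule of thumb, expressed through the PEG ratio with invented inputs, is shown below.

```python
# PEG rule of thumb: prefer stocks whose P/E is no higher than their earnings
# growth rate, i.e. PEG = (P/E) / growth <= 1. All inputs are invented.

def peg_ratio(price: float, eps: float, growth_rate_pct: float) -> float:
    """PEG = (price / earnings per share) / expected earnings growth (in %)."""
    return (price / eps) / growth_rate_pct

print(peg_ratio(price=50.0, eps=2.5, growth_rate_pct=25.0))  # 0.8 -> within the rule
print(peg_ratio(price=90.0, eps=2.0, growth_rate_pct=20.0))  # 2.25 -> looks expensive
```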
|
https://en.wikipedia.org/wiki/Growth_stock
|
Investment banking is an advisory-based financial service for institutional investors, corporations, governments, and similar clients. Traditionally associated with corporate finance, such a bank might assist in raising financial capital by underwriting or acting as the client's agent in the issuance of debt or equity securities. An investment bank may also assist companies involved in mergers and acquisitions (M&A) and provide ancillary services such as market making, trading of derivatives and equity securities, FICC services (fixed income instruments, currencies, and commodities) or research (macroeconomic, credit or equity research). Most investment banks maintain prime brokerage and asset management departments in conjunction with their investment research businesses. As an industry, it is broken up into the Bulge Bracket (upper tier), Middle Market (mid-level businesses), and boutique market (specialized businesses).
Unlike commercial banks and retail banks, investment banks do not take deposits. The revenue model of an investment bank comes mostly from the collection of fees for advising on a transaction, contrary to a commercial or retail bank. From the passage of the Glass–Steagall Act in 1933 until its repeal in 1999 by the Gramm–Leach–Bliley Act, the United States maintained a separation between investment banking and commercial banks. Other industrialized countries, including G7 countries, have historically not maintained such a separation. As part of the Dodd–Frank Wall Street Reform and Consumer Protection Act of 2010 (Dodd–Frank Act of 2010), the Volcker Rule asserts some institutional separation of investment banking services from commercial banking.[1]
All investment banking activity is classed as either "sell side" or "buy side". The "sell side" involves trading securities for cash or for other securities (e.g. facilitating transactions, market-making), or the promotion of securities (e.g. underwriting, research, etc.). The "buy side" involves the provision of advice to institutions that buy investment services.Private equityfunds,mutual funds,life insurancecompanies,unit trusts, andhedge fundsare the most common types of buy-sideentities.
An investment bank can also be split into private and public functions with ascreenseparating the two to prevent information from crossing. The private areas of the bank deal with privateinsider informationthat may not be publicly disclosed, while the public areas, such as stock analysis, deal with public information. An advisor who provides investment banking services in the United States must be a licensedbroker-dealerand subject toU.S. Securities and Exchange Commission(SEC) andFinancial Industry Regulatory Authority(FINRA) regulation.[2]
The Dutch East India Company was the first company to issue bonds and shares of stock to the general public. It was also the first publicly traded company, being the first company to be listed on an official stock exchange.[3][4]
Investment banking has changed over the years, beginning as a partnership firm focused on underwriting security issuance, i.e.initial public offerings(IPOs) andsecondary market offerings,brokerage, and mergers and acquisitions, and evolving into a "full-service" range includingsecurities research,proprietary trading, andinvestment management.[5]In the 21st century, the SEC filings of the major independent investment banks such asGoldman SachsandMorgan Stanleyreflect three product segments:
In the United States, commercial banking and investment banking were separated by the Glass–Steagall Act, which was repealed in 1999. The repeal led to more "universal banks" offering an even greater range of services. Many large commercial banks have therefore developed investment banking divisions through acquisitions and hiring. Notable full-service investment banks with a significant investment banking division (IBD) include JPMorgan Chase, Bank of America, Citigroup, Deutsche Bank, UBS (which acquired Credit Suisse), and Barclays.
After the2008 financial crisisand the subsequent passage of theDodd-Frank Act of 2010, regulations have limited certain investment banking operations, notably with the Volcker Rule's restrictions on proprietary trading.[7]
The traditional service of underwriting security issues has declined as a percentage of revenue. As far back as 1960, 70% ofMerrill Lynch's revenue was derived from transaction commissions while "traditional investment banking" services accounted for 5%. However, Merrill Lynch was a relatively "retail-focused" firm with a large brokerage network.[7]
Investment banking is split into front office, middle office, and back office activities. While large, full-service investment banks offer all lines of business, both "sell side" and "buy side", smaller sell-side advisory firms such as boutique investment banks and small broker-dealers focus on niche segments within investment banking and sales/trading/research, respectively.
For example,Evercore (NYSE:EVR)acquired ISI International Strategy & Investment (ISI) in 2014 to expand their revenue into research-driven equity sales and trading.[8]
Investment banks offer services to both corporations issuing securities and investors buying securities. For corporations, investment bankers offer information on when and how to place their securities on the open market, a process highly regulated by the SEC to ensure transparency is provided to investors. Therefore, investment bankers play a very important role in issuing new security offerings.[7][9]
Front officeis generally described as arevenue-generating role. There are two main areas within front office: investment banking and markets.[10]
Corporate financeis the aspect of investment banks which involves helping customers raisefundsincapital marketsand giving advice onmergers and acquisitions(M&A);[12]transactions in which capital is raised for the corporation include those listed aside.[12]
This work may involve, i.a., subscribing investors to a security issuance, coordinating with bidders, or negotiating with a merger target.
Apitch book, also called a confidential information memorandum (CIM), is a document that highlights the relevant financial information, past transaction experience, and background of the deal team to market the bank to a potential M&A client; if the pitch is successful, the bank arranges the deal for the client.[13]
Recent legal and regulatory developments in the U.S. will likely alter the makeup of the group of arrangers and financiers willing to arrange and provide financing for certain highly leveraged transactions.[14][15]
A large investment bank's primary function is buying and selling products on behalf of the bank and its clients.[16]
Salesis the term for the investment bank's sales force, whose primary job is to call on institutional and high-net-worth investors to suggest trading ideas (on acaveat emptorbasis) and take orders. Sales desks then communicate their clients' orders to the appropriate bank department, which can price and execute trades, or structure new products that fit a specific need.
Sales make deals tailored to their corporate customers' needs; that is, their terms are often specific. Focusing on their customer relationship, they may deal on the whole range of asset types. (In distinction, trades negotiated by market-makers usually bear standard terms; in market making, traders will buy and sell financial products with the goal of making money on each trade. See under trading desk.)
Structuringhas been a relatively recent activity as derivatives have come into play, withhighly technical and numerate employeesworking on creating complex financial products which typically offer much greater margins and returns than underlying cash securities, so-called "yield enhancement". In 2010, investment banks came under pressure as a result of selling complex derivatives contracts to local municipalities in Europe and the US.[17]
Strategistsadvise external as well as internal clients on the strategies that can be adopted in various markets. Ranging from derivatives to specific industries, strategists place companies and industries in a quantitative framework with full consideration of the macroeconomic scene. This strategy often affects the way the firm will operate in the market, the direction it would like to take in terms of its proprietary and flow positions, the suggestions salespersons give to clients, as well as the waystructurerscreate new products.
Banks also undertake risk throughproprietary trading, performed by a special set of traders who do not interface with clients and through "principal risk"—risk undertaken by a trader after he buys or sells a product to a client and does not hedge his total exposure.
Here, and in general, banks seek to maximize profitability for a given amount of risk on their balance sheet.
Note here that the FRTB framework has underscored the distinction between the "Trading book" and the "Banking book" (i.e. assets intended for active trading, as opposed to assets expected to be held to maturity), and market risk capital requirements will differ accordingly.
The necessity for numerical ability in sales and trading has created jobs for physics, computer science, mathematics, and engineering PhDs who act as "front office" quantitative analysts.
Thesecurities researchdivision reviews companies andwrites reportsabout their prospects, often with "buy", "hold", or "sell" ratings. Investment banks typically havesell-side analystswhich cover various industries. Their sponsored funds or proprietary trading offices will also have buy-side research.
Research also coverscredit risk,fixed income,macroeconomics, andquantitative analysis, all of which are used internally and externally to advise clients; alongside "Equity", these may be separate "groups".
The research group(s) typically provide a key service in terms of advisory and strategy.
While the research division may or may not generate revenue (based on the specific compliance policies at different banks), its resources are used to assist traders in trading, the sales force in suggesting ideas to customers, and investment bankers by covering their clients.[18]Research also serves outside clients with investment advice (such as institutional investors and high-net-worth individuals) in the hopes that these clients will execute suggestedtrade ideasthrough the sales and trading division of the bank, and thereby generate revenue for the firm.
WithMiFID IIrequiring sell-side research teams in banks to charge for research, the business model for research is increasingly becoming revenue-generating. External rankings of researchers are becoming increasingly important, and banks have started the process of monetizing research publications, client interaction times, meetings with clients etc.
There is a potential conflict of interest between the investment bank and its analysis, in that published analysis can impact the performance of a security (in the secondary markets or an initial public offering) or influence the relationship between the banker and its corporate clients, and vice versa regardingmaterial non-public information(MNPI), thereby affecting the bank's profitability.[19]See alsoChinese wall § Finance.
This area of the bank includestreasury management, internal controls (such as Risk), and internal corporate strategy.
Corporate treasuryis responsible for an investment bank's funding, capital structure management, andliquidity riskmonitoring; it is (co)responsible for the bank'sfunds transfer pricing(FTP) framework.
Internal control tracks and analyzes the capital flows of the firm; the finance division is the principal adviser to senior management on essential areas such as controlling the firm's global risk exposure and the profitability and structure of the firm's various businesses via dedicated trading desk product control teams. In the United States and United Kingdom, a comptroller (or financial controller) is a senior position, often reporting to the chief financial officer.
Risk management involves analyzing themarketandcredit riskthat an investment bank or its clients take onto their balance sheet during transactions or trades.
Middle office "Credit Risk" focuses around capital markets activities, such assyndicated loans, bond issuance,restructuring, and leveraged finance.
These are not considered "front office" as they tend not to be client-facing and rather 'control' banking functions from taking too much risk.
"Market Risk" is the control function for the Markets' business and conducts review of sales and trading activities utilizing theVaR model.
Other Middle office "Risk Groups" include country risk, operational risk, and counterparty risks which may or may not exist on a bank to bank basis.
Front office risk teams, on the other hand, engage in revenue-generating activities involving debt structuring, restructuring,syndicated loans, and securitization for clients such as corporates, governments, and hedge funds.
Here "Credit Risk Solutions", are a key part of capital market transactions, involvingdebt structuring, exit financing, loan amendment,project finance,leveraged buy-outs, and sometimes portfolio hedging.
The "Market Risk Team" provides services to investors via derivative solutions,portfolio management, portfolio consulting, and risk advisory.
Well-known "Risk Groups" are atJPMorgan Chase,Morgan Stanley,Goldman SachsandBarclays.
J.P. Morgan IB Risk works with investment banking to execute transactions and advise investors, although its Finance & Operation risk groups focus on middle office functions involving internal, non-revenue generating, operational risk controls.[20][21][22]Thecredit default swap, for instance, is a famous credit risk hedging solution for clients invented by J.P. Morgan'sBlythe Mastersduring the 1990s.
The Loan Risk Solutions group[23] within Barclays' investment banking division and the Risk Management and Financing group[24] housed in Goldman Sachs's securities division are client-driven franchises.
Risk management groups such as credit risk, operational risk, internal risk control, and legal risk are restricted to internal business functions — including firm balance-sheet risk analysis and assigning the trading cap — that are independent of client needs, even though these groups may be responsible for deal approval that directly affects capital market activities.
Similarly, the internal corporate strategy group, tackling firm management and profit strategy, unlike corporate strategy groups that advise clients, is non-revenue generating yet a key functional role within investment banks.
This list is not a comprehensive summary of all middle-office functions within an investment bank, as specific desks within front and back offices may participate in internal functions.[25]
The back office checks the data of trades that have been conducted, ensuring that they are not erroneous, and transacts the required transfers. Many banks have outsourced operations. It is, however, a critical part of the bank.[citation needed]
Every major investment bank has considerable amounts of in-housesoftware, created by the technology team, who are also responsible fortechnical support. Technology has changed considerably in the last few years as more sales and trading desks are using electronic processing. Some trades are initiated by complexalgorithmsforhedgingpurposes.
Firms are responsible for compliance with local and foreign government regulations and internal regulations.
The investment banking industry can be broken up intoBulge Bracket(upper tier),Middle Market(mid-level businesses), andboutique market(specialized businesses) categories. There are varioustradeassociations throughout the world which represent the industry inlobbying, facilitate industry standards, and publish statistics. The International Council of Securities Associations (ICSA) is a global group of trade associations.
In the United States, theSecurities Industry and Financial Markets Association(SIFMA) is likely the most significant; however, several of the large investment banks are members of theAmerican Bankers AssociationSecurities Association (ABASA),[27]while small investment banks are members of the National Investment Banking Association (NIBA).
In Europe, the European Forum of Securities Associations was formed in 2007 by various European trade associations.[28]Several European trade associations (principally the London Investment Banking Association and the European SIFMA affiliate) combined in November 2009 to form theAssociation for Financial Markets in Europe(AFME).[29]
In thesecurities industry in China, theSecurities Association of Chinais a self-regulatory organization whose members are largely investment banks.
Global investment banking revenue increased for the fifth year running in 2007, to a record US$84 billion, which was up 22% on the previous year and more than double the level in 2003.[30] Subsequent to their exposure to United States sub-prime securities investments, many investment banks have experienced losses. As of late 2012, global revenues for investment banks were estimated at $240 billion, down about a third from 2009, as companies pursued fewer deals and traded less.[31] Differences in total revenue are likely due to different ways of classifying investment banking revenue, such as subtracting proprietary trading revenue.
In terms of total revenue, SEC filings of the major independent investment banks in the United States show that investment banking (defined as M&A advisory services and security underwriting) made up only about 15–20% of total revenue for these banks from 1996 to 2006, with the majority of revenue (60+% in some years) brought in by "trading" which includes brokerage commissions and proprietary trading; the proprietary trading is estimated to provide a significant portion of this revenue.[6]
The United States generated 46% of global revenue in 2009, down from 56% in 1999. Europe (withMiddle EastandAfrica) generated about a third, while Asian countries generated the remaining 21%.[30]: 8The industry is heavily concentrated in a small number of major financial centers, includingNew York City,City of London,Frankfurt,Hong Kong,Singapore, andTokyo. The majority of the world's largestBulge Bracketinvestment banks and theirinvestment managersare headquartered in New York and are also important participants in other financial centers.[32]The city of London has historically served as a hub of European M&A activity, often facilitating the most capital movement andcorporate restructuringin the area.[33][34]Meanwhile, Asian cities are receiving a growing share of M&A activity.
According to estimates published by theInternational Financial Services London, for the decade prior to the2008 financial crisis, M&A was a primary source of investment banking revenue, often accounting for 40% of such revenue, but dropped during and after the2008 financial crisis.[30]: 9Equity underwriting revenue ranged from 30% to 38%, and fixed-income underwriting accounted for the remaining revenue.[30]: 9
Revenues have been affected by the introduction of new products with higher margins; however, these innovations are often copied quickly by competing banks, pushing down trading margins. For example, brokerage commissions for bond and equity trading are a commodity business, but structuring and trading derivatives have higher margins because each over-the-counter contract has to be uniquely structured and could involve complex pay-off and risk profiles. One growth area is private investment in public equity (PIPEs, otherwise known as Regulation D or Regulation S). Such transactions are privately negotiated between companies and accredited investors.
Banks also earned revenue by securitizing debt, particularly mortgage debt prior to the2008 financial crisis. Investment banks have become concerned that lenders are securitizing in-house, driving the investment banks to pursuevertical integrationby becoming lenders, which has been allowed in the United States since the repeal of the Glass–Steagall Act in 1999.[35]
According toThe Wall Street Journal, in terms of total M&A advisory fees for the whole of 2020, the top ten investment banks were as listed in the table below.[36]Many of these firms belong either to theBulge Bracket(upper tier),Middle Market(mid-level businesses), or are eliteboutique investment banks(independent advisory investment banks).
The above list is just a ranking of the advisory arm (M&A advisory, syndicated loans,equitycapital markets, anddebtcapital markets) of each bank and does not include the generally much larger portion of revenues fromsales & tradingandasset management. Mergers and acquisitions and capital markets are also often covered byThe Wall Street JournalandBloomberg.
The2008 financial crisisled to the collapse of several notable investment banks, such as the bankruptcy ofLehman Brothers(one of the largest investment banks in the world) and the hurriedfire saleofMerrill Lynchand the much smallerBear Stearnsto much larger banks, which effectively rescued them from bankruptcy. The entire financial services industry, including numerous investment banks, was bailed out by government taxpayer funded loans through theTroubled Asset Relief Program(TARP). Surviving U.S. investment banks such as Goldman Sachs and Morgan Stanley converted to traditional bank holding companies to accept TARP relief.[38]Similar situations have occurred across the globe with countries rescuing their banking industry. Initially, banks received part of a $700 billion TARP intended to stabilize the economy and thaw the frozen credit markets.[39]Eventually, taxpayer assistance to banks reached nearly $13 trillion—most without much scrutiny—[40]lending did not increase,[41]and credit markets remained frozen.[42]
The crisis led to questioning of the investment bankingbusiness model[43]without the regulation imposed on it by Glass–Steagall.[neutralityisdisputed]OnceRobert Rubin, a former co-chairman of Goldman Sachs, became part of theClinton administrationand deregulated banks, the previous conservatism of underwriting established companies and seeking long-term gains was replaced by lower standards and short-term profit.[44]Formerly, the guidelines said that in order to take a company public, it had to be in business for a minimum of five years and it had to show profitability for three consecutive years. After deregulation, those standards were gone, but small investors did not grasp the full impact of the change.[44]
A number of former Goldman Sachs top executives, such asHenry PaulsonandEd Liddy, were in high-level positions in government and oversaw the controversial taxpayer-fundedbank bailout.[44]The TARP Oversight Report released by theCongressional Oversight Panelfound that the bailout tended to encourage risky behavior and "corrupt[ed] the fundamental tenets of amarket economy".[45]
Under threat of asubpoena, Goldman Sachs revealed that it received $12.9 billion in taxpayer aid, $4.3 billion of which was then paid out to 32 entities, including many overseas banks, hedge funds, and pensions.[46]The same year it received $10 billion in aid from the government, it also paid out multimillion-dollar bonuses; the total paid in bonuses was $4.82 billion.[47][48]Similarly, Morgan Stanley received $10 billion in TARP funds and paid out $4.475 billion in bonuses.[49]
The investment banking industry, including boutique investment banks, has come under criticism for a variety of reasons, including perceived conflicts of interest, overly large pay packages, cartel-like or oligopolistic behavior, taking both sides in transactions, and more.[50] Investment banking has also been criticized for its opacity.[51] However, the lack of transparency inherent to the investment banking industry is largely due to the necessity to abide by the non-disclosure agreement (NDA) signed with the client. The accidental leak of confidential client data can cause a bank to incur significant monetary losses.
Conflicts of interest may arise between different parts of a bank, creating the potential formarket manipulation, according to critics. Authorities that regulate investment banking, such as theFinancial Conduct Authority(FCA) in theUnited Kingdomand theSECin theUnited States, require that banks impose a "Chinese wall" to prevent communication between investment banking on one side and equity research and trading on the other. However, critics say such a barrier does not always exist in practice.Independent advisory firmsthat exclusively provide corporate finance advice argue that their advice is not conflicted, unlikebulge bracketbanks.
Conflicts of interest often arise in relation to investment banks' equity research units, which have long been part of the industry. A common practice is for equity analysts to initiate coverage of a company to develop relationships that lead to highly profitable investment banking business. In the 1990s, many equity researchers allegedly traded positive stock ratings for investment banking business. Alternatively, companies may threaten to divert investment banking business to competitors unless their stock was rated favorably. Laws were passed to criminalize such acts, and increased pressure from regulators and a series of lawsuits, settlements, and prosecutions curbed this business to a large extent following the 2001 stock market tumble after thedot-com bubble.
Philip Augar, author ofThe Greed Merchants, said in an interview that, "You cannot simultaneously serve the interest of issuer clients and investing clients. And it’s not just underwriting and sales; investment banks run proprietary trading operations that are also making a profit out of these securities."[50]
Many investment banks also own retail brokerages. During the 1990s, some retail brokerages sold consumers securities which did not meet their stated risk profile. This behavior may have led to investment banking business or even sales of surplus shares during a public offering to keep public perception of the stock favorable.
Since investment banks engage heavily in trading for their own account, there is always the temptation for them to engage in some form offront running—the illegal practice whereby a broker executes orders for their own account before filling orders previously submitted by their customers, thereby benefiting from any changes in prices induced by those orders.
Documentsunder sealin a decade-long lawsuit concerningeToys.com's IPO but obtained byNew York Times'Wall Street Business columnistJoe Noceraalleged that IPOs managed by Goldman Sachs and other investment bankers involved asking forkickbacksfrom their institutional clients who made large profits flipping IPOs which Goldman had intentionally undervalued. Depositions in the lawsuit alleged that clients willingly complied with these demands because they understood it was necessary to participate in future hot issues.[52]ReutersWall Street correspondentFelix Salmonretracted his earlier, more conciliatory statements on the subject and said he believed that the depositions show that companies going public and their initial consumer stockholders are both defrauded by this practice, which may be widespread throughout the IPOfinance industry.[53]The case is ongoing, and the allegations remain unproven.
Nevertheless, the controversy around investment banks intentionally underpricing IPOs for their self-interest has become a highly debated subject. The cause for concern is that the investment banks advising on the IPOs have the incentive to serve institutional investors on the buy-side, creating a valid reason for a potential conflict of interest.[54]
The post-IPO spike in the stock price of newly listed companies has only worsened the problem, with one of the leading critics being high-profile venture capital (VC) investor, Bill Gurley.[55]
Investment banking has been criticized for the enormous pay packages awarded to those who work in the industry. According to Bloomberg, Wall Street's five biggest firms paid over $3 billion to their executives from 2003 to 2008, "while they presided over the packaging and sale of loans that helped bring down the investment-banking system".[56]
In 2003-2007, pay packages included $172 million for Merrill Lynch CEOStanley O'Nealbefore the bank was bought by Bank of America, and $161 million for Bear Stearns'James Caynebefore the bank collapsed and was sold to JPMorgan Chase.[56]Such pay arrangements attracted the ire ofDemocratsandRepublicansin theUnited States Congress, who demanded limits on executive pay in 2008 when the U.S. government was bailing out the industry with a $700 billion financial rescue package.[56]
Writing in theGlobal Association of Risk Professionalsjournal, Aaron Brown, a vice president at Morgan Stanley, says "By any standard of human fairness, of course, investment bankers make obscene amounts of money."[50]
|
https://en.wikipedia.org/wiki/Investment_bank
|
Private equity (PE) is stock in a private company that does not offer stock to the general public. Private equity is offered instead to specialized investment funds and limited partnerships that take an active role in the management and structuring of the companies. In casual usage, "private equity" can refer to these investment firms, rather than the companies in which they invest.[1]
Private-equitycapitalis invested into a target company either by an investment management company (private equity firm), aventure capitalfund, or anangel investor; each category of investor has specific financial goals, management preferences, and investment strategies for profiting from their investments. Private equity providesworking capitalto finance a target company's expansion, including the development of new products and services, operational restructuring, management changes, and shifts in ownership and control.[2]
As a financial product, the private-equity fund is a type ofprivate capitalfor financing a long-terminvestment strategyin anilliquidbusiness enterprise.[3]Private equity fund investing has been described by the financial press as the superficial rebranding of investment management companies who specialized in theleveraged buyoutof financially weak companies.[4]
Evaluations of the returns of private equity are mixed: some find that it outperforms public equity, but others find otherwise.[5]
Some key features of private equity investment include:
The strategies private-equity firms may use are as follows, with leveraged buyout being the most common.
Leveraged buyout (LBO) refers to a strategy of making equity investments as part of a transaction in which a company, business unit, or business asset is acquired from the current shareholders typically with the use offinancial leverage.[13]The companies involved in these transactions are typically mature and generateoperating cash flows.[14]
Private-equity firms view target companies as either Platform companies, which have sufficient scale and a successful business model to act as a stand-alone entity, or as add-on / tuck-in /bolt-on acquisitions, which would include companies with insufficient scale or other deficits.[15][16]
Leveraged buyouts involve a financial sponsor agreeing to an acquisition without itself committing all the capital required for the acquisition. To do this, the financial sponsor will raise acquisition debt, which looks to the cash flows of the acquisition target to make interest and principal payments.[17] Acquisition debt in an LBO is often non-recourse to the financial sponsor and has no claim on other investments managed by the financial sponsor. Therefore, an LBO transaction's financial structure is particularly attractive to a fund's limited partners, allowing them the benefits of leverage but limiting the degree of recourse of that leverage. This kind of leveraged financing structure benefits an LBO's financial sponsor in two ways: (1) the investor only needs to provide a fraction of the capital for the acquisition, and (2) the returns to the investor will be enhanced, as long as the return on assets exceeds the cost of the debt.[18]
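The leverage effect described above can be sketched numerically; in the fragment below, the purchase price, debt share, and rates are invented, and taxes, fees, and amortization are ignored for simplicity.

```python
# Sketch of the leverage effect: the same operating return produces a higher
# equity return when the return on assets exceeds the cost of debt, and a
# lower one when it does not. All figures are invented.

def equity_return(purchase_price, debt_share, roa, cost_of_debt):
    debt = purchase_price * debt_share
    equity = purchase_price - debt
    operating_profit = purchase_price * roa
    return (operating_profit - debt * cost_of_debt) / equity

# 70% debt financing, 10% return on assets, 6% interest cost.
print(f"{equity_return(100, 0.70, 0.10, 0.06):.1%}")  # 19.3% vs. 10% unlevered
# Same structure, but return on assets below the cost of debt.
print(f"{equity_return(100, 0.70, 0.04, 0.06):.1%}")  # -0.7%: leverage cuts both ways
```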
As a percentage of the purchase price for a leveraged buyout target, the amount of debt used to finance a transaction varies according to the financial condition and history of the acquisition target, market conditions, the willingness of lenders to extend credit (both to the LBO's financial sponsors and the company to be acquired) and the interest costs and the ability of the company to cover those costs. Historically the debt portion of an LBO ranges from 60 to 90% of the purchase price.[19] Between 2000 and 2005, debt averaged between 59.4% and 67.9% of total purchase price for LBOs in the United States.[20]
A private-equity fund, ABC Capital II, borrows $9bn from a bank (or other lender). To this, it adds $2bn ofequity– money from its own partners and fromlimited partners. With this $11bn, it buys all the shares of an underperforming company, XYZ Industrial (afterdue diligence, i.e. checking the books). It replaces the senior management in XYZ Industrial, with others who set out to streamline it. The workforce is reduced, some assets are sold off, etc. The objective is to increase the valuation of the company for an early sale.
The stock market is experiencing abull market, and XYZ Industrial is sold two years after the buy-out for $13bn, yielding a profit of $2bn. The original loan can now be paid off with interest of, say, $0.5bn. The remaining profit of $1.5bn is shared among the partners. Taxation of such gains is at thecapital gains tax rates, which in the United States are lower thanordinary incometax rates.
Note that part of that profit results from turning the company around, and part results from the general increase in share prices in a buoyant stock market, the latter often being the greater component.[21]
Notes:
Growth capitalrefers to equity investments, most often minority investments, in relatively mature companies that are looking for capital to expand or restructure operations, enter new markets or finance a major acquisition without a change of control of the business.[24]
Companies that seek growth capital will often do so in order to finance a transformational event in their life cycle. These companies are likely to be more mature than venture capital-funded companies, able to generate revenue and operating profits, but unable to generate sufficient cash to fund major expansions, acquisitions or other investments. Because of this lack of scale, these companies generally can find few alternative conduits to secure capital for growth, so access to growth equity can be critical to pursue necessary facility expansion, sales and marketing initiatives, equipment purchases, and new product development.[25]
The primary owner of the company may not be willing to take thefinancial riskalone. By selling part of the company to private equity, the owner can take out some value and share the risk of growth with partners.[26]Capital can also be used to effect a restructuring of a company's balance sheet, particularly to reduce the amount ofleverage (or debt)the company has on itsbalance sheet.[27]
A private investment in public equity (PIPE) refers to a form of growth capital investment made into a publicly traded company. PIPE investments are typically made in the form of a convertible or preferred security that is unregistered for a certain period of time.[28][29]
The Registered Direct (RD) is another common financing vehicle used for growth capital. A registered direct is similar to a PIPE, but is instead sold as a registered security.
Mezzanine capitalrefers tosubordinated debtorpreferred equitysecurities that often represent the most junior portion of a company'scapital structurethat is senior to the company'scommon equity. This form of financing is often used by private-equity investors to reduce the amount of equity capital required to finance a leveraged buyout or major expansion. Mezzanine capital, which is often used by smaller companies that are unable to access thehigh yield market, allows such companies to borrow additional capital beyond the levels that traditional lenders are willing to provide through bank loans.[30]In compensation for the increased risk, mezzanine debt holders require a higher return for their investment than secured or other more senior lenders.[31][32]Mezzanine securities are often structured with a current income coupon.
Venture capital[33](VC) is a broad subcategory of private equity that refers to equity investments made, typically in less mature companies, for the launch of a seed or startup company, early-stage development, or expansion of a business. Venture investment is most often found in the application of new technology, new marketing concepts and new products that do not have a proven track record or stable revenue streams.[34][35]
Venture capital is often sub-divided by the stage of development of the company, ranging from early-stage capital used for the launch of startup companies to late stage and growth capital that is often used to fund expansion of existing businesses that are generating revenue but may not yet be profitable or generating sufficient cash flow to fund future growth.[36]
Entrepreneurs often develop products and ideas that require substantial capital during the formative stages of their companies' life cycles.[37]Many entrepreneurs do not have sufficient funds to finance projects themselves, and they must, therefore, seek outside financing.[38]The venture capitalist's need to deliver high returns to compensate for the risk of these investments makes venture funding an expensive capital source for companies. Being able to secure financing is critical to any business, whether it is a startup seeking venture capital or a mid-sized firm that needs more cash to grow.[39]Venture capital is most suitable for businesses with large up-frontcapital requirementswhich cannot be financed by cheaper alternatives such asdebt. Although venture capital is often most closely associated with fast-growingtechnology,healthcareandbiotechnologyfields, venture funding has been used for other more traditional businesses.[34][40]
Investors generally commit to venture capital funds as part of a wider diversified private-equityportfolio, but also to pursue the larger returns the strategy has the potential to offer. However, venture capital funds have produced lower returns for investors over recent years compared to other private-equity fund types, particularly buyout.
The category ofdistressed securitiescomprises financial strategies for the profitable investment of working capital into the corporate equity and thesecuritiesof financially weak companies.[41][42][43]The investment of private-equity capital into distressed securities is realised with two financial strategies:
Moreover, the private-equity investment strategies of hedge funds also include actively trading the loans held and the bonds issued by the financially-weak target companies.[46]
Secondary investments refer to investments made in existing private-equity assets. These transactions can involve the sale ofprivate equity fundinterests or portfolios of direct investments inprivately held companiesthrough the purchase of these investments from existinginstitutional investors.[47]By its nature, the private-equity asset class is illiquid, intended to be a long-term investment forbuy and holdinvestors. Secondary investments allow institutional investors, particularly those new to the asset class, to invest in private equity from older vintages than would otherwise be available to them. Secondaries also typically experience a different cash flow profile, diminishing thej-curveeffect of investing in new private-equity funds.[48][49]Often investments in secondaries are made through third-party fund vehicle, structured similar to afund of fundsalthough many large institutional investors have purchased private-equity fund interests through secondary transactions.[50]Sellers of private-equity fund investments sell not only the investments in the fund but also their remaining unfunded commitments to the funds.
Other strategies that can be considered private equity or a close adjacent market include:
To compensate for private equities not being traded on the public market, a private-equity secondary market has formed, where private-equity investors purchase securities and assets from other private equity investors.
The seeds of the US private-equity industry were planted in 1946 with the founding of two venture capital firms:American Research and Development Corporation(ARDC) andJ.H. Whitney & Company.[58]Before World War II, venture capital investments (originally known as "development capital") were primarily the domain of wealthy individuals and families. In 1901 J.P. Morgan arguably managed the first leveraged buyout of theCarnegie Steel Companyusing private equity.[59]Modern era private equity, however, is credited toGeorges Doriot, the "father of venture capitalism" with the founding of ARDC[60]and founder ofINSEAD, with capital raised from institutional investors, to encourageprivate sectorinvestments in businesses run by soldiers who were returning from World War II. ARDC is credited with the first major venture capital success story when its 1957 investment of $70,000 inDigital Equipment Corporation(DEC) would be valued at over $355 million after the company's initial public offering in 1968 (a return of over 5,000 times its investment and anannualized rate of returnof 101%).[61][62][failed verification]It is commonly noted that the first venture-backed startup isFairchild Semiconductor, which produced the first commercially practicable integrated circuit, funded in 1959 by what would later becomeVenrock Associates.[63]
The first leveraged buyout may have been the purchase byMcLean Industries, Inc.ofPan-Atlantic Steamship Companyin January 1955 andWaterman Steamship Corporationin May 1955[64]Under the terms of that transaction, McLean borrowed $42 million and raised an additional $7 million through an issue ofpreferred stock. When the deal closed, $20 million of Waterman cash and assets were used to retire $20 million of the loan debt.[65]Lewis Cullman's acquisition ofOrkin Exterminating Companyin 1964 is often cited as the first leveraged buyout.[66][67]Similar to the approach employed in the McLean transaction, the use ofpublicly tradedholding companies as investment vehicles to acquire portfolios of investments in corporate assets was a relatively new trend in the 1960s popularized by the likes ofWarren Buffett(Berkshire Hathaway) andVictor Posner(DWG Corporation) and later adopted byNelson Peltz(Triarc),Saul Steinberg(Reliance Insurance) andGerry Schwartz(Onex Corporation). These investment vehicles would utilize a number of the same tactics and target the same type of companies as more traditional leveraged buyouts and in many ways could be considered a forerunner of the later private-equity firms. Posner is often credited with coining the term "leveraged buyout" or "LBO".[68]
The leveraged buyout boom of the 1980s was conceived by a number of corporate financiers, most notablyJerome Kohlberg Jr.and later his protégéHenry Kravis. Working forBear Stearnsat the time, Kohlberg and Kravis along with Kravis' cousinGeorge Robertsbegan a series of what they described as "bootstrap" investments. Many of these companies lacked a viable or attractive exit for their founders as they were too small to be taken public and the founders were reluctant to sell out to competitors and so a sale to a financial buyer could prove attractive.[69]In the following years the threeBear Stearnsbankers would complete a series of buyouts including Stern Metals (1965), Incom (a division of Rockwood International, 1971), Cobblers Industries (1971), and Boren Clay (1973) and Thompson Wire, Eagle Motors and Barrows through their investment in Stern Metals.[70]By 1976, tensions had built up betweenBear Stearnsand Kohlberg, Kravis and Roberts leading to their departure and the formation ofKohlberg Kravis Robertsin that year.
In January 1982, formerUnited States Secretary of the TreasuryWilliam E. Simonand a group of investors acquiredGibson Greetings, a producer of greeting cards, for $80 million, of which only $1 million was rumored to have been contributed by the investors. By mid-1983, just sixteen months after the original deal, Gibson completed a $290 million IPO and Simon made approximately $66 million.[71][72]
The success of the Gibson Greetings investment attracted the attention of the wider media to the nascent boom in leveraged buyouts. Between 1979 and 1989, it was estimated that there were over 2,000 leveraged buyouts valued in excess of $250 million.[73]
During the 1980s, constituencies within acquired companies and the media ascribed the "corporate raid" label to many private-equity investments, particularly those that featured ahostile takeoverof the company, perceivedasset stripping, major layoffs or other significant corporate restructuring activities. Among the most notable investors to be labeled corporate raiders in the 1980s includedCarl Icahn,Victor Posner,Nelson Peltz,Robert M. Bass,T. Boone Pickens,Harold Clark Simmons,Kirk Kerkorian,Sir James Goldsmith,Saul SteinbergandAsher Edelman.Carl Icahndeveloped a reputation as a ruthlesscorporate raiderafter his hostile takeover ofTWAin 1985.[74][75][76]Many of the corporate raiders were onetime clients ofMichael Milken, whose investment banking firm,Drexel Burnham Lamberthelped raise blind pools of capital with which corporate raiders could make a legitimate attempt to take over a company and providedhigh-yield debt("junk bonds") financing of the buyouts.
One of the final major buyouts of the 1980s proved to be its most ambitious and marked both a high-water mark and a sign of the beginning of the end of the boom. In 1989, KKR (Kohlberg Kravis Roberts) closed in on a $31.1 billion takeover ofRJR Nabisco. It was, at that time and for over 17 years, the largest leveraged buyout in history. The event was chronicled in the book (and later the movie),Barbarians at the Gate: The Fall of RJR Nabisco. KKR would eventually prevail in acquiring RJR Nabisco at $109 per share, marking a dramatic increase from the original announcement thatShearson Lehman Huttonwould take RJR Nabisco private at $75 per share. A fierce series of negotiations and horse-trading ensued which pittedKKRagainst Shearson and laterForstmann Little & Co.Many of the major banking players of the day, includingMorgan Stanley,Goldman Sachs,Salomon Brothers, andMerrill Lynchwere actively involved in advising and financing the parties. After Shearson's original bid, KKR quickly introduced a tender offer to obtain RJR Nabisco for $90 per share—a price that enabled it to proceed without the approval of RJR Nabisco's management. RJR's management team, working with Shearson and Salomon Brothers, submitted a bid of $112, a figure they felt certain would enable them to outflank any response by Kravis's team. KKR's final bid of $109, while a lower dollar figure, was ultimately accepted by the board of directors of RJR Nabisco.[77]At $31.1 billion of transaction value, RJR Nabisco was by far the largest leveraged buyouts in history. In 2006 and 2007, a number of leveraged buyout transactions were completed that for the first time surpassed the RJR Nabisco leveraged buyout in terms of nominal purchase price. However, adjusted for inflation, none of the leveraged buyouts of the 2006–2007 period would surpass RJR Nabisco. By the end of the 1980s the excesses of the buyout market were beginning to show, with the bankruptcy of several large buyouts includingRobert Campeau's 1988 buyout ofFederated Department Stores, the 1986 buyout of theRevcodrug stores, Walter Industries, FEB Trucking and Eaton Leonard. Additionally, the RJR Nabisco deal was showing signs of strain, leading to a recapitalization in 1990 that involved the contribution of $1.7 billion of new equity from KKR.[78]In the end, KKR lost $700 million on RJR.[79]
Drexel reached an agreement with the government in which it pleadednolo contendere(no contest) to six felonies – three counts of stock parking and three counts ofstock manipulation.[80]It also agreed to pay a fine of $650 million – at the time, the largest fine ever levied under securities laws. Milken left the firm after his own indictment in March 1989.[81][82]On 13 February 1990 after being advised by United States Secretary of the TreasuryNicholas F. Brady, theU.S. Securities and Exchange Commission(SEC), theNew York Stock Exchangeand theFederal Reserve, Drexel Burnham Lambert officially filed forChapter 11bankruptcy protection.[81]
The combination of decreasing interest rates, loosening lending standards and regulatory changes for publicly traded companies (specifically theSarbanes–Oxley Act) would set the stage for the largest boom private equity had seen. Marked by the buyout ofDex Mediain 2002, large multibillion-dollar U.S. buyouts could once again obtain significant high yield debt financing, and larger transactions could be completed. By 2004 and 2005, major buyouts were once again becoming common, including the acquisitions ofToys "R" Us,[83]The Hertz Corporation,[84][85]Metro-Goldwyn-Mayer[86]andSunGard[87]in 2005.
As 2006 began, new "largest buyout" records were set and surpassed several times; nine of the top ten buyouts by the end of 2007 had been announced in an 18-month period from the beginning of 2006 through the middle of 2007. In 2006, private-equity firms bought 654 U.S. companies for $375 billion, representing 18 times the level of transactions closed in 2003.[88]Additionally, U.S.-based private-equity firms raised $215.4 billion in investor commitments to 322 funds, surpassing the previous record set in 2000 by 22% and 33% higher than the 2005 fundraising total[89]The following year, despite the onset of turmoil in the credit markets in the summer, saw yet another record year of fundraising with $302 billion of investor commitments to 415 funds[90]Among the mega-buyouts completed during the 2006 to 2007 boom were:EQ Office,HCA,[91]Alliance Boots[92]andTXU.[93]
In July 2007, the turmoil that had been affecting themortgage marketsspilled over into the leveraged finance and high-yield debt markets.[94][95]The markets had been highly robust during the first six months of 2007, with highly issuer-friendly developments includingPIK and PIK Toggle(interest is "PayableInKind") andcovenant lightdebt widely available to finance large leveraged buyouts. July and August saw a notable slowdown in issuance levels in the high yield and leveraged loan markets with few issuers accessing the market. Uncertain market conditions led to a significant widening of yield spreads, which coupled with the typical summer slowdown led many companies and investment banks to put their plans to issue debt on hold until the autumn. However, the expected autumn rebound in the market did not materialize, and the lack of market confidence prevented deals from pricing. By the end of September, the full extent of the credit situation became obvious as major lenders includingCitigroupandUBSannounced major writedowns due to credit losses. The leveraged finance markets came to a near standstill during a week in 2007.[96]As 2008 began, lending standards tightened and the era of "mega-buyouts" came to an end. Nevertheless, private equity continues to be a large and active asset class and the private-equity firms, with hundreds of billions of dollars of committed capital from investors, are looking to deploy capital in new and different transactions.[citation needed]
As a result of the global financial crisis, private equity has become subject to increased regulation in Europe and is now subject, among other things, to rules preventing asset stripping of portfolio companies and requiring the notification and disclosure of information in connection with buy-out activity.[97][98]
From 2010 to 2014KKR,Carlyle,ApolloandAreswent public. Starting in 2018, these companies converted from partnerships into corporations, bringing more shareholder rights and inclusion in stock indices and mutual fund portfolios.[99]But with the increased availability and scope of funding provided by private markets, many companies are staying private simply because they can.McKinsey & Companyreports in its Global Private Markets Review 2018 that global private market fundraising increased by $28.2 billion from 2017, for a total of $748 billion in 2018.[100]Thus, given the abundance of private capital available, companies no longer require public markets for sufficient funding. Benefits may include avoiding the cost of an IPO, maintaining more control of the company, and having the 'legroom' to think long-term rather than focusing on short-term or quarterly figures.[101][102]
A new feature of the 2020s is regulated platforms that fractionalize assets, making individual investments of $10,000 or less possible.[103]
Private equity deal-making in the United Kingdom surged in 2024, with total investment reaching £63 billion, just 7% below the record high of £68 billion in 2021. According toDealogic, there were 305 private equity deals in 2024, marking a significant increase from 229 deals in 2023. The uptick in activity was driven by improving financial conditions and a rebound in investor confidence after a period of high interest rates in 2022 and 2023, which had slowed deal flow.[104]
Notable acquisitions included:
The rapid pace of acquisitions also contributed to the decline in the number of listed companies in London, as private equity firms increasingly targeted publicly traded businesses. Research byGoldman Sachsshowed that the London Stock Exchange experienced its fastest pace of shrinkage in over a decade due to private equity takeovers.[106]
However, concerns have been raised regarding the financial health of private equity-backed companies. TheBank of Englandissued a warning in 2024, stating that businesses owned by private equity firms were more vulnerable to default than other large businesses. The central bank’s research found that more than 2 million people in the UK were employed by firms engaged with private equity and that these companies were responsible for 15% of all corporate debt.[107]
Despite these risks, private equity interest in undervalued British companies has continued into 2025. As of early 2025, 19 deals worth a total of £2.9 billion have already been announced, highlighting the sector’s continued expansion.[108]
Although the capital for private equity originally came from individual investors or corporations, in the 1970s, private equity became an asset class in which variousinstitutional investorsallocated capital in the hopes of achieving risk-adjusted returns that exceed those possible in thepublic equity markets. In the 1980s, insurers were major private-equity investors. Later, public pension funds and university and other endowments became more significant sources of capital.[109]For most institutional investors, private-equity investments are made as part of a broad asset allocation that includes traditional assets (e.g.,public equityandbonds) and otheralternative assets(e.g.,hedge funds, real estate,commodities).
US, Canadian and European public and private pension schemes have invested in the asset class since the early 1980s todiversifyaway from their core holdings (public equity and fixed income).[110]Todaypension investment in private equityaccounts for more than a third of all monies allocated to theasset class, ahead of other institutional investors such as insurance companies, endowments, and sovereign wealth funds.
Most institutional investors do not invest directly inprivately held companies, lacking the expertise and resources necessary to structure and monitor the investment. Instead,institutional investorswill invest indirectly through aprivate equity fund. Certaininstitutional investorshave the scale necessary to develop a diversified portfolio of private-equity funds themselves, while others will invest through afund of fundsto allow a portfolio more diversified than one a single investor could construct.
Returns on private-equity investments are created through one or a combination of three factors: debt repayment or cash accumulation through cash flows from operations; operational improvements that increase earnings over the life of the investment; and multiple expansion, i.e. selling the business for a higher price than was originally paid. A key component of private equity as an asset class for institutional investors is that investments are typically realized after some period of time, which will vary depending on the investment strategy. Private-equity investment returns are typically realized through one of the following avenues:
Large institutional asset owners such as pension funds (with typically long-dated liabilities), insurance companies, sovereign wealth and national reserve funds have a generally low likelihood of facing liquidity shocks in the medium term, and thus can afford the required long holding periods characteristic of private-equity investment.[110]
The median horizon for an LBO transaction is eight years.[111]
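To make the three return drivers above concrete, here is a minimal value-bridge sketch over such a multi-year holding period. All figures are hypothetical, and the decomposition uses one common convention (earnings growth valued at the entry multiple, multiple expansion applied to exit earnings); practitioners use several variants.

```python
# Illustrative decomposition of a leveraged buyout return into the three
# drivers named above: debt repayment, earnings growth and multiple expansion.
# All figures are hypothetical ($ millions).

entry_ebitda, exit_ebitda = 100.0, 130.0     # operating earnings at entry and exit
entry_multiple, exit_multiple = 8.0, 9.0     # EV / EBITDA paid and received
entry_debt, exit_debt = 600.0, 450.0         # debt outstanding at entry and exit

entry_equity = entry_ebitda * entry_multiple - entry_debt    # 200
exit_equity  = exit_ebitda * exit_multiple - exit_debt       # 720

debt_paydown       = entry_debt - exit_debt                          # 150
earnings_growth    = (exit_ebitda - entry_ebitda) * entry_multiple   # 240
multiple_expansion = (exit_multiple - entry_multiple) * exit_ebitda  # 130

total_gain = exit_equity - entry_equity                              # 520
assert abs(total_gain - (debt_paydown + earnings_growth + multiple_expansion)) < 1e-9

print(f"equity multiple: {exit_equity / entry_equity:.1f}x")         # 3.6x
```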
The private-equity secondary market (also often called private-equity secondaries) refers to the buying and selling of pre-existing investor commitments to private equity and other alternative investment funds. Sellers of private-equity investments sell not only the investments in the fund but also their remaining unfunded commitments to the funds. By its nature, the private-equity asset class is illiquid, intended to be a long-term investment for buy-and-hold investors. For the vast majority of private-equity investments, there is no listed public market; however, there is a robust and maturing secondary market available for sellers of private-equity assets.
Increasingly,secondariesare considered a distinct asset class with a cash flow profile that is not correlated with other private-equity investments. As a result, investors are allocating capital to secondary investments to diversify their private-equity programs. Driven by strong demand for private-equity exposure, a significant amount of capital has been committed to secondary investments from investors looking to increase and diversify their private-equity exposure.
Investors seeking access to private equity have been restricted to investments with structural impediments such as long lock-up periods, lack of transparency, unlimited leverage, concentrated holdings of illiquid securities and high investment minimums.
Secondary transactions can be generally divided into two primary categories:
This is the most common type of secondary transaction, involving the sale of an investor’s interest in a private-equity fund or a portfolio of multiple fund interests. Transactions may take several forms:
Also known asGP-Centered,secondary directsorsynthetic secondaries, these transactions involve the sale of a portfolio of direct investments in portfolio companies. Subcategories include:
According toPrivate Equity International'slatest PEI 300 ranking,[114]the largest private-equity firm in the world today isThe Blackstone Groupbased on the amount of private-equity direct-investment capital raised over a five-year window.
As ranked by the PEI 300, the 15 largest private-equity firms in the world in 2024 were:
Becauseprivate-equity firmsare continuously in the process of raising, investing and distributing theirprivate equity funds, capital raised can often be the easiest to measure. Other metrics can include the total value of companies purchased by a firm or an estimate of the size of a firm's active portfolio plus capital available for new investments. As with any list that focuses on size, the list does not provide any indication as to relative investment performance of these funds or managers.
Preqin, an independent data provider, ranks the25 largest private-equity investment managers. Among the larger firms in the 2017 ranking wereAlpInvest Partners,Ardian(formerly AXA Private Equity),AIG Investments, andGoldman Sachs Capital Partners.Invest Europepublishes a yearbook which analyses industry trends derived from data disclosed by over 1,300 European private-equity funds.[115]Finally, websites such as AskIvy.net[116]provide lists of London-based private-equity firms.
The investment strategies of private-equity firms differ from those ofhedge funds. Typically, private-equity investment groups are geared towards long-hold, multiple-year investment strategies in illiquid assets (whole companies, large-scale real estate projects, or other tangibles not easily converted to cash) where they have more control and influence over operations or asset management to influence their long-term returns. Hedge funds usually focus on short or medium term liquid securities which are more quickly convertible to cash, and they do not have direct control over the business or asset in which they are investing.[117]Both private-equity firms and hedge funds often specialize in specific types of investments and transactions. Private-equity specialization is usually in specific industry sector asset management while hedge fund specialization is in industry sector risk capital management. Private-equity strategies can include wholesale purchase of a privately held company or set of assets,mezzanine financingfor startup projects,growth capitalinvestments in existing businesses orleveraged buyoutof a publicly held asset converting it to private control.[118]Finally, private-equity firms only takelong positions, forshort sellingis not possible in this asset class.
Private-equity fundraising refers to the action of private-equity firms seeking capital from investors for their funds. Typically an investor will invest in a specific fund managed by a firm, becoming a limited partner in the fund, rather than an investor in the firm itself. As a result, an investor will only benefit from investments made by a firm where the investment is made from the specific fund in which it has invested.
As fundraising has grown over the past few years, so too has the number of investors in the average fund. In 2004 there were 26 investors in the average private-equity fund; this figure has now grown to 42, according toPreqinltd. (formerly known as Private Equity Intelligence).
The managers of private-equity funds will also invest in their own vehicles, typically providing between 1% and 5% of the overall capital.
Often private-equity fund managers will employ the services of external fundraising teams known as placement agents in order to raise capital for their vehicles. The use of placement agents has grown over the past few years, with 40% of funds closed in 2006 employing their services, according to Preqin ltd. Placement agents will approach potential investors on behalf of the fund manager, and will typically take a fee of around 1% of the commitments that they are able to garner.
The amount of time that a private-equity firm spends raising capital varies depending on the level of interest among investors, which is defined by current market conditions and also the track record of previous funds raised by the firm in question. Firms can spend as little as one or two months raising capital when they are able to reach the target that they set for their funds relatively easily, often through gaining commitments from existing investors in their previous funds, or where strong past performance leads to strong levels of investor interest. Other managers may find fundraising taking considerably longer, with managers of less popular fund types finding the fundraising process tougher. It can take up to two years to raise capital, although the majority of fund managers will complete fundraising within nine to fifteen months.
Once a fund has reached its fundraising target, it will have a final close. After this point it is not normally possible for new investors to invest in the fund, unless they were to purchase an interest in the fund on the secondary market.
The state of the industry around the end of 2011 was as follows.[120]
Private-equityassets under managementprobably exceeded $2 trillion at the end of March 2012, and funds available for investment totaled $949bn (about 47% of overall assets under management).
Approximately $246bn of private equity was invested globally in 2011, down 6% on the previous year and around two-thirds below the peak activity in 2006 and 2007. Following on from a strong start, deal activity slowed in the second half of 2011 due to concerns over the global economy and sovereign debt crisis in Europe. There was $93bn in investments during the first half of 2012 as the slowdown persisted; this was down a quarter on the same period of the previous year. Private-equity backed buyouts generated some 6.9% of global M&A volume in 2011 and 5.9% in the first half of 2012. This was down on 7.4% in 2010 and well below the all-time high of 21% in 2006.
Global exit activity totalled $252bn in 2011, practically unchanged from the previous year, but well up on 2008 and 2009 as private-equity firms sought to take advantage of improved market conditions at the start of the year to realise investments. Exit activity, however, lost momentum following a peak of $113bn in the second quarter of 2011. TheCityUK estimates total exit activity of some $100bn in the first half of 2012, well down on the same period in the previous year.
The fundraising environment remained stable for the third year running in 2011, with $270bn in new funds raised, slightly down on the previous year's total. Around $130bn in funds was raised in the first half of 2012, down around a fifth on the first half of 2011. The average time for funds to achieve a final close fell to 16.7 months in the first half of 2012, from 18.5 months in 2011. Private-equity funds available for investment ("dry powder") totalled $949bn at the end of Q1 2012, down around 6% on the previous year. Including unrealised funds in existing investments, private-equity funds under management probably totalled over $2.0 trillion.
Public pensions are a major source of capital for private-equity funds. Increasingly,sovereign wealth fundsare growing as an investor class for private equity.[121]
Private equity was invested in 13% of the Pharma 1000 in 2021, according to Torreya, withEight Roads Ventureshaving the highest number of investments in this industry.[122]
Due to limited disclosure, studying the returns to private equity is relatively difficult. Unlike mutual funds, private-equity funds need not disclose performance data. And, as they invest in private companies, it is difficult to examine the underlying investments. It is challenging to compare private-equity performance to public-equity performance, in particular because private-equity fund investments are drawn and returned over time as investments are made and subsequently realized.
An oft-cited academic paper (Kaplan and Schoar, 2005)[123]suggests that the net-of-fees returns to PE funds are roughly comparable to the S&P 500 (or even slightly under). This analysis may actually overstate the returns because it relies on voluntarily reported data and hence suffers fromsurvivorship bias(i.e. funds that fail will not report data). One should also note that these returns are not risk-adjusted. A 2012 paper by Harris, Jenkinson and Kaplan[124]found that average buyout fund returns in the U.S. have actually exceeded those of public markets. These findings were supported by earlier work using a data set from Robinson and Sensoy (2011).[125]
Commentators have argued that a standard methodology is needed to present an accurate picture of performance, to make individual private-equity funds comparable and so the asset class as a whole can be matched against public markets and other types of investment. It is also claimed that PE fund managers manipulate data to present themselves as strong performers, which makes it even more essential to standardize the industry.[126]
Kaplan and Schoar (2005) report two other findings. First, there is considerable variation in performance across PE funds. Second, unlike the mutual fund industry, there appears to be performance persistence in PE funds: funds that perform well over one period tend to also perform well in the next period. Persistence is stronger for VC firms than for LBO firms.
The application of theFreedom of Information Act(FOIA) in certain states in the United States has made certain performance data more readily available. Specifically, FOIA has required certain public agencies to disclose private-equity performance data directly on their websites.[127]
In the United Kingdom, the second largest market for private equity, more data has become available since the 2007 publication of theDavid WalkerGuidelines for Disclosure and Transparency in Private Equity.[128]
Below is a partial list of billionaires who acquired their wealth through private equity.
Income to private equity firms is primarily in the form of "carried interest", typically 20% of the profits generated by investments made by the firm, and a "management fee", often 2% of the principal invested in the firm by the outside investors whose money the firm holds. As a result of atax loopholeenshrined in the U.S. tax code, carried interest that accrues to private equity firms is treated ascapital gains, which is taxed at a lower rate than isordinary income. Currently, the long term capital gains tax rate is 20% compared with the 37% top ordinary income tax rate for individuals. This loophole has been estimated to cost the government $130 billion over the next decade in unrealized revenue. Armies of corporate lobbyists and huge private equity industrydonations to political campaignsin the United States have ensured that this powerful industry receives this favorable tax treatment by the government. Private equity firms retain close to 200lobbyistsand over the last decade have made almost $600 million in political campaign contributions.[143]
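As a rough illustration of the fee and tax figures quoted above, the following sketch applies a hypothetical "2 and 20" structure to an assumed $1 billion fund with an assumed $400 million gain; it ignores hurdles, clawbacks, deductions and state taxes.

```python
# Hypothetical "2 and 20" economics and the tax-rate gap described above.
# The fund size and profit are assumed for illustration only.

committed_capital = 1_000_000_000          # assumed $1bn fund
fund_profit       = 400_000_000            # assumed total gain on investments

management_fee   = 0.02 * committed_capital    # 2% of committed capital, per year
carried_interest = 0.20 * fund_profit          # 20% of profits

capital_gains_rate   = 0.20                # long-term capital gains rate cited above
ordinary_income_rate = 0.37                # top ordinary income rate cited above

tax_as_capital_gain = carried_interest * capital_gains_rate      # $16m
tax_as_ordinary     = carried_interest * ordinary_income_rate    # $29.6m

print(f"annual management fee: ${management_fee / 1e6:.0f}m")
print(f"carried interest:      ${carried_interest / 1e6:.0f}m")
print(f"tax saved by capital-gains treatment: "
      f"${(tax_as_ordinary - tax_as_capital_gain) / 1e6:.1f}m")
```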
In addition, through an accounting maneuver called "fee waiver", private equity firms often also treat management fee income as capital gains. The U.S.Internal Revenue Service(IRS) lacks the manpower and the expertise that would be necessary to track compliance with even these already quite favorable legal requirements. In fact, the IRS conducts nearly noincome tax auditsof the industry. As a result of the complexity of the accounting that arises from the fact that most private equity firms are organized as large partnerships, such that the firm's profits are apportioned to each of the many partners, a number of private equity firms fail to comply with tax laws, according to industrywhistleblowers.[143]
There have been reports of reduced quality, both in services and in goods produced, when a private equity entity invests in a company, industry or public service.[144][145]While a private equity investment into a business might result in short-term improvements, such as new staff and equipment, the incentive is to maximize profits, not necessarily the quality of products or services. Over time, cost-cutting and the deferral of further investment have also been common. Private equity investors may also be incentivized to make short-term gains by selling a company once a certain level of profitability is achieved, or simply selling off its assets if that is not possible. Both of these situations, and others, can result in a loss of innovation and quality.[146][147][148][149][145][144]
There is a debate around the distinction between private equity andforeign direct investment(FDI), and whether to treat them separately. The difference is blurred on account of private equity not entering the country through the stock market. Private equity generally flows to unlisted firms and to firms where the percentage of shares is smaller than the promoter- or investor-held shares (also known asfree-floating shares). The main point of contention is that FDI is used solely for production, whereas in the case of private equity the investor can reclaim their money after a revaluation period and make investments in other financial assets. At present, most countries report private equity as a part of FDI.[150]
Some studies have shown that private-equity investments in health care and related services, such as nursing homes and hospitals, have decreased the quality of care while driving up costs. Researchers at theBecker Friedman Instituteof theUniversity of Chicagofound that private-equity ownership of nursing homes increased the short-term mortality of Medicare patients by 10%.[151]Treatment by private-equity owned health care providers tends to be associated with a higher rate of "surprise bills".[152]Private equity ownership of dermatology practices has led to pressure to increase profitability, concerns about up-charging and patient safety.[153][154]In a 2024 study of 51 private equity–acquired hospitals matched with 250 controls, the former had a 25% increase in hospital-acquired conditions, such as falls andcentral line-associated bloodstream infections.[155]
According to conservativeOren Cass, private equity captures wealth rather than creating it, and this capture can be "zero-sum, or even value-destroying, in aggregate." He describes a process in which "assets get shuffled and reshuffled, profits get made, but relatively little flows toward actual productive uses."[156]
Bloomberg Businessweekstates that:
PE may contribute to inequality in several ways. First, it offers investors higher returns than those available in public stocks and bonds markets. Yet, to enjoy those returns, it helps to already be rich. Private-equity funds are open solely to "qualified" (read: high-net-worth) individual investors and to institutions such as endowments. Only some workers get indirect exposure via pension funds. Second, PE puts pressure on the lower end of the wealth divide. Companies can be broken up, merged, or generally restructured to increase efficiency and productivity, which inevitably means job cuts.[4]
|
https://en.wikipedia.org/wiki/Private_equity
|
Asecurityis a tradablefinancial asset. The term commonly refers to any form offinancial instrument, but its legal definition varies by jurisdiction. In some countries and languages people commonly use the term "security" to refer to any form of financial instrument, even though the underlying legal and regulatory regime may not have such a broad definition. In some jurisdictions the term specifically excludes financial instruments other thanequityandfixed incomeinstruments. In some jurisdictions it includes some instruments that are close to equities and fixed income, e.g.,equity warrants.
Securities may be represented by a certificate or, more typically, they may be "non-certificated", that is in electronic (dematerialized) or "book entryonly" form. Certificates may bebearer, meaning they entitle the holder to rights under the security merely by holding the security, orregistered, meaning they entitle the holder to rights only if they appear on a security register maintained by the issuer or an intermediary. They include shares of corporatecapital stockormutual funds,bondsissued by corporations or governmental agencies,stock optionsor other options, limited partnership units, and various other formal investment instruments that are negotiable andfungible.
In the United Kingdom, theFinancial Conduct Authorityfunctions as the nationalcompetent authorityfor the regulation of financial markets; the definition in itsHandbookof the term "security"[1]applies only to equities,debentures, alternative debentures, government and public securities, warrants, certificates representing certain securities, units, stakeholder pension schemes, personal pension schemes, rights to or interests in investments, and anything that may be admitted to the Official List.
In the United States, a "security" is a tradablefinancial assetof any kind.[2]Securities can be broadly categorized into:
The company or other entity issuing the security is called theissuer. A country's regulatory structure determines what qualifies as a security. For example, private investment pools may have some features of securities, but they may not be registered or regulated as such if they meet various restrictions.
Securities are the traditional method used by commercial enterprises to raise new capital. They may offer an attractive alternative to bank loans - depending on their pricing and market demand for particular characteristics. A disadvantage of bank loans as a source of financing is that the bank may seek a measure of protection against default by the borrower via extensive financial covenants. Through securities, capital is provided by investors who purchase the securities upon their initial issuance. In a similar way, a government may issue securities when it chooses to increasegovernment debt.
Securities are traditionally divided into debt securities and equities.
Debt securities may be calleddebentures,bonds,deposits,notesorcommercial paperdepending on their maturity, collateral and other characteristics. The holder of a debt security is typically entitled to the payment of principal and interest, together with other contractual rights under the terms of the issue, such as the right to receive certain information. Debt securities are generally issued for a fixed term and redeemable by the issuer at the end of that term. Debt securities may be protected by collateral or may be unsecured, and, if they are unsecured, may be contractually "senior" to otherunsecured debtmeaning their holders would have a priority in a bankruptcy of the issuer. Debt that is not senior is "subordinated".
Corporate bondsrepresent the debt of commercial or industrial entities. Debentures have a long maturity, typically at least ten years, whereas notes have a shorter maturity. Commercial paper is a simple form of debt security that essentially represents a post-dated cheque with a maturity of not more than 270 days.
Money market instrumentsare short term debt instruments that may have characteristics of deposit accounts, such ascertificates of deposit,Accelerated Return Notes (ARN), and certainbills of exchange. They are highly liquid and are sometimes referred to as "near cash". Commercial paper is also often highly liquid.
Euro debt securitiesare securities issued internationally outside their domestic market in a denomination different from that of the issuer's domicile. They include eurobonds and euronotes. Eurobonds are characteristically underwritten, and not secured, and interest is paid gross. A euronote may take the form of euro-commercial paper (ECP) or euro-certificates of deposit.
Government bondsare medium or long term debt securities issued by sovereign governments or their agencies. Typically they carry a lower rate of interest than corporate bonds, and serve as a source of finance for governments. U.S. federal government bonds are calledtreasuries.Because of their liquidity and perceived low risk, treasuries are used to manage the money supply in theopen market operationsof non-US central banks.
Sub-sovereign government bonds, known in the U.S. asmunicipal bonds, represent the debt of state, provincial, territorial, municipal or other governmental units other than sovereign governments.
Supranational bondsrepresent the debt of international organizations such as theWorld Bank,[3]theInternational Monetary Fund,[4]regionalmultilateral development bankslike theAfrican Development[5]Bank and theAsian Development Bank,[6]and others.
An equity security is a share of equity interest in an entity such as the capital stock of a company, trust or partnership. The most common form of equity interest is common stock, although preferred equity is also a form of capital stock. The holder of an equity is a shareholder, owning a share, or fractional part of the issuer. Unlike debt securities, which typically require regular payments (interest) to the holder, equity securities are not entitled to any payment. In bankruptcy, they share only in the residual interest of the issuer after all obligations have been paid out to creditors. However, equity generally entitles the holder to a pro rata portion of control of the company, meaning that a holder of a majority of the equity is usually entitled to control the issuer. Equity also enjoys the right toprofitsandcapital gain, whereas holders of debt securities receive only interest and repayment ofprincipalregardless of how well the issuer performs financially. Furthermore, debt securities do not have voting rights outside of bankruptcy. In other words, equity holders are entitled to the "upside" of the business and to control the business.
Hybrid securities combine some of the characteristics of both debt and equity securities.
Preference sharesform an intermediate class of security between equities and debt. If the issuer is liquidated, preference shareholders have the right to receive interest or a return of capital prior to ordinary shareholders. However, from a legal perspective, preference shares are capital stocks and therefore may entitle the holders to some degree of control depending on whether they carry voting rights.
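The priority ordering described in the preceding paragraphs, debt before preference shares before ordinary shares, can be illustrated with a simple liquidation waterfall. The claim amounts and proceeds below are hypothetical.

```python
# Illustrative liquidation waterfall: creditors are paid first, then preference
# shareholders, and ordinary shareholders receive only the residual.
# All figures are hypothetical.

def waterfall(proceeds, claims):
    """Pay claims in order of seniority; ordinary equity gets whatever is left."""
    payouts = {}
    for name, claim in claims:
        paid = min(proceeds, claim)
        payouts[name] = paid
        proceeds -= paid
    payouts["ordinary shareholders"] = proceeds      # residual interest
    return payouts

claims = [("senior debt", 500), ("subordinated debt", 200), ("preference shares", 100)]

print(waterfall(650, claims))    # subordinated debt is impaired; equity gets nothing
print(waterfall(1_000, claims))  # all claims paid; ordinary shareholders keep the 200 residual
```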
Convertiblesare bonds orpreferred stocksthat can be converted, at the election of the holder of the convertibles, into the ordinary shares of the issuing company. The convertibility, however, may be forced if the convertible is acallable bond, and the issuer calls the bond. The bondholder has about one month to convert it, or the company will call the bond by giving the holder the call price, which may be less than the value of the converted stock. This is referred to as a forced conversion.
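The holder's choice in a forced conversion comes down to comparing the market value of the shares received on conversion with the call price. A minimal sketch, with hypothetical conversion ratio, share price and call price:

```python
# Hypothetical forced-conversion decision: when the issuer calls a convertible
# bond, the holder takes whichever is worth more, the converted shares or the
# call price. All numbers are illustrative.

conversion_ratio = 20          # shares received per bond
share_price      = 58.0        # current market price of the ordinary shares
call_price       = 1_050.0     # per-bond amount the issuer will pay on the call

converted_value = conversion_ratio * share_price   # 1,160

if converted_value > call_price:
    decision = "convert into shares"
else:
    decision = "accept the call price"

print(f"{decision} (conversion worth ${converted_value:,.0f} vs call ${call_price:,.0f})")
```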
Equity warrantsare options issued by the company that allow the holder of the warrant to purchase a specific number of shares at a specified price within a specified time. They are often issued together with bonds or existing equities, and are, sometimes, detachable from them and separately tradeable. When the holder of the warrant exercises it, he pays the money directly to the company, and the company issues new shares to the holder.
Warrants, like other convertible securities, increase the number of shares outstanding and are accounted for in financial reports in fully diluted earnings per share, which assumes that all warrants and convertibles will be exercised.
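A simplified calculation in the spirit of the statement above, assuming every warrant is exercised into one new share; actual accounting standards refine this (for example with the treasury-stock method), and all figures are hypothetical.

```python
# Simplified fully diluted earnings per share, assuming every outstanding
# warrant is exercised as described above. Figures are hypothetical.

net_income           = 50_000_000
shares_outstanding   = 20_000_000
warrants_outstanding = 2_000_000      # each assumed exercisable into one new share

basic_eps   = net_income / shares_outstanding                            # 2.50
diluted_eps = net_income / (shares_outstanding + warrants_outstanding)   # ~2.27

print(f"basic EPS:   {basic_eps:.2f}")
print(f"diluted EPS: {diluted_eps:.2f}")
```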
Securities may be classified according to many categories or classification systems:
Investors in securities may beretail, i.e., members of the public investing personally, other than by way of business.
In distinction, the greatest part of investment in terms of volume, iswholesale, i.e., by financial institutions acting on their own account, or on behalf of clients.
Importantinstitutional investorsincludeinvestment banks,insurancecompanies,pension fundsand other managed funds.
The "wholesaler" is typically anunderwriteror abroker-dealerwho trades with other broker-dealers, rather than with the retail investor.[7]
This distinction carries over tobanking; compareRetail bankingandWholesale banking.
The traditional economic function of the purchase of securities is investment, with the view to receivingincomeor achievingcapital gain. Debt securities generally offer a higher rate of interest than bank deposits, and equities may offer the prospect of capital growth.Equity investmentmay also offer control of the business of the issuer. Debt holdings may also offer some measure of control to the investor if the company is a fledgling start-up or an old giant undergoingrestructuring. In these cases, if interest payments are missed, the creditors may take control of the company and liquidate it to recover some of their investment.
The last decade has seen an enormous growth in the use of securities ascollateral. Purchasing securities with borrowed money secured by other securities or cash itself is called "buying on margin". Where A is owed a debt or other obligation by B, A may require B to deliverproperty rightsin securities to A, either at inception (transfer of title) or only in default (non-transfer-of-title institutional). For institutional loans, property rights are not transferred but nevertheless enable A to satisfy its claims in case B fails to make good on its obligations to A or otherwise becomesinsolvent. Collateral arrangements are divided into two broad categories, namelysecurity interestsand outright collateral transfers. Commonly, commercial banks, investment banks, government agencies and other institutional investors such asmutual fundsare significant collateral takers as well as providers. In addition, private parties may utilize stocks or other securities as collateral for portfolio loans insecurities lendingscenarios.
On the consumer level, loans against securities have grown into three distinct groups over the last decade:
Of the three, transfer-of-title loans have fallen into the very high-risk category as the number of providers has dwindled and regulators have launched an industry-wide crackdown on transfer-of-title structures where the private lender may sell orsell shortthe securities to fund the loan. Institutionally managed consumer securities-based loans, on the other hand, draw loan funds from the financial resources of the lending institution, not from the sale of the securities.
Collateral and sources of collateral are changing; in 2012 gold became a more acceptable form of collateral.[8]By 2015, exchange-traded funds (ETFs), previously seen by many as unpromising, had started to become more readily available and acceptable.[9]
Public securities markets are either primary or secondary markets. In the primary market, the money for the securities is received by the issuer of the securities from investors, typically in aninitial public offering(IPO). In the secondary market, the securities are simply assets held by one investor selling them to another investor, with the money going from one investor to the other.
An initial public offering, or "IPO", is when a company issues new public stock to investors. A company can later issue more new shares, or issue shares that have been previously registered in a shelf registration. These later new issues are also sold in the primary market, but they are not considered to be an IPO and are often called a "secondary offering". Issuers usually retaininvestment banksto assist them in administering the IPO, obtaining regulatory approval of the offering filing, and selling the new issue. When the investment bank buys the entire new issue from the issuer at a discount to resell it at a markup, it is called afirm commitment underwriting. However, if the investment bank considers the risk too great for an underwriting, it may only assent to abest efforts agreement, where the investment bank will simply do its best to sell the new issue.
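The economics of a firm-commitment underwriting can be sketched as follows; the offer size, 7% gross spread and demand shortfall are all assumed for illustration.

```python
# Hypothetical firm-commitment underwriting: the bank buys the whole issue at a
# discount, resells at the offer price, keeps the spread, and bears the risk of
# unsold shares. All numbers are illustrative.

shares_offered = 10_000_000
offer_price    = 20.00          # price to the public
gross_spread   = 0.07           # assumed 7% underwriting discount

proceeds_to_issuer = shares_offered * offer_price * (1 - gross_spread)   # $186m
underwriter_gross  = shares_offered * offer_price * gross_spread         # $14m

# If demand falls short, the unsold shares stay on the underwriter's books.
shares_sold     = 9_000_000
unsold_at_cost  = (shares_offered - shares_sold) * offer_price * (1 - gross_spread)

print(f"issuer receives:              ${proceeds_to_issuer / 1e6:.1f}m")
print(f"underwriter spread:           ${underwriter_gross / 1e6:.1f}m")
print(f"unsold inventory (at cost):   ${unsold_at_cost / 1e6:.1f}m")
```

Under a best efforts agreement, by contrast, the unsold shares would simply remain with the issuer and the bank would earn a commission only on what it actually placed.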
For the primary market to thrive, there must be asecondary market, or aftermarket that provides liquidity for the investment security—where holders of securities can sell them to other investors for cash. Otherwise, few people would purchase primary issues, and, thus, companies and governments would be restricted in raising equity capital (money) for their operations. Organized exchanges constitute the main secondary markets. Many smaller issues and most debt securities trade in the decentralized, dealer-basedover-the-countermarkets.
In Europe, theprincipal tradeorganization for securities dealers is the International Capital Market Association.[10]In the U.S., the principal trade organization for securities dealers is the Securities Industry and Financial Markets Association,[11]which is the result of the merger of the Securities Industry Association and the Bond Market Association. The Financial Information Services Division of the Software and Information Industry Association (FISD/SIIA)[12]represents a round-table of market data industry firms, referring to them as Consumers, Exchanges, and Vendors. In India the equivalent organisation is the Securities and Exchange Board of India (SEBI).
In the primary markets, securities may be offered to the public in apublic offering. Alternatively, they may be offered privately to a limited number of qualified persons in aprivate placement. Sometimes a combination of the two is used. The distinction between the two is important to securities regulation andcompany law. Privately placed securities are not publicly tradable and may only be bought and sold by sophisticated qualified investors. As a result, the secondary market is not nearly as liquid as it is for public (registered) securities.
Another category,sovereign bonds, is generally sold by auction to a specialized class of dealers.
Securities are often listed in astock exchange, an organized and officially recognized market on which securities can be bought and sold. Issuers may seek listings for their securities to attract investors, by ensuring there is a liquid and regulated market that investors can buy and sell securities in.
Growth in informal electronic trading systems has challenged the traditional business of stock exchanges. Large volumes of securities are also bought and sold "over the counter" (OTC). OTC dealing involves buyers and sellers dealing with each other by telephone or electronically on the basis of prices that are displayed electronically, usually byfinancial data vendorssuch as SuperDerivatives,Reuters,Investing.comandBloomberg.
There are also eurosecurities, which are securities that are issued outside their domestic market into more than one jurisdiction. They are generally listed on theLuxembourg Stock Exchangeor admitted to listing inLondon. The reasons for listing eurobonds include regulatory and tax considerations, as well as investment restrictions.
Securities services refers to the products and services that are offered to institutional clients that issue, trade, and hold securities. Banks engaged in securities services are usually called custodian banks. Market players includeBNY Mellon,J.P. Morgan,HSBC,Citi,BNP Paribas,Société Générale, etc.
London is the centre of the eurosecurities markets. There was a huge rise in the eurosecurities market in London in the early 1980s. Settlement of trades in eurosecurities is currently effected through two international central securities depositories, namelyEuroclear Bank(in Belgium) andClearstream Banking SA(formerly Cedel, in Luxembourg).
The main market for Eurobonds is the EuroMTS, owned by Borsa Italiana and Euronext. There are ramp-up markets in emerging countries, but they are growing slowly.
Securities that are represented in paper (physical) form are called certificated securities. They may bebearerorregistered.
Securities may also be held in the Direct Registration System (DRS), which is a method of recording shares of stock in book-entry form. Book-entry means the company's transfer agent maintains the shares on the owner's behalf without the need for physical share certificates. Shares held in un-certificated book-entry form have the same rights and privileges as shares held in certificated form.
Bearer securities are completely negotiable and entitle the holder to the rights under the security (e.g., to payment if it is a debt security, and voting if it is an equity security). They are transferred by delivering the instrument from person to person. In some cases, transfer is by endorsement, or signing the back of the instrument, and delivery.
Regulatory and fiscal authorities sometimes regard bearer securities negatively, as they may be used to facilitate the evasion of regulatory restrictions and tax. In theUnited Kingdom, for example, the issue of bearer securities was heavily restricted firstly by theExchange Control Act1947 until 1953. Bearer securities are very rare in the United States because of the negative tax implications they may have to the issuer and holder.
In Luxembourg, the law of 28 July 2014 concerning the compulsory deposit and immobilization of shares and units in bearer form adopts the compulsory deposit and immobilization of bearer shares and units with a depositary allowing identification of the holders thereof.
In the case of registered securities, certificates bearing the name of the holder are issued, but these merely represent the securities. A person does not automatically acquire legal ownership by having possession of the certificate. Instead, the issuer (or its appointed agent) maintains a register in which details of the holder of the securities are entered and updated as appropriate. A transfer of registered securities is effected by amending the register.
Modern practice has developed to eliminate both the need for certificates and maintenance of a complete security register by the issuer. There are two general ways this has been accomplished.
In some jurisdictions, such as France, it is possible for issuers of that jurisdiction to maintain a legal record of their securities electronically.
In theUnited States, the current "official" version of Article 8 of theUniform Commercial Codepermits non-certificated securities. However, the "official" UCC is a mere draft that must be enacted individually by eachU.S. state. Though all 50 states (as well as theDistrict of Columbiaand theU.S. Virgin Islands) have enacted some form of Article 8, many of them still appear to use older versions of Article 8, including some that did not permit non-certificated securities.[13]
To facilitate the electronic transfer of interests in securities without dealing with inconsistent versions of Article 8, a system has developed whereby issuers deposit a single global certificate representing all the outstanding securities of a class or series with a universal depository. This depository is calledThe Depository Trust Company, or DTC. DTC's parent,Depository Trust & Clearing Corporation(DTCC), is a non-profit cooperative owned by approximately thirty of the largest Wall Street players that typically act as brokers or dealers in securities. These thirty banks are called the DTC participants. DTC, through a legal nominee, owns each of the global securities on behalf of all the DTC participants.
All securities traded through DTC are in fact held, in electronic form, on the books of various intermediaries between the ultimate owner, e.g., a retail investor, and the DTC participants. For example, Mr. Smith may hold 100 shares of Coca-Cola, Inc. in his brokerage account at local broker Jones & Co. brokers. In turn, Jones & Co. may hold 1000 shares of Coca-Cola on behalf of Mr. Smith and nine other customers. These 1000 shares are held by Jones & Co. in an account with Goldman Sachs, a DTC participant, or in an account at another DTC participant. Goldman Sachs in turn may hold millions of Coca-Cola shares on its books on behalf of hundreds of brokers similar to Jones & Co. Each day, the DTC participants settle their accounts with the other DTC participants and adjust the number of shares held on their books for the benefit of customers like Jones & Co. Ownership of securities in this fashion is calledbeneficial ownership. Each intermediary holds on behalf of someone beneath him in the chain. The ultimate owner is called the beneficial owner. This is also referred to as owning in "Street name".
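The layered record-keeping described above can be pictured as nested books, with each intermediary recording positions for the accounts beneath it. The sketch below reuses the Mr. Smith / Jones & Co. example and assumes, purely for illustration, that the nine other customers hold 100 shares each.

```python
# Sketch of the custody chain described above. Names and per-customer amounts
# are taken from or assumed for the example; this is not DTC's actual data model.

broker_books = {                                     # Jones & Co.'s records of beneficial owners
    "Mr. Smith": 100,
    **{f"customer {i}": 100 for i in range(2, 11)},  # nine other customers, assumed 100 shares each
}

participant_books = {                                # a DTC participant's records of broker clients
    "Jones & Co.": sum(broker_books.values()),       # one omnibus position of 1,000 shares
    # ... hundreds of other brokers would appear here in practice
}

assert participant_books["Jones & Co."] == 1_000
# Mr. Smith is the beneficial owner; the participant, and ultimately DTC's
# nominee, hold the shares "in street name" on his behalf.
```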
Among brokerages and mutual fund companies, a large amount of mutual fund share transactions take place among intermediaries as opposed to shares being sold and redeemed directly with the transfer agent of the fund. Most of these intermediaries such as brokerage firms clear the shares electronically through the National Securities Clearing Corp. or "NSCC", a subsidiary of DTCC.
Besides DTC in the US, central securities depositories (CSDs) exist on a national basis in most jurisdictions. In addition, two major international CSDs exist, both based in Europe, namely Euroclear Bank and Clearstream Banking SA.
The terms "divided" and "undivided" relate to the proprietary nature of a security.
Each divided security constitutes a separate asset, which is legally distinct from each other security in the same issue. Pre-electronic bearer securities were divided. Each instrument constitutes the separate covenant of the issuer and is a separate debt.
With undivided securities, the entire issue makes up one single asset, with each of the securities being a fractional part of this undivided whole. Shares in the secondary markets are always undivided. The issuer owes only one set of obligations to shareholders under its memorandum, articles of association and company law. Asharerepresents an undivided fractional part of the issuing company. Registered debt securities also have this undivided nature.
In a fungible security, all holdings of the security are treated identically and are interchangeable.
Sometimes securities are not fungible with other securities, for example different series of bonds issued by the same company at different times with different conditions attaching to them.
In the US, the public offer and sale of securities must be either registered pursuant to a registration statement that is filed with theU.S. Securities and Exchange Commission(SEC) or offered and sold pursuant to an exemption therefrom. Dealing in securities is regulated by both federal authorities (SEC) and state securities departments. In addition, the brokerage industry is supposedly self-policed byself-regulatory organizations(SROs), such as theFinancial Industry Regulatory Authority(FINRA), formerly the National Association of Securities Dealers (or NASD), or theMunicipal Securities Rulemaking Board(MSRB).
With respect to investment schemes that do not fall within the traditional categories of securities listed in the definition of a security (Sec. 2(a)(1) of theSecurities Act of 1933and Sec. 3(a)(10) of the 34 act) the US Courts have developed a broad definition for securities that must then be registered with the SEC. When determining if there is an "investment contract" that must be registered the courts look for an investment of money, a common enterprise and expectation of profits to come primarily from the efforts of others. SeeSEC v. W.J. Howey Co.
"Anynote,capital stock,treasury stock,bond,debenture, certificate of interest or participation in anyprofit-sharing agreementor in any oil, gas, or other mineralroyaltyorlease, anycollateraltrust certificate, preorganization certificate or subscription, transferable share,investmentcontract, voting-trust certificate,certificate of deposit, for a security, anyput,call,straddle,option, or group or index of securities (including any interest therein or based on the value thereof), or any put, call, straddle, option, or privilege entered into on a nationalsecurities exchangerelating toforeign currency, or in general, anyinstrumentcommonly known as a 'security'; or any certificate of interest or participation in, temporary or interim certificate for, receipt for, or warrant or right to subscribe to or purchase, any of the foregoing; but shall not include currency or any note, draft,bill of exchange, or banker's acceptance which has amaturityat the time of issuance of not exceeding nine months, exclusive of days of grace, or any renewal thereof the maturity of which is likewise limited."
|
https://en.wikipedia.org/wiki/Security_(finance)
|
Astock market,equity market, orshare marketis the aggregation of buyers and sellers ofstocks(also called shares), which representownershipclaims on businesses; these may includesecuritieslisted on a publicstock exchangeas well as stock that is only traded privately, such as shares of private companies that are sold toinvestorsthroughequity crowdfundingplatforms. Investments are usually made with aninvestment strategyin mind.
The totalmarket capitalizationof all publicly traded stocks worldwide rose fromUS$2.5 trillion in 1980 to US$111 trillion by the end of 2023.[1]
As of 2016[update], there are 60 stock exchanges in the world. Of these, there are 16 exchanges with amarket capitalizationof $1 trillion or more, and they account for 87% ofglobal marketcapitalization. Apart from theAustralian Securities Exchange, these 16 exchanges are all inNorth America,Europe, orAsia.[2]
By country, the largest stock markets as of January 2022 are in the United States of America (about 59.9%), followed by Japan (about 6.2%) and the United Kingdom (about 3.9%).[3]
Astock exchangeis anexchange(or bourse) wherestockbrokersandtraderscan buy and sellshares(equitystock),bonds, and othersecurities. Manylarge companieshave their stocks listed on a stock exchange. This makes the stock more liquid and thus more attractive to many investors. The exchange may also act as a guarantor of settlement. These and other stocks may also be traded "over the counter" (OTC), that is, through a dealer. Some large companies will have their stock listed on more than one exchange in different countries, so as to attract international investors.[4]
Stock exchanges may also cover other types of securities, such as fixed-interest securities (bonds) or (less frequently) derivatives, which are more likely to be traded OTC.
Trade in stock markets means the transfer (in exchange for money) of a stock or security from a seller to a buyer. This requires these two parties to agree on a price.Equities(stocks or shares) confer an ownership interest in a particular company.
Participants in the stock market range from small individualstock investorsto larger investors, who can be based anywhere in the world, and may includebanks,insurancecompanies,pension fundsandhedge funds. Their buy or sell orders may be executed on their behalf by a stock exchangetrader.
Some exchanges are physical locations where transactions are carried out on a trading floor, by a method known asopen outcry. This method is used in some stock exchanges andcommodities exchanges, and involves traders shouting bid and offer prices. The other type of stock exchange has a network of computers where trades are made electronically. An example of such an exchange is theNASDAQ.
A potential buyerbidsa specific price for a stock, and a potential sellerasksa specific price for the same stock. Buying or sellingat theMarketmeans you will acceptanyask price or bid price for the stock. When the bid and ask prices match, a sale takes place, on a first-come, first-served basis if there are multiple bidders at a given price.
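The matching rule just described, a trade occurring when bid meets ask with ties at a price filled in arrival order, can be sketched in a few lines. This is an illustration only, not any exchange's actual matching engine; the prices and quantities are hypothetical.

```python
# Minimal sketch of price-time priority: an incoming market sell is filled
# against the queued bids at a price, oldest first. Illustrative only.

from collections import deque

# Resting buy orders at $10.05, in arrival order: (price, quantity, trader).
bids_at_price = deque([(10.05, 100, "first bidder"), (10.05, 200, "second bidder")])

def sell_at_market(qty):
    """Fill an incoming market sell against the queued bids, first come, first served."""
    while qty and bids_at_price:
        price, resting_qty, trader = bids_at_price.popleft()
        traded = min(qty, resting_qty)
        print(f"trade {traded} shares @ {price} with {trader}")
        qty -= traded
        if resting_qty > traded:                      # partially filled bid keeps its place
            bids_at_price.appendleft((price, resting_qty - traded, trader))
    return qty                                        # any unfilled remainder

sell_at_market(150)   # fills the first bidder fully and the second bidder partially
```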
The purpose of a stock exchange is to facilitate the exchange of securities between buyers and sellers, thus providing amarketplace. The exchanges provide real-time trading information on the listed securities, facilitatingprice discovery.
TheNew York Stock Exchange(NYSE) is a physical exchange, with ahybrid marketfor placing orders electronically from any location as well as on thetrading floor. Orders executed on the trading floor enter by way of exchange members and flow down to afloor broker, who submits the order electronically to the floor trading post for the Designatedmarket maker("DMM") for that stock to trade the order. The DMM's job is to maintain a two-sided market, making orders to buy and sell the security when there are no other buyers or sellers. If abid–ask spreadexists, no trade immediately takes place – in this case, the DMM may use their own resources (money or stock) to close the difference. Once a trade has been made, the details are reported on the "tape" and sent back to the brokerage firm, which then notifies the investor who placed the order. Computers play an important role, especially forprogram trading.
TheNASDAQis an electronic exchange, where all of the trading is done over acomputer network. The process is similar to the New York Stock Exchange. One or more NASDAQmarket makerswill always provide a bid and ask price at which they will always purchase or sell 'their' stock.
TheParis Bourse, now part ofEuronext, is an order-driven, electronic stock exchange. It was automated in the late 1980s. Prior to the 1980s, it consisted of an open outcry exchange.Stockbrokersmet on the trading floor of the Palais Brongniart. In 1986, theCATS trading systemwas introduced, and theorder matching systemwas fully automated.
People trading stock will prefer to trade on the mostpopular exchangesince this gives the largest number of potential counter parties (buyers for a seller, sellers for a buyer) and probably the best price. However, there have always been alternatives such as brokers trying to bring parties together to trade outside the exchange. Some third markets that were popular areInstinet, and later Island and Archipelago (the latter two have since been acquired by Nasdaq and NYSE, respectively). One advantage is that this avoids thecommissionsof the exchange. However, it also has problems such asadverse selection.[5]Financial regulators have probeddark pools.[6][7]
Market participantsinclude individual retail investors,institutional investors(e.g.,pension funds,insurance companies,mutual funds,index funds,exchange-traded funds,hedge funds, investor groups, banks and various otherfinancial institutions), and also publicly traded corporations trading in their own shares.Robo-advisors, which automate investment for individuals, are also major participants.
In 2021, the value of world stock markets experienced an increase of 26.5%, amounting to US$22.3 trillion. Developing economies contributed US$9.9 trillion and developed economies US$12.4 trillion. Asia and Oceania accounted for 45%, Europe had 37%, and America had 16%, while Africa had 2% of the global market.[8]
Factors such as high trading prices, market ratings, information about stock exchange dynamics, and financial institutions can influence individual and corporate participation in stock markets. Additionally, the appeal of stock ownership, driven by the potential for higher returns compared to other financial instruments, plays a crucial role in attracting individuals to invest in the stock market.
Regional and country-specific factors can also impact stock market participation rates. For example, in the United States, stock market participation rates vary widely across states, with regional factors potentially influencing these disparities. It is noted that individual participation costs alone cannot explain such large differences in participation rates from state to state, indicating the presence of other regional factors at play.[9]
Behavioral factors are recognized as significant influences on stock market participation, as evidenced by the low participation rates observed in the Ghanaian stock market.[10]
Factors such as factor endowments, geography, political stability,liberal trade policies, foreign direct investment inflows, and domestic industrial capacity are also identified as important in determining participation.[11]
Indirect investment involves owning shares indirectly, such as via a mutual fund or an exchange traded fund. Direct investment involves direct ownership of shares.[12]
Direct ownership of stock by individuals rose slightly from 17.8% in 1992 to 17.9% in 2007, with the median value of these holdings rising from $14,778 to $17,000.[13][14]Indirect participation in the form of retirement accounts rose from 39.3% in 1992 to 52.6% in 2007, with the median value of these accounts more than doubling from $22,000 to $45,000 in that time.[13][14]Rydqvist, Spizman, andStrebulaevattribute the differential growth in direct and indirect holdings to differences in the way each are taxed in the United States. Investments in pension funds and 401ks, the two most common vehicles of indirect participation, are taxed only when funds are withdrawn from the accounts. Conversely, the money used to directly purchase stock is subject to taxation as are any dividends or capital gains they generate for the holder. In this way, the current tax code incentivizes individuals to invest indirectly.[15]
Rates of participation and the value of holdings differ significantly across strata of income. In the bottom quintile of income, 5.5% of households directly own stock and 10.7% hold stocks indirectly in the form of retirement accounts.[14] The top decile of income has a direct participation rate of 47.5% and an indirect participation rate in the form of retirement accounts of 89.6%.[14] The median value of directly owned stock in the bottom quintile of income is $4,000 and is $78,600 in the top decile of income as of 2007.[16] The median value of indirectly held stock in the form of retirement accounts for the same two groups in the same year is $6,300 and $214,800 respectively.[16] Since the Great Recession of 2008, households in the bottom half of the income distribution have lessened their participation rate both directly and indirectly, from 53.2% in 2007 to 48.8% in 2013, while over the same period households in the top decile of the income distribution slightly increased participation from 91.7% to 92.1%.[17] The mean value of direct and indirect holdings at the bottom half of the income distribution moved slightly downward from $53,800 in 2007 to $53,600 in 2013.[17] In the top decile, the mean value of all holdings fell from $982,000 to $969,300 in the same time.[17] The mean value of all stock holdings across the entire income distribution is valued at $269,900 as of 2013.[17]
The racial composition of stock market ownership shows that households headed by whites are nearly four and six times as likely to directly own stocks as households headed by blacks and Hispanics respectively. As of 2011 the national rate of direct participation was 19.6%; for white households the participation rate was 24.5%, for black households it was 6.4% and for Hispanic households it was 4.3%. Indirect participation in the form of 401k ownership shows a similar pattern, with a national participation rate of 42.1%, a rate of 46.4% for white households, 31.7% for black households, and 25.8% for Hispanic households. Households headed by married couples participated at rates above the national averages, with 25.6% participating directly and 53.4% participating indirectly through a retirement account. 14.7% of households headed by men participated in the market directly and 33.4% owned stock through a retirement account. 12.6% of female-headed households directly owned stock and 28.7% owned stock indirectly.[14]
A 2003 paper by Vissing-Jørgensen attempts to explain disproportionate rates of participation along wealth and income groups as a function of fixed costs associated with investing. Her research concludes that a fixed cost of $200 per year is sufficient to explain why nearly half of all U.S. households do not participate in the market.[18] Participation rates have been shown to correlate strongly with education levels, supporting the hypothesis that the information and transaction costs of market participation are better absorbed by more educated households. Behavioral economists Harrison Hong, Jeffrey Kubik and Jeremy Stein suggest that sociability and the participation rates of communities have a statistically significant impact on an individual's decision to participate in the market. Their research indicates that social individuals living in states with higher-than-average participation rates are 5% more likely to participate than individuals who do not share those characteristics.[19] This phenomenon is also explained in cost terms: knowledge of market functioning diffuses through communities and consequently lowers the transaction costs associated with investing.
In 12th-century France, the courtiers de change were concerned with managing and regulating the debts of agricultural communities on behalf of the banks. Because these men also traded with debts, they could be called the first brokers. The Italian historian Lodovico Guicciardini described how, in late 13th-century Bruges, commodity traders gathered outdoors at a market square containing an inn owned by a family called Van der Beurze, and in 1409 they became the "Brugse Beurse", institutionalizing what had been, until then, an informal meeting.[20] The idea quickly spread around Flanders and neighboring countries and "Beurzen" soon opened in Ghent and Rotterdam. International traders, and especially the Italian bankers, present in Bruges since the early 13th century, took the word back to their countries to define the place for stock market exchange: first the Italians (Borsa), but soon also the French (Bourse), the Germans (Börse), Russians (birža), Czechs (burza), Swedes (börs), Danes and Norwegians (børs). In most languages, the word coincides with that for money bag, dating back to the Latin bursa, from which the name of the Van der Beurse family also derives.
In the middle of the 13th century, Venetian bankers began to trade in government securities. In 1351 the Venetian government outlawed spreading rumors intended to lower the price of government funds. Bankers in Pisa, Verona, Genoa and Florence also began trading in government securities during the 14th century. This was only possible because these were independent city-states ruled not by a duke but by a council of influential citizens. Italian companies were also the first to issue shares. Companies in England and the Low Countries followed in the 16th century. Around this time, a joint stock company—one whose stock is owned jointly by the shareholders—emerged and became important for the colonization of what Europeans called the "New World".[21]
There are now stock markets in virtually every developed and most developing economies, with the world's largest markets being in the United States, United Kingdom, Japan, India, China, Canada, Germany, France, South Korea and the Netherlands.[22]
Even in the days before perestroika, socialism was never a monolith. Within the Communist countries, the spectrum of socialism ranged from the quasi-market, quasi-syndicalist system of Yugoslavia to the centralized totalitarianism of neighboring Albania. One time I asked Professor von Mises, the great expert on the economics of socialism, at what point on this spectrum of statism would he designate a country as "socialist" or not. At that time, I wasn't sure that any definite criterion existed to make that sort of clear-cut judgment. And so I was pleasantly surprised at the clarity and decisiveness of Mises's answer. "A stock market," he answered promptly. "A stock market is crucial to the existence of capitalism and private property. For it means that there is a functioning market in the exchange of private titles to the means of production. There can be no genuine private ownership of capital without a stock market: there can be no true socialism if such a market is allowed to exist."
The stock market is one of the most important ways for companies to raise money, along with debt markets, which are generally more imposing but do not trade publicly.[24] This allows businesses to be publicly traded and to raise additional financial capital for expansion by selling shares of ownership of the company in a public market. The liquidity that an exchange affords investors enables holders of securities to quickly and easily sell them. This is an attractive feature of investing in stocks, compared to other less liquid investments such as property and other immovable assets.
History has shown that the price ofstocksand other assets is an important part of the dynamics of economic activity, and can influence or be an indicator of social mood. An economy where the stock market is on the rise is considered to be an up-and-coming economy. The stock market is often considered the primary indicator of a country's economic strength and development.[25]
Rising share prices, for instance, tend to be associated with increased business investment and vice versa. Share prices also affect the wealth of households and their consumption. Therefore,central bankstend to keep an eye on the control and behavior of the stock market and, in general, on the smooth operation offinancial systemfunctions. Financial stability is theraison d'êtreof central banks.[26]
Exchanges also act as the clearinghouse for each transaction, meaning that they collect and deliver the shares, and guarantee payment to the seller of a security. This eliminates the risk to an individual buyer or seller that thecounterpartycould default on the transaction.[27]
The smooth functioning of all these activities facilitateseconomic growthin that lower costs and enterprise risks promote the production of goods and services as well as possibly employment. In this way the financial system is assumed to contribute to increased prosperity, although some controversy exists as to whether the optimal financial system is bank-based or market-based.[28]
Events such as the2008 financial crisishave prompted a heightened degree of scrutiny of the impact of the structure of stock markets[29][30](calledmarket microstructure), in particular to the stability of the financial system and the transmission ofsystemic risk.[31]
One ongoing transformation is the move to electronic trading to replace human trading of listed securities.[30]
Changes in stock prices are mostly caused by external factors such as socioeconomic conditions, inflation and exchange rates. Intellectual capital does not affect a company stock's current earnings, but it does contribute to the growth of the stock's returns.[32]
Theefficient-market hypothesis(EMH) is a hypothesis in financial economics that states that asset prices reflect all available information at the current time.
The 'hard' efficient-market hypothesis does not explain the cause of events such as the crash in 1987, when the Dow Jones Industrial Average plummeted 22.6 percent—the largest-ever one-day fall in the United States.[33]
This event demonstrated that share prices can fall dramatically even though no generally agreed upon definite cause has been found: a thorough search failed to detectany'reasonable' development that might have accounted for the crash. (Such events are predicted to occur strictly byrandomness, although very rarely.) It seems also to be true more generally that many price movements (beyond those which are predicted to occur 'randomly') arenotoccasioned by new information; a study of the fifty largest one-day share price movements in the United States in the post-war period seems to confirm this.[33]
A 'soft' EMH has emerged which does not require that prices remain at or near equilibrium, but only that market participants cannotsystematicallyprofit from any momentary 'market anomaly'. Moreover, while EMH predicts that all price movement (in the absence of change in fundamental information) is random (i.e. non-trending)[dubious–discuss],[34]many studies have shown a marked tendency for the stock market to trend over time periods of weeks or longer. Various explanations for such large and apparently non-random price movements have been promulgated. For instance, some research has shown that changes in estimated risk, and the use of certain strategies, such as stop-loss limits andvalue at risklimits,theoretically couldcause financial markets to overreact. But the best explanation seems to be that the distribution of stock market prices is non-Gaussian[35](in which case EMH, in any of its current forms, would not be strictly applicable).[36][37]
Other research has shown that psychological factors may result inexaggerated(statistically anomalous) stock price movements (contrary to EMH which assumes such behaviors 'cancel out'). Psychological research has demonstrated that people are predisposed to 'seeing' patterns, and often will perceive a pattern in what is, in fact, justnoise, e.g. seeing familiar shapes in clouds or ink blots. In the present context, this means that a succession of good news items about a company may lead investors to overreact positively, driving the price up. A period of good returns also boosts the investors' self-confidence, reducing their (psychological) risk threshold.[38]
Another phenomenon—also from psychology—that works against anobjectiveassessment isgroup thinking. As social animals, it is not easy to stick to an opinion that differs markedly from that of a majority of the group. An example with which one may be familiar is the reluctance to enter a restaurant that is empty; people generally prefer to have their opinion validated by those of others in the group.
In one paper the authors draw an analogy with gambling.[39] In normal times the market behaves like a game of roulette; the probabilities are known and largely independent of the investment decisions of the different players. In times of market stress, however, the game becomes more like poker (herding behavior takes over). The players now must give heavy weight to the psychology of other investors and how they are likely to react.[40]
Stock markets play an essential role in growing industries that ultimately affect the economy by transferring available funds from units that have excess funds (savings) to those suffering from a funds deficit (borrowings) (Padhi and Naik, 2012). In other words, capital markets facilitate the movement of funds between these units. This process leads to the enhancement of available financial resources, which in turn affects economic growth positively.
Economic and financial theories argue that stock prices are affected by macroeconomic trends, such as changes in GDP, unemployment rates, national income, price indices, output, consumption, inflation, saving, investment, energy, international trade, immigration, productivity, aging populations, innovations and international finance.[41] Other trends cited include increasing corporate profits, increasing profit margins, higher concentration of business, lower company income, less vigorous activity, less progress, lower investment rates, lower productivity growth, a smaller employee share of corporate revenues,[42] a decreasing worker-to-beneficiary ratio (5:1 in 1960, 3:1 in 2009, and a projected 2.2:1 in 2030),[43] and an increasing female-to-male ratio of college graduates.[44]
Sometimes, the market seems to react irrationally to economic or financial news, even if that news is likely to have no real effect on the fundamental value of securities itself.[45]However, this market behaviour may be more apparent than real, since often such news was anticipated, and a counter reaction may occur if the news is better (or worse) than expected. Therefore, the stock market may be swayed in either direction by press releases, rumors,euphoriaandmass panic.
Over the short-term, stocks and other securities can be battered or bought by any number of fast market-changing events, making the stock market behavior difficult to predict. Emotions can drive prices up and down, people are generally not as rational as they think, and the reasons for buying and selling are generally accepted.
Behaviorists argue that investors often behaveirrationallywhen making investment decisions thereby incorrectly pricing securities, which causes market inefficiencies, which, in turn, are opportunities to make money.[46]However, the whole notion of EMH is that these non-rational reactions to information cancel out, leaving the prices of stocks rationally determined.
A stock market crash is often defined as a sharp dip in the share prices of stocks listed on the stock exchanges. In parallel with various economic factors, stock market crashes are also driven by panic and the investing public's loss of confidence. Often, stock market crashes end speculative economic bubbles.
There have been famous stock market crashes that have ended in the loss of billions of dollars and wealth destruction on a massive scale. An increasing number of people are involved in the stock market, especially since social security and retirement plans are being increasingly privatized and linked to stocks, bonds and other elements of the market. There have been a number of famous stock market crashes, such as the Wall Street Crash of 1929, the stock market crash of 1973–1974, the Black Monday of 1987, the Dot-com bubble of 2000, and the stock market crash of 2008.
One of the most famous stock market crashes started October 24, 1929, on Black Thursday. The Dow Jones Industrial Average lost 50% during this stock market crash. It was the beginning of the Great Depression.
Another famous crash took place on October 19, 1987 – Black Monday. The crash began in Hong Kong and quickly spread around the world.
By the end of October, stock markets in Hong Kong had fallen 45.5%, Australia 41.8%, Spain 31%, the United Kingdom 26.4%, the United States 22.68%, and Canada 22.5%. Black Monday itself was the largest one-day percentage decline in stock market history – the Dow Jones fell by 22.6% in a day. The names "Black Monday" and "Black Tuesday" are also used for October 28–29, 1929, which followed Terrible Thursday—the starting day of the stock market crash in 1929.
The crash in 1987 raised some puzzles – major news and events did not predict the catastrophe, and visible reasons for the collapse were not identified. This event raised questions about many important assumptions of modern economics, namely the theory of rational human conduct, the theory of market equilibrium and the efficient-market hypothesis. For some time after the crash, trading in stock exchanges worldwide was halted, since the exchange computers did not perform well owing to the enormous quantity of trades being received at one time. This halt in trading allowed the Federal Reserve System and central banks of other countries to take measures to control the spreading of a financial crisis. In the United States the SEC introduced several new measures of control into the stock market in an attempt to prevent a recurrence of the events of Black Monday.
This marked the beginning of the Great Recession. Starting in 2007 and lasting through 2009, financial markets experienced one of the sharpest declines in decades, and the downturn was more widespread than just the stock market: the housing market, lending market and even global trade experienced severe declines. Sub-prime lending led to the housing bubble bursting and was made famous by films such as The Big Short, which depicted how holders of large mortgages unwittingly fell prey to lenders. In many cases banks and major financial institutions failed outright, and major government intervention was required during the period. From October 2007 to March 2009, the S&P 500 fell 57% and would not recover to its 2007 level until April 2013.
The 2020 stock market crash was a major and sudden global stock market crash that began on 20 February 2020 and ended on 7 April. This market crash was due to the sudden outbreak of the global pandemic,COVID-19. The crash ended with a new deal that had a positive impact on the market.[48]
Since the early 1990s, many of the largest exchanges have adopted electronic 'matching engines' to bring together buyers and sellers, replacing the open outcry system. Electronic trading now accounts for the majority of trading in many developed countries. Computer systems were upgraded in the stock exchanges to handle larger trading volumes in a more accurate and controlled manner. The SEC modified the margin requirements in an attempt to lower the volatility of common stocks, stock options and the futures market. The New York Stock Exchange and the Chicago Mercantile Exchange introduced the concept of a circuit breaker. The circuit breaker halts trading if the Dow declines a prescribed number of points for a prescribed amount of time. In February 2012, the Investment Industry Regulatory Organization of Canada (IIROC) introduced single-stock circuit breakers.[49]
The movements of the prices in global, regional or local markets are captured in price indices called stock market indices, of which there are many, e.g. the S&P, the FTSE, the Euronext indices and the NIFTY and SENSEX of India. Such indices are usually market-capitalization weighted, with the weights reflecting the contribution of each stock to the index. The constituents of the index are reviewed frequently to include or exclude stocks in order to reflect the changing business environment.
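As a minimal sketch of how market-capitalization weighting works, the snippet below computes an index level and constituent weights; the tickers, share counts, prices and divisor are invented for illustration and do not correspond to any real index.

# Minimal sketch of a market-capitalization-weighted index (hypothetical data).
constituents = {
    # ticker: (shares outstanding, price per share)
    "AAA": (1_000_000, 50.0),
    "BBB": (2_000_000, 20.0),
    "CCC": (500_000, 120.0),
}
divisor = 1_500.0  # hypothetical divisor chosen by the index provider

total_market_cap = sum(shares * price for shares, price in constituents.values())
index_level = total_market_cap / divisor
weights = {
    ticker: (shares * price) / total_market_cap
    for ticker, (shares, price) in constituents.items()
}

print(f"Index level: {index_level:.2f}")
for ticker, weight in weights.items():
    print(f"{ticker}: weight {weight:.1%}")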
Financial innovation has brought many new financial instruments whose pay-offs or values depend on the prices of stocks. Some examples are exchange-traded funds (ETFs), stock index and stock options, equity swaps, single-stock futures, and stock index futures. These last two may be traded on futures exchanges (which are distinct from stock exchanges—their history traces back to commodity futures exchanges), or traded over-the-counter. As all of these products are only derived from stocks, they are sometimes considered to be traded in a (hypothetical) derivatives market, rather than the (hypothetical) stock market.
Stock that a trader does not actually own may be traded using short selling; margin buying may be used to purchase stock with borrowed funds; or derivatives may be used to control large blocks of stocks for a much smaller amount of money than would be required by outright purchase or sale.
In short selling, the trader borrows stock (usually from their brokerage, which holds its clients' shares or its own shares on account to lend to short sellers) and then sells it on the market, betting that the price will fall. The trader eventually buys back the stock, making money if the price fell in the meantime and losing money if it rose. Exiting a short position by buying back the stock is called "covering". This strategy may also be used by unscrupulous traders in illiquid or thinly traded markets to artificially lower the price of a stock. Hence most markets either prevent short selling or place restrictions on when and how a short sale can occur. The practice of naked shorting is illegal in most (but not all) stock markets.
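A rough sketch of the short-sale arithmetic described above: the profit is the sale proceeds minus the cost of covering, less any borrowing cost. The prices, share count and borrow fee below are hypothetical.

# Hypothetical short-sale profit/loss calculation.
shares = 100
sell_price = 50.0      # price at which the borrowed shares are sold
cover_price = 42.0     # price at which the shares are later bought back ("covering")
borrow_fee = 15.0      # assumed total cost of borrowing the shares

profit = shares * (sell_price - cover_price) - borrow_fee
print(f"Short-sale P/L: {profit:.2f}")  # positive only if the price fell enough to cover the fee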
In margin buying, the trader borrows money (at interest) to buy a stock and hopes for it to rise. Most industrialized countries have regulations that require that if the borrowing is based on collateral from other stocks the trader owns outright, it can be a maximum of a certain percentage of those other stocks' value. In the United States, the margin requirements have been 50% for many years (that is, if you want to make a $1000 investment, you need to put up $500, and there is often a maintenance margin below the $500).
A margin call is made if the total value of the investor's account cannot support the loss of the trade. (Upon a decline in the value of the margined securities additional funds may be required to maintain the account's equity, and with or without notice the margined security or any others within the account may be sold by the brokerage to protect its loan position. The investor is responsible for any shortfall following such forced sales.)
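The sketch below illustrates the 50% initial margin described above together with a simple maintenance-margin check; the 25% maintenance threshold and the price move are illustrative assumptions, not a statement of any particular broker's or regulator's rules.

# Hypothetical margin purchase: 50% initial margin, assumed 25% maintenance margin.
position_value = 1_000.0          # total value of stock purchased
initial_margin = 0.50             # investor puts up 50% ($500)
maintenance_margin = 0.25         # assumed maintenance requirement

equity = position_value * initial_margin
loan = position_value - equity    # amount borrowed from the broker

# Suppose the stock then falls 40%; the equity absorbs the whole loss.
new_value = position_value * 0.60
new_equity = new_value - loan

if new_equity / new_value < maintenance_margin:
    shortfall = maintenance_margin * new_value - new_equity
    print(f"Margin call: deposit at least {shortfall:.2f} or positions may be sold")
else:
    print("Account still meets the maintenance requirement")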
Regulation of margin requirements (by the Federal Reserve) was implemented after the Crash of 1929. Before that, speculators typically only needed to put up as little as 10 percent (or even less) of the total investment represented by the stocks purchased. Other rules may include the prohibition of free-riding: putting in an order to buy stocks without paying initially (there is normally a three-day grace period for delivery of the stock), but then selling them (before the three days are up) and using part of the proceeds to make the original payment (assuming that the value of the stocks has not declined in the interim).
Financial markets can be divided into different subtypes:
While the stock market is the marketplace for buying and selling company stocks, the foreign exchange market, also known asforexor FX, is the global marketplace for the purchase and sale of national currencies. It serves several functions, including facilitating currency conversions, managing foreign exchange risk through futures and forwards, and providing a platform for speculative investors to earn a profit on FX trading. The market includes various types of products, such as thespot market,futures market, forward market,swap market, and options market. For example, the spot market involves the immediate buying and selling of currencies, while the forward market allows for the buying and selling of currencies at an agreed exchange rate, with the actual exchange taking place at a future delivery date. The foreign exchange market is needed for facilitating global trade, including investments, the exchange of goods and services, and financial transactions, and it is considered one of the largest markets in the global economy.[52][53]
The electronic trading market refers to the digital marketplace where financial instruments such as stocks, bonds, currencies, commodities, and derivatives are bought and sold through online platforms. This market operates viaelectronic trading platforms, also known as online trading platforms, which are software applications that enable the trading of financial products over a network, typically through a financial intermediary. Platforms, such aseToro,Plus500,Robinhood, and AvaTrade serve as a digital medium for trading financial instruments and make financial markets more accessible, allowing individual investors to participate in trading without the need for traditional brokers or substantial capital. They also provide features such as real-time market data, stock price analysis, research reports, and news updates, which support decision-making in trading activities.[54]
These platforms often incorporate systems, such as theMartingale Trading System, used in forex trading. Additionally, online trading has evolved to includemobile trading apps, enabling transactions to be conducted remotely via smartphones.[55]
Many strategies can be classified as either fundamental analysis or technical analysis. Fundamental analysis refers to analyzing companies by their financial statements found in SEC filings, business trends, and general economic conditions. Technical analysis studies price actions in markets through the use of charts and quantitative techniques to attempt to forecast price trends based on historical performance, regardless of the company's financial prospects. One example of a technical strategy is the trend following method, used by John W. Henry and Ed Seykota, which uses price patterns and is also rooted in risk management and diversification.
Additionally, many choose to invest via passiveindex funds. In this method, one holds a portfolio of the entire stock market or some segment of the stock market (such as theS&P 500 IndexorWilshire 5000). The principal aim of this strategy is to maximize diversification, minimize taxes from realizing gains, and ride the general trend of the stock market to rise.
Responsible investment emphasizes and requires a long-term horizon on the basis offundamental analysisonly, avoiding hazards in the expected return of the investment.Socially responsible investingis another investment preference.
The average annual growth rate of the stock market, as measured by theS&P 500 index, has historically been around 10%.[56]This figure represents the long-term average return and is often cited as a benchmark for assessing the performance of the stock market as a whole.
The market's results from one year to the next may vary substantially from the long-term average. For instance, in 2012–2021, the S&P 500 index had an average annual return of 14.8%.[57]However, individual annual returns can fluctuate widely, with some years experiencing negative growth and others seeing substantial gains.
While the average stock market return is around 10% per year, inflation erodes roughly 2% to 3% of investors' purchasing power every year, which reduces the real rate of return on investments.[58]
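A back-of-the-envelope way to see this effect, using the roughly 10% nominal figure above and an assumed 3% inflation rate, is the Fisher relation, where the real return is (1 + nominal) / (1 + inflation) − 1:

# Approximate real return after inflation (illustrative figures).
nominal_return = 0.10   # long-run average nominal stock market return
inflation = 0.03        # assumed annual inflation rate

real_return = (1 + nominal_return) / (1 + inflation) - 1
print(f"Real return: {real_return:.2%}")   # about 6.8%, versus the naive 10% - 3% = 7%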
Taxation is a consideration of all investment strategies; profit from owning stocks, including dividends received, is subject to different tax rates depending on the type of security and the holding period. Most profit from stock investing is taxed via a capital gains tax. In many countries, corporations pay taxes to the government and the shareholders once again pay taxes when they profit from owning the stock, known as "double taxation".
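A simplified sketch of the "double taxation" point: the same profit is taxed once at the corporate level and again when distributed. The 25% corporate and 15% dividend tax rates below are purely illustrative assumptions, not any country's actual rates.

# Illustrative double taxation of distributed corporate profit.
pre_tax_profit = 100.0
corporate_tax_rate = 0.25     # assumed corporate income tax rate
dividend_tax_rate = 0.15      # assumed shareholder-level dividend tax rate

after_corporate_tax = pre_tax_profit * (1 - corporate_tax_rate)
dividend_after_tax = after_corporate_tax * (1 - dividend_tax_rate)
effective_rate = 1 - dividend_after_tax / pre_tax_profit

print(f"Shareholder receives {dividend_after_tax:.2f} of 100.00 "
      f"(effective combined tax rate {effective_rate:.1%})")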
The Indian stock exchanges, the Bombay Stock Exchange and the National Stock Exchange of India, have been rocked by several high-profile corruption scandals.[59][60] At times, the Securities and Exchange Board of India (SEBI) has barred various individuals and entities from trading on the exchanges for stock manipulation, especially in illiquid small-cap and penny stocks.[61][62][63]
|
https://en.wikipedia.org/wiki/Stock_market
|
Strategic financial management is the study of finance with a long-term view that considers the strategic goals of the enterprise. Financial management is sometimes referred to as "strategic financial management" to give it an increased frame of reference.
To understand what strategic financial management is about, we must first understand what is meant by the term "strategic": something done as part of a plan that is meant to achieve a particular purpose.
Therefore, strategic financial management comprises those aspects of the organisation's overall plan that concern financial management. This includes different parts of the business plan, for example the marketing and sales plan, production plan, personnel plan and capital expenditure, all of which have financial implications for the financial managers of an organisation.[1]
The objective of financial management is the maximisation of shareholders' wealth. To satisfy this objective a company requires a "long-term course of action", and this is where strategy fits in.
Strategic planning is an organisation's process for outlining and defining its strategy and the direction it is going, which leads to decision-making and the allocation of resources in line with this strategy. Techniques used in strategic planning include SWOT analysis, PEST analysis and STEER analysis. Often the plan covers one year, but more typically three to five years if a longer-term view is taken.
When making a financial strategy, financial managers need to include the following basic elements. More elements could be added, depending on the size and industry of the project.
Startup cost: for new business ventures and those started by existing companies; could include new fabricating equipment costs, new packaging costs and a marketing plan.
Competitive analysis: analysis of how the competition will affect revenues.
Ongoing costs: includes labour, materials, equipment maintenance, shipping and facilities costs; these need to be broken down into monthly figures and subtracted from the revenue forecast (see below).
Revenue forecast: projected over the length of the project, to determine how much will be available to cover the ongoing costs and whether the project will be profitable; a minimal sketch of this calculation follows below.
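A minimal sketch, under hypothetical figures, of how the startup cost, revenue forecast and ongoing costs combine into a month-by-month profitability check:

# Hypothetical monthly revenue forecast minus ongoing costs over a project's life.
monthly_revenue_forecast = [8_000, 9_500, 11_000, 12_500, 14_000, 15_500]
monthly_ongoing_costs = [10_000, 10_000, 10_500, 10_500, 11_000, 11_000]
startup_cost = 20_000   # one-off cost incurred before month 1

cumulative = -startup_cost
for month, (revenue, costs) in enumerate(zip(monthly_revenue_forecast,
                                             monthly_ongoing_costs), start=1):
    cumulative += revenue - costs
    print(f"Month {month}: net {revenue - costs:+,}  cumulative {cumulative:+,}")

print("Profitable over the period" if cumulative > 0 else "Not yet profitable")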
Broadly speaking, financial managers have to make decisions regarding four main topics within a company. These are as follows:
Each decision made by financial managers must be strategically sound: it should not only have financial benefits (e.g. increasing value in a discounted cash flow analysis) but must also take into account uncertain, unquantifiable factors that could be strategically beneficial.
To explain this further, a proposal with a negative result in the discounted cash flow analysis may still be accepted by financial managers if it is strategically beneficial to the company, in preference to a proposal with a positive discounted cash flow result that is not strategically beneficial.
For a financial manager in an organisation, this mainly concerns the selection of assets in which the firm's funds will be invested. These assets will be acquired if they prove to be strategically sound, and they fall into two classifications:
Financial managers in this field must select assets or investment proposals that provide a beneficial course of action, with benefits that will most likely accrue in the future and over the lifetime of the project. This is one of the most crucial financial decisions for a firm.
This is important for the short-term survival of the organisation, and is thus a prerequisite for long-term success; it mainly concerns the management of the current assets held on the company's balance sheet.
This plays a more minor role in this section; it comes under investment decisions because the revenue generated will be from investments and divestments.
Under each of the above headings, financial managers use the following financial figures as part of the evaluation process to determine whether a proposal should be accepted: payback period, NPV (net present value), IRR (internal rate of return) and DCF (discounted cash flow).
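A minimal sketch of these four evaluation figures for a single hypothetical proposal; the cash flows and the 10% discount rate are assumptions for illustration, and a real evaluation would use the firm's own cost of capital and conventions.

# Payback period, NPV, IRR and discounted cash flows for a hypothetical proposal.
cash_flows = [-10_000, 3_000, 4_000, 4_000, 3_000]   # year 0 outlay, then inflows
discount_rate = 0.10

# Discounted cash flows and net present value
dcf = [cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows)]
npv = sum(dcf)

# Simple (undiscounted) payback period: first year cumulative cash flow turns non-negative
cumulative, payback = 0.0, None
for year, cf in enumerate(cash_flows):
    cumulative += cf
    if cumulative >= 0 and payback is None:
        payback = year

# IRR by bisection: the discount rate at which NPV equals zero
def npv_at(rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

lo, hi = 0.0, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if npv_at(mid) > 0 else (lo, mid)
irr = (lo + hi) / 2

print(f"NPV at 10%: {npv:,.2f}")
print(f"Payback period: {payback} years")
print(f"IRR: {irr:.1%}")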
Financial managers have to decide on the financing mix, capital structure or leverage of a firm, which is the use of a combination of equity, debt or hybrid securities to fund the firm's activities or a new venture.
Financial managers often use the theory of capital structure to determine the ratio between equity and debt that should be used in a financing round for a company. The basis of the theory is that debt capital used beyond the point of minimum weighted average cost of capital will cause devaluation and unnecessary leverage for the company.
See Corporate finance § Capitalization structure for discussion and Weighted average cost of capital § Calculation for the formula.
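As a rough sketch of the weighted average cost of capital calculation referenced above, under illustrative assumptions for the equity and debt values, their costs and the tax rate:

# Illustrative weighted average cost of capital (WACC) calculation.
equity_value = 600_000.0
debt_value = 400_000.0
cost_of_equity = 0.12       # assumed required return on equity
cost_of_debt = 0.06         # assumed pre-tax cost of debt
tax_rate = 0.25             # assumed corporate tax rate (interest is tax-deductible)

total = equity_value + debt_value
wacc = (equity_value / total) * cost_of_equity \
     + (debt_value / total) * cost_of_debt * (1 - tax_rate)

print(f"WACC: {wacc:.2%}")  # 0.6*12% + 0.4*6%*0.75 = 9.00%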
The role of a financial manager often includes making sure the firm is liquid – able to finance itself in the short run without running out of cash. Financial managers also make the firm's decisions on investing in current assets, which can generally be defined as assets that can be converted into cash within one accounting year, including cash, short-term securities, debtors, etc.
The main indicator used here is net working capital: the difference between current assets and current liabilities. It can be positive or negative, indicating the company's current financial position and the health of its balance sheet.
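A trivial sketch of the net working capital figure just described, using hypothetical balances:

# Net working capital = current assets - current liabilities (hypothetical balances).
current_assets = {"cash": 50_000, "receivables": 80_000, "inventory": 70_000}
current_liabilities = {"payables": 90_000, "short_term_debt": 60_000}

net_working_capital = sum(current_assets.values()) - sum(current_liabilities.values())
print(f"Net working capital: {net_working_capital:+,}")   # positive here: +50,000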
Working capital management can be further split into:
This includes investment in receivables, that is, the volume of credit sales and the collection period, as well as credit policy, which covers credit standards, credit terms and collection efforts.
These are stocks of manufactured products and the materials that make up the product, including raw materials, work-in-progress, finished goods, and stores and spares (supplies). For a retail business, for example, this will be a major component of its current assets.
SeeInventory optimization.
This is concerned with the management of cash flow into and out of the firm, cash flow within the firm, and cash balances held by the firm at a point in time, by financing deficits or investing surplus cash. See cash management.
Financial managers often have to allocate profit between two outcomes: dividends paid out to shareholders and earnings retained within the firm. The ratio at which profit is distributed as dividends is called the dividend payout ratio.
This is largely dependent on the preference of the shareholders and the investment opportunities available within the firm, but also on the principle that there must be a balance: the payout must satisfy shareholders so that they continue to invest in the company, while the company also needs to retain profits to be reinvested so that more profit can be made in the future. Retention is also beneficial to shareholders through growth in the value of shares and through increased dividends paid out in the future. This implies that it is important for management and shareholders to agree on a balanced ratio from which both sides can benefit in the long term, although this is often an exception for shareholders who only wish to hold shares for short-term dividend gain.
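A minimal sketch of the payout-versus-retention split discussed above, with hypothetical profit and dividend figures:

# Hypothetical split of profit between dividends and retained earnings.
net_profit = 1_000_000.0
dividends_paid = 400_000.0

payout_ratio = dividends_paid / net_profit          # dividend payout ratio
retention_ratio = 1 - payout_ratio                  # share of profit reinvested in the firm

print(f"Payout ratio: {payout_ratio:.0%}, retention ratio: {retention_ratio:.0%}")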
|
https://en.wikipedia.org/wiki/Strategic_financial_management
|
Venture capital (VC) is a form of private equity financing provided by firms or funds to startup, early-stage, and emerging companies that have been deemed to have high growth potential or that have demonstrated high growth in terms of number of employees, annual revenue, scale of operations, etc. Venture capital firms or funds invest in these early-stage companies in exchange for equity, or an ownership stake. Venture capitalists take on the risk of financing start-ups in the hopes that some of the companies they support will become successful. Because startups face high uncertainty,[1] VC investments have high rates of failure. Start-ups are usually based on an innovative technology or business model and often come from high technology industries such as information technology (IT) or biotechnology.
Pre-seed andseedrounds are the initial stages of funding for a startup company,[2]typically occurring early in its development. During a seed round, entrepreneurs seek investment fromangel investors, venture capital firms, or other sources to finance the initial operations and development of their business idea. Seed funding is often used to validate the concept, build a prototype, or conductmarket research. This initial capital injection is crucial for startups to kickstart their journey and attract further investment in subsequent funding rounds.
Typical venture capital investments occur after an initial "seed funding" round. The first round of institutional venture capital to fund growth is called theSeries A round. Venture capitalists provide this financing in the interest of generating areturnthrough an eventual "exit" event, such as the company selling shares to the public for the first time in aninitial public offering(IPO), or disposal of shares happening via a merger, via a sale to another entity such as a financial buyer in theprivate equity secondary marketor via a sale to a trading company such as a competitor.
In addition to angel investing, equity crowdfunding and other seed funding options, venture capital is attractive for new companies with limited operating history that are too small to raise capital in the public markets and have not reached the point where they are able to secure a bank loan or complete a debt offering. In exchange for the high risk that venture capitalists assume by investing in smaller and early-stage companies, venture capitalists usually get significant control over company decisions, in addition to a significant portion of the companies' ownership (and consequently value). Companies that have reached a market valuation of over $1 billion are referred to as unicorns. As of May 2024 there were a reported total of 1,248 unicorn companies.[3] Venture capitalists also often provide strategic advice to the company's executives on its business model and marketing strategies.
Venture capital is also a way in which theprivateandpublic sectorscan construct an institution that systematically createsbusiness networksfor the new firms and industries so that they can progress and develop. This institution helps identify promising new firms and provide them with finance, technical expertise,mentoring, talent acquisition, strategic partnership, marketing "know-how", andbusiness models. Once integrated into the business network, these firms are more likely to succeed, as they become "nodes" in the search networks for designing and building products in their domain.[4]However, venture capitalists' decisions are often biased, exhibiting for instance overconfidence and illusion of control, much like entrepreneurial decisions in general.[5]
BeforeWorld War II(1939–1945) venture capital was primarily the domain of wealthy individuals and families.J.P. Morgan, theWallenbergs, theVanderbilts, theWhitneys, theRockefellers, and theWarburgswere notable investors in private companies. In 1938,Laurance S. Rockefellerhelped finance the creation of bothEastern Air LinesandDouglas Aircraft, and the Rockefeller family had vast holdings in a variety of companies.Eric M. WarburgfoundedE.M. Warburg & Co.in 1938, which would ultimately becomeWarburg Pincus, with investments in bothleveraged buyoutsand venture capital. TheWallenberg familystartedInvestor ABin 1916 in Sweden and were early investors in several Swedish companies such asABB,Atlas Copco, andEricssonin the first half of the 20th century.
Only after 1945 did "true" venture capital investment firms begin to emerge, notably with the founding ofAmerican Research and Development Corporation(ARDC) andJ.H. Whitney & Companyin 1946.[6][7]
Georges Doriot, the "father of venture capitalism",[8]along withRalph FlandersandKarl Compton(former president ofMIT) founded ARDC in 1946 to encourage private-sector investment in businesses run by soldiers returning from World War II. ARDC became the first institutional private-equity investment firm to raise capital from sources other than wealthy families. Unlike most present-day venture capital firms, ARDC was a publicly traded company. ARDC's most successful investment was its 1957 funding ofDigital Equipment Corporation(DEC), which would later be valued at more than $355 million after its initial public offering in 1968. This represented a return of over 1200 times its investment and anannualized rate of returnof 101% to ARDC.[9]
Former employees of ARDC went on to establish several prominent venture capital firms includingGreylock Partners, founded in 1965 by Charlie Waite and Bill Elfers; Morgan, Holland Ventures, the predecessor of Flagship Ventures, founded in 1982 by James Morgan; Fidelity Ventures, now Volition Capital, founded in 1969 by Henry Hoagland; andCharles River Ventures, founded in 1970 by Richard Burnes.[10]ARDC continued investing until 1971, when Doriot retired. In 1972 Doriot merged ARDC withTextronafter having invested in over 150 companies.[11]
John Hay Whitney(1904–1982) and his partnerBenno Schmidt(1913–1999) founded J.H. Whitney & Company in 1946. Whitney had been investing since the 1930s, foundingPioneer Picturesin 1933 and acquiring a 15% interest inTechnicolor Corporationwith his cousinCornelius Vanderbilt Whitney. Florida Foods Corporation proved Whitney's most famous investment. The company developed an innovative method for delivering nutrition to American soldiers, later known asMinute Maidorange juice and was sold toThe Coca-Cola Companyin 1960. J.H. Whitney & Company continued to make investments inleveraged buyouttransactions and raised $750 million for its sixthinstitutionalprivate-equity fundin 2005.[citation needed]
One of the first steps toward a professionally managed venture capital industry was the passage of theSmall Business Investment Act of 1958. The 1958 Act officially allowed the U.S.Small Business Administration(SBA) to license private "Small Business Investment Companies" (SBICs) to help the financing and management of the small entrepreneurial businesses in the United States.[12]The Small Business Investment Act of 1958 provided tax breaks that helped contribute to the rise of private-equity firms.[13]
During the 1950s, putting a venture capital deal together may have required the help of two or three other organizations to complete the transaction. It was a business that was growing very rapidly, and as the business grew, the transactions grew exponentially.[14] Arthur Rock, one of the pioneers of Silicon Valley through his backing of Fairchild Semiconductor, is often credited with introducing the term "venture capitalist", which has since become widely accepted.[15]
During the 1960s and 1970s, venture capital firms focused their investment activity primarily on starting and expanding companies. More often than not, these companies were exploiting breakthroughs in electronic, medical, or data-processing technology. As a result, venture capital came to be almost synonymous with financing of technology ventures. An early West Coast venture capital company was Draper and Johnson Investment Company, formed in 1962[16]byWilliam Henry Draper IIIand Franklin P. Johnson, Jr. In 1965,Sutter Hill Venturesacquired the portfolio of Draper and Johnson as a founding action.[17]Bill Draper and Paul Wythes were the founders, and Pitch Johnson formed Asset Management Company at that time.
It was also in the 1960s that the common form of private-equity fund, still in use today, emerged. Private-equity firms organized limited partnerships to hold investments, in which the investment professionals served as general partner and the investors, who were passive limited partners, put up the capital. The compensation structure, still in use today, also emerged, with limited partners paying an annual management fee of 1.0–2.5% and a carried interest typically representing up to 20% of the profits of the partnership.
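A rough sketch of this compensation structure, using an assumed 2% management fee and 20% carried interest on a hypothetical fund; the fund size, term and proceeds are invented for illustration, and real funds net fees and carry against distributions in more involved ways.

# Illustrative "2 and 20" economics for a hypothetical venture fund.
committed_capital = 100_000_000.0
annual_management_fee = 0.02
fund_life_years = 10
carried_interest = 0.20
gross_proceeds = 250_000_000.0   # assumed total value returned by the portfolio

management_fees = committed_capital * annual_management_fee * fund_life_years
profit = gross_proceeds - committed_capital
carry = carried_interest * max(profit, 0)          # general partner's share of profit
to_limited_partners = gross_proceeds - carry

print(f"Management fees over the fund's life: {management_fees:,.0f}")
print(f"Carried interest to the general partner: {carry:,.0f}")
print(f"Distributed to limited partners: {to_limited_partners:,.0f}")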
The growth of the venture capital industry was fueled by the emergence of the independent investment firms onSand Hill Road, beginning withKleiner PerkinsandSequoia Capitalin 1972. Located inMenlo Park, California, Kleiner Perkins, Sequoia and later venture capital firms would have access to the manysemiconductorcompanies based in theSanta Clara Valleyas well as earlycomputerfirms using their devices and programming and service companies.[note 1]Kleiner Perkinswas the first venture capital firm to open an office on Sand Hill Road in 1972.[18]
Throughout the 1970s, a group of private-equity firms, focused primarily on venture capital investments, would be founded that would become the model for later leveraged buyout and venture capital investment firms. In 1973, with the number of new venture capital firms increasing, leading venture capitalists formed the National Venture Capital Association (NVCA). The NVCA was to serve as theindustry trade groupfor the venture capital industry.[19]Venture capital firms suffered a temporary downturn in 1974, when the stock market crashed and investors were naturally wary of this new kind of investment fund.
It was not until 1978 that venture capital experienced its first major fundraising year, as the industry raised approximately $750 million. With the passage of theEmployee Retirement Income Security Act(ERISA) in 1974, corporate pension funds were prohibited from holding certain risky investments including many investments inprivately heldcompanies. In 1978, theUS Labor Departmentrelaxed certain restrictions of the ERISA, under the "prudent man rule"[note 2], thus allowing corporate pension funds to invest in the asset class and providing a major source of capital available to venture capitalists.
The public successes of the venture capital industry in the 1970s and early 1980s (e.g.,Digital Equipment Corporation,Apple Inc.,Genentech) gave rise to a major proliferation of venture capital investment firms. From just a few dozen firms at the start of the decade, there were over 650 firms by the end of the 1980s, each searching for the next major "home run". The number of firms multiplied, and the capital managed by these firms increased from $3 billion to $31 billion over the course of the decade.[20]
The growth of the industry was hampered by sharply declining returns, and certain venture firms began posting losses for the first time. In addition to the increased competition among firms, several other factors affected returns. The market for initial public offerings cooled in the mid-1980s before collapsing after the stock market crash in 1987, and foreign corporations, particularly fromJapanandKorea, flooded early-stage companies with capital.[20]
In response to the changing conditions, corporations that had sponsored in-house venture investment arms, includingGeneral ElectricandPaine Webbereither sold off or closed these venture capital units. Additionally, venture capital units withinChemical BankandContinental Illinois National Bank, among others, began shifting their focus from funding early stage companies toward investments in more mature companies. Even industry foundersJ.H. Whitney & CompanyandWarburg Pincusbegan to transition towardleveraged buyoutsandgrowth capitalinvestments.[20][21][22]
By the end of the 1980s, venture capital returns were relatively low, particularly in comparison with their emergingleveraged buyoutcousins, due in part to the competition for hot startups, excess supply of IPOs and the inexperience of many venture capital fund managers. Growth in the venture capital industry remained limited throughout the 1980s and the first half of the 1990s, increasing from $3 billion in 1983 to just over $4 billion more than a decade later in 1994.[23]
The advent of the World Wide Web in the early 1990s reinvigorated venture capital as investors saw companies with huge potential being formed. Netscape and Amazon were founded in 1994, and Yahoo! in 1995. All were funded by venture capital. Internet IPOs—AOL in 1992; Netcom in 1994; UUNet, Spyglass and Netscape in 1995; Lycos, Excite, Yahoo!, CompuServe, Infoseek, C/NET, and E*Trade in 1996; and Amazon, ONSALE, Go2Net, N2K, NextLink, and SportsLine in 1997—generated enormous returns for their venture capital investors. These returns, and the performance of the companies post-IPO, caused a rush of money into venture capital, increasing the number of venture capital funds raised from about 40 in 1991 to more than 400 in 2000, and the amount of money committed to the sector from $1.5 billion in 1991 to more than $90 billion in 2000.[24]
The bursting of thedot-com bubblein 2000 caused many venture capital firms to fail and financial results in the sector to decline.[citation needed]
TheNasdaqcrash and technology slump that started in March 2000 shook virtually the entire venture capital industry as valuations for startup technology companies collapsed. Over the next two years, many venture firms had been forced to write-off large proportions of their investments, and many funds were significantly "under water" (the values of the fund's investments were below the amount of capital invested). Venture capital investors sought to reduce the size of commitments they had made to venture capital funds, and, in numerous instances, investors sought to unload existing commitments for cents on the dollar in thesecondary market. By mid-2003, the venture capital industry had shriveled to about half its 2001 capacity. Nevertheless,PricewaterhouseCoopers'MoneyTree Survey[25]shows that total venture capital investments held steady at 2003 levels through the second quarter of 2005.[citation needed]
Although the post-boom years represent just a small fraction of the peak levels of venture investment reached in 2000, they still represent an increase over the levels of investment from 1980 through 1995. As a percentage of GDP, venture investment was 0.058% in 1994, peaked at 1.087% (nearly 19 times the 1994 level) in 2000 and ranged from 0.164% to 0.182% in 2003 and 2004. The revival of anInternet-driven environment in 2004 through 2007 helped to revive the venture capital environment. However, as a percentage of the overall private-equity market, venture capital has still not reached its mid-1990s level, let alone its peak in 2000.[citation needed]
Venture capital funds, which were responsible for much of the fundraising volume in 2000 (the height of thedot-com bubble), raised only $25.1 billion in 2006, a 2% decline from 2005 and a significant decline from its peak.[26]The decline continued till their fortunes started to turn around in 2010 with $21.8 billion invested (not raised).[27]The industry continued to show phenomenal growth and in 2020 hit $80 billion in fresh capital.[28]
Obtaining venture capital is substantially different from raising debt or a loan. Lenders have a legal right to interest on a loan and repayment of the capital irrespective of the success or failure of a business. Venture capital is invested in exchange for an equity stake in the business. The return of the venture capitalist as a shareholder depends on the growth and profitability of the business. This return is generally earned when the venture capitalist "exits" by selling its shareholdings when the business is sold to another owner.[29]
Venture capitalists are typically very selective in deciding what to invest in, with a Stanford survey of venture capitalists revealing that 100 companies were considered for every company receiving financing.[30]Ventures receiving financing must demonstrate an excellent management team, a large potential market, and most importantly high growth potential, as only such opportunities are likely capable of providing financial returns and a successful exit within the required time frame (typically 8–12 years) that venture capitalists expect.[31]
Because investments areilliquidand require the extended time frame to harvest, venture capitalists are expected to carry out detaileddue diligenceprior to investment. Venture capitalists also are expected to nurture the companies in which they invest, in order to increase the likelihood of reaching anIPOstage whenvaluationsare favourable. Venture capitalists typically assist at four stages in the company's development:[32]
Because there are no public exchanges listing their securities, private companies meet venture capital firms and other private-equity investors in several ways, including warm referrals from the investors' trusted sources and other business contacts; investor conferences and symposia; and summits where companies pitch directly to investor groups in face-to-face meetings, including a variant known as "Speed Venturing", which is akin to speed-dating for capital, where the investor decides within 10 minutes whether he wants a follow-up meeting. In addition, some new private online networks are emerging to provide additional opportunities for meeting investors.[33]
This need for high returns makes venture funding an expensive capital source for companies, and most suitable for businesses having large up-frontcapital requirements, which cannot be financed by cheaper alternatives such as debt. That is most commonly the case for intangible assets such as software, and other intellectual property, whose value is unproven. In turn, this explains why venture capital is most prevalent in the fast-growingtechnologyandlife sciencesorbiotechnologyfields.[34]
If a company does have the qualities venture capitalists seek including a solid business plan, a good management team, investment and passion from the founders, a good potential to exit the investment before the end of their funding cycle, and target minimum returns in excess of 40% per year, it will find it easier to raise venture capital.[citation needed]
There are multiple stages ofventurefinancing offered in venture capital, that roughly correspond to these stages of a company's development.[35]
In early stage and growth stage financings, venture-backed companies may also seek to takeventure debt.[39]
A venture capitalist, or sometimes simply a capitalist, is a person who makes capital investments in companies in exchange for an equity stake. The venture capitalist is often expected to bring managerial and technical expertise, as well as capital, to their investments. A venture capital fund refers to a pooled investment vehicle (in the United States, often an LP or LLC) that primarily invests the financial capital of third-party investors in enterprises that are too risky for the standard capital markets or bank loans. These funds are typically managed by a venture capital firm, which often employs individuals with technology backgrounds (scientists, researchers), business training and/or deep industry experience.[40]
A core skill within VCs is the ability to identify novel or disruptive technologies that have the potential to generate high commercial returns at an early stage. By definition, VCs also take a role in managing entrepreneurial companies at an early stage, thus adding skills as well as capital, thereby differentiating VC from buy-out private equity, which typically invest in companies with proven revenue, and thereby potentially realizing much higher rates of returns. Inherent in realizing abnormally high rates of returns is the risk of losing all of one's investment in a given startup company. As a consequence, most venture capital investments are done in a pool format, where several investors combine their investments into one large fund that invests in many different startup companies. By investing in the pool format, the investors are spreading out their risk to many different investments instead of taking the chance of putting all of their money in one start up firm.
Venture capital firms are typically structured aspartnerships, thegeneral partnersof which serve as the managers of the firm and will serve as investment advisors to the venture capital funds raised. Venture capital firms in the United States may also be structured aslimited liability companies, in which case the firm's managers are known as managing members. Investors in venture capital funds are known aslimited partners. This constituency comprises both high-net-worth individuals and institutions with large amounts of available capital, such as state and privatepension funds, universityfinancial endowments, foundations,insurancecompanies, andpooled investmentvehicles, calledfunds of funds.[41]
Venture capital firms differ in their motivations[42] and approaches; multiple factors shape each firm's strategy, and each firm is different.
Venture capital funds generally come in three types:[43]
Some of the factors that influence VC decisions include:
Within the venture capital industry, the general partners and other investment professionals of the venture capital firm are often referred to as "venture capitalists" or "VCs". Typical career backgrounds vary, but, broadly speaking, venture capitalists come from either an operational or a finance background. Venture capitalists with an operational background (operating partner) tend to be former founders or executives of companies similar to those which the partnership finances or will have served as management consultants. Venture capitalists with finance backgrounds tend to haveinvestment bankingor othercorporate financeexperience.
Although the titles are not entirely uniform from firm to firm, other positions at venture capital firms include:
The average maturity of mostventure capital fundsranges from 10 years to 12 years, with the possibility of a few years of extensions to allow for private companies still seeking liquidity. The investing cycle for most funds is generally three to five years, after which the focus is managing and making follow-on investments in an existing portfolio.[45]This model was pioneered by successful funds inSilicon Valleythrough the 1980s to invest in technological trends broadly but only during their period of ascendance, and to cut exposure to management and marketing risks of any individual firm or its product.
In such a fund, the investors have a fixed commitment to the fund that is initially unfunded and subsequently "called down" by the venture capital fund over time as the fund makes its investments. There are substantial penalties for a limited partner (or investor) that fails to participate in acapital call.[46]
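As a minimal sketch of how a drawdown is split among limited partners, the following assumes a hypothetical fund with three LPs whose commitments (names and figures invented for illustration) are called pro rata; real limited partnership agreements add notice periods, fee components, and the default penalties mentioned above.

```python
# Hypothetical commitments in $ millions; LP names and amounts are invented.
commitments = {"LP_A": 40.0, "LP_B": 25.0, "LP_C": 10.0}
total_commitment = sum(commitments.values())  # 75.0

def capital_call(amount_needed):
    """Split a drawdown across limited partners pro rata to committed capital."""
    return {lp: round(amount_needed * c / total_commitment, 2)
            for lp, c in commitments.items()}

# The general partner calls $15M to fund a new investment plus fees.
print(capital_call(15.0))  # {'LP_A': 8.0, 'LP_B': 5.0, 'LP_C': 2.0}
```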
It can take anywhere from a month to several years for venture capitalists to raise money from limited partners for their fund. At the time when all of the money has been raised, the fund is said to be closed and the 10-year lifetime begins. Some funds have partial closes when one half (or some other amount) of the fund has been raised. Thevintage yeargenerally refers to the year in which the fund was closed and may serve as a means to stratify VC funds for comparison.
From an investor's point of view, funds can be: (1)traditional—where all the investors invest with equal terms; or (2)asymmetric—where different investors have different terms. Typically asymmetry is seen in cases where investors have opposing interests, such as the need to not have unrelated business taxable income in the case of public tax-exempt investors.[47]
The decision process to fund a company is elusive. One study report in theHarvard Business Review[48]states that VCs rarely use standard financial analytics.[48]First, VCs engage in a process known as "generating deal flow," where they reach out to their network to source potential investments.[48]The study also reported that few VCs use any type of financial analytics when they assess deals; VCs are primarily concerned about the cash returned from the deal as a multiple of the cash invested.[48]According to 95% of the VC firms surveyed, VCs cite the founder or founding team as the most important factor in their investment decision.[48]Other factors are also considered, including intellectual property rights and the state of the economy.[49]Some argue that the most important thing a VC looks for in a company is high-growth.[50]
The funding decision process has spawned bias in the form of a large disparity between the funding received by men and minority groups, such as women and people of color.[51][52][53]In 2021, female founders only received 2% of VC funding in the United States.[54][52]Some research studies have found that VCs evaluate women differently and are less likely to fund female founders.[51]
Venture capitalists are compensated through a combination of management fees and carried interest, often referred to as a "two and 20" arrangement: typically an annual management fee of about 2% of committed capital plus roughly 20% of the fund's profits.
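A rough sketch of how such an arrangement plays out over a fund's life is shown below; all figures are hypothetical, and the calculation ignores hurdles, fee offsets, recycling, and the cash-flow timing that real fund accounting must handle.

```python
# Hypothetical "two and 20" economics for a single fund (illustrative only).
fund_size = 100.0        # committed capital, $M
mgmt_fee_rate = 0.02     # 2% of committed capital per year
carry_rate = 0.20        # 20% of profits above committed capital
fund_life_years = 10
exit_proceeds = 300.0    # gross proceeds returned by the portfolio, $M

management_fees = mgmt_fee_rate * fund_size * fund_life_years  # 20.0
profit = max(exit_proceeds - fund_size, 0.0)                   # 200.0
carried_interest = carry_rate * profit                         # 40.0
to_limited_partners = exit_proceeds - carried_interest         # 260.0

print(f"Gross multiple: {exit_proceeds / fund_size:.1f}x")              # 3.0x
print(f"Net multiple to LPs: {to_limited_partners / fund_size:.1f}x")   # 2.6x
print(f"GP earns ${management_fees:.0f}M in fees plus ${carried_interest:.0f}M carry")
```

The gap between the gross and net multiples illustrates why limited partners pay close attention to fee terms when committing to a fund.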
Because a fund may run out of capital prior to the end of its life, larger venture capital firms usually have several overlapping funds at the same time; doing so lets the larger firm keep specialists in all stages of the development of firms almost constantly engaged. Smaller firms tend to thrive or fail with their initial industry contacts; by the time the fund cashes out, an entirely new generation of technologies and people is ascending, whom the general partners may not know well, and so it is prudent to reassess and shift industries or personnel rather than attempt to simply invest more in the industry or people the partners already know.[citation needed]
Because of the strict requirements venture capitalists have for potential investments, many entrepreneurs seekseed fundingfromangel investors, who may be more willing to invest in highly speculative opportunities, or may have a prior relationship with the entrepreneur. Additionally, entrepreneurs may seek alternative financing, such asrevenue-based financing, to avoid giving up equity ownership in the business. For entrepreneurs seeking more than just funding, startup studios can be an appealing alternative to venture capitalists, as they provide operational support and an experienced team.[59]
Furthermore, many venture capital firms will only seriously evaluate an investment in a start-up company otherwise unknown to them if the company can prove at least some of its claims about the technology and/or market potential for its product or services. To achieve this, or even just to avoid the dilutive effects of receiving funding before such claims are proven, many start-ups seek to self-finance through sweat equity until they reach a point where they can credibly approach outside capital providers such as venture capitalists or angel investors. This practice is called "bootstrapping".
Equity crowdfunding is emerging as an alternative to traditional venture capital. Traditional crowdfunding is an approach to raising the capital required for a new project or enterprise by appealing to large numbers of ordinary people for small donations. While such an approach has long precedents in the sphere of charity, it is receiving renewed attention from entrepreneurs, now that social media and online communities make it possible to reach out to a group of potentially interested supporters at very low cost. Some equity crowdfunding models are also being applied specifically for startup funding, such as those listed at Comparison of crowd funding services. One of the reasons to look for alternatives to venture capital is the problem of the traditional VC model. The traditional VCs are shifting their focus to later-stage investments, and the return on investment of many VC funds has been low or negative.[33][60]
In Europe and India,Media for equityis a partial alternative to venture capital funding. Media for equity investors are able to supply start-ups with often significant advertising campaigns in return for equity. In Europe, an investment advisory firm offers young ventures the option to exchange equity for services investment; their aim is to guide ventures through the development stage to arrive at a significant funding, mergers and acquisition, or other exit strategy.[61]
In industries where assets can besecuritizedeffectively because they reliably generate future revenue streams or have a good potential for resale in case offoreclosure, businesses may more cheaply be able to raise debt to finance their growth. Good examples would include asset-intensive extractive industries such as mining, or manufacturing industries. Offshore funding is provided via specialist venture capital trusts, which seek to use securitization in structuring hybrid multi-market transactions via an SPV (special purpose vehicle): a corporate entity that is designed solely for the purpose of the financing.
In addition to traditional venture capital and angel networks, groups have emerged, which allow groups of small investors or entrepreneurs themselves to compete in a privatized business plan competition where the group itself serves as the investor through a democratic process.[62]
Law firms are also increasingly acting as an intermediary between clients seeking venture capital and the firms providing it.[63]
Other forms include venture resources that seek to provide non-monetary support to launch a new venture.
Every year, there are nearly 2 million businesses created in the US, but only 600–800 get venture capital funding.[64]According to the National Venture Capital Association, 11% of private sector jobs come from venture-backed companies and venture-backed revenue accounts for 21% of US GDP.[65]
In 2020, female-founded companies raised 2.8% of capital investment from venture capital, the highest amount recorded.[66][67]Babson College's Diana Report found that the number of womenpartnersin VC firms decreased from 10% in 1999 to 6% in 2014. The report also found that 97% of VC-funded businesses had malechief executives, and that businesses with all-male teams were more than four times as likely to receive VC funding compared to teams with at least one woman.[68]
Currently, about 3% of all venture capital is going to woman-led companies. More than 75% of VC firms in the US did not have any female venture capitalists at the time they were surveyed.[69]It was found that a greater fraction of VC firms had never had a woman represent them on the board of one of theirportfolio companies. For comparison, a UC Davis study focusing on large public companies in California found 49.5% with at least one femaleboard seat.[70]
Venture capital, as an industry, originated in the United States, and American firms have traditionally been the largest participants in venture deals with the bulk of venture capital being deployed in American companies. However, increasingly, non-US venture investment is growing, and the number and size of non-US venture capitalists have been expanding.[citation needed]
Venture capital has been used as a tool foreconomic developmentin a variety of developing regions. In many of these regions, with less developed financial sectors, venture capital plays a role in facilitatingaccess to financeforsmall and medium enterprises(SMEs), which in most cases would not qualify for receiving bank loans.[citation needed]
In 2008, while VC funding was still largely dominated by U.S. money ($28.8 billion invested in over 2,550 deals in 2008), compared to international fund investments ($13.4 billion invested elsewhere), there was an average 5% growth in venture capital deals outside the US, mainly in China and Europe.[71] Geographical differences can be significant. For instance, in the UK, 4% of British investment goes to venture capital, compared to about 33% in the U.S.[72]
VC funding has been shown to be positively related to a country's individualistic culture.[73] According to economist Jeffrey Funk, however, more than 90% of US startups valued over $1 billion lost money between 2019 and 2020, and the return on investment from VC has barely exceeded the return from public stock markets over the last 25 years.[74]
In theUnited States, venture capital investing reached $209.4 billion in 2022, the second-highest investment year in history.[75]
Venture capitalists invested some $29.1 billion in 3,752 deals in the U.S. through the fourth quarter of 2011, according to a report by the National Venture Capital Association. The same numbers for all of 2010 were $23.4 billion in 3,496 deals.[76]
According to a report by Dow Jones VentureSource, venture capital funding fell to $6.4 billion in the US in the first quarter of 2013, an 11.8% drop from the first quarter of 2012, and a 20.8% decline from 2011. Venture firms have added $4.2 billion into their funds this year, down from $6.3 billion in the first quarter of 2013, but up from $2.6 billion in the fourth quarter of 2012.[77]
Canadian technology companies have attracted interest from the global venture capital community partially as a result of generous tax incentives through the Scientific Research and Experimental Development (SR&ED) investment tax credit program.[citation needed] The basic incentive available to any Canadian corporation performing R&D is a refundable tax credit equal to 20% of "qualifying" R&D expenditures (labour, material, R&D contracts, and R&D equipment). An enhanced 35% refundable tax credit is available to certain (i.e. small) Canadian-controlled private corporations (CCPCs). Because the CCPC rules require a minimum of 50% Canadian ownership in the company performing R&D, foreign investors who would like to benefit from the larger 35% tax credit must accept a minority position in the company, which might not be desirable. The SR&ED program does not restrict the export of any technology or intellectual property that may have been developed with the benefit of SR&ED tax incentives.[citation needed]
Canada also has a fairly unusual form of venture capital generation in its labour-sponsored venture capital corporations (LSVCC). These funds, also known as Retail Venture Capital or Labour Sponsored Investment Funds (LSIF), are generally sponsored by labor unions and offer tax breaks from government to encourage retail investors to purchase the funds. Generally, these Retail Venture Capital funds only invest in companies where the majority of employees are in Canada. However, innovative structures have been developed to permit LSVCCs to direct investment into Canadian subsidiaries of corporations incorporated in jurisdictions outside of Canada.[citation needed] In 2022, the Information and Communications Technology (ICT) sector closed around 50% of Canada's venture capital deals, and 16% were in the Life Sciences.[78]
The Venture Capital industry inMexicois a fast-growing sector in the country that, with the support of institutions and private funds, is estimated to reach US$100 billion invested by 2018.[79][needs update]
In Australia and New Zealand, there have been three waves of VC, starting with Bill Ferris, who founded IVC in 1970. The second wave was led by Starfish & Southern Cross VC, with the latter producing the leading VC of the third wave, Blackbird.[80] There was a boom in 2018, and today there are more than one hundred active VC funds, syndicates, or angel investors making VC-style investments. There have been few Nasdaq IPOs of Australian VC-backed startups, with only Looksmart[81] from Bill Ferris's fund and Quantenna[82] from Larry Marshall's Southern Cross VC, but Blackbird is expected to IPO Canva soon.
The State of Startup Funding report found that in 2021, over $10 billion AUD was invested into Australian and New Zealand startups across 682 deals. This represents a 3x increase from the $3.1 billion that was invested in 2020.[83]
Some notable Australian and New Zealand startup success stories include graphic design companyCanva,[84]financial services providerAirwallex, New Zealand payments provider Vend (acquired by Lightspeed), rent-to-buy companyOwnHome,[85]and direct-to-consumer propositions such asEucalyptus(a house of direct-to-consumer telehealth brands), andLyka(a pet wellness company).[86]
In 2022, the largest Australian funds areBlackbird Ventures,Square Peg Capital, andAirtree Ventures. These three funds have more than $1 billion AUD under management across multiple funds. These funds have funding from institutional capital, including AustralianSuper and Hostplus, family offices, and sophisticated individual high-net-wealth investors.[87]
Outside of the 'Big 3', other notable institutional funds includeAfterWork Ventures,[88]Artesian, Folklore Ventures, Equity Venture Partners, Our Innovation Fund, Investible, Main Sequence Ventures (the VC arm of the CSIRO), OneVentures, Proto Axiom, and Tenacious Ventures.
As the number of capital providers in the Australian and New Zealand ecosystem has grown, funds have started to specialise and innovate to differentiate themselves. For example, Tenacious Ventures is a $35 million specialised agritech fund,[89] while AfterWork Ventures is a 'community-powered fund' that has coalesced a group of 120 experienced operators from across Australia's startups and tech companies. Its community is invested in its fund and leans in to assist with sourcing and evaluating deal opportunities, as well as supporting companies post-investment.[90]
Several Australian corporates have corporate VC arms, including NAB Ventures, Reinventure (associated with Westpac), IAG Firemark Ventures, and Telstra Ventures.
Leading early-stage venture capital investors in Europe include Mark Tluszcz of Mangrove Capital Partners and Danny Rimer of Index Ventures, both of whom were named on Forbes Magazine's Midas List of the world's top dealmakers in technology venture capital in 2007.[91] In 2020, the first Italian venture capital fund named Primo Space was launched by Primomiglio SGR. This fund had a first close of €58 million towards a target of €80 million and is focused on Space investing.[92]
Comparing the EU market to the United States, in 2020 venture capital funding was seven times lower, with the EU having fewer unicorns. This hampers the EU's transformation into a green and digital economy.[93][94][95]
As of 2024, tighter financial conditions have harmed venture capital funding in the European Union, which remains undeveloped in comparison to the United States.[96]
The EU lags significantly behind the US and China in venture capital investment: venture capital funds in the EU account for only about 5% of the global total, whereas those in the United States and China secure 52% and 40%, respectively.[97][98] The financing gap for EU scale-ups is significant, with companies raising 50% less capital than those in Silicon Valley. This disparity exists across industries and is unaffected by the business cycle or year of establishment.[97][99]
TheEuropean Green Dealhas fostered policies that contributed to a 30% rise in venture capital specifically for greentech companies in the EU from 2021 to 2023, despite a downturn in other sectors during the same period.[97][100]
Recent years have seen a revival of the Nordic venture scene with more than €3 billion raised by VC funds in the Nordic region over the last five years. Over the past five years, a total of €2.7 billion has been invested into Nordic startups. Known Nordic early-stage venture capital funds include NorthZone (Sweden), Maki.vc (Finland) and ByFounders (Copenhagen).[101]
Many Swiss start-ups are university spin-offs, in particular from its federal institutes of technology inLausanneandZurich.[102]According to a study by theLondon School of Economicsanalysing 130ETH Zurichspin-offsover 10 years, about 90% of these start-ups survived the first five critical years, resulting in anaverage annual IRRof more than 43%.[103]Switzerland's most active early-stage investors are TheZurich Cantonal Bank, investiere.ch, Swiss Founders Fund, as well as a number ofangel investorclubs.[104]In 2022, half of the total amount ofCHF4 billion investments went to theICTandFintechsectors, whereas 21% was invested inCleantech.[105]
As of March 2019, there are 130 active VC firms inPolandwhich have invested locally in over 750 companies, an average of 9 companies per portfolio. Since 2016, new legal institutions have been established for entities implementing investments in enterprises in the seed or startup phase. In 2018, venture capital funds invested€178Min Polish startups (0.033% of GDP). As of March 2019, total assets managed by VC companies operating in Poland are estimated at€2.6B. The total value of investments of the Polish VC market is worth€209.2M.[106]
The Bulgarian venture capital industry has been growing rapidly in the past decade. As of the beginning of 2021, there are 18 VC and growth equity firms on the local market, with the total funding available for technology startups exceeding €200M. According to BVCA – Bulgarian Private Equity and Venture Capital Association, 59 transactions of total value of €29.4 million took place in 2020.[107]Most of the venture capital investments in Bulgaria are concentrated in the seed and Series A stages. Sofia-based LAUNCHub Ventures recently launched one of the biggest funds in the region, with a target size of €70 million.[108]
South Korea has been undergoing an investment boom over the last ten years, peaking at US$10 billion in 2021. The Korean government and mega-corporations such as Kakao, Smilegate, SK, and Lotte have been behind much of the funding, backing both venture firms and accelerators, but new venture capitalists are in dire straits following an announced 40% cut in financing in 2024.[109]
India is catching up with the West in the field of venture capital and a number of venture capital funds have a presence in the country (IVCA). In 2006, the total amount of private equity and venture capital in India reached $7.5 billion across 299 deals.[110] In the Indian market, venture capital consists of investing in equity, quasi-equity, or conditional loans in order to promote unlisted, high-risk, or high-tech firms driven by technically or professionally qualified entrepreneurs. It is also used to refer to investors "providing seed", "start-up and first-stage financing",[111] or financing companies that have demonstrated extraordinary business potential. Venture capital refers to capital investment, in equity and debt, both of which carry indubitable risk; the anticipated risk is very high. The venture capital industry follows the concept of "high risk, high return"; innovative entrepreneurship, knowledge-based ideas and human-capital-intensive enterprises have become common as venture capitalists invest in risky finance to encourage innovation.[112] A large portion of funding for startups in India comes from foreign venture capital funds such as Sequoia, Accel, Tiger Global, SoftBank, etc.[113]
Chinais also starting to develop a venture capital industry (CVCA).
Vietnamis experiencing its first foreign venture capitals, including IDG Venture Vietnam ($100 million) and DFJ Vinacapital ($35 million).[114]
Singaporeis widely recognized and featured as one of the hottest places to both start up and invest, mainly due to its healthy ecosystem, its strategic location and connectedness to foreign markets.[115]With 100 deals valued at US$3.5 billion, Singapore saw a record value of PE and VC investments in 2016. The number of PE and VC investments increased substantially over the last 5 years: In 2015, Singapore recorded 81 investments with an aggregate value of US$2.2 billion while in 2014 and 2013, PE and VC deal values came to US$2.4 billion and US$0.9 billion respectively. With 53 percent, tech investments account for the majority of deal volume.
Moreover, Singapore is home to two of South-East Asia's largest unicorns.Garenais reportedly the highest-valued unicorn in the region with a US$3.5 billion price tag, whileGrabis the highest-funded, having raised a total of US$1.43 billion since its incorporation in 2012.[116]
Start-ups and small businesses in Singapore receive support from policymakers and the local government fosters the role VCs play to support entrepreneurship in Singapore and the region. For instance, in 2016, Singapore'sNational Research Foundation (NRF)has given out grants up to around $30 million to four large local enterprises for investments in startups in the city-state. This first of its kind partnership NRF has entered into is designed to encourage these enterprises to source for new technologies and innovative business models.[117]
Currently, the rules governing VC firms are being reviewed by the Monetary Authority of Singapore (MAS) to make it easier to set up funds and increase funding opportunities for start-ups. This mainly includes simplifying and shortening the authorization process for new venture capital managers and studying whether existing incentives that have attracted traditional asset managers will be suitable for the VC sector. A public consultation on the proposals was held in January 2017, with changes expected to be introduced by July.[118]
In recent years, Singapore's focus in venture capital investments has geared more towards more early stage, deep tech startups,[119]with the government launching SGInnovate in 2016[120]to support the development of deep tech startups. Deep tech startups aim to address significant scientific problems. Singapore's tech startup scene has grown in recent years, and the city-state ranked seventh in the latest Global Innovation Index 2022. For the first nine months of 2022, investments up to Series B rounds amounted to $5.5 billion Singapore dollars ($4 billion), an increase of 14% by volume and 45% by value, according to data from government agency Enterprise Singapore.
The Middle East and North Africa (MENA) venture capital industry is at an early stage of development but growing. According to the H1 2019 MENA Venture Investment Report by MAGNiTT, 238 startup investment deals took place in the region in the first half of 2019, totaling $471 million in investments. Compared to 2018's H1 report, this represents an increase of 66% in total funding and 28% in the number of deals.
According to the report, theUAEis the most active ecosystem in the region with 26% of the deals made in H1, followed byEgyptat 21%, andLebanonat 13%. In terms of deals by sector, fintech remains the most active industry with 17% of the deals made, followed by e-commerce at 12%, and delivery and transport at 8%.
The report also notes that a total of 130 institutions invested in MENA-based startups in H1 2019, 30% of which were headquartered outside the MENA, demonstrating international appetite for investments in the region. 15 startup exits have been recorded in H1 2019, withCareem's $3.1 billion acquisition byUberbeing the first unicorn exit in the region.[121]Other notable exits include Souq.com exit toAmazonin 2017 for $650 million.[122]
In Israel, high-tech entrepreneurship and venture capital have flourished well beyond the country's relative size. As it has few natural resources and historically has been forced to build its economy on knowledge-based industries, its VC industry has rapidly developed and nowadays comprises about 70 active venture capital funds, of which 14 are international VCs with Israeli offices, plus an additional 220 international funds which actively invest in Israel. In addition, as of 2010, Israel led the world in venture capital invested per capita. Israel attracted $170 per person compared to $75 in the US.[123] About two thirds of the funds invested were from foreign sources, and the rest domestic. In 2013, Wix.com joined 62 other Israeli firms on the Nasdaq.[124]
The Southern African venture capital industry is developing. The South African government and Revenue Service are following the international trend of using tax-efficient vehicles to propel economic growth and job creation through venture capital. Section 12J of the Income Tax Act was updated to include venture capital. Companies are allowed to use a tax-efficient structure similar to VCTs in the UK. Despite the above structure, the government needs to adjust its regulation around intellectual property, exchange control and other legislation to ensure that venture capital succeeds.[citation needed]
Currently, there are not many venture capital funds in operation and it is a small community; however, the number of venture funds is steadily increasing with new incentives slowly coming in from government. Funds are difficult to come by and, due to the limited funding, companies are more likely to receive funding if they can demonstrate initial sales or traction and the potential for significant growth. The majority of the venture capital in Sub-Saharan Africa is centered on South Africa and Kenya.[citation needed]
Entrepreneurship is a key to growth. Governments will need to ensure business-friendly regulatory environments in order to help foster innovation. In 2019, venture capital startup funding grew rapidly to 1.3 billion dollars. The causes are as yet unclear, but education is certainly a factor.[125]
Unlikepublic companies, information regarding an entrepreneur's business is typicallyconfidentialand proprietary. As part of thedue diligenceprocess, most venture capitalists will require significant detail with respect to a company'sbusiness plan. Entrepreneurs must remain vigilant about sharing information with venture capitalists that are investors in their competitors. Most venture capitalists treat information confidentially, but as a matter of business practice, they do not typically enter intoNon Disclosure Agreementsbecause of the potential liability issues those agreements entail. Entrepreneurs are typically well advised to protect truly proprietaryintellectual property.[citation needed]Startups commonly use adata roomto securely share this information with potential investors during the due diligence process.
Limited partnersof venture capital firms typically have access only to limited amounts of information with respect to the individual portfolio companies in which they are invested and are typically bound by confidentiality provisions in the fund'slimited partnership agreement.[citation needed]
There are several strict guidelines regulating those that deal in venture capital. Namely, they are not allowed to advertise or solicit business in any form as per theU.S. Securities and Exchange Commissionguidelines.[126]
|
https://en.wikipedia.org/wiki/Venture_capital
|
Following is a partial list ofprofessional certificationsinfinancial services, with an overview of the educational and continuing requirements for each; seeProfessional certification § Accountancy, auditing and financeandCategory:Professional certification in financefor all articles.
As the field offinancehas increased in complexity in recent years, the number of available designations has grown, and, correspondingly, some will have more recognition than others.[1][2][3]
In the US, many state securities and insuranceregulatorsdo not allow financial professionals to use a designation — in particular a"senior" designation— unless it has been accredited by either theAmerican National Standards Instituteor theNational Commission for Certifying Agencies.[4]
The Certificate in Investment Performance Measurement (CIPM) is aprofessional accreditationin the field ofinvestment performanceanalysis. It includes investment performance measurement and attribution. It is offered by the CIPM Association, a body associated with theCFA Institute.
Certified International Investment Analyst (CIIA) is an internationally recognised advanced professional qualification for individuals working in the finance and investment industry.
The CIIA maintains standards both at the national and international levels: ACIIA tests candidates at the local level (at their home country), and, having cleared those country specific exams, at the common international level. The topics are largely similar to the CFA; see below.
The Chartered Alternative Investment Analyst (CAIA) designation is a financial certification for investment professionals conferred by the CAIA Association.
The curriculum is designed to provide finance professionals with a broad base of knowledge inalternative investments.
Candidates must complete two examinations in succession and pay an ongoing certification fee to retain rights to use the financial designation.
The Chartered Financial Analyst (CFA) is a post-graduate professional qualification offered internationally by the American-basedCFA Institute.
The program covers a considerably wide range of topics relating to advancedinvestment managementandsecurity analysis- thuseconomics,financial reportingand analysis,corporate finance,alternative investmentsandportfolio management- and provides a generalist knowledge of other areas of finance.
The program consists of three examinations in succession, each about four and a half hours long.
To attain the Charter, candidates require three years work experience; thereafter they must adhere to a code of ethics, and pay an ongoing certification fee to retain rights to use the designation.
For retail-focused professionals, CISI - the UK based Chartered Institute for Securities & Investment - offers the Chartered Wealth Manager Qualification (CWM),[5] comprising three sequential exams in financial markets, portfolio construction, and then applied wealth management. With further experience requirements met, the Chartered Wealth Manager title may be used in addition to other CISI membership designations.
TheCISI Diploma in Capital Markets, also offered by CISI, is a leading qualification[6]for practitioners working inwholesale securities markets. It comprises sequential modules in (i) financial securities, (ii) financial markets, and then (iii) a role specific selection from fixed income, derivatives, or fund management. The three exams typically take between 18 months and two years to complete. Candidates become full Members and may use the post-nominal "MCSI". It was previously known as the "SII Diploma".
A Development Finance Certified Professional (DFCP) is a specialist indevelopment financetheory and practice that has been professionally accredited by the Chartered Institute of Development Finance;[7]the professional association which engages with academic institutions,development finance institutions, and support agencies to support and maintain ethical conduct and professionalism in the development finance discipline globally. It is the highest professional qualification for development finance practitioners.
The Certified Financial Planner (CFP) designation is a certification mark forfinancial plannersconferred by theCFP Board of Standards.
To receive authorization to use the designation, the candidate must meet education, examination, experience and ethics requirements, and pay an ongoing certification fee.
It is offered in the United States, and in 25 other countries through affiliated organizations.
The Chartered Financial Consultant (ChFC) is the "advanced financial planning" designation awarded byThe American College of Financial Services. To secure the designation, applicants must have three years of full-time business experience within the preceding five years and must complete nine college-level courses; the award is also contingent on adherence to a set of ethical guidelines. The designation exempts one from sitting theSeries 65examination.
Chartered Financial Divorce Analyst(CFDA) refers to the Canadian designation for specialists facilitating objectivefinancial analysisfor families & individuals going throughdivorce,marital separationorlegal separationand life transitions.
The regulating body is the Academy of Financial Divorce Specialists (AFDS).[8]Members are required to have an existing financial designation and be in good standing to be eligible for the course. Once passing, members must maintain credentials and ongoing annual education, and application/work in field.[9]
The Chartered Financial Planner is a designation awarded by the UK basedChartered Insurance Institute. To attain "Chartered status" the candidate must sit 14 exams, and have five years relevant experience. Thereafter continued learning is required annually.
The Fellow Chartered Financial Practitioner (FChFP) designation[10]is afinancial planningdesignation issued by the Asia Pacific Financial Services Association (APFinSA).[11]Candidates must have 2 years of full-time experience, and then pass 6 exams.
The designation was developed by the National Association of Malaysian Life Insurance and Financial Advisors (NAMLIFA)[12]in 1996 and later on adopted by APFinSA (of which NAMLIFA is a member) in 2001 as the flagship designation for its 11 member associations.
Registered Financial Planner (RFP) refers to one of several separate designations infinancial planning; there is currently no connection between these.
The Certified Financial Technician (CFTe)[17]is a designation intechnical analysisoffered by the International Federation of Technical Analysts (IFTA).[18]It comprises two sequential examinations, usually completed over two years; to register candidates require abachelor's degreeand three years'relevant experience.
Once qualified, a CFTe may pursue the MFTA (Master of Financial Technical Analysis),[19]requiring submission of a research thesis.
Members are in 22 countries.
The Chartered Market Technician (CMT) is a designation intechnical analysisoffered by theCMT Association.
The program comprises three examination levels, certifying that the individual is competent in the use of technical analysis and knowledgeable regarding the underlying theory.
To earn the designation, candidates must hold a degree, and have three years relevant experience.
The STA Diploma in Technical Analysis is a designation inTechnical analysisoffered by the UK based Society of Technical Analysts[20](STA). It comprises two sequential examinations. The qualification is accredited by the International Federation of Technical Analysts and the Chartered Institute for Securities & Investment. Designants are also entitled to the above Certified Financial Technician (CFTe) designation and certain Chartered Wealth Management Qualification (CWM) exemptions.
CISI, in conjunction with ICAEW,[21] offers the two-tiered Certificate,[22] and then Diploma in Corporate Finance.[23] The qualification is "designed with a focus on the commercial, practical and technical skills" applicable in corporate finance.
With three years appropriate experience,[24]these lead to the ICAEW Corporate Finance (CF) designatory letters, and to full CISI Chartered Member status.
The Certified Corporate FP&A Professional, or "FPAC",[25] is a designation conferred by the Association for Financial Professionals (AFP), known for their CTP treasury qualification covered below. The FPAC syllabus is over two exams:
the first, a 3-hour paper, covers underlying knowledge of financial planning and analysis;
the second, a 4.5-hour paper, is a case-based test of applied analytics and business support.
Certificants have three years experience and hold a relevant degree or other qualification; AFP thereafter specifies continuing education requirements.
The International Certificate in Corporate Finance (ICCF)[26] is a professional designation for employees in corporate finance, covering financial analysis, valuation and decision making. The program comprises three 6-week online courses, three major case studies, and a 2-hour final exam.
The program is delivered by First Finance Institute[27]in partnership with the following fourbusiness schools:HEC Paris,Columbia,WhartonandIE Business School.
The Certificate in Quantitative Finance (CQF)[28]is an online part-timefinancial engineeringprogram;
it was founded byPaul Wilmottin 2003, and is conferred by the CQF Institute.[29]The CQF can be completed as a single six-month program or split into two three-month levels.
It is designed for in-depth training for individuals inderivatives, IT,quantitative trading, insurance,model validationor risk management.
The program's focus is on the practical implementation of techniques ("real-world quantitative finance"); it thus incorporates an element of questioning and analyzing models and methods;
it assumes some background in mathematics and programming.[30]See also underQuantitative analysis (finance) § Education,Financial engineering § Education, andFinancial modeling § Quantitative finance.
The Certified Risk Management Professional (RIMS-CRMP)[31]is anenterprise risk management(ERM) focused credential offered by RIMS, theRisk and Insurance Management Society.
Candidates sit a two-hourcompetency basedexam, and require aBachelor's degreemajoring in Risk Management together with a year's appropriate experience (or more with other qualifications);
certificants are then required to uphold a Code of Ethics and meetcontinuing educationrequirements in order to maintain the certification.
(A similarly named certificate is offered by IRMSA inSouth Africa.[32])
"Certified Risk Professional"[33]is a graduate-level qualification offered by theInstitute of Risk Management(IRM),[34]allowing for thepost nominaldesignation "CMIRM".
It is achieved by completing a certificate and then a diploma, together with three years' relevant experience.
TheInternational Certificate in Financial Services Risk Management,[35][36][37]comprises two modules, usually taken over 9 months;
with four further modules, over three years, the certificant articulates to the ERM focusedInternational Diploma in Risk Management,[38]thereby qualifying.
(Several UK universities haveMSc programmesaligned with these;[39]students may gain exemption from specified modules.)
At the certificate level IRM also offers theInternational Certificate in Enterprise Risk Management, as well as others.
The CERA credential
—Chartered Enterprise Risk Actuarythrough theInstitute and Faculty of Actuaries, andChartered Enterprise Risk Analystthrough theSociety of Actuaries—
provides risk professionals with "strong ERM knowledge that drives better business decisions applied in finance and insurance".[40]Under both, certificants have completed various of the underlyingactuarial qualifying exams, as well as further specified modules and training in risk management.
TheFinancial Risk Manager(FRM) is a professional certification inrisk managementoffered by theGlobal Association of Risk Professionals(GARP).[41]The coverage - focusing onmarket risk,credit riskandoperational risk, and including requisite quantitative andinvestment managementmaterial - is over two exams.
Certificants are in more than 190 countries and territories worldwide,[42]and have taken an average of two years to earn their Certification.[43]
TheProfessional Risk Managercertification (PRM), offered byPRMIA, emphasizes practice-related skills and knowledge required within the risk management profession, andfinancial risk managementmore particularly; its coverage, structure and recognition are similar to the FRM.[2][1]It additionally requires a commitment toprofessional ethics, and 20 annual hours of continuing education.
The Association of Corporate Treasurers offers training and various qualifications incash-andtreasury management.
The Diploma in Treasury Management (3 papers over 12–18 months) allows for Associate Membership, with post-nominal letters AMCT,
while the subsequent Advanced Diploma (of similar structure and duration, and requiring also a dissertation) grants full membership, MCT.
The FCT fellowship is conferred following several years of experience.
The Certified Treasury Professional (CTP) designation is a certification for treasurers, cash managers, treasury managers, and other treasury-related professionals administered by the Association for Financial Professionals (AFP). The CTP was formerly known as the Certified Cash Manager or CCM designation but was renamed due to treasury's increasing role in managing the entire balance sheet and implementing the strategic direction prescribed byChief Financial Officers. The CTP certification is held by over 20,000 finance professionals and, in the US, is considered[citation needed]the leading certification in thetreasury managementprofession.
|
https://en.wikipedia.org/wiki/Professional_certification_in_financial_services#Corporate_finance
|
This page is anindex of accounting topics.
Accounting ethics-Accounting information system-Accounting research-Activity-Based Costing-Assets
Balance sheet-Big Four auditors-Bond-Bookkeeping-Book value
Cash-basis accounting-Cash-basis versus accrual-basis accounting-Cash flow statement-Certified General Accountant-Certified Management Accountants-Certified Public Accountant-Chartered accountant-Chart of accounts-Common stock-Comprehensive income-Construction accounting-Convention of conservatism-Convention of disclosure-Cost accounting-Cost of capital-Cost of goods sold-Creative accounting-Credit-Credit note-Current asset-Current liability
Debit-Capital reserve-Debit note-Debt-Deficit (disambiguation)-Depreciation-Diluted earnings per share-Dividend-Double-entry bookkeeping system-Dual aspect
E-accounting-EBIT-EBITDA-Earnings per share-Engagement Letter-Entity concept-Environmental accounting-Expense-Equity-Equivalent Annual Cost
Financial Accounting Standards Board-Financial accountancy-Financial audit-Financial reports-Financial statements-Fixed assets-Fixed assets management-Forensic accounting-Fraud deterrence-Free cash flow-Fund accounting
Gain-General ledger-Generally Accepted Accounting Principles-Going concern-Goodwill-Governmental Accounting Standards Board
Historical cost-History of accounting
Income-Income statement-Institute of Chartered Accountants in England and Wales-Institute of Chartered Accountants of Scotland-Institute of Management Accountants-Intangible asset-Interest-Internal audit-International Accounting Standards Board-International Accounting Standards Committee-International Accounting Standards-International Federation of Accountants-International Financial Reporting Standards-Inventory-Investment-Invoices-Indian Accounting Standards
Job costing-Journal
Lean accounting-Ledger-Liability-Long-term asset-Long-term liabilities-Loss on sale of residential property
Maker-checker-Management accounting-Management Assertions-Mark-to-market accounting-Matching principle-Materiality-Money measurement concept-Mortgage loan
Negative assurance-Net income-Notes to the Financial Statements-net worth
OBERAC-One-for-one checking-Online Accounting-Operating expense-Ownership equity
Payroll-Petty cash-Philosophy of Accounting-Preferred stock-P/E ratio-Positive accounting-Positive assurance-PricewaterhouseCoopers-Profit and loss account-Pro-forma amount-Production accounting-Project accounting
Retained earnings-Revenue-Revenue recognition
Sales journal-Security-Social accounting-Spreadsheet-Statement of changes in equity-Statutory accounting principles-Stock option-Stock split-Stock-Shareholder-Shareholders' equity-South African Institute of Chartered Accountants-Sunk cost
Three lines of defence-Throughput accounting-Trade credit-Treasury stock-Trial balance
UK generally accepted accounting principles-Unified Ledger Accounting-U.S. Securities and Exchange Commission-US generally accepted accounting principles-Work sheet-Write off
|
https://en.wikipedia.org/wiki/List_of_accounting_topics
|
The followingoutlineis provided as an overview of and topical guide to finance:
Finance– addresses the ways in which individuals and organizations raise and allocate monetaryresourcesover time, taking into account therisksentailed in their projects.
The termfinancemay incorporate any of the following:
Financial institutions
|
https://en.wikipedia.org/wiki/List_of_finance_topics
|
Adaptive management, also known as adaptive resource management or adaptive environmental assessment and management, is a structured, iterative process of robust decision making in the face of uncertainty, with the aim of reducing uncertainty over time via system monitoring. In this way, decision making simultaneously meets one or more resource management objectives and, either passively or actively, accrues information needed to improve future management. Adaptive management is a tool which should be used not only to change a system, but also to learn about the system.[1] Because adaptive management is based on a learning process, it improves long-run management outcomes. The challenge in using the adaptive management approach lies in finding the correct balance between gaining knowledge to improve management in the future and achieving the best short-term outcome based on current knowledge.[2] This approach has more recently been employed in implementing international development programs.
There are a number of scientific and social processes which are vital components of adaptive management, including:
The achievement of these objectives requires an open management process which seeks to include past, present and futurestakeholders. Adaptive management needs to at least maintainpolitical openness, but usually aims to create it. Adaptive management must therefore be ascientificand social process. It must focus on the development of newinstitutionsand institutional strategies in balance withscientific hypothesisand experimental frameworks (resilience.org).
Adaptive management can proceed as either passive or active adaptive management, depending on how learning takes place. Passive adaptive management values learning only insofar as it improves decision outcomes (i.e. passively), as measured by the specified utility function. In contrast, active adaptive management explicitly incorporates learning as part of the objective function, and hence, decisions which improve learning are valued over those which do not.[1][3]In both cases, as new knowledge is gained, the models are updated and optimal management strategies are derived accordingly. Thus, while learning occurs in both cases, it is treated differently. Often, deriving actively adaptive policies is technically very difficult, which prevents it being more commonly applied.[4]
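The distinction can be illustrated with a toy decision problem; the action names, payoffs, and learning bonus below are invented for the example, not drawn from the adaptive management literature. A passive policy picks the action with the best expected outcome under current beliefs, while an active policy adds an explicit value for the information an action is expected to generate.

```python
# Toy illustration of passive vs. active adaptive management (hypothetical numbers).
belief_model_A = 0.5  # current probability that candidate model A describes the system

# Expected immediate payoff of each management action under each candidate model.
payoff = {
    "harvest_as_usual":     {"A": 1.0, "B": 1.0},  # uninformative: same under both models
    "experimental_harvest": {"A": 1.2, "B": 0.4},  # outcome reveals which model holds
}
# Expected reduction in model uncertainty from taking each action (0 = none, 1 = resolves it).
expected_learning = {"harvest_as_usual": 0.0, "experimental_harvest": 1.0}

def expected_payoff(action, p):
    return p * payoff[action]["A"] + (1 - p) * payoff[action]["B"]

def passive_choice(p):
    # Passive: optimise the expected outcome alone; any learning is incidental.
    return max(payoff, key=lambda a: expected_payoff(a, p))

def active_choice(p, learning_weight=0.3):
    # Active: learning enters the objective explicitly as a weighted bonus.
    return max(payoff, key=lambda a: expected_payoff(a, p)
                                      + learning_weight * expected_learning[a])

print(passive_choice(belief_model_A))  # harvest_as_usual     (1.0 vs 0.8 expected)
print(active_choice(belief_model_A))   # experimental_harvest (0.8 + 0.3 vs 1.0)
```

In either case, once the outcome of the chosen action is observed, the beliefs over the candidate models would be updated and the next decision made against the revised beliefs, which is the iterative cycle described above.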
Key features of both passive and active adaptive management are:
However, a number of process failures related to information feedback can prevent effective adaptive management decision making:[5]
The use of adaptive management techniques can be traced back to peoples from ancient civilisations. For example, theYappeople of Micronesia have been using adaptive management techniques to sustain highpopulation densitiesin the face of resource scarcity for thousands of years (Falanruw 1984). In using these techniques, the Yap people have altered their environment creating, for example, coastalmangrovedepressions andseagrass meadowsto support fishing and termite resistant wood (Stankey and Shinder 1997).
The origin of the adaptive management concept can be traced back to ideas of scientific management pioneered by Frederick Taylor in the early 1900s (Haber 1964). The term "adaptive management" itself evolved in natural resource management workshops, in which decision makers, managers and scientists focused on building simulation models to uncover key assumptions and uncertainties (Bormann et al. 1999).
Two ecologists at The University of British Columbia, C.S. Holling[1] and C.J. Walters,[3] further developed the adaptive management approach as they distinguished between passive and active adaptive management practice. Kai Lee, a notable Princeton physicist, expanded upon the approach in the late 1970s and early 1980s while pursuing post-doctoral studies at UC Berkeley. The approach was further developed at the International Institute for Applied Systems Analysis (IIASA) in Vienna, Austria, while C.S. Holling was director of the institute. In 1992, Hilbourne described three learning models for federal land managers around which adaptive management approaches could be developed: reactive, passive and active.
Adaptive management has probably been most frequently applied in Yap, Australia and North America, initially in fishery management, but it received broader application in the 1990s and 2000s. One of the most successful applications of adaptive management has been in the area of waterfowl harvest management in North America, most notably for the mallard.[6]
Adaptive management in a conservation project and program context can trace its roots back to at least the early 1990s, with the establishment of the Biodiversity Support Program (BSP)[7]in 1989. BSP was aUSAID-funded consortium of WWF[8]The Nature Conservancy (TNC),[9]and World Resources Institute (WRI).[10]Its Analysis and Adaptive Management Program sought to understand the conditions under which certain conservation strategies were most effective and to identify lessons learned across conservation projects. When BSP ended in 2001, TNC and Foundations of Success[11](FOS, a non-profit which grew out of BSP) continued to actively work in promoting adaptive management for conservation projects and programs. The approaches used included Conservation by Design[12](TNC) and Measures of Success[13](FOS).
In 2004, the Conservation Measures Partnership (CMP)[14]– which includes several former BSP members – developed a common set of standards and guidelines[15]for applying adaptive management to conservation projects and programs.
Applying adaptive management in aconservationorecosystem managementproject involves the integration of project/program design, management, and monitoring to systematically test assumptions in order to adapt and learn. The three components of adaptive management in environmental practice are:
Open Standards for the Practice of Conservation[18] lays out five main steps to an adaptive management project cycle. The Open Standards represent a compilation and adaptation of best practices and guidelines across several fields and across several organizations within the conservation community. Since the release of the initial Open Standards (updated in 2007 and 2013), thousands of project teams from conservation organizations (e.g., TNC, Rare, and WWF), local conservation groups, and donors alike have begun applying these Open Standards to their work. In addition, several CMP members have developed training materials and courses to help apply the Standards.
Some recent write-ups of adaptive management in conservation include wildlife protection (SWAP, 2008), forests ecosystem protection (CMER, 2010), coastal protection and restoration (LACPR, 2009), natural resource management (water, land and soil), species conservation especially, fish conservation fromoverfishing(FOS, 2007) andclimate change(DFG, 2010). In addition, some other examples follow:
The concept of adaptive management is not restricted to natural resources orecosystem management, as similar concepts have been applied tointernational developmentprogramming.[20][21]This has often been a recognition to the "wicked" nature of many development challenges and the limits of traditional planning processes.[22][23][24]One of the principal changes facing international development organizations is the need to be more flexible, adaptable and focused on learning.[25]This is reflected in international development approaches such as Doing Development Differently, Politically Informed Programming and Problem Driven Iterative Adaptation.[26][27][28]
One recent example of the use of adaptive management by international development donors is the planned Global Learning for Adaptive Management (GLAM) programme to support adaptive management inDepartment for International DevelopmentandUSAID. The program is establishing a centre for learning about adaptive management to support the utilization and accessibility of adaptive management.[29][30]In addition, donors have been focused on amending their own programmatic guidance to reflect the importance of learning within programs: for instance, USAID's recent focus in their ADS guidance on the importance of collaborating, learning and adapting.[31][32]This is also reflected in Department for International Development's Smart Rules that provide the operating framework for their programs including the use of evidence to inform their decisions.[33]There are a variety of tools used to operationalize adaptive management in programs, such aslearning agendasanddecision cycles.[34]
Collaborating, learning and adapting (CLA) is a concept related to the operationalizing of adaptive management in international development that describes a specific way of designing, implementing, adapting and evaluating programs.[35]: 85[36]: 46CLA involves three concepts: collaborating, learning, and adapting.
CLA integrates three closely connected concepts within the organizational theory literature: namely collaborating, learning and adapting. There is evidence of the benefits of collaborating internally within an organization and externally with organizations.[38]Much of the production and transmission of knowledge—bothexplicit knowledgeandtacit knowledge—occurs through collaboration.[39]There is evidence for the importance of collaboration among individuals and groups for innovation, knowledge production, and diffusion—for example, the benefits of staff interacting with one another and transmitting knowledge.[40][41][42]The importance of collaboration is closely linked to the ability of organizations to collectively learn from each other, a concept noted in the literature onlearning organizations.[43][44][45]
CLA, an adaptive management practice, is being employed by implementing partners[46][47]that receive funding from thefederal government of the United States,[48][49][50]but it is primarily a framework for internal change efforts that aim at incorporating collaboration, learning, and adaptation within theUnited States Agency for International Development(USAID) including its missions located around the world.[51]CLA has been linked to a part of USAID's commitment to becoming a learning organization.[52]CLA represents an approach to combine strategic collaboration, continuous learning, and adaptive management.[53]A part of integrating the CLA approach is providing tools and resources, such as the Learning Lab, to staff and partner organizations.[54]The CLA approach is detailed for USAID staff in the recently revised program policy guidance.[31]
Adaptive management as a systematic process for improving environmental management policies and practices is the traditional application; however, the adaptive management framework can also be applied to other sectors seekingsustainabilitysolutions, such as business and community development. Adaptive management as a strategy emphasizes the need to change with the environment and to learn from doing. Adaptive management applied to ecosystems makes obvious sense when considering ever-changing environmental conditions. The flexibility and constant learning of an adaptive management approach is also a logical fit for organizations seeking sustainability methodologies.
Businesses pursuing sustainability strategies would employ an adaptive management framework to ensure that the organization is prepared for the unexpected and geared for change. By applying an adaptive management approach, the business begins to function as an integrated system, adjusting to and learning from a multi-faceted network of influences that are not just environmental but also economic and social (Dunphy, Griffiths, & Benn, 2007). The goal of any sustainable organization guided by adaptive management principles must be to engage in active learning to direct change towards sustainability (Verine, 2008). This "learning to manage by managing to learn" (Bormann BT, 1993) will be at the core of a sustainable business strategy.
Sustainable community development requires recognition of the relationship between environment, economics and social instruments within the community. An adaptive management approach to creating sustainable community policy and practice also emphasizes the connection and confluence of those elements. Looking into the cultural mechanisms which contribute to a community value system often highlights the parallel to adaptive management practices, "with [an] emphasis on feedback learning, and its treatment of uncertainty and unpredictability" (Berkes, Colding, & Folke, 2000). Often this is the result of indigenous knowledge and historical decisions of societies deeply rooted in ecological practices (Berkes, Colding, & Folke, 2000). By applying an adaptive management approach to community development the resulting systems can develop built in sustainable practice as explained by the Environmental Advisory Council (2002), "active adaptive management views policy as a set of experiments designed to reveal processes that build or sustain resilience. It requires, and facilitates, a social context with flexible and open institutions and multi-level governance systems that allow for learning and increase adaptive capacity without foreclosing future development options" (p. 1121). A practical example of adaptive management as a tool for sustainability was the application of a modified variation of adaptive management using artvoice,photovoice, andagent-based modelingin a participatory social framework of action. This application was used in field research on tribal lands to first identify the environmental issue and impact of illegal trash dumping and then to discover a solution through iterative agent-based modeling usingNetLogoon a theoretical "regional cooperative clean-energy economy". Thiscooperativeeconomy incorporated a mixed application of: traditional trash recycling and a waste-to-fuels process of carbon recycling of non-recyclable trash intoethanol fuel. This industrial waste-to-fuels application was inspired by pioneering work of the Canadian-based company,Enerkem. See Bruss, 2012 - PhD dissertation: Human Environment Interactions and Collaborative Adaptive Capacity Building in a Resilience Framework, GDPE Colorado State University.
In an ever-changing world, adaptive management appeals to many practices seeking sustainable solutions by offering a framework for decision making that proposes to support a sustainable future which, "conserves and nurtures the diversity—of species, of human opportunity, of learning institutions and of economic options"(The Environmental Advisory Council, 2002, p. 1121).
It is difficult to test the effectiveness of adaptive management in comparison to other management approaches. One challenge is that once a system is managed using one approach it is difficult to determine how another approach would have performed in exactly the same situation.[55]One study tested the effectiveness of formal passive adaptive management in comparison to human intuition by having natural resource management students make decisions about how to harvest a hypothetical fish population in an online computer game. The students on average performed poorly in comparison to the computer programs implementing passive adaptive management.[55][56]
Collaborative adaptive management is often celebrated as an effective way to deal with natural resource management under high levels of conflict, uncertainty and complexity.[57]The effectiveness of these efforts can be constrained by both social and technical barriers. As the case of theGlen Canyon DamAdaptive Management Program in the US illustrates, effective collaborative adaptive management efforts require clear and measurable goals and objectives, incentives and tools to foster collaboration, long-term commitment to monitoring and adaptation, and straightforward joint fact-finding protocols.[58]In Colorado, USA, a ten-year, ranch-scale (2590 ha) experiment began in 2012 at theAgricultural Research Service(ARS) Central Plains Experimental Range to evaluate the effectiveness and process of collaborative adaptive management[57]onrangelands. The Collaborative Adaptive Rangeland Management or “CARM” project monitors outcomes from yearling steer grazing management on ten 130 ha pastures, conducted by a group of conservationists, ranchers, public employees, and researchers. This team compares ecological monitoring data tracking profitability and conservation outcomes with outcomes from a “traditional” management treatment: a second set of ten pastures managed without adaptive decision making but with the same stocking rate. Early evaluations of the project by social scientists offer insights for more effective adaptive management.[59]First, trust is primary and essential to learning in adaptive management, not a side benefit. Second, practitioners cannot assume that extensive monitoring data or large-scale efforts will automatically facilitate successful collaborative adaptive management. Active, long-term efforts to build trust among scientists and stakeholders are also important. Finally, explicit efforts to understand, share and respect multiple types of manager knowledge, including place-based ecological knowledge practiced by local managers, are necessary to manage adaptively for multiple conservation and livelihood goals on rangelands.[59]Practitioners can expect adaptive management to be a complex, non-linear process shaped by social, political and ecological processes, as well as by data collection and interpretation.
Information and guidance on the entire adaptive management process is available from CMP members' websites and other online sources:
|
https://en.wikipedia.org/wiki/Adaptive_management
|
Adecisional balance sheetordecision balance sheetis atabularmethod for representing the pros and cons of different choices and for helping someone decide what to do in a certain circumstance. It is often used in working withambivalencein people who are engaged in behaviours that are harmful to their health (for example, problematicsubstance useorexcessive eating),[1]as part of psychological approaches such as those based on thetranstheoretical modelof change,[2]and in certain circumstances inmotivational interviewing.[3]
The decisional balance sheet records the advantages and disadvantages of different options. It can be used both for individual and organisational decisions. The balance sheet recognises that both gains and losses can be consequences of a single decision. It might, for example, be introduced in a session with someone who is experiencing problems with their alcohol consumption with a question such as: "Could you tell me what you get out of your drinking and what you perhaps find less good about it?" Therapists are generally advised to use this sort of phrasing rather than a blunter injunction to think about the negative aspects of problematic behaviour, as the latter could increasepsychological resistance.[4]
An early use of a decisional balance sheet was byBenjamin Franklin. In a 1772 letter toJoseph Priestley, Franklin described his own use of the method,[5]which is now often called theBen Franklin method.[6]It involves making a list of pros and cons, estimating the importance of each one, eliminating items from the pros and cons lists of roughly equal importance (or groups of items that can cancel each other out) until one column (pro or con) is dominant. Experts ondecision support systemsfor practical reasoning have warned that the Ben Franklin method is only appropriate for very informal decision making: "A weakness in applying this rough-and-ready approach is a poverty of imagination and lack of background knowledge required to generate a full enough range and detail of competing considerations."[7]Social psychologistTimothy D. Wilsonhas warned that the Ben Franklin method can be used in ways that fool people into falsely believingrationalisationsthat do not accurately reflect their truemotivationsor predict their future behaviour.[8]
In papers from 1959 onwards,Irving Janisand Leon Mann coined the phrasedecisional balance sheetand used the concept as a way of looking atdecision-making.[9]James O. Prochaskaand colleagues then incorporated Janis and Mann's concept into thetranstheoretical modelof change,[10]anintegrative theory of therapythat is widely used for facilitatingbehaviour change.[2]Research studies on the transtheoretical model suggest that, in general, for people to succeed at behaviour change, the pros of change should outweigh the cons before they move from the contemplation stage to the action stage of change.[11]Thus, the balance sheet is both an informal measure of readiness for change and an aid for decision-making.[12]
One research paper reported that combining the decisional balance sheet technique with theimplementation intentionstechnique was "more effective in increasing exercise behaviour than acontrolor either strategy alone."[13]Another research paper said that a decisional balance intervention may strengthen a person's commitment to change when that person has already made a commitment to change, but could decrease commitment to change if that person is ambivalent; the authors suggested thatevocation of change talk(a technique from motivational interviewing) is more appropriate than a decisional balance sheet when a clinician intends to help ambivalent clients resolve their ambivalence in the direction of change.[14]William R. MillerandStephen Rollnick's textbook on motivational interviewing discusses decisional balance in a chapter titled "Counseling with Neutrality", and describes "decisional balance as a way of proceeding when you wish to counsel with neutrality rather than move toward a particular change goal".[15]
There are several variations of the decisional balance sheet.[16]In Janis and Mann's original description there are eight or more cells depending on how many choices there are.[17]For each new choice there are pairs of cells (one for advantages, one for disadvantages) for these four different aspects:[18]
John C. Norcrossis among the psychologists who have simplified the balance sheet to four cells: the pros and cons of changing, for self and for others.[19]Similarly, a number of psychologists have simplified the balance sheet to a four-cell format consisting of the pros and cons of the current behaviour and of a changed behaviour.[20]Some authors separate out short- and long-term benefits and risks of a behaviour.[21]The example below allows for three options: carrying on as before, reducing a harmful behaviour to a level where it might be less harmful, or stopping it altogether; it therefore has six cells consisting of a pro and con pair for each of the three options.
Any evaluation is subject to change and often the cells are inter-connected. For example, looking at the table above, if something were to happen in the individual's marital life (an argument or the partner leaves or becomes pregnant or has an accident), the event can either increase or decrease how much weight the person gives to the elements in the balance sheet that refer to the relationship.
Another refinement of the balance sheet is to use a scoring system to give numerical weights to different elements of the balance sheet; in such cases, the balance sheet becomes what is often called adecision matrix.[22]
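As an illustration of such a scoring system, a weighted balance sheet of this kind can be totalled mechanically. The sketch below is only an example of the arithmetic involved: the options, criteria, scores and weights are all invented for the illustration and carry no clinical meaning.

```python
# Illustrative sketch of a weighted decisional balance sheet (decision matrix).
# Options, criteria, scores and weights here are hypothetical examples.

options = ["carry on as before", "cut down", "stop altogether"]

# Each option maps criteria to a score from -10 (strong con) to +10 (strong pro).
scores = {
    "carry on as before": {"health": -6, "enjoyment": +7, "relationship": -5},
    "cut down":           {"health": +3, "enjoyment": +4, "relationship": +2},
    "stop altogether":    {"health": +8, "enjoyment": -3, "relationship": +6},
}

# Weights express how much each criterion matters to the decision maker.
weights = {"health": 3, "enjoyment": 1, "relationship": 2}

def weighted_total(option):
    """Sum of score x weight across all criteria for one option."""
    return sum(scores[option][c] * weights[c] for c in weights)

for option in options:
    print(f"{option}: {weighted_total(option)}")

print("Highest weighted total:", max(options, key=weighted_total))
```

As with any decision matrix, the output is only as meaningful as the scores and weights supplied, which remain subjective judgements.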
Similarly, Fabio Losa andValerie Beltoncombineddrama theoryandmultiple-criteria decision analysis, two decision-making techniques from the field ofoperations research, and applied them to an example of interpersonal conflict over substance abuse, which they described as follows:
A couple, Jo and Chris, have lived together for a number of years. However, Chris cannot stand any longer that Jo is always drunk and threatens to leave. Thescene settingestablishes the initial frame, the situation seen by a particular actor (Chris) at a specific point. The actors are Jo and Chris and each has a single yes/no policy option—for Chris this is to stay or leave and for Jo it is to stop drinking or not. These options define four possible scenarios or futures...[23]
Dialectical behavior therapyincludes a form of decisional balance sheet called apros and cons grid.[24]
Kickstarterco-founderYancey Stricklercreated a four-cell matrix similar in appearance to a decisional balance sheet that he compared to abento box, with cells for self and others, present and future.[25]
Psychology professorFinn Tschudi's ABC model ofpsychotherapyuses a structure similar to a decisional balance sheet: A is a row that defines the problem; B is a row that listsschemas(tacit assumptions) about the advantages and disadvantages of resolving the problem; and C is a row that lists schemas about the advantages and disadvantages of maintaining the problem.[26]Tschudi was partly inspired byHarold Greenwald's bookDecision Therapy,[27]which posited that much of psychotherapy involves helping people make decisions.[28]In the ABC model, people are said to be blocked or stuck in resolving a problem when their C schemas define strong advantages to maintaining the problem and/or strong disadvantages to resolving the problem, and often their C schemas are at a low level of awareness.[29]In such cases, resolving the problem usually requires raising awareness and restructuring the C schemas, although several other general strategies for resolving the problem are available as alternatives or adjuncts.[30]
In an approach to psychotherapy calledcoherence therapy, A is called thesymptom, B is called theanti-symptom positionand C is called thepro-symptom position,[31]although coherence therapy also differentiates between "functional" symptoms that are directly caused by C and "functionless" symptoms that are not directly caused by C.[32]In terms ofbehaviour modification, the problematic half of A describes one or more costlyoperants, and C describes thereinforcementthat the operant provides.[33]
The following table summarizes the structure of the ABC model.[26]
In an approach to psychotherapy called focusedacceptance and commitment therapy(FACT) the four square tool is a tabular method similar in appearance to a decisional balance sheet.[34]The four square tool shows four sets of behaviors: positive behaviors (called "workable" behaviors) and negative behaviors (called "unworkable" behaviors) that a person does publicly and privately. In the four square tool, the advantages and disadvantages of the behaviors are implied, rather than listed in separate cells as in a decisional balance sheet. The following table is a blank four square tool.[34]
|
https://en.wikipedia.org/wiki/Decisional_balance_sheet
|
Feedbackoccurs when outputs of a system are routed back as inputs as part of achainofcause and effectthat forms a circuit or loop.[1]The system can then be said tofeed backinto itself. The notion of cause-and-effect has to be handled carefully when applied to feedback systems:
Simple causal reasoning about a feedback system is difficult because the first system influences the second and second system influences the first, leading to a circular argument. This makes reasoning based upon cause and effect tricky, and it is necessary to analyze the system as a whole. As provided by Webster, feedback in business is the transmission of evaluative or corrective information about an action, event, or process to the original or controlling source.[2]
Self-regulating mechanisms have existed since antiquity, and the idea of feedback started to entereconomic theoryin Britain by the 18th century, but it was not at that time recognized as a universal abstraction and so did not have a name.[4]
The first ever known artificial feedback device was afloat valve, for maintaining water at a constant level, invented in 270 BC inAlexandria,Egypt.[5]This device illustrated the principle of feedback: a low water level opens the valve, the rising water then provides feedback into the system, closing the valve when the required level is reached. This then reoccurs in a circular fashion as the water level fluctuates.[5]
Centrifugal governorswere used to regulate the distance and pressure betweenmillstonesinwindmillssince the 17th century. In 1788,James Wattdesigned his first centrifugal governor following a suggestion from his business partnerMatthew Boulton, for use in thesteam enginesof their production. Early steam engines employed a purelyreciprocating motion, and were used for pumping water – an application that could tolerate variations in the working speed, but the use of steam engines for other applications called for more precise control of the speed.
In1868,James Clerk Maxwellwrote a famous paper, "On governors", that is widely considered a classic in feedback control theory.[6]This was a landmark paper oncontrol theoryand the mathematics of feedback.
The verb phraseto feed back, in the sense of returning to an earlier position in a mechanical process, was in use in the US by the 1860s,[7][8]and in 1909, Nobel laureateKarl Ferdinand Braunused the term "feed-back" as a noun to refer to (undesired)couplingbetween components of anelectronic circuit.[9]
By the end of 1912, researchers using early electronic amplifiers (audions) had discovered that deliberately coupling part of the output signal back to the input circuit would boost the amplification (throughregeneration), but would also cause the audion to howl or sing.[10]This action of feeding back of the signal from output to input gave rise to the use of the term "feedback" as a distinct word by 1920.[10]
The development ofcyberneticsfrom the 1940s onwards was centred around the study of circular causal feedback mechanisms.
Over the years there has been some dispute as to the best definition of feedback. According to cyberneticianAshby(1956), mathematicians and theorists interested in the principles of feedback mechanisms prefer the definition of "circularity of action", which keeps the theory simple and consistent. For those with more practical aims, feedback should be a deliberate effect via some more tangible connection.
[Practical experimenters] object to the mathematician's definition, pointing out that this would force them to say that feedback was present in the ordinary pendulum ... between its position and its momentum—a "feedback" that, from the practical point of view, is somewhat mystical. To this the mathematician retorts that if feedback is to be considered present only when there is an actual wire or nerve to represent it, then the theory becomes chaotic and riddled with irrelevancies.[11]: 54
Focusing on uses in management theory, Ramaprasad (1983) defines feedback generally as "...information about the gap between the actual level and the reference level of a system parameter" that is used to "alter the gap in some way". He emphasizes that the information by itself is not feedback unless translated into action.[12]
Positive feedback: If the signal feedback from output is in phase with the input signal, the feedback is called positive feedback.
Negative feedback: If the signal feedback is out of phase by 180° with respect to the input signal, the feedback is called negative feedback.
As an example of negative feedback, the diagram might represent acruise controlsystem in a car that matches a target speed such as the speed limit. The controlled system is the car; its input includes the combined torque from the engine and from the changing slope of the road (the disturbance). The car's speed (status) is measured by aspeedometer. The error signal is the difference between the speed as measured by the speedometer and the target speed (set point). The controller interprets the speed to adjust the accelerator, commanding the fuel flow to the engine (the effector). The resulting change in engine torque, the feedback, combines with the torque exerted by the change of road grade to reduce the error in speed, minimising the effect of the changing slope.
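As a rough illustration of this loop (not a model of any real cruise controller), the sketch below simulates a purely proportional negative-feedback controller; the gain, drag coefficient and hill disturbance are arbitrary assumed values.

```python
# Minimal simulation of the negative-feedback cruise control described above.
# All constants (gain, drag, disturbance) are arbitrary illustrative values.

set_point = 100.0          # target speed (km/h)
speed = 80.0               # current speed (status)
gain = 0.5                 # proportional controller gain
dt = 0.1                   # time step (s)

def hill_disturbance(t):
    """Disturbance: extra deceleration from a hill that starts at t = 5 s."""
    return -2.0 if t > 5.0 else 0.0

t = 0.0
for _ in range(200):
    error = set_point - speed              # error signal from the comparator
    throttle = gain * error                # controller output (effector command)
    accel = throttle + hill_disturbance(t) - 0.1 * speed  # plant: drive, hill, drag
    speed += accel * dt                    # the new speed is fed back and re-measured
    t += dt

# A proportional-only controller settles with a steady-state error below the
# set point when a constant disturbance (the hill) is present.
print(f"speed after {t:.0f} s: {speed:.1f} km/h (target {set_point})")
```

Note that the proportional-only loop leaves a steady-state offset under a constant disturbance; adding an integral term, as in the PID controller discussed later in this article, removes that offset.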
The terms "positive" and "negative" were first applied to feedback prior to WWII. The idea of positive feedback already existed in the 1920s when theregenerative circuitwas made.[13]Friis and Jensen (1924) described this circuit in a set of electronic amplifiers as a case wherethe "feed-back" action is positivein contrast to negative feed-back action, which they mentioned only in passing.[14]Harold Stephen Black's classic 1934 paper first details the use of negative feedback in electronic amplifiers. According to Black:
Positive feed-back increases the gain of the amplifier, negative feed-back reduces it.[15]
According to Mindell (2002) confusion in the terms arose shortly after this:
...Friis and Jensen had made the same distinction Black used between "positive feed-back" and "negative feed-back", based not on the sign of the feedback itself but rather on its effect on the amplifier's gain. In contrast, Nyquist and Bode, when they built on Black's work, referred to negative feedback as that with the sign reversed. Black had trouble convincing others of the utility of his invention in part because confusion existed over basic matters of definition.[13]: 121
Even before these terms were being used,James Clerk Maxwellhad described their concept through several kinds of "component motions" associated with thecentrifugal governorsused in steam engines. He distinguished those that lead to a continuedincreasein a disturbance or the amplitude of a wave or oscillation, from those that lead to adecreaseof the same quality.[16]
The terms positive and negative feedback are defined in different ways within different disciplines.
The two definitions may be confusing, like when an incentive (reward) is used to boost poor performance (narrow a gap). Referring to definition 1, some authors use alternative terms, replacingpositiveandnegativewithself-reinforcingandself-correcting,[18]reinforcingandbalancing,[19]discrepancy-enhancinganddiscrepancy-reducing[20]orregenerativeanddegenerative[21]respectively. And for definition 2, some authors promote describing the action or effect aspositiveandnegativereinforcementorpunishmentrather than feedback.[12][22]Yet even within a single discipline an example of feedback can be called either positive or negative, depending on how values are measured or referenced.[23]
This confusion may arise because feedback can be used to provideinformationormotivate, and often has both aqualitativeand aquantitativecomponent. As Connellan and Zemke (1993) put it:
Quantitativefeedback tells us how much and how many.Qualitativefeedback tells us how good, bad or indifferent.[24]: 102
While simple systems can sometimes be described as one or the other type, many systems with feedback loops cannot be shoehorned into either type, and this is especially true when multiple loops are present.
When there are only two parts joined so that each affects the other, the properties of the feedback give important and useful information about the properties of the whole. But when the parts rise to even as few as four, if every one affects the other three, then twenty circuits can be traced through them; and knowing the properties of all the twenty circuits does not give complete information about the system.[11]: 54
In general, feedback systems can have many signals fed back, and the feedback loop frequently contains mixtures of positive and negative feedback, where positive and negative feedback can dominate at different frequencies or at different points in the state space of a system.
The term bipolar feedback has been coined to refer to biological systems where positive and negative feedback systems can interact, the output of one affecting the input of another, and vice versa.[25]
Some systems with feedback can have very complex behaviors such aschaotic behaviorsin non-linear systems, while others have much more predictable behaviors, such as those that are used to make and design digital systems.
Feedback is used extensively in digital systems. For example, binary counters and similar devices employ feedback where the current state and inputs are used to calculate a new state which is then fed back and clocked back into the device to update it.
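As an illustration of this state-feedback structure, the sketch below models a hypothetical 4-bit counter: combinational logic computes the next state from the current state and an enable input, and that value is "clocked" back into the register. The width and enable signal are assumptions of the example.

```python
# Sketch of the state feedback inside a clocked binary counter: on each clock
# tick the current state and the input are combined to form the next state,
# which is fed back into the register.

WIDTH = 4  # a hypothetical 4-bit counter

def next_state(state, enable):
    """Combinational logic: new state from current state and input."""
    return (state + 1) % (2 ** WIDTH) if enable else state

state = 0  # register contents
for tick in range(6):
    state = next_state(state, enable=True)   # value clocked back into the register
    print(f"tick {tick}: {state:0{WIDTH}b}")
```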
By using feedback properties, the behavior of a system can be altered to meet the needs of an application; systems can be made stable, responsive or held constant. It is shown that dynamical systems with a feedback experience an adaptation to theedge of chaos.[26]
Physical systems present feedback through the mutual interactions of their parts. Feedback is also relevant for the regulation of experimental conditions, noise reduction, and signal control.[27]The thermodynamics of feedback-controlled systems has intrigued physicists sinceMaxwell's demon, with recent advances on the consequences for entropy reduction and performance increase.[28][29]
Inbiologicalsystems such asorganisms,ecosystems, or thebiosphere, most parameters must stay under control within a narrow range around a certain optimal level under certain environmental conditions. Deviation from the optimal value of the controlled parameter can result from changes in the internal and external environments. A change in some environmental conditions may also require that range to shift for the system to keep functioning. The value of the parameter to maintain is recorded by a reception system and conveyed to a regulation module via an information channel. An example of this isinsulin oscillations.
Biological systems contain many types of regulatory circuits, both positive and negative. As in other contexts,positiveandnegativedo not imply that the feedback causesgoodorbadeffects. A negative feedback loop is one that tends to slow down a process, whereas the positive feedback loop tends to accelerate it. Themirror neuronsare part of a social feedback system, when an observed action is "mirrored" by the brain—like a self-performed action.
Normal tissue integrity is preserved by feedback interactions between diverse cell types mediated by adhesion molecules and secreted molecules that act as mediators; failure of key feedback mechanisms in cancer disrupts tissue function.[30]In an injured or infected tissue, inflammatory mediators elicit feedback responses in cells, which alter gene expression, and change the groups of molecules expressed and secreted, including molecules that induce diverse cells to cooperate and restore tissue structure and function. This type of feedback is important because it enables coordination of immune responses and recovery from infections and injuries. During cancer, key elements of this feedback fail. This disrupts tissue function and immunity.[31][32]
Mechanisms of feedback were first elucidated in bacteria, where a nutrient elicits changes in some of their metabolic functions.[33]Feedback is also central to the operations ofgenesandgene regulatory networks.Repressor(seeLac repressor) andactivatorproteinsare used to create geneticoperons, which were identified byFrançois JacobandJacques Monodin 1961 asfeedback loops.[34]These feedback loops may be positive (as in the case of the coupling between a sugar molecule and the proteins that import sugar into a bacterial cell), or negative (as is often the case inmetabolicconsumption).
On a larger scale, feedback can have a stabilizing effect on animal populations even when profoundly affected by external changes, although time lags in feedback response can give rise topredator-prey cycles.[35]
Inzymology, feedback serves as regulation of activity of an enzyme by its direct product(s) or downstream metabolite(s) in the metabolic pathway (seeAllosteric regulation).
Thehypothalamic–pituitary–adrenal axisis largely controlled by positive and negative feedback, much of which is still unknown.
Inpsychology, the body receives a stimulus from the environment or internally that causes the release ofhormones. Release of hormones then may cause more of those hormones to be released, causing a positive feedback loop. This cycle is also found in certain behaviour. For example, "shame loops" occur in people who blush easily. When they realize that they are blushing, they become even more embarrassed, which leads to further blushing, and so on.[36]
The climate system is characterized by strong positive and negative feedback loops between processes that affect the state of the atmosphere, ocean, and land. A simple example is theice–albedo positive feedbackloop whereby melting snow exposes more dark ground (of loweralbedo), which in turn absorbs heat and causes more snow to melt.
Feedback is extensively used in control theory, using a variety of methods includingstate space (controls),full state feedback, and so forth. In the context of control theory, "feedback" is traditionally assumed to specify "negative feedback".[39]
The most common general-purposecontrollerusing a control-loop feedback mechanism is aproportional-integral-derivative(PID) controller. Heuristically, the terms of a PID controller can be interpreted as corresponding to time: the proportional term depends on thepresenterror, the integral term on the accumulation ofpasterrors, and the derivative term is a prediction offutureerror, based on current rate of change.[40]
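A minimal sketch of this three-term control law is shown below; the gains, time step and toy first-order plant are illustrative assumptions chosen for demonstration, not tuned values for any particular system.

```python
# Hedged sketch of a textbook PID control law; gains and the toy plant are
# illustrative assumptions, not a reference implementation.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0        # accumulation of past errors
        self.prev_error = 0.0      # used to estimate the rate of change

    def update(self, set_point, measurement):
        error = set_point - measurement                    # present error
        self.integral += error * self.dt                   # past errors
        derivative = (error - self.prev_error) / self.dt   # predicted trend
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Example: drive a simple first-order process toward a set point of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.05)
value = 0.0
for _ in range(200):
    control = pid.update(set_point=1.0, measurement=value)
    value += (control - value) * 0.05     # toy plant: output lags the control signal
print(f"final value: {value:.3f}")        # approaches the set point of 1.0
```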
For feedback in the educational context, seecorrective feedback.
In ancient times, thefloat valvewas used to regulate the flow of water in Greek and Romanwater clocks; similar float valves are used to regulate fuel in acarburettorand also used to regulate tank water level in theflush toilet.
The Dutch inventorCornelius Drebbel(1572–1633) built thermostats (c1620) to control the temperature of chicken incubators and chemical furnaces. In 1745, the windmill was improved by blacksmith Edmund Lee, who added afantailto keep the face of the windmill pointing into the wind. In 1787,Tom Meadregulated the rotation speed of a windmill by using acentrifugal pendulumto adjust the distance between the bedstone and the runner stone (i.e., to adjust the load).
The use of thecentrifugal governorbyJames Wattin 1788 to regulate the speed of hissteam enginewas one factor leading to theIndustrial Revolution. Steam engines also use float valves andpressure release valvesas mechanical regulation devices. Amathematical analysisof Watt's governor was done byJames Clerk Maxwellin 1868.[16]
TheGreat Easternwas one of the largest steamships of its time and employed a steam powered rudder with feedback mechanism designed in 1866 byJohn McFarlane Gray.Joseph Farcotcoined the wordservoin 1873 to describe steam-powered steering systems. Hydraulic servos were later used to position guns.Elmer Ambrose Sperryof theSperry Corporationdesigned the firstautopilotin 1912.Nicolas Minorskypublished a theoretical analysis of automatic ship steering in 1922 and described thePID controller.[41]
Internal combustion engines of the late 20th century employed mechanical feedback mechanisms such as thevacuum timing advancebut mechanical feedback was replaced by electronicengine management systemsonce small, robust and powerful single-chipmicrocontrollersbecame affordable.
The use of feedback is widespread in the design ofelectroniccomponents such asamplifiers,oscillators, and statefullogic circuitelements such asflip-flopsandcounters. Electronic feedback systems are also very commonly used to control mechanical, thermal and other physical processes.
If the signal is inverted on its way round the control loop, the system is said to havenegative feedback;[43]otherwise, the feedback is said to bepositive. Negative feedback is often deliberately introduced to increase thestabilityand accuracy of a system by correcting or reducing the influence of unwanted changes. This scheme can fail if the input changes faster than the system can respond to it. When this happens, the lag in arrival of the correcting signal can result in over-correction, causing the output tooscillateor "hunt".[44]While often an unwanted consequence of system behaviour, this effect is used deliberately in electronic oscillators.
Harry NyquistatBell Labsderived theNyquist stability criterionfor determining the stability of feedback systems. An easier method, but less general, is to useBode plotsdeveloped byHendrik Bodeto determine thegain margin and phase margin. Design to ensure stability often involvesfrequency compensationto control the location of thepolesof the amplifier.
Electronic feedback loops are used to control the output ofelectronicdevices, such asamplifiers. A feedback loop is created when all or some portion of the output is fed back to the input. A device is said to be operatingopen loopif no output feedback is being employed andclosed loopif feedback is being used.[45]
When two or more amplifiers are cross-coupled using positive feedback, complex behaviors can be created. Thesemultivibratorsare widely used and include:
Negative feedback occurs when the fed-back output signal has a relative phase of 180° with respect to the input signal (upside down). This situation is sometimes referred to as beingout of phase, but that term also is used to indicate other phase separations, as in "90° out of phase". Negative feedback can be used to correct output errors or to desensitize a system to unwanted fluctuations.[46]In feedback amplifiers, this correction is generally for waveformdistortionreduction[47]or to establish a specifiedgainlevel. A general expression for the gain of a negative feedback amplifier is theasymptotic gain model.
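For the idealized single-loop case (a standard textbook relation rather than the full asymptotic gain model mentioned above), the closed-loop gain of a negative feedback amplifier can be written as

$$A_{\text{fb}} = \frac{A}{1 + A\beta},$$

where $A$ is the open-loop gain and $\beta$ is the fraction of the output fed back and subtracted at the input. When the loop gain $A\beta$ is large, $A_{\text{fb}} \approx 1/\beta$, which is why negative feedback desensitizes the overall gain to variations in $A$.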
Positive feedback occurs when the fed-back signal is in phase with the input signal. Under certain gain conditions, positive feedback reinforces the input signal to the point where the output of the deviceoscillatesbetween its maximum and minimum possible states. Positive feedback may also introducehysteresisinto a circuit. This can cause the circuit to ignore small signals and respond only to large ones. It is sometimes used to eliminate noise from a digital signal. Under some circumstances, positive feedback may cause a device to latch, i.e., to reach a condition in which the output is locked to its maximum or minimum state. This fact is very widely used in digital electronics to makebistablecircuits for volatile storage of information.
The loud squeals that sometimes occur inaudio systems,PA systems, androck musicare known asaudio feedback. If a microphone is in front of a loudspeaker that it is connected to, sound that the microphone picks up comes out of the speaker, and is picked up by the microphone and re-amplified. If theloop gainis sufficient, howling or squealing at the maximum power of the amplifier is possible.
Anelectronic oscillatoris anelectronic circuitthat produces a periodic,oscillatingelectronic signal, often asine waveor asquare wave.[48][49]Oscillators convertdirect current(DC) from a power supply to analternating currentsignal. They are widely used in many electronic devices. Common examples of signals generated by oscillators include signals broadcast byradioandtelevision transmitters, clock signals that regulate computers andquartz clocks, and the sounds produced by electronic beepers andvideo games.[48]
Oscillators are often characterized by thefrequencyof their output signal:
Oscillators designed to produce a high-power AC output from a DC supply are usually calledinverters.
There are two main types of electronic oscillator: the linear or harmonic oscillator and the nonlinear orrelaxation oscillator.[49][50]
A latch or aflip-flopis acircuitthat has two stable states and can be used to store state information. They are typically constructed using feedback that crosses over between two arms of the circuit, to provide the circuit with a state. The circuit can be made to change state by signals applied to one or more control inputs and will have one or two outputs. It is the basic storage element insequential logic. Latches and flip-flops are fundamental building blocks ofdigital electronicssystems used in computers, communications, and many other types of systems.
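One simple way to see this cross-coupled feedback is to simulate an SR (set–reset) latch built from two NOR gates, each of whose outputs feeds the other gate's input. The construction is standard; the iteration count and starting state in the sketch are assumptions of the example.

```python
# Sketch of the cross-coupled feedback inside a NOR-based SR latch:
# each gate's output is fed back as an input to the other gate.

def nor(a, b):
    return int(not (a or b))

def sr_latch(s, r, q, q_bar, iterations=4):
    """Iterate the two cross-coupled NOR gates until the outputs settle."""
    for _ in range(iterations):
        q_new = nor(r, q_bar)      # Q sees the Reset input and the other output
        q_bar_new = nor(s, q)      # /Q sees the Set input and Q
        q, q_bar = q_new, q_bar_new
    return q, q_bar

q, q_bar = 0, 1                                    # start in the "reset" state
q, q_bar = sr_latch(s=1, r=0, q=q, q_bar=q_bar)    # pulse Set
print("after set:", q, q_bar)                      # 1 0
q, q_bar = sr_latch(s=0, r=0, q=q, q_bar=q_bar)    # inputs released: state is held
print("held:", q, q_bar)                           # 1 0 (the feedback stores the bit)
q, q_bar = sr_latch(s=0, r=1, q=q, q_bar=q_bar)    # pulse Reset
print("after reset:", q, q_bar)                    # 0 1
```

The "held" step shows the point of the feedback: with both inputs inactive, the loop sustains whichever state was last set, which is what makes the circuit a one-bit memory.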
Latches and flip-flops are used as data storage elements. Such data storage can be used for storage ofstate, and such a circuit is described assequential logic. When used in afinite-state machine, the output and next state depend not only on its current input, but also on its current state (and hence, previous inputs). It can also be used for counting of pulses, and for synchronizing variably-timed input signals to some reference timing signal.
Flip-flops can be either simple (transparent or opaque) orclocked(synchronous or edge-triggered). Although the term flip-flop has historically referred generically to both simple and clocked circuits, in modern usage it is common to reserve the termflip-flopexclusively for discussing clocked circuits; the simple ones are commonly calledlatches.[51][52]
Using this terminology, a latch is level-sensitive, whereas a flip-flop is edge-sensitive. That is, when a latch is enabled it becomes transparent, while a flip flop's output only changes on a single type (positive going or negative going) of clock edge.
Feedback loops provide generic mechanisms for controlling the running, maintenance, and evolution of software and computing systems.[53]Feedback-loops are important models in the engineering of adaptive software, as they define the behaviour of the interactions among the control elements over the adaptation process, to guarantee system properties at run-time. Feedback loops and foundations of control theory have been successfully applied to computing systems.[54]In particular, they have been applied to the development of products such asIBM Db2andIBM Tivoli. From a software perspective, theautonomic(MAPE, monitor analyze plan execute) loop proposed by researchers of IBM is another valuable contribution to the application of feedback loops to the control of dynamic properties and the design and evolution of autonomic software systems.[55][56]
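As a rough sketch of the shape of such a monitor–analyze–plan–execute loop (the managed system, metric names and thresholds below are hypothetical illustrations, not IBM's actual implementation):

```python
# Hedged sketch of an autonomic MAPE (monitor-analyze-plan-execute) feedback
# loop; the managed system, load metric and thresholds are hypothetical.

import random

class ManagedSystem:
    """Stand-in for a managed element exposing a sensor and an effector."""
    def __init__(self):
        self.workers = 2

    def monitor(self):
        # Sensor: report the current load per worker (randomized for illustration).
        return random.uniform(0.0, 3.0) / self.workers

    def execute(self, delta):
        # Effector: apply the planned adaptation.
        self.workers = max(1, self.workers + delta)

def mape_iteration(system, high=0.8, low=0.3):
    load = system.monitor()                                  # Monitor
    overloaded, underloaded = load > high, load < low        # Analyze
    plan = 1 if overloaded else (-1 if underloaded else 0)   # Plan
    system.execute(plan)                                     # Execute
    return load, system.workers

system = ManagedSystem()
for step in range(5):
    load, workers = mape_iteration(system)
    print(f"step {step}: load={load:.2f}, workers={workers}")
```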
Feedback is also a useful design principle for designinguser interfaces.
Video feedbackis thevideoequivalent ofacoustic feedback. It involves a loop between avideo camerainput and a video output, e.g., atelevision screenormonitor. Aiming the camera at the display produces a complex video image based on the feedback.[57]
|
https://en.wikipedia.org/wiki/Feedback
|
Alearning cycleis a concept of how people learn from experience. A learning cycle will have a number of stages or phases, the last of which can be followed by the first.
In 1933 (based on work first published in 1910),John Deweydescribed five phases or aspects of reflective thought:
In between, as states of thinking, are (1) suggestions, in which the mind leaps forward to a possible solution; (2) an intellectualization of the difficulty or perplexity that has been felt (directly experienced) into a problem to be solved, a question for which the answer must be sought; (3) the use of one suggestion after another as a leading idea, or hypothesis, to initiate and guide observation and other operations in the collection of factual material; (4) the mental elaboration of the idea or supposition as an idea or supposition (reasoning, in the sense in which reasoning is a part, not the whole of inference); and (5) testing the hypothesis by overt or imaginative action.
In the 1940s,Kurt Lewindevelopedaction researchand described a cycle of: planning, action, and fact-finding about the result of the action.
Lewin particularly highlighted the need for fact finding, which he felt was missing from much of management and social work. He contrasted this to the military where
the attack is pressed home and immediately a reconnaissance plane follows with the one objective of determining as accurately and objectively as possible the new situation. This reconnaissance or fact-finding has four functions. First it should evaluate the action. It shows whether what has been achieved is above or below expectation. Secondly, it gives the planners a chance to learn, that is, to gather new general insight, for instance, regarding the strength and weakness of certain weapons or techniques of action. Thirdly, this fact-finding should serve as a basis for correctly planning the next step. Finally, it serves as a basis for modifying the "overall plan."
In the early 1970s,David A. Kolband Ronald E. Fry developed the experiential learning model (ELM), composed of four elements:[3]concrete experience, observation of and reflection on that experience, formation of abstract concepts based upon the reflection, and testing the new concepts.
Testing the new concepts gives concrete experience which can be observed and reflected upon, allowing the cycle to continue.
Kolb integrated this learning cycle with a theory oflearning styles, wherein each style prefers two of the four parts of the cycle. The cycle isquadrisectedby a horizontal and vertical axis. The vertical axis represents how knowledge can be grasped, throughconcrete experienceor throughabstract conceptualization, or by a combination of both. The horizontal axis represents how knowledge is transformed or constructed throughreflective observationoractive experimentation. These two axes form the four quadrants that can be seen as four stages: concrete experience (CE), reflective observation (RO), abstract conceptualization (AC) and active experimentation (AE) and as four styles of learning: diverging, assimilating, converging and accommodating.[4]The concept of learning styles has been criticised, seeLearning styles § Criticism.
In the 1980s, Peter Honey and Alan Mumford developed Kolb and Fry's ideas into a slightly different learning cycle.[5]The stages are: having an experience, reviewing the experience, concluding from the experience, and planning the next steps.
While the cycle can be entered at any of the four stages, a cycle must be completed to give learning that will change behaviour. The cycle can be performed multiple times to build up layers of learning.
Honey and Mumford gave names (also calledlearning styles) to the people who prefer to enter the cycle at different stages:Activist,Reflector,TheoristandPragmatist. Honey and Mumford's learning styles questionnaire has been criticized for poorreliabilityandvalidity.[6]
In the late 1980s, the 5E learning cycle was developed byBiological Sciences Curriculum Study, specifically for use in teaching science.[7]The learning cycle has four phases: engage, explore, explain, and elaborate.
The fifth E stands forEvaluate, in which the instructor observes each student's knowledge and understanding, and leads students to assess whether what they have learned is true. Evaluation should take place throughout the cycle, not within its own set phase.
In the 1990s, Alistair Smith developed theaccelerated learning cycle, also for use in teaching.[8]The phases are:[9]
Unlike other learning cycles, step 8 is normally followed by step 2, rather than step 1.
In the 2000s, Fred Korthagen and Angelo Vasalos (and others) developed the ALACT model, specifically for use in personal development.[10]The five phases of the ALACT cycle are: action, looking back on the action, awareness of essential aspects, creating alternative methods of action, and trial.
As with Kolb and Fry, trial is an action that can be looked back on. Korthagen and Vasalos listedcoachinginterventions for each phase.[10]
Korthagen and Vasalos also described anonion modelof "levels of reflection" (from inner to outer: mission, identity, beliefs, competencies, behavior, environment) inspired byGregory Bateson's hierarchy oflogical types.[10]In 2010, they connected their model of reflective learning to the practice ofmindfulnessand toOtto Scharmer'sTheory U, which, in contrast to a learning cycle, emphasizes reflecting on a desired future rather than on past experience.[11]: 539–545
|
https://en.wikipedia.org/wiki/Learning_cycle
|
Insystems engineering,information systemsandsoftware engineering, thesystems development life cycle(SDLC), also referred to as theapplication development life cycle, is a process for planning, creating, testing, and deploying aninformation system.[1]The SDLC concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both.[2]There are usually six stages in this cycle: requirement analysis, design, development and testing, implementation, documentation, and evaluation.
A systems development life cycle is composed of distinct work phases that are used by systems engineers and systems developers to deliverinformation systems. Like an assembly line, an SDLC aims to produce high-quality systems that meet or exceed expectations, based on requirements, by delivering systems within scheduled time frames and cost estimates.[3]Computer systems are complex and often link components with varying origins. Various SDLC methodologies have been created, such aswaterfall,spiral,agile,rapid prototyping,incremental, and synchronize and stabilize.[4]
SDLC methodologies fit within a flexibility spectrum ranging from agile to iterative to sequential. Agile methodologies, such asXPandScrum, focus on lightweight processes that allow for rapid changes.[5]Iterativemethodologies, such asRational Unified Processanddynamic systems development method, focus on stabilizing project scope and iteratively expanding or improving products. Sequential or big-design-up-front (BDUF) models, such as waterfall, focus on complete and correct planning to guide larger projects and limit risks to successful and predictable results.[6]Anamorphic developmentis guided by project scope and adaptive iterations.
Inproject managementa project can include both aproject life cycle(PLC) and an SDLC, during which somewhat different activities occur. According to Taylor (2004), "the project life cycle encompasses all the activities of theproject, while the systems development life cycle focuses on realizing the productrequirements".[7]
SDLC is not a methodology per se, but rather a description of the phases that a methodology should address. The list of phases is not definitive, but typically includes planning, analysis, design, build, test, implement, and maintenance/support. In the Scrum framework,[8]for example, one could say a single user story goes through all the phases of the SDLC within a two-week sprint. By contrast, in the waterfall methodology every business requirement is translated into feature/functional descriptions, which are then all implemented, typically over a period of months or longer.
According to Elliott (2004), SDLC "originated in the 1960s, to develop large scale functionalbusiness systemsin an age of large scalebusiness conglomerates. Information systems activities revolved around heavydata processingandnumber crunchingroutines".[9]
Thestructured systems analysis and design method(SSADM) was produced for the UK governmentOffice of Government Commercein the 1980s. Ever since, according to Elliott (2004), "the traditional life cycle approaches to systems development have been increasingly replaced with alternative approaches and frameworks, which attempted to overcome some of the inherent deficiencies of the traditional SDLC".[9]
SDLC provides a set of phases/steps/activities for system designers and developers to follow. Each phase builds on the results of the previous one.[10][11][12][13]Not every project requires that the phases be sequential. For smaller, simpler projects, phases may be combined or may overlap.[10]
The oldest and best known is thewaterfall model, which uses a linear sequence of steps.[11]Waterfall has different varieties. One variety is as follows:[10][11][14][15]
Conduct a preliminary analysis, consider alternative solutions, estimate costs and benefits, and submit a preliminary plan with recommendations.
Decompose project goals into defined functions and operations. This involves gathering and interpreting facts, diagnosing problems, and recommending changes. Analyze end-user information needs and resolve inconsistencies and incompleteness.[16]
At this step, desired features and operations are detailed, including screen layouts,business rules,process diagrams,pseudocode, and other deliverables.
Write the code.
Assemble the modules in a testing environment. Check for errors, bugs, and interoperability.
Put the system into production. This may involve training users, deploying hardware, and loading information from the prior system.
Monitor the system to assess its ongoing fitness. Make modest changes and fixes as needed to maintain the quality of the system. Continual monitoring and updates ensure the system remains effective and high-quality.[17]
The system and the process are reviewed. Relevant questions include whether the newly implemented system meets requirements and achieves project goals, whether the system is usable, reliable/available, properly scaled and fault-tolerant. Process checks include review of timelines and expenses, as well as user acceptance.
At end of life, plans are developed for discontinuing the system and transitioning to its replacement. Related information and infrastructure must be repurposed, archived, discarded, or destroyed, while appropriately protecting security.[18]
In the following diagram, these stages are divided into ten steps, from definition to creation and modification of IT work products:
Systems analysis and design(SAD) can be considered a meta-development activity, which serves to set the stage and bound the problem. SAD can help balance competing high-level requirements. SAD interacts with distributed enterprise architecture, enterprise I.T. Architecture, and business architecture, and relies heavily on concepts such as partitioning, interfaces, personae and roles, and deployment/operational modeling to arrive at a high-level system description. This high-level description is then broken down into the components and modules which can be analyzed, designed, and constructed separately and integrated to accomplish the business goal. SDLC and SAD are cornerstones of full life cycle product and system planning.
Object-oriented analysis and design(OOAD) is the process of analyzing a problem domain to develop a conceptualmodelthat can then be used to guide development. During the analysis phase, a programmer develops written requirements and a formal vision document via interviews with stakeholders.
The conceptual model that results from OOAD typically consists ofuse cases, andclassandinteraction diagrams. It may also include auser interfacemock-up.
An outputartifactdoes not need to be completely defined to serve as input of object-oriented design; analysis and design may occur in parallel. In practice the results of one activity can feed the other in an iterative process.
Some typical input artifacts for OOAD:
The system lifecycle is a view of a system or proposed system that addresses all phases of its existence to include system conception, design and development, production and/or construction, distribution, operation, maintenance and support, retirement, phase-out, and disposal.[19]
Theconceptual designstage is the stage where an identified need is examined, requirements for potential solutions are defined, potential solutions are evaluated, and a system specification is developed. The system specification represents the technical requirements that will provide overall guidance for system design. Because this document determines all future development, the stage cannot be completed until a conceptualdesign reviewhas determined that the system specification properly addresses the motivating need.
Key steps within the conceptual design stage include:
During this stage of the system lifecycle, subsystems that perform the desired system functions are designed and specified in compliance with the system specification. Interfaces between subsystems are defined, as well as overall test and evaluation requirements.[20]At the completion of this stage, a development specification is produced that is sufficient to perform detailed design and development.
Key steps within the preliminary design stage include:
For example, as the system analyst of Viti Bank, you have been tasked to examine the current information system. Viti Bank is a fast-growing bank inFiji. Customers in remote rural areas are finding difficulty to access the bank services. It takes them days or even weeks to travel to a location to access the bank services. With the vision of meeting the customers' needs, the bank has requested your services to examine the current system and to come up with solutions or recommendations of how the current system can be provided to meet its needs.
This stage includes the development of detailed designs that bring the initial design work into a completed form of specifications. This work includes the specification of interfaces between the system and its intended environment, and a comprehensive evaluation of the system's logistical, maintenance and support requirements. Detail design and development is responsible for producing the product, process and material specifications and may result in substantial changes to the development specification.
Key steps within the detail design and development stage include:
During the production and/or construction stage the product is built or assembled in accordance with the requirements specified in the product, process and material specifications, and is deployed and tested within the operational target environment. System assessments are conducted in order to correct deficiencies and adapt the system for continued improvement.
Key steps within the product construction stage include:
Once fully deployed, the system is used for its intended operational role and maintained within its operational environment.
Key steps within the utilization and support stage include:
The effectiveness and efficiency of the system must be continuously evaluated to determine when the product has reached its maximum effective lifecycle.[21]Considerations include: continued existence of the operational need, the match between operational requirements and system performance, the feasibility of system phase-out versus continued maintenance, and the availability of alternative systems.
During this step, current priorities that would be affected and how they should be handled are considered. Afeasibility studydetermines whether creating a new or improved system is appropriate. This helps to estimate costs, benefits, resource requirements, and specific user needs.
The feasibility study should addressoperational,financial,technical, human factors, andlegal/politicalconcerns.
The goal ofanalysisis to determine where the problem is. This step involves decomposing the system into pieces, analyzing project goals, breaking down what needs to be created, and engaging users to define requirements.
Insystems design, functions and operations are described in detail, including screen layouts, business rules, process diagrams, and other documentation. Modular design reduces complexity and allows the outputs to describe the system as a collection of subsystems.
The design stage takes as its input the requirements already defined. For each requirement, a set of design elements is produced.
Design documents typically include functional hierarchy diagrams, screen layouts, business rules, process diagrams, pseudo-code, and a completedata modelwith adata dictionary. These elements describe the system in sufficient detail that developers and engineers can develop and deliver the system with minimal additional input.
The code is tested at various levels insoftware testing. Unit, system, and user acceptance tests are typically performed. Many approaches to testing have been adopted.
The following types of testing may be relevant:
Once a system has been stabilized through testing, SDLC ensures that proper training is prepared and performed before transitioning the system to support staff and end users. Training usually covers operational training for support staff as well as end-user training.
After training, systems engineers and developers transition the system to its production environment.
Maintenanceincludes changes, fixes, and enhancements.
The final phase of the SDLC is to measure the effectiveness of the system and evaluate potential enhancements.
SDLC phase objectives are described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager to establish and monitor control objectives while executing projects. Control objectives are clear statements of the desired result or purpose and should be defined and monitored throughout a project. Control objectives can be grouped into major categories (domains), and relate to the SDLC phases as shown in the figure.[22]
To manage and control a substantial SDLC initiative, awork breakdown structure(WBS) captures and schedules the work. The WBS and all programmatic material should be kept in the "project description" section of the project notebook.[clarification needed]The project manager chooses a WBS format that best describes the project.
The diagram shows that coverage spans numerous phases of the SDLC, with the associated management control domains (MCDs) mapped to the SDLC phases. For example, Analysis and Design is primarily performed as part of the Acquisition and Implementation domain, and System Build and Prototype is primarily performed as part of the Delivery and Support domain.[22]
The upper section of the WBS provides an overview of the project scope and timeline; it should also summarize the major phases and milestones. The middle section is based on the SDLC phases. WBS elements consist of milestones and tasks to be completed, rather than activities to be undertaken, and each has a deadline. Each task has a measurable output (e.g., an analysis document), and a WBS task may rely on one or more activities (e.g., coding). Parts of the project needing support from contractors should have astatement of work(SOW). The development of an SOW does not occur during a specific phase of the SDLC; it is developed to cover the work from the SDLC process that may be conducted by contractors.[22]
Baselines[clarification needed]are established after four of the five phases of the SDLC, and are critical to theiterativenature of the model.[23]Baselines become milestones.
Alternative software development methods to the systems development life cycle are:
Fundamentally, SDLC trades flexibility for control by imposing structure. It is more commonly used for large scale projects with many developers.
|
https://en.wikipedia.org/wiki/Systems_development_lifecycle
|
Avicious circle(orcycle) is a complexchain of eventsthat reinforces itself through afeedback loop, with detrimental results.[1]It is a system with no tendency towardequilibrium(social,economic,ecological, etc.), at least in the short run. Each iteration of the cycle reinforces the previous one, in an example ofpositive feedback. A vicious circle will continue in the direction of its momentum until an external factor intervenes to break the cycle. A well-known example of a vicious circle in economics ishyperinflation.
When the results are not detrimental but beneficial, the termvirtuous cycleis used instead.
The contemporarysubprime mortgage crisisis a complex group of vicious circles, both in its genesis and in its manifold outcomes, most notably thelate 2000s recession. A specific example is the circle related to housing. As housing prices decline, more homeowners go "underwater", when the market value of a home drops below that of the mortgage on it. This provides an incentive to walk away from the home, increasing defaults and foreclosures. This, in turn, lowers housing values further from over-supply, reinforcing the cycle.[2]
The foreclosures reduce the cash flowing into banks and the value of mortgage-backed securities (MBS) widely held by banks. Banks incur losses and require additional funds, also called "recapitalization". If banks are not capitalized sufficiently to lend, economic activity slows andunemploymentincreases, which further increase the number of foreclosures. EconomistNouriel Roubinidiscussed vicious circles in the housing and financial markets in interviews withCharlie Rosein September and October 2008.[3][4][5]
By involving all stakeholders in managing ecological areas, a virtuous circle can be created where improved ecology encourages the actions that maintain and improve the area.[6]
Other examples include thepoverty cycle,sharecropping, and the intensification ofdrought. In 2021, Austrian ChancellorAlexander Schallenbergdescribed the recurring need for lockdowns in theCOVID-19 pandemicas a vicious circle that could only be broken by a legally-required vaccination program.[7]
|
https://en.wikipedia.org/wiki/Virtuous_circle_and_vicious_circle
|
Theintelligence cycleis an idealized model of howintelligenceis processed in civilian and militaryintelligence agencies, and law enforcement organizations. It is a closedpathconsisting of repeatingnodes, which (if followed) will result infinished intelligence. The stages of the intelligence cycle include the issuance of requirements by decision makers, collection, processing, analysis, and publication (i.e., dissemination) of intelligence.[1]The circuit is completed when decision makers provide feedback and revised requirements. The intelligence cycle is also calledintelligence processby the U.S. Department of Defense (DoD) and the uniformed services.[2]
Intelligence requirementsare determined by a decision maker to meet their objectives. In thefederal government of the United States, requirements (or priorities) can be issued from theWhite Houseor theCongress.[citation needed]InNATO, acommanderuses requirements (sometimes calledEssential elements of information(EEIs)) to initiate the intelligence cycle.
In response to requirements, an intelligence staff develops anintelligence collection planapplying available sources and methods and seeking intelligence from other agencies. Collection includes inputs from severalintelligence gathering disciplines, such asHUMINT(human intelligence),IMINT(imagery intelligence),ELINT(electronic intelligence),SIGINT(Signals Intelligence),OSINT(open source, or publicly available intelligence), etc.
Once the collection plan is executed and the data arrives, it is processed for exploitation. This involves thetranslationof raw intelligence materials from a foreign language,evaluationof relevance and reliability, andcollationof the raw data in preparation for exploitation.
Analysis establishes the significance and implications of processed intelligence, integrates it by combining disparate pieces of information to identify collateral information and patterns, then interprets the significance of any newly developed knowledge.
Finished intelligence products take many forms depending on the needs of the decision maker and reporting requirements. The level of urgency of various types of intelligence is typically established by an intelligence organization or community. For example, an indications and warning (I&W) bulletin would require higher precedence than an annual report.
The intelligence cycle is a closed loop; feedback is received from the decision maker and revised requirements issued.
The intelligence information cycle leverages secrecy theory and U.S. regulation of classified intelligence to re-conceptualize the traditional intelligence cycle under the following four assumptions:
Information is transformed from privately held to secretly held to public based on who has control over it. For example, the private information of a source becomes secret information (intelligence) when control over its dissemination is shared with an intelligence officer, and then becomes public information when the intelligence officer further disseminates it to the public by any number of means, including formal reporting, threat warning, and others. The fourth assumption, intelligence is hoarded, causes conflict points where information transitions from one type to another. The first conflict point, collection, occurs when private transitions to secret information (intelligence). The second conflict point, dissemination, occurs when secret transitions to public information. Thus, conceiving of intelligence using these assumptions demonstrates the cause of collection techniques (to ease the private-secret transition) and dissemination conflicts, and can inform ethical standards of conduct among all agents in the intelligence process.[3][4]
|
https://en.wikipedia.org/wiki/Intelligence_cycle
|
Abelief structureis a distributed assessment withbeliefs.
A belief structure is used in theevidential reasoning (ER) approachformultiple-criteria decision analysis (MCDA)to represent the performance of an alternative option on a criterion.
In the ER approach, an MCDA problem is modelled by abelief decision matrixinstead of a conventionaldecision matrix. The difference between the two is that, in the former, each element is a belief structure; in the latter, conversely, each element is a single value (either numerical or textual).
For example, the quality of a car engine may be assessed to be “excellent” with a high degree of belief (e.g. 0.6) due to its lowfuel consumption, low vibration and high responsiveness. At the same time, the quality may be assessed to be only “good” with a lower degree of belief (e.g. 0.4 or less) because its quietness and starting can still be improved. Such an assessment can be modeled by a belief structure:Si(engine)={(excellent, 0.6), (good, 0.4)}, whereSistands for the assessment of engine on theith criterion (quality). In the belief structure, “excellent” and “good” are assessment standards, whilst “0.6” and “0.4” are degrees of belief.
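As a rough illustration (not taken from the ER literature itself), a belief structure can be represented in code as a mapping from assessment grades to degrees of belief, and a belief decision matrix as a grid of such mappings. The second criterion, the extra grades, and the helper name below are invented for the example.

```python
# Minimal sketch: a belief structure as {grade: degree_of_belief}, and a belief
# decision matrix as {alternative: {criterion: belief_structure}}.

def is_complete(belief_structure, tol=1e-9):
    """A belief structure is complete when its degrees of belief sum to 1."""
    return abs(sum(belief_structure.values()) - 1.0) < tol

# Assessment of the car engine on the quality criterion (example from the text).
S_engine_quality = {"excellent": 0.6, "good": 0.4}

# A belief decision matrix: one belief structure per (alternative, criterion) cell.
belief_decision_matrix = {
    "engine A": {"quality": S_engine_quality,
                 "price":   {"average": 1.0}},               # hypothetical second criterion
    "engine B": {"quality": {"good": 0.5, "average": 0.5},
                 "price":   {"good": 0.7, "average": 0.2}},  # incomplete: 0.1 unassigned
}

for alt, row in belief_decision_matrix.items():
    for criterion, s in row.items():
        print(alt, criterion, s, "complete" if is_complete(s) else "incomplete")
```

A belief structure whose degrees of belief sum to less than 1 models an incomplete assessment, which the ER approach is designed to handle explicitly.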
|
https://en.wikipedia.org/wiki/Belief_structure
|
Thedecision-matrix method, alsoPugh methodorPugh concept selection, invented byStuart Pugh,[1]is a qualitative technique used to rank the multi-dimensional options of an option set. It is frequently used inengineeringfor making design decisions but can also be used to rank investment options, vendor options, product options or any other set of multidimensional entities.
A basicdecision matrixconsists of establishing a set of criteria and a group of potential candidate designs, one of which is a reference candidate design. The other designs are then compared to this reference design and ranked as better, worse, or the same on each criterion. The number of times "better" and "worse" appears for each design is then displayed, but the counts are not summed.
A weighted decision matrix operates in the same way as the basic decision matrix but introduces the concept of weighting the criteria in order of importance. The more important the criterion the higher the weighting it should be given.[2]
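As an illustrative sketch only (the criteria, weights, scores, and design names below are invented, and the scoring scheme is just one common convention), a basic Pugh comparison and its weighted variant can be computed as follows:

```python
# Sketch of a Pugh-style comparison and a weighted variant.
# Criteria, weights and scores are invented for illustration.

criteria = {"cost": 3, "reliability": 5, "ease of manufacture": 2}   # weight per criterion

# Pugh scores relative to a reference (datum) design: +1 better, 0 same, -1 worse.
candidates = {
    "reference": {"cost": 0, "reliability": 0, "ease of manufacture": 0},
    "design A":  {"cost": +1, "reliability": -1, "ease of manufacture": 0},
    "design B":  {"cost": -1, "reliability": +1, "ease of manufacture": +1},
}

for name, scores in candidates.items():
    betters = sum(1 for v in scores.values() if v > 0)
    worses = sum(1 for v in scores.values() if v < 0)
    weighted = sum(criteria[c] * v for c, v in scores.items())
    print(f"{name:10s}  better: {betters}  worse: {worses}  weighted total: {weighted}")
```

In the basic method only the "better" and "worse" counts would be reported; the weighted total is the extension described above.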
The advantage of the decision-matrix method is that it encourages self-reflection among the members of a design team and helps them analyze each candidate with minimized bias (team members can be biased towards certain designs, such as their own). Another advantage of this method is that sensitivity studies can be performed, for example to see how much an opinion would have to change in order for a lower-ranked alternative to outrank a competing alternative.
However, there are some important disadvantages of the decision-matrix method:
Morphological analysisis another form of a decision matrix employing a multi-dimensional configuration space linked by way of logical relationships.
|
https://en.wikipedia.org/wiki/Decision-matrix_method
|
Case-based reasoning(CBR), broadly construed, is the process of solving new problems based on the solutions of similar past problems.[1][2]
In everyday life, an automechanicwho fixes anengineby recalling anothercarthat exhibited similar symptoms is using case-based reasoning. Alawyerwho advocates a particular outcome in atrialbased onlegalprecedentsor a judge who createscase lawis using case-based reasoning. So, too, anengineercopying working elements of nature (practicingbiomimicry) is treating nature as a database of solutions to problems. Case-based reasoning is a prominent type ofanalogysolution making.
It has been argued[by whom?]that case-based reasoning is not only a powerful method forcomputer reasoning, but also a pervasive behavior in everyday humanproblem solving; or, more radically, that all reasoning is based on past cases personally experienced. This view is related toprototype theory, which is most deeply explored incognitive science.
Case-based reasoning has been formalized[clarification needed]for purposes ofcomputer reasoningas a four-step process: retrieve the most similar past case or cases, reuse the retrieved solution for the new problem, revise the proposed solution as needed, and retain the resulting experience as a new case.[3]
At first glance, CBR may seem similar to therule inductionalgorithms[note 1]ofmachine learning. Like a rule-induction algorithm, CBR starts with a set of cases or training examples; it forms generalizations of these examples, albeit implicit ones, by identifying commonalities between a retrieved case and the target problem.[4]
If for instance a procedure for plain pancakes is mapped to blueberry pancakes, a decision is made to use the same basic batter and frying method, thus implicitly generalizing the set of situations under which the batter and frying method can be used. The key difference, however, between the implicit generalization in CBR and the generalization in rule induction lies in when the generalization is made. A rule-induction algorithm draws its generalizations from a set of training examples before the target problem is even known; that is, it performs eager generalization.
For instance, if a rule-induction algorithm were given recipes for plain pancakes, Dutch apple pancakes, and banana pancakes as its training examples, it would have to derive, at training time, a set of general rules for making all types of pancakes. It would not be until testing time that it would be given, say, the task of cooking blueberry pancakes. The difficulty for the rule-induction algorithm is in anticipating the different directions in which it should attempt to generalize its training examples. This is in contrast to CBR, which delays (implicit) generalization of its cases until testing time – a strategy of lazy generalization. In the pancake example, CBR has already been given the target problem of cooking blueberry pancakes; thus it can generalize its cases exactly as needed to cover this situation. CBR therefore tends to be a good approach for rich, complex domains in which there are myriad ways to generalize a case.
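A minimal sketch of the retrieve-and-reuse steps is shown below; the case features, the similarity measure, and the adaptation rule are all invented for illustration, and a full CBR system would also revise and retain the adapted solution.

```python
# Minimal retrieve-and-reuse sketch. The case features, similarity measure and
# adaptation rule are invented; a real CBR system would also revise and retain.

cases = [
    # (problem description, stored solution)
    ({"batter": "plain", "fruit": None,     "topping": "syrup"}, "plain pancake recipe"),
    ({"batter": "plain", "fruit": "banana", "topping": "syrup"}, "banana pancake recipe"),
    ({"batter": "apple", "fruit": "apple",  "topping": "sugar"}, "Dutch apple pancake recipe"),
]

def similarity(a, b):
    """Fraction of features on which two problem descriptions agree."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def retrieve(target):
    """Return the stored case most similar to the target problem."""
    return max(cases, key=lambda case: similarity(case[0], target))

target = {"batter": "plain", "fruit": "blueberry", "topping": "syrup"}
problem, solution = retrieve(target)
# Reuse: adapt the retrieved solution to the target (here, a trivial substitution).
adapted = solution.replace(problem["fruit"] or "plain", target["fruit"])
print("retrieved:", solution, "| adapted:", adapted)
```

The generalization here is lazy in the sense discussed above: nothing is generalized until the blueberry-pancake target arrives, at which point the nearest stored case is adapted only as far as that target requires.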
In law, there is often explicit delegation of CBR to courts, recognizing the limits of rule-based reasoning: limits on delay, limited knowledge of future context, the limits of negotiated agreement, and so on. While CBR in law and cognitively inspired CBR have long been associated, the former is more clearly an interpolation of rule-based reasoning and judgment, while the latter is more closely tied to recall and process adaptation. The difference is clear in their attitudes toward error and appellate review.
Another name for case-based reasoning in problem solving is the symptomatic strategy. It requires a priori domain knowledge, gleaned from past experience, that establishes connections between symptoms and causes. This knowledge is referred to as shallow, compiled, evidential, history-based, or case-based knowledge. It is the strategy most associated with diagnosis by experts. Diagnosis of a problem transpires as a rapid recognition process in which symptoms evoke appropriate situation categories.[5]An expert knows the cause by virtue of having previously encountered similar cases. Case-based reasoning is the most powerful strategy, and the one used most commonly. However, the strategy will not work independently with truly novel problems, or where a deeper understanding of whatever is taking place is sought.
An alternative approach to problem solving is the topographic strategy which falls into the category of deep reasoning. With deep reasoning, in-depth knowledge of a system is used. Topography in this context means a description or an analysis of a structured entity, showing the relations among its elements.[6]
Also known as reasoning from first principles,[7]deep reasoning is applied to novel faults when experience-based approaches are not viable. The topographic strategy is therefore linked to a priori domain knowledge that is developed from a more fundamental understanding of a system, possibly using first-principles knowledge. Such knowledge is referred to as deep, causal or model-based knowledge.[8]Hoc and Carlier[9]noted that symptomatic approaches may need to be supported by topographic approaches because symptoms can be defined in diverse terms. The converse is also true – shallow reasoning can be used abductively to generate causal hypotheses, and deductively to evaluate those hypotheses, in a topographical search.
Critics of CBR[who?]argue that it is an approach that acceptsanecdotal evidenceas its main operating principle. Without statistically relevant data for backing and implicit generalization, there is no guarantee that the generalization is correct. However, allinductive reasoningwhere data is too scarce for statistical relevance is inherently based on anecdotal evidence.
CBR traces its roots to the work ofRoger Schankand his students atYale Universityin the early 1980s. Schank's model of dynamic memory[10]was the basis for the earliest CBR systems:Janet Kolodner'sCYRUS[11]and Michael Lebowitz's IPP.[12]
Other schools of CBR and closely allied fields emerged in the 1980s, which directed at topics such as legal reasoning, memory-based reasoning (a way of reasoning from examples on massively parallel machines), and combinations of CBR with other reasoning methods. In the 1990s, interest in CBR grew internationally, as evidenced by the establishment of an International Conference on Case-Based Reasoning in 1995, as well as European, German, British, Italian, and other CBR workshops[which?].
CBR technology has resulted in the deployment of a number of successful systems, the earliest being Lockheed's CLAVIER,[13]a system for laying out composite parts to be baked in an industrial convection oven. CBR has been used extensively in applications such as the Compaq SMART system[14]and has found a major application area in the health sciences,[15]as well as in structural safety management.
There is recent work[which?][when?]that develops CBR within a statistical framework and formalizes case-based inference as a specific type of probabilistic inference. Thus, it becomes possible to produce case-based predictions equipped with a certain level of confidence.[16]One description of the difference between CBR and induction from instances is thatstatistical inferenceaims to find what tends to make cases similar, while CBR aims to encode what suffices to claim similarity.[17][full citation needed]
|
https://en.wikipedia.org/wiki/Case_based_reasoning
|
Acausal mapcan be defined as a network consisting of links or arcs between nodes or factors, such that a link between C and E means, in some sense, that someone believes or claims C has or had some causal influence on E.
This definition could cover diagrams representing causal connections between variables which are measured in a strictly quantitative way and would therefore also include closely related statistical models likeStructural Equation Models[1]andDirected Acyclic Graphs(DAGs).[2]However, the phrase “causal map” is usually reserved for qualitative or merely semi-quantitative maps. In this sense, causal maps can be seen as a type of concept map. Systems diagrams and Fuzzy Cognitive Maps[3]also fall under this definition. Causal maps have been used since the 1970s by researchers and practitioners in a range of disciplines from management science[4]to ecology,[5]employing a variety of methods. They are used for many purposes, for example:
Different kinds of causal maps can be distinguished particularly by the kind of information which can be encoded by the links and nodes. One important distinction is to what extent the links are intended to encode causation or (somebody’s) belief about causation.
Causal mapping is the process of constructing, summarising and drawing inferences from a causal map, and more broadly can refer to sets of techniques for doing this. While one group of such methods is actually called “causal mapping”, there are many similar methods which go by a wide variety of names.
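As a simple illustration of what constructing and drawing inferences from a causal map can mean in practice (the factors and signs below are invented), a causal map can be represented as a set of directed, optionally signed links between factors:

```python
# A causal map as a set of directed, signed links: (cause, effect, claimed influence).
# The factors and signs below are invented for illustration.
links = [
    ("advertising spend", "sales", "+"),
    ("sales", "revenue", "+"),
    ("price increase", "sales", "-"),
]

def effects_of(factor):
    """Factors that someone claims are directly influenced by `factor`."""
    return [(effect, sign) for cause, effect, sign in links if cause == factor]

def downstream(factor):
    """All factors reachable from `factor` along claimed causal links."""
    seen, stack = set(), [factor]
    while stack:
        for effect, _ in effects_of(stack.pop()):
            if effect not in seen:
                seen.add(effect)
                stack.append(effect)
    return seen

print(effects_of("sales"))               # [('revenue', '+')]
print(downstream("advertising spend"))   # {'sales', 'revenue'}
```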
The phrase “causal mapping” goes back at least to Robert Axelrod,[7]based in turn on Kelly’s personal construct theory .[14]The idea of wanting to understand the behaviour of actors in terms of internal ‘maps’ of the word which they carry around with them goes back further, to Kurt Lewin[15]and the field theorists.[16]Causal mapping in this sense is loosely based on "concept mapping" and “cognitive mapping”, and sometimes the three terms are used interchangeably, though the latter two are usually understood to be broader, including maps in which the links between factors are not necessarily causal and are therefore not causal maps.
Literature on the theory and practice of causal mapping includes a few canonical works[7]as well as book-length interdisciplinary overviews,[17][18]and guides to particular approaches.[19]
Insoftware testing, acause–effect graphis adirected graphthat maps a set of causes to a set of effects. The causes may be thought of as the input to the program, and the effects may be thought of as the output. Usually the graph shows the nodes representing the causes on the left side and the nodes representing the effects on the right side. There may be intermediate nodes in between that combine inputs using logical operators such as AND and OR.
Constraints may be added to the causes and effects. These are represented as edges labeled with the constraint symbol using a dashed line. For causes, valid constraint symbols are E (exclusive), O (one and only one), I (at least one), and R (Requires). The exclusive constraint states that at most one of the causes 1 and 2 can be true, i.e. both cannot be true simultaneously. The Inclusive (at least one) constraint states that at least one of the causes 1, 2 or 3 must be true, i.e. all cannot be false simultaneously. The one and only one (OaOO or simply O) constraint states that only one of the causes 1, 2 or 3 must be true. The Requires constraint states that if cause 1 is true, then cause 2 must be true, and it is impossible for 1 to be true and 2 to be false.
For effects, valid constraint symbol is M (Mask). The mask constraint states that if effect 1 is true then effect 2 is false. Note that the mask constraint relates to the effects and not the causes like the other constraints.
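A small sketch of how the constraint semantics described above might be checked for one assignment of truth values is shown below; the encoding as Python functions is an assumption made for illustration, not a standard tool.

```python
# Sketch: checking cause/effect constraints of a cause-effect graph for one
# assignment of truth values. The encoding as Python functions is an assumption.

def exclusive(*causes):          # E: at most one of the causes may be true
    return sum(causes) <= 1

def one_and_only_one(*causes):   # O: exactly one of the causes must be true
    return sum(causes) == 1

def at_least_one(*causes):       # I: the causes cannot all be false
    return any(causes)

def requires(c1, c2):            # R: if c1 is true then c2 must be true
    return (not c1) or c2

def mask(e1, e2):                # M: if effect e1 is true then effect e2 is false
    return (not e1) or (not e2)

# Example: causes c1, c2, c3 and effects e1, e2 for one test case.
c1, c2, c3 = True, False, False
e1, e2 = True, False
print(exclusive(c1, c2), one_and_only_one(c1, c2, c3),
      at_least_one(c1, c2, c3), requires(c1, c3), mask(e1, e2))
```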
The graph's direction is as follows:
The graph can always be rearranged so there is only one node between any input and any output. Seeconjunctive normal formanddisjunctive normal form.
A cause–effect graph is useful for generating a reduceddecision table.
|
https://en.wikipedia.org/wiki/Cause%E2%80%93effect_graph
|
Thedominance-based rough set approach(DRSA) is an extension ofrough set theoryformulti-criteria decision analysis(MCDA), introduced by Greco, Matarazzo and Słowiński.[1][2][3]The main change compared to the classicalrough setsis the substitution for the indiscernibility relation by a dominance relation, which permits one to deal with inconsistencies typical to consideration ofcriteriaandpreference-ordered decision classes.
Multicriteria classification(sorting) is one of the problems considered withinMCDAand can be stated as follows: given a set of objects evaluated by a set ofcriteria(attributes with preference-order domains), assign these objects to some pre-defined and preference-ordered decision classes, such that each object is assigned to exactly one class. Due to the preference ordering, improvement of evaluations of an object on the criteria should not worsen its class assignment. The sorting problem is very similar to the problem ofclassification, however, in the latter, the objects are evaluated by regular attributes and the decision classes are not necessarily preference ordered. The problem of multicriteria classification is also referred to asordinal classification problem with monotonicity constraintsand often appears in real-life application whenordinalandmonotoneproperties follow from the domain knowledge about the problem.
As an illustrative example, consider the problem of evaluation in a high school. The director of the school wants to assign students (objects) to three classes:bad,mediumandgood(notice that classgoodis preferred tomediumandmediumis preferred tobad). Each student is described by three criteria: level in Physics, Mathematics and Literature, each taking one of three possible valuesbad,mediumandgood. Criteria are preference-ordered and improving the level from one of the subjects should not result in worse global evaluation (class).
As a more serious example, consider classification of bank clients, from the viewpoint of bankruptcy risk, into classessafeandrisky. This may involve such characteristics as "return on equity(ROE)", "return on investment(ROI)" and "return on sales(ROS)". The domains of these attributes are not simply ordered but involve a preference order since, from the viewpoint of bank managers, greater values of ROE, ROI or ROS are better for clients being analysed for bankruptcy risk . Thus, these attributes are criteria. Neglecting this information inknowledge discoverymay lead to wrong conclusions.
In DRSA, data are often presented using a particular form ofdecision table. Formally, a DRSA decision table is a 4-tupleS=⟨U,Q,V,f⟩{\displaystyle S=\langle U,Q,V,f\rangle }, whereU{\displaystyle U\,\!}is a finite set of objects,Q{\displaystyle Q\,\!}is a finite set of criteria,V=⋃q∈QVq{\displaystyle V=\bigcup {}_{q\in Q}V_{q}}whereVq{\displaystyle V_{q}\,\!}is the domain of the criterionq{\displaystyle q\,\!}andf:U×Q→V{\displaystyle f\colon U\times Q\to V}is aninformation functionsuch thatf(x,q)∈Vq{\displaystyle f(x,q)\in V_{q}}for every(x,q)∈U×Q{\displaystyle (x,q)\in U\times Q}. The setQ{\displaystyle Q\,\!}is divided intocondition criteria(setC≠∅{\displaystyle C\neq \emptyset }) and thedecision criterion(class)d{\displaystyle d\,\!}. Notice, thatf(x,q){\displaystyle f(x,q)\,\!}is an evaluation of objectx{\displaystyle x\,\!}on criterionq∈C{\displaystyle q\in C}, whilef(x,d){\displaystyle f(x,d)\,\!}is the class assignment (decision value) of the object. An example of decision table is shown in Table 1 below.
It is assumed that the domain of a criterionq∈Q{\displaystyle q\in Q}is completelypreorderedby anoutranking relation⪰q{\displaystyle \succeq _{q}};x⪰qy{\displaystyle x\succeq _{q}y}means thatx{\displaystyle x\,\!}is at least as good as (outranks)y{\displaystyle y\,\!}with respect to the criterionq{\displaystyle q\,\!}. Without loss of generality, we assume that the domain ofq{\displaystyle q\,\!}is a subset ofreals,Vq⊆R{\displaystyle V_{q}\subseteq \mathbb {R} }, and that the outranking relation is a simple order between real numbers≥{\displaystyle \geq \,\!}such that the following relation holds:x⪰qy⟺f(x,q)≥f(y,q){\displaystyle x\succeq _{q}y\iff f(x,q)\geq f(y,q)}. This relation is straightforward for gain-type ("the more, the better") criterion, e.g.company profit. For cost-type ("the less, the better") criterion, e.g.product price, this relation can be satisfied by negating the values fromVq{\displaystyle V_{q}\,\!}.
LetT={1,…,n}{\displaystyle T=\{1,\ldots ,n\}\,\!}. The domain of decision criterion,Vd{\displaystyle V_{d}\,\!}consist ofn{\displaystyle n\,\!}elements (without loss of generality we assumeVd=T{\displaystyle V_{d}=T\,\!}) and induces a partition ofU{\displaystyle U\,\!}inton{\displaystyle n\,\!}classesCl={Clt,t∈T}{\displaystyle {\textbf {Cl}}=\{Cl_{t},t\in T\}}, whereClt={x∈U:f(x,d)=t}{\displaystyle Cl_{t}=\{x\in U\colon f(x,d)=t\}}. Each objectx∈U{\displaystyle x\in U}is assigned to one and only one classClt,t∈T{\displaystyle Cl_{t},t\in T}. The classes are preference-ordered according to an increasing order of class indices, i.e. for allr,s∈T{\displaystyle r,s\in T}such thatr≥s{\displaystyle r\geq s\,\!}, the objects fromClr{\displaystyle Cl_{r}\,\!}are strictly preferred to the objects fromCls{\displaystyle Cl_{s}\,\!}. For this reason, we can consider theupward and downward unions of classes, defined respectively, as:
We say thatx{\displaystyle x\,\!}dominatesy{\displaystyle y\,\!}with respect toP⊆C{\displaystyle P\subseteq C}, denoted byxDpy{\displaystyle xD_{p}y\,\!}, ifx{\displaystyle x\,\!}is better thany{\displaystyle y\,\!}on every criterion fromP{\displaystyle P\,\!},x⪰qy,∀q∈P{\displaystyle x\succeq _{q}y,\,\forall q\in P}. For eachP⊆C{\displaystyle P\subseteq C}, the dominance relationDP{\displaystyle D_{P}\,\!}isreflexiveandtransitive, i.e. it is apartial pre-order. GivenP⊆C{\displaystyle P\subseteq C}andx∈U{\displaystyle x\in U}, let
representP-dominatingset andP-dominatedset with respect tox∈U{\displaystyle x\in U}, respectively.
The key idea of therough setphilosophy is approximation of one knowledge by another knowledge. In DRSA, the knowledge being approximated is a collection of upward and downward unions of decision classes and the "granules of knowledge" used for approximation areP-dominating andP-dominated sets.
TheP-lowerand theP-upper approximationofClt≥,t∈T{\displaystyle Cl_{t}^{\geq },t\in T}with respect toP⊆C{\displaystyle P\subseteq C}, denoted asP_(Clt≥){\displaystyle {\underline {P}}(Cl_{t}^{\geq })}andP¯(Clt≥){\displaystyle {\overline {P}}(Cl_{t}^{\geq })}, respectively, are defined as:
Analogously, theP-lower and theP-upper approximation ofClt≤,t∈T{\displaystyle Cl_{t}^{\leq },t\in T}with respect toP⊆C{\displaystyle P\subseteq C}, denoted asP_(Clt≤){\displaystyle {\underline {P}}(Cl_{t}^{\leq })}andP¯(Clt≤){\displaystyle {\overline {P}}(Cl_{t}^{\leq })}, respectively, are defined as:
Lower approximations group the objects whichcertainlybelong to class unionClt≥{\displaystyle Cl_{t}^{\geq }}(respectivelyClt≤{\displaystyle Cl_{t}^{\leq }}). This certainty comes from the fact, that objectx∈U{\displaystyle x\in U}belongs to the lower approximationP_(Clt≥){\displaystyle {\underline {P}}(Cl_{t}^{\geq })}(respectivelyP_(Clt≤){\displaystyle {\underline {P}}(Cl_{t}^{\leq })}), if no other object inU{\displaystyle U\,\!}contradicts this claim, i.e. every objecty∈U{\displaystyle y\in U}whichP-dominatesx{\displaystyle x\,\!}, also belong to the class unionClt≥{\displaystyle Cl_{t}^{\geq }}(respectivelyClt≤{\displaystyle Cl_{t}^{\leq }}). Upper approximations group the objects whichcould belongtoClt≥{\displaystyle Cl_{t}^{\geq }}(respectivelyClt≤{\displaystyle Cl_{t}^{\leq }}), since objectx∈U{\displaystyle x\in U}belongs to the upper approximationP¯(Clt≥){\displaystyle {\overline {P}}(Cl_{t}^{\geq })}(respectivelyP¯(Clt≤){\displaystyle {\overline {P}}(Cl_{t}^{\leq })}), if there exist another objecty∈U{\displaystyle y\in U}P-dominated byx{\displaystyle x\,\!}from class unionClt≥{\displaystyle Cl_{t}^{\geq }}(respectivelyClt≤{\displaystyle Cl_{t}^{\leq }}).
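The definitions above can be made concrete with a small sketch. The decision table below is invented (two gain-type criteria and three classes) and the function names are ad hoc, but the P-dominating and P-dominated sets and the lower and upper approximations follow the formulas given in this section.

```python
# Sketch of DRSA approximations for a tiny, invented decision table.
# Rows: object -> (evaluations on gain-type criteria, decision class index).
table = {
    "x1": ((3, 3), 3), "x2": ((2, 3), 2), "x3": ((2, 2), 2),
    "x4": ((3, 2), 1), "x5": ((1, 2), 1), "x6": ((1, 1), 1),
}
U = set(table)
classes = sorted({cls for _, cls in table.values()})

def dominates(x, y):                      # x D_P y: x at least as good on every criterion
    return all(a >= b for a, b in zip(table[x][0], table[y][0]))

def dom_plus(x):   return {y for y in U if dominates(y, x)}   # D_P^+(x): objects dominating x
def dom_minus(x):  return {y for y in U if dominates(x, y)}   # D_P^-(x): objects dominated by x

def upward(t):     return {x for x in U if table[x][1] >= t}  # Cl_t^>=
def downward(t):   return {x for x in U if table[x][1] <= t}  # Cl_t^<=

def lower_up(t):   return {x for x in U if dom_plus(x) <= upward(t)}      # subset test
def upper_up(t):   return {x for x in U if dom_minus(x) & upward(t)}      # non-empty intersection
def lower_down(t): return {x for x in U if dom_minus(x) <= downward(t)}
def upper_down(t): return {x for x in U if dom_plus(x) & downward(t)}

for t in classes[1:]:
    print(f"Cl_{t}>=: lower {sorted(lower_up(t))}  upper {sorted(upper_up(t))}")
for t in classes[:-1]:
    print(f"Cl_{t}<=: lower {sorted(lower_down(t))}  upper {sorted(upper_down(t))}")
```

In this toy table the object with evaluations (3, 2) is assigned a worse class than an object it dominates, so it falls into the boundary regions rather than the lower approximations, mirroring the kind of inconsistency discussed in the worked example later in this section.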
TheP-lower andP-upper approximations defined as above satisfy the following properties for allt∈T{\displaystyle t\in T}and for anyP⊆C{\displaystyle P\subseteq C}:
TheP-boundaries(P-doubtful regions) ofClt≥{\displaystyle Cl_{t}^{\geq }}andClt≤{\displaystyle Cl_{t}^{\leq }}are defined as:
The ratio
defines thequality of approximationof the partitionCl{\displaystyle {\textbf {Cl}}\,\!}into classes by means of the set of criteriaP{\displaystyle P\,\!}. This ratio expresses the relation between all theP-correctly classified objects and all the objects in the table.
Every minimal subsetP⊆C{\displaystyle P\subseteq C}such thatγP(Cl)=γC(Cl){\displaystyle \gamma _{P}(\mathbf {Cl} )=\gamma _{C}(\mathbf {Cl} )\,\!}is called areductofC{\displaystyle C\,\!}and is denoted byREDCl(P){\displaystyle RED_{\mathbf {Cl} }(P)}. A decision table may have more than one reduct. The intersection of all reducts is known as thecore.
On the basis of the approximations obtained by means of the dominance relations, it is possible to induce a generalized description of the preferential information contained in the decision table, in terms ofdecision rules. The decision rules are expressions of the formif[condition]then[consequent], that represent a form of dependency between condition criteria and decision criteria. Procedures for generating decision rules from a decision table use an inductive learning principle. We can distinguish three types of rules: certain, possible and approximate. Certain rules are generated from lower approximations of unions of classes; possible rules are generated from upper approximations of unions of classes and approximate rules are generated from boundary regions.
Certain rules have the following form:
iff(x,q1)≥r1{\displaystyle f(x,q_{1})\geq r_{1}\,\!}andf(x,q2)≥r2{\displaystyle f(x,q_{2})\geq r_{2}\,\!}and…f(x,qp)≥rp{\displaystyle \ldots f(x,q_{p})\geq r_{p}\,\!}thenx∈Clt≥{\displaystyle x\in Cl_{t}^{\geq }}
iff(x,q1)≤r1{\displaystyle f(x,q_{1})\leq r_{1}\,\!}andf(x,q2)≤r2{\displaystyle f(x,q_{2})\leq r_{2}\,\!}and…f(x,qp)≤rp{\displaystyle \ldots f(x,q_{p})\leq r_{p}\,\!}thenx∈Clt≤{\displaystyle x\in Cl_{t}^{\leq }}
Possible rules have a similar syntax; however, theconsequentpart of the rule has the form:x{\displaystyle x\,\!}could belong toClt≥{\displaystyle Cl_{t}^{\geq }}or the form:x{\displaystyle x\,\!}could belong toClt≤{\displaystyle Cl_{t}^{\leq }}.
Finally, approximate rules have the following syntax:
iff(x,q1)≥r1{\displaystyle f(x,q_{1})\geq r_{1}\,\!}andf(x,q2)≥r2{\displaystyle f(x,q_{2})\geq r_{2}\,\!}and…f(x,qk)≥rk{\displaystyle \ldots f(x,q_{k})\geq r_{k}\,\!}andf(x,qk+1)≤rk+1{\displaystyle f(x,q_{k+1})\leq r_{k+1}\,\!}andf(x,qk+2)≤rk+2{\displaystyle f(x,q_{k+2})\leq r_{k+2}\,\!}and…f(x,qp)≤rp{\displaystyle \ldots f(x,q_{p})\leq r_{p}\,\!}thenx∈Cls∪Cls+1∪Clt{\displaystyle x\in Cl_{s}\cup Cl_{s+1}\cup Cl_{t}}
The certain, possible and approximate rules represent certain, possible and ambiguous knowledge extracted from the decision table.
Each decision rule should be minimal. Since a decision rule is an implication, by a minimal decision rule we understand such an implication that there is no other implication with an antecedent of at least the same weakness (in other words, rule using a subset of elementary conditions or/and weaker elementary conditions) and a consequent of at least the same strength (in other words, rule assigning objects to the same union or sub-union of classes).
A set of decision rules iscompleteif it is able to cover all objects from the decision table in such a way that consistent objects are re-classified to their original classes and inconsistent objects are classified to clusters of classes referring to this inconsistency. We callminimaleach set of decision rules that is complete and non-redundant, i.e. exclusion of any rule from this set makes it non-complete.
One of three induction strategies can be adopted to obtain a set of decision rules:[4]
The most popular rule induction algorithm for dominance-based rough set approach is DOMLEM,[5]which generates minimal set of rules.
Consider the following problem of high school students’ evaluations:
Each object (student) is described by three criteriaq1,q2,q3{\displaystyle q_{1},q_{2},q_{3}\,\!}, related to the levels in Mathematics, Physics and Literature, respectively. According to the decision attribute, the students are divided into three preference-ordered classes:Cl1={bad}{\displaystyle Cl_{1}=\{bad\}},Cl2={medium}{\displaystyle Cl_{2}=\{medium\}}andCl3={good}{\displaystyle Cl_{3}=\{good\}}. Thus, the following unions of classes were approximated:
Notice that evaluations of objectsx4{\displaystyle x_{4}\,\!}andx6{\displaystyle x_{6}\,\!}are inconsistent, becausex4{\displaystyle x_{4}\,\!}has better evaluations on all three criteria thanx6{\displaystyle x_{6}\,\!}but worse global score.
Therefore, lower approximations of class unions consist of the following objects:
Thus, only classesCl1≤{\displaystyle Cl_{1}^{\leq }}andCl2≥{\displaystyle Cl_{2}^{\geq }}cannot be approximated precisely. Their upper approximations are as follows:
while their boundary regions are:
Of course, sinceCl2≤{\displaystyle Cl_{2}^{\leq }}andCl3≥{\displaystyle Cl_{3}^{\geq }}are approximated precisely, we haveP¯(Cl2≤)=Cl2≤{\displaystyle {\overline {P}}(Cl_{2}^{\leq })=Cl_{2}^{\leq }},P¯(Cl3≥)=Cl3≥{\displaystyle {\overline {P}}(Cl_{3}^{\geq })=Cl_{3}^{\geq }}andBnP(Cl2≤)=BnP(Cl3≥)=∅{\displaystyle Bn_{P}(Cl_{2}^{\leq })=Bn_{P}(Cl_{3}^{\geq })=\emptyset }
The following minimal set of 10 rules can be induced from the decision table:
The last rule is approximate, while the rest are certain.
The other two problems considered withinmulti-criteria decision analysis,multicriteria choiceandrankingproblems, can also be solved using dominance-based rough set approach. This is done by converting the decision table intopairwise comparison table(PCT).[1]
The definitions of rough approximations are based on a strict application of the dominance principle. However, when defining non-ambiguous objects, it is reasonable to accept a limited proportion of negative examples, particularly for large decision tables. Such an extended version of DRSA is called theVariable-Consistency DRSAmodel (VC-DRSA).[6]
In real-life data, particularly for large datasets, the notions of rough approximations were found to be excessively restrictive. Therefore, an extension of DRSA based on a stochastic model (Stochastic DRSA), which allows inconsistencies to some degree, has been introduced.[7]Having stated the probabilistic model for ordinal classification problems with monotonicity constraints, the concepts of lower approximations are extended to the stochastic case. The method is based on estimating the conditional probabilities using the nonparametricmaximum likelihoodmethod, which leads to the problem ofisotonic regression.
Stochastic dominance-based rough sets can also be regarded as a sort of variable-consistency model.
4eMka2 is adecision support systemfor multiple criteria classification problems based on dominance-based rough sets (DRSA). JAMM is a much more advanced successor of 4eMka2. Both systems are freely available for non-profit purposes on theLaboratory of Intelligent Decision Support Systems (IDSS)website.
|
https://en.wikipedia.org/wiki/Dominance-based_rough_set_approach
|
AKarnaugh map(KMorK-map) is a diagram that can be used to simplify aBoolean algebraexpression.Maurice Karnaughintroduced the technique in 1953[1][2]as a refinement ofEdward W. Veitch's 1952Veitch chart,[3][4]which itself was a rediscovery ofAllan Marquand's 1881logical diagram[5][6]orMarquand diagram.[4]They are also known asMarquand–Veitch diagrams,[4]Karnaugh–Veitch (KV) maps, and (rarely)Svoboda charts.[7]An early advance in the history offormal logicmethodology, Karnaugh maps remain relevant in the digital age, especially in the fields oflogical circuitdesign anddigital engineering.[4]
A Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition capability.[1]It also permits the rapid identification and elimination of potentialrace conditions.[clarification needed]
The required Boolean results are transferred from atruth tableonto a two-dimensional grid where, in Karnaugh maps, the cells are ordered inGray code,[8][4]and each cell position represents one combination of input conditions. Cells are also known as minterms, while each cell value represents the corresponding output value of the Boolean function. Optimal groups of 1s or 0s are identified, which represent the terms of acanonical formof the logic in the original truth table.[9]These terms can be used to write a minimal Boolean expression representing the required logic.
Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using the minimal number oflogic gates. Asum-of-products expression(SOP) can always be implemented usingAND gatesfeeding into anOR gate, and aproduct-of-sums expression(POS) leads to OR gates feeding an AND gate. The POS expression gives a complement of the function (if F is the function so its complement will be F').[10]Karnaugh maps can also be used to simplify logic expressions in software design. Boolean conditions, as used for example inconditional statements, can get very complicated, which makes the code difficult to read and to maintain. Once minimised, canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic operators.[11]
Karnaugh maps are used to facilitate the simplification ofBoolean algebrafunctions. For example, consider the Boolean function described by the followingtruth table.
Following are two different notations describing the same function in unsimplified Boolean algebra, using the Boolean variablesA,B,C,Dand their inverses.
In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows, and the Karnaugh map has 16 positions. The Karnaugh map is therefore arranged in a 4 × 4 grid.
The row and column indices (shown across the top and down the left side of the Karnaugh map) are ordered inGray coderather than binary numerical order. Gray code ensures that only one variable changes between each pair of adjacent cells. Each cell of the completed Karnaugh map contains a binary digit representing the function's output for that combination of inputs.
After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms — acanonical form— for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify the expression. The minterms ('minimal terms') for the final expression are found by encircling groups of 1s in the map. Minterm groups must be rectangular and must have an area that is a power of two (i.e., 1, 2, 4, 8...). Minterm rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one larger. The optimal groupings in the example below are marked by the green, red and blue lines, and the red and green groups overlap. The red group is a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is indicated in brown.
The cells are often denoted by a shorthand which describes the logical value of the inputs that the cell covers. For example,ADwould mean a cell which covers the 2x2 area whereAandDare true, i.e. the cells numbered 13, 9, 15, 11 in the diagram above. On the other hand,AD̄would mean the cells whereAis true andDis false (that is,D̄is true).
The grid istoroidallyconnected, which means that rectangular groups can wrap across the edges (see picture). Cells on the extreme right are actually 'adjacent' to those on the far left, in the sense that the corresponding input values only differ by one bit; similarly, so are those at the very top and those at the bottom. Therefore,AD̄can be a valid term—it includes cells 12 and 8 at the top, and wraps to the bottom to include cells 10 and 14—as isB̄D̄, which includes the four corners.
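This adjacency criterion can be stated compactly: two cells are adjacent on the (toroidal) map exactly when their minterm numbers differ in a single bit. A short check of that property, using the cell numbering from the example above:

```python
# Two K-map cells are adjacent on the toroidal grid (including wrap-around across
# the edges) exactly when their minterm numbers differ in a single bit of ABCD.

def adjacent(m1, m2):
    """True if minterms m1 and m2 differ in exactly one input bit."""
    diff = m1 ^ m2
    return diff != 0 and (diff & (diff - 1)) == 0   # exactly one bit set

print(adjacent(12, 8))    # True: 1100 vs 1000 differ only in the B bit
print(adjacent(12, 14))   # True: 1100 vs 1110 differ only in the C bit
print(adjacent(0, 10))    # False: 0000 vs 1010 differ in two bits (diagonal corners)
```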
Once the Karnaugh map has been constructed and the adjacent 1s linked by rectangular and square boxes, the algebraic minterms can be found by examining which variables stay the same within each box.
For the red grouping:
Thus the first minterm in the Boolean sum-of-products expression isAC̄.
For the green grouping,AandBmaintain the same state, whileCandDchange.Bis 0 and has to be negated before it can be included. The second term is thereforeAB̄. Note that it is acceptable that the green grouping overlaps with the red one.
In the same way, the blue grouping gives the termBCD̄.
The solutions of each grouping are combined: the normal form of the circuit isAC¯+AB¯+BCD¯{\displaystyle A{\overline {C}}+A{\overline {B}}+BC{\overline {D}}}.
Thus the Karnaugh map has guided a simplification of
It would also have been possible to derive this simplification by carefully applying theaxioms of Boolean algebra, but the time it takes to do that grows exponentially with the number of terms.
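Assuming the example function is the one whose minterms are the cells covered by the red, green and blue groupings (i.e. 6, 8, 9, 10, 11, 12, 13 and 14; this is an assumption, since the truth table itself is not reproduced here), the simplification can be confirmed by exhaustive enumeration:

```python
from itertools import product

# Assumed minterm set for the example function: the cells covered by the
# red (A and not C), green (A and not B) and blue (B and C and not D) groups.
minterms = {6, 8, 9, 10, 11, 12, 13, 14}

def f_original(a, b, c, d):
    """Function defined directly by its minterm numbers."""
    return (a << 3 | b << 2 | c << 1 | d) in minterms

def f_minimized(a, b, c, d):
    """Minimized sum of products read off the Karnaugh map groupings."""
    return (a and not c) or (a and not b) or (b and c and not d)

assert all(f_original(*bits) == bool(f_minimized(*bits))
           for bits in product((0, 1), repeat=4))
print("minimized SOP matches the truth table on all 16 input combinations")
```

Enumerating all sixteen input combinations takes constant time here, whereas the algebraic derivation mentioned above grows quickly with the number of terms.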
The inverse of a function is solved in the same way by grouping the 0s instead.[nb 1]
The three terms to cover the inverse are all shown with grey boxes with different colored borders:
This yields the inverse:
Through the use ofDe Morgan's laws, theproduct of sumscan be determined:
Karnaugh maps also allow easier minimizations of functions whose truth tables include "don't care" conditions. A "don't care" condition is a combination of inputs for which the designer doesn't care what the output is. Therefore, "don't care" conditions can either be included in or excluded from any rectangular group, whichever makes it larger. They are usually indicated on the map with a dash or X.
The example on the right is the same as the example above but with the value off(1,1,1,1) replaced by a "don't care". This allows the red term to expand all the way down and, thus, removes the green term completely.
This yields the new minimum equation:
Note that the first term is justA, notAC̄. In this case, the don't care has dropped a term (the green rectangle), simplified another (the red one), and removed the race hazard (removing the yellow term as shown in the following section on race hazards).
The inverse case is simplified as follows:
Through the use ofDe Morgan's laws, theproduct of sumscan be determined:
Karnaugh maps are useful for detecting and eliminatingrace conditions. Race hazards are very easy to spot using a Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions circumscribed on the map. However, because of the nature of Gray coding,adjacenthas a special definition explained above – we're in fact moving on a torus, rather than a rectangle, wrapping around the top, bottom, and the sides.
Whether glitches will actually occur depends on the physical nature of the implementation, and whether we need to worry about it depends on the application. In clocked logic, it is enough that the logic settles on the desired value in time to meet the timing deadline. In our example, we are not considering clocked logic.
In our case, an additional term ofAD¯{\displaystyle A{\overline {D}}}would eliminate the potential race hazard, bridging between the green and blue output states or blue and red output states: this is shown as the yellow region (which wraps around from the bottom to the top of the right half) in the adjacent diagram.
The term isredundantin terms of the static logic of the system, but such redundant, orconsensus terms, are often needed to assure race-free dynamic performance.
Similarly, an additional term ofA¯D{\displaystyle {\overline {A}}D}must be added to the inverse to eliminate another potential race hazard. Applying De Morgan's laws creates another product of sums expression forf, but with a new factor of(A+D¯){\displaystyle \left(A+{\overline {D}}\right)}.
The following are all the possible 2-variable, 2 × 2 Karnaugh maps. Listed with each are the minterms as a function of∑m(){\textstyle \sum m()}and the race-hazard-free (seeprevious section) minimum equation. A minterm is defined as an expression that gives the most minimal form of expression of the mapped variables. All possible horizontal and vertical interconnected blocks can be formed. These blocks must have a size that is a power of 2 (1, 2, 4, 8, 16, 32, ...). These expressions create a minimal logical mapping of the minimal logic variable expressions for the binary expressions to be mapped. Here are all the blocks with one field.
A block can be continued across the bottom, top, left, or right of the chart. It can even wrap beyond the edge of the chart for variable minimization. This is because each logic variable corresponds to a vertical column or a horizontal row. A visualization of the k-map can be considered cylindrical: the fields at the left and right edges are adjacent, and so are the top and bottom. K-maps for four variables must be depicted as a donut or torus shape, because the four corners of the square drawn by the k-map are adjacent. Still more complex maps are needed for 5 variables and more.
Related graphical minimization methods include:
|
https://en.wikipedia.org/wiki/Karnaugh-Veitch_diagram
|
Many-valued logic(alsomulti-ormultiple-valued logic) is apropositional calculusin which there are more than twotruth values. Traditionally, inAristotle'slogical calculus, there were only two possible values (i.e., "true" and "false") for anyproposition. Classicaltwo-valued logicmay be extended ton-valued logicforngreater than 2. Those most popular in the literature arethree-valued(e.g.,Łukasiewicz'sandKleene's, which accept the values "true", "false", and "unknown"),four-valued,nine-valued, thefinite-valued(finitely-many valued) with more than three values, and theinfinite-valued(infinitely-many-valued), such asfuzzy logicandprobability logic.
It is sometimes claimed that Aristotle was the first known classical logician who did not fully accept thelaw of excluded middle, but this is wrong (ironically, Aristotle is also generally considered to be the first classical logician and the "father of [two-valued] logic"[1]). In fact, Aristotle didnotcontest the universality of the law of excluded middle, but the universality of the bivalence principle: he admitted that this principle did not always apply to future events (De Interpretatione,ch. IX),[2]but he did not create a system of multi-valued logic to explain this isolated remark. Until the coming of the 20th century, later logicians followedAristotelian logic, which includes or assumes thelaw of the excluded middle.
The 20th century brought back the idea of multi-valued logic. The Polish logician and philosopherJan Łukasiewiczbegan to create systems of many-valued logic in 1920, using a third value, "possible", to deal with Aristotle'sparadox of the sea battle. Meanwhile, the American mathematician,Emil L. Post(1921), also introduced the formulation of additional truth degrees withn≥ 2, wherenare the truth values. Later, Jan Łukasiewicz andAlfred Tarskitogether formulated a logic onntruth values wheren≥ 2. In 1932,Hans Reichenbachformulated a logic of many truth values wheren→∞.Kurt Gödelin 1932 showed thatintuitionistic logicis not afinitely-many valued logic, and defined a system ofGödel logicsintermediate betweenclassicaland intuitionistic logic; such logics are known asintermediate logics.
Kleene's "(strong) logic of indeterminacy"K3(sometimesK3S{\displaystyle K_{3}^{S}}) andPriest's "logic of paradox" add a third "undefined" or "indeterminate" truth valueI. The truth functions fornegation(¬),conjunction(∧),disjunction(∨),implication(→K), andbiconditional(↔K) are given by:[3]
The difference between the two logics lies in howtautologiesare defined. InK3onlyTis adesignated truth value, while inP3bothTandIare (a logical formula is considered a tautology if it evaluates to a designated truth value). In Kleene's logicIcan be interpreted as being "underdetermined", being neither true nor false, while in Priest's logicIcan be interpreted as being "overdetermined", being both true and false.K3does not have any tautologies, whileP3has the same tautologies as classical two-valued logic.[4]
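A sketch of the strong Kleene connectives, encoding F, I and T as 0, ½ and 1 (a common convention, used here only for illustration), makes the role of the designated values easy to check:

```python
# Strong Kleene connectives over the truth values F = 0, I = 1/2, T = 1.
F, I, T = 0.0, 0.5, 1.0
VALUES = (F, I, T)

def neg(a):     return 1 - a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)
def impl(a, b): return max(1 - a, b)            # a ->K b, i.e. (not a) or b
def iff(a, b):  return min(impl(a, b), impl(b, a))

def is_tautology(formula, designated):
    """Check a one-variable formula against a set of designated truth values."""
    return all(formula(a) in designated for a in VALUES)

lem = lambda a: disj(a, neg(a))                 # law of excluded middle: A or not-A
print(is_tautology(lem, designated={T}))        # False: in K3 only T is designated
print(is_tautology(lem, designated={T, I}))     # True:  in P3 both T and I are designated
```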
Another logic is Dmitry Bochvar's "internal" three-valued logicB3I{\displaystyle B_{3}^{I}}, also called Kleene's weak three-valued logic. Except for negation and biconditional, its truth tables are all different from the above.[5]
The intermediate truth value in Bochvar's "internal" logic can be described as "contagious" because it propagates in a formula regardless of the value of any other variable.[5]
Belnap's logicB4combinesK3andP3. The overdetermined truth value is here denoted asBand the underdetermined truth value asN.
In 1932Gödeldefined[6]a familyGk{\displaystyle G_{k}}of many-valued logics, with finitely many truth values0,1k−1,2k−1,…,k−2k−1,1{\displaystyle 0,{\tfrac {1}{k-1}},{\tfrac {2}{k-1}},\ldots ,{\tfrac {k-2}{k-1}},1}, for exampleG3{\displaystyle G_{3}}has the truth values0,12,1{\displaystyle 0,{\tfrac {1}{2}},1}andG4{\displaystyle G_{4}}has0,13,23,1{\displaystyle 0,{\tfrac {1}{3}},{\tfrac {2}{3}},1}. In a similar manner he defined a logic with infinitely many truth values,G∞{\displaystyle G_{\infty }}, in which the truth values are all thereal numbersin the interval[0,1]{\displaystyle [0,1]}. The designated truth value in these logics is 1.
The conjunction∧{\displaystyle \wedge }and the disjunction∨{\displaystyle \vee }are defined respectively as theminimumandmaximumof the operands:
Negation¬G{\displaystyle \neg _{G}}and implication→G{\displaystyle {\xrightarrow[{G}]{}}}are defined as follows:
Gödel logics are completely axiomatisable, that is to say it is possible to define a logical calculus in which all tautologies are provable. The implication above is the uniqueHeyting implicationdefined by the fact that the suprema and minima operations form a complete lattice with an infinite distributive law, which defines a uniquecomplete Heyting algebrastructure on the lattice.
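Under the usual presentation of these connectives (a sketch: conjunction and disjunction as minimum and maximum, implication returning 1 when the antecedent does not exceed the consequent and the consequent's value otherwise, and negation defined as implication into 0):

```python
# Goedel connectives on truth values in [0, 1]; the designated value is 1.
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)
def impl(a, b): return 1.0 if a <= b else b
def neg(a):     return impl(a, 0.0)             # 1 if a == 0, otherwise 0

G3 = (0.0, 0.5, 1.0)                            # truth values of the logic G3
print([disj(a, neg(a)) for a in G3])            # [1.0, 0.5, 1.0]: excluded middle can fail
print(all(impl(a, a) == 1.0 for a in G3))       # True: a -> a always takes the designated value
```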
Implication→L{\displaystyle {\xrightarrow[{L}]{}}}and negation¬L{\displaystyle {\underset {L}{\neg }}}were defined byJan Łukasiewiczthrough the following functions:
At first Łukasiewicz used these definitions in 1920 for his three-valued logicL3{\displaystyle L_{3}}, with truth values0,12,1{\displaystyle 0,{\frac {1}{2}},1}. In 1922 he developed a logic with infinitely many valuesL∞{\displaystyle L_{\infty }}, in which the truth values spanned the real numbers in the interval[0,1]{\displaystyle [0,1]}. In both cases the designated truth value was 1.[7]
By adopting truth values defined in the same way as for Gödel logics0,1v−1,2v−1,…,v−2v−1,1{\displaystyle 0,{\tfrac {1}{v-1}},{\tfrac {2}{v-1}},\ldots ,{\tfrac {v-2}{v-1}},1}, it is possible to create a finitely-valued family of logicsLv{\displaystyle L_{v}}, the abovementionedL∞{\displaystyle L_{\infty }}and the logicLℵ0{\displaystyle L_{\aleph _{0}}}, in which the truth values are given by therational numbersin the interval[0,1]{\displaystyle [0,1]}. The set of tautologies inL∞{\displaystyle L_{\infty }}andLℵ0{\displaystyle L_{\aleph _{0}}}is identical.
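A minimal sketch of the standard Łukasiewicz connectives on [0, 1] (implication min(1, 1 − u + v) and negation 1 − u); the derived strong conjunction and disjunction are included only for illustration:

```python
# Łukasiewicz (L∞) connectives on truth values in [0, 1]; the designated value is 1.
def l_impl(u, v):
    return min(1.0, 1.0 - u + v)

def l_neg(u):
    return 1.0 - u

# Derived strong conjunction and disjunction, shown for illustration:
def l_strong_conj(u, v):          # u ⊗ v = max(0, u + v − 1)
    return max(0.0, u + v - 1.0)

def l_strong_disj(u, v):          # u ⊕ v = min(1, u + v)
    return min(1.0, u + v)

print(l_impl(0.5, 0.5))        # 1.0 -- so p → p is a tautology in L3
print(l_strong_conj(0.5, 0.5)) # 0.0
```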
In product logic we have truth values in the interval[0,1]{\displaystyle [0,1]}, a conjunction⊙{\displaystyle \odot }and an implication→Π{\displaystyle {\xrightarrow[{\Pi }]{}}}, defined as follows[8]
Additionally there is a negative designated value0¯{\displaystyle {\overline {0}}}that denotes the concept offalse. Through this value it is possible to define a negation¬Π{\displaystyle {\underset {\Pi }{\neg }}}and an additional conjunction∧Π{\displaystyle {\underset {\Pi }{\wedge }}}as follows:
and thenu∧Πv=min{u,v}{\displaystyle u\mathbin {\underset {\Pi }{\wedge }} v=\min\{u,v\}}.
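A minimal sketch of the standard product-logic operations (conjunction as the ordinary product, implication u →Π v = 1 if u ≤ v and v/u otherwise), with the negation and min-conjunction derived from the falsity constant as described above; the names are illustrative:

```python
# Product logic connectives on truth values in [0, 1].
def p_conj(u, v):                 # strong conjunction u ⊙ v
    return u * v

def p_impl(u, v):                 # u →Π v = 1 if u <= v, else v / u
    return 1.0 if u <= v else v / u

def p_neg(u):                     # ¬Π u = u →Π 0
    return p_impl(u, 0.0)

def p_min_conj(u, v):             # u ∧Π v = u ⊙ (u →Π v) = min(u, v)
    return p_conj(u, p_impl(u, v))

print(p_impl(0.8, 0.4))          # 0.5
print(p_neg(0.0), p_neg(0.3))    # 1.0 0.0
print(p_min_conj(0.8, 0.4))      # 0.4 == min(0.8, 0.4)
```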
In 1921Postdefined a family of logicsPm{\displaystyle P_{m}}with (as inLv{\displaystyle L_{v}}andGk{\displaystyle G_{k}}) the truth values0,1m−1,2m−1,…,m−2m−1,1{\displaystyle 0,{\tfrac {1}{m-1}},{\tfrac {2}{m-1}},\ldots ,{\tfrac {m-2}{m-1}},1}. Negation¬P{\displaystyle {\underset {P}{\neg }}}and conjunction∧P{\displaystyle {\underset {P}{\wedge }}}and disjunction∨P{\displaystyle {\underset {P}{\vee }}}are defined as follows:
In 1951, Alan Rose defined another family of logics for systems whose truth-values formlattices.[9]
Logics are usually systems intended to codify rules for preserving somesemanticproperty of propositions across transformations. In classicallogic, this property is "truth." In a valid argument, the truth of the derived proposition is guaranteed if the premises are jointly true, because the application of valid steps preserves the property. However, that property doesn't have to be that of "truth"; instead, it can be some other concept.
Multi-valued logics are intended to preserve the property of designationhood (or being designated). Since there are more than two truth values, rules of inference may be intended to preserve more than just whichever corresponds (in the relevant sense) to truth. For example, in a three-valued logic, sometimes the two greatest truth-values (when they are represented as e.g. positive integers) are designated and the rules of inference preserve these values. Precisely, a valid argument will be such that the value of the premises taken jointly will always be less than or equal to the conclusion.
For example, the preserved property could bejustification, the foundational concept ofintuitionistic logic. Thus, a proposition is not true or false; instead, it is justified or flawed. A key difference between justification and truth, in this case, is that thelaw of excluded middledoesn't hold: a proposition that is not flawed is not necessarily justified; instead, it's only not proven that it's flawed. The key difference is the determinacy of the preserved property: One may prove thatPis justified, thatPis flawed, or be unable to prove either. A valid argument preserves justification across transformations, so a proposition derived from justified propositions is still justified. However, there are proofs in classical logic that depend upon the law of excluded middle; since that law is not usable under this scheme, there are propositions that cannot be proven that way.
Functional completenessis a term used to describe a special property of finite logics and algebras. A logic's set of connectives is said to befunctionally completeoradequateif and only if its set of connectives can be used to construct a formula corresponding to every possibletruth function.[10]An adequate algebra is one in which every finite mapping of variables can be expressed by some composition of its operations.[11]
Classical logic: CL = ({0,1},¬, →, ∨, ∧, ↔) is functionally complete, whereas noŁukasiewicz logicor infinitely many-valued logics has this property.[11][12]
We can define a finitely many-valued logic as being Ln({1, 2, ..., n}, ƒ1, ..., ƒm) where n ≥ 2 is a given natural number. Post (1921) proved that if a logic is able to produce a function of any mth-order model, then there is some corresponding combination of connectives in an adequate logic Ln that can produce a model of order m+1.[13]
Known applications of many-valued logic can be roughly classified into two groups.[14]The first group uses many-valued logic to solve binary problems more efficiently. For example, a well-known approach to represent a multiple-output Boolean function is to treat its output part as a single many-valued variable and convert it to a single-outputcharacteristic function(specifically, theindicator function). Other applications of many-valued logic include design ofprogrammable logic arrays(PLAs) with input decoders, optimization offinite-state machines, testing, and verification.
The second group targets the design of electronic circuits that employ more than two discrete levels of signals, such as many-valued memories, arithmetic circuits, andfield programmable gate arrays(FPGAs). Many-valued circuits have a number of theoretical advantages over standard binary circuits. For example, the interconnect on and off chip can be reduced if signals in the circuit assume four or more levels rather than only two. In memory design, storing two instead of one bit of information per memory cell doubles the density of the memory in the samediesize. Applications using arithmetic circuits often benefit from using alternatives to binary number systems. For example,residueandredundant number systems[15]can reduce or eliminate theripple-through carriesthat are involved in normal binary addition or subtraction, resulting in high-speed arithmetic operations. These number systems have a natural implementation using many-valued circuits. However, the practicality of these potential advantages heavily depends on the availability of circuit realizations, which must be compatible or competitive with present-day standard technologies. In addition to aiding in the design of electronic circuits, many-valued logic is used extensively to test circuits for faults and defects. Basically all knownautomatic test pattern generation(ATG) algorithms used for digital circuit testing require a simulator that can resolve 5-valued logic (0, 1, x, D, D').[16]The additional values—x, D, and D'—represent (1) unknown/uninitialized, (2) a 0 instead of a 1, and (3) a 1 instead of a 0.
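One common way to implement such a five-valued simulator is to encode each value as a pair of (fault-free, faulty) binary values and to collapse any unknown component to x; a minimal sketch under that assumption (the encoding and names are illustrative, not a specific tool's API):

```python
# Five test-generation values encoded as (fault-free value, faulty value) pairs,
# with None meaning "unknown"; this pair encoding is one common convention.
ZERO, ONE, X, D, DBAR = (0, 0), (1, 1), (None, None), (1, 0), (0, 1)

def and3(a, b):
    # Three-valued AND on a single component: a 0 dominates, unknown otherwise.
    if a == 0 or b == 0:
        return 0
    if a is None or b is None:
        return None
    return 1

def and5(p, q):
    good, faulty = and3(p[0], q[0]), and3(p[1], q[1])
    # Collapse to X if either circuit's value is still unknown.
    return X if good is None or faulty is None else (good, faulty)

def not5(p):
    return X if p == X else (1 - p[0], 1 - p[1])

print(and5(D, ONE) == D)      # True: the fault effect propagates through the gate
print(and5(D, DBAR) == ZERO)  # True: opposite fault effects cancel to 0
print(and5(D, X) == X)        # True: an unknown side input blocks propagation
```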
AnIEEEInternational Symposium on Multiple-Valued Logic(ISMVL) has been held annually since 1970. It mostly caters to applications in digital design and verification.[17]There is also aJournal of Multiple-Valued Logic and Soft Computing.[18]
General
Specific
|
https://en.wikipedia.org/wiki/Many-valued_logic
|
A semantic decision table uses modern ontology engineering technologies to enhance a traditional decision table. The term "semantic decision table" was coined by Yan Tang and Prof. Robert Meersman from VUB STARLab (Free University of Brussels) in 2006.[1] A semantic decision table is a set of decision tables properly annotated with an ontology. It provides a means to capture and examine decision makers' concepts, as well as a tool for refining their decision knowledge and facilitating knowledge sharing in a scalable manner.
A decision table is defined as a "tabular method of showing the relationship between a series of conditions and the resultant actions to be executed".[2]Following the de facto international standard (CSA, 1970), a decision table contains three building blocks: the conditions, the actions (or decisions), and the rules.
Adecision conditionis constructed with acondition stuband acondition entry. Acondition stubis declared as a statement of a condition. Acondition entryprovides a value assigned to the condition stub. Similarly, anaction(ordecision) composes two elements: anaction stuband anaction entry. One states an action with an action stub. An action entry specifies whether (or in what order) the action is to be performed.
A decision table separates the data (that is the condition entries and decision/action entries) from the decision templates (that are the condition stubs, decision/action stubs, and the relations between them). Or rather, a decision table can be a tabular result of its meta-rules.
Traditional decision tables have many advantages compared to other decision support manners, such asif-then-elseprogramming statements,decision treesandBayesian networks. A traditional decision table is compact and easily understandable. However, it still has several limitations. For instance, a decision table often faces the problems ofconceptual ambiguityandconceptual duplication[citation needed]; and it istime consumingto create and maintainlargedecision tables[citation needed]. Semantic decision tables are an attempt to solve these problems.
A semantic decision table is modeled on the framework of Developing Ontology-Grounded Methods and Applications (DOGMA[3]), which separates an ontology into extremely simple linguistic structures (also known as lexons) and a layer of lexon constraints used by applications (also known as ontological commitments), aiming to achieve a degree of scalability.
According to the DOGMA framework, a semantic decision table consists of a layer of the decision binary fact types called semantic decision tablelexonsand a semantic decision table commitment layer that consists of the constraints and axioms of these fact types.
A lexon l is a quintuple ⟨γ, t1, r1, r2, t2⟩{\displaystyle \langle \gamma ,t_{1},r_{1},r_{2},t_{2}\rangle } where t1{\displaystyle t_{1}} and t2{\displaystyle t_{2}} represent two concepts in a natural language (e.g., English); r1{\displaystyle r_{1}} and r2{\displaystyle r_{2}} (the role and the co-role) refer to the relationships that the concepts share with respect to one another; γ{\displaystyle \gamma } is a context identifier that refers to a context, which serves to disambiguate the terms t1, t2{\displaystyle t_{1},t_{2}} into the intended concepts, and in which they become meaningful.
For example, a lexon <γ, driver's license, is issued to, has, driver> explains a fact that “a driver’s license is issued to a driver”, and “a driver has a driver’s license”.
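A minimal illustrative sketch of how such a lexon might be represented as a simple data structure; the field names and the concrete context string are hypothetical and not prescribed by DOGMA:

```python
from typing import NamedTuple

class Lexon(NamedTuple):
    # Quintuple <context, term1, role, co_role, term2>; field names are illustrative.
    context: str
    term1: str
    role: str
    co_role: str
    term2: str

# The driver's-license example from the text, with a placeholder context:
lexon = Lexon("vehicle registration", "driver's license", "is issued to", "has", "driver")
print(f"a {lexon.term1} {lexon.role} a {lexon.term2}")    # a driver's license is issued to a driver
print(f"a {lexon.term2} {lexon.co_role} a {lexon.term1}") # a driver has a driver's license
```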
Theontological commitmentlayer formally defines selected rules and constraints by which an application (or "agent") may make use of lexons. A commitment can contain various constraints, rules and axiomatized binary facts based on needs. It can be modeled in different modeling tools, such asobject-role modeling,conceptual graph, andUnified Modeling Language.
A semantic decision table contains richer decision rules than a decision table. During the annotation process, the decision makers need to specify all the implicit rules, including the hidden decision rules and the meta-rules of a set of decision tables. The semantics of these rules is derived from an agreement between the decision makers observing the real-world decision problems. The process of capturing semantics within a community is a process of knowledge acquisition.
|
https://en.wikipedia.org/wiki/Semantic_decision_table
|
Inbusiness analysis, theDecision Model and Notation(DMN) is a standard published by theObject Management Group.[1]It is a standard approach for describing and modeling repeatable decisions within organizations to ensure that decision models are interchangeable across organizations.
The DMN standard provides the industry with a modeling notation for decisions that will supportdecision managementandbusiness rules. The notation is designed to be readable by business andITusers alike. This enables various groups to effectively collaborate in defining adecision model:
The DMN standard can be effectively used standalone but it is also complementary to theBPMNandCMMNstandards. BPMN defines a special kind of activity, the Business Rule Task, which "provides a mechanism for the process to provide input to a business rule engine and to get the output of calculations that the business rule engine might provide"[2][3]that can be used to show where in a BPMN process a decision defined using DMN should be used.
DMN has been made a standard for Business Analysis according to BABOK v3.[4][5]
The standard includes three main elements
The standard identifies three main use cases for DMN
Using the DMN standard will improve business analysis and business process management, since
DMN has been designed to work withBPMN.Business process modelscan be simplified by moving process logic into decision services. DMN is a separate domain within the OMG that provides an explicit way to connect to processes in BPMN. Decisions in DMN can be explicitly linked to processes and tasks that use the decisions. This integration of DMN and BPMN has been studied extensively.[9]DMN expects that the logic of a decision will be deployed as a stateless, side-effect free Decision Service. Such a service can be invoked from a business process and the data in the process can be mapped to the inputs and outputs of the decision service.[10]
As mentioned,BPMNis a relatedOMG Standardfor process modeling. DMN complementsBPMN, providing a separation of concerns between the decision and the process. The example here describes a BPMN process and DMN DRD (Decision Requirements Diagram) for onboarding a bank customer. Several decisions are modeled and these decisions will direct the processes response.
In the BPMN process model shown in the figure, a customer makes a request to open a new bank account. The account application provides the account representative with all the information needed to create an account and provide the requested services. This includes the name, address and various forms of identification. In the next steps of the work flow, the 'Know Your Customer' (KYC) services are called.
In the 'KYC' services, the name and address are validated; followed by a check against the international criminal database (Interpol) and the database of persons that are 'Politically exposed persons (PEP)'. The PEP is a person who is either entrusted with a prominent political position or a close relative thereof. Deposits from persons on the PEP list are potentially corrupt. This is shown as two services on the process model. Anti-money-laundering (AML) regulations require these checks before the customer account is certified.
The results of these services plus the forms of identification are sent to the Certify New Account decision. This is shown as a 'rule' activity, verify account, on the process diagram. If the new customer passes certification, then the account is classified into onboarding for Business Retail, Retail, Wealth Management and High Value Business. Otherwise the customer application is declined. The Classify New Customer Decision classifies the customer.
If the verify-account process returns a result of 'Manual' then the PEP or the Interpol check returned a close match. The account representative must visually inspect the name and the application to determine if the match is valid and accept or decline the application.
An account is certified for opening if the individual's address is verified, if valid identification is provided, and if the applicant is not on a list of criminals or politically exposed persons. These are shown as sub-decisions below the 'certify new account' decision. The account verification service provides a 100% match of the applicant's address.
For identification to be valid, the customer must provide a driver's license, passport or government issued ID.
The checks against PEP and Interpol are 'fuzzy' matches and return matching score values. Scores above 85 are considered a 'match', and scores between 65 and 85 require a 'manual' screening process. People who match either of these lists are rejected by the account application process. If there is a partial match, with a score between 65 and 85, against the Interpol or PEP list, then the certification is set to manual and an account representative performs a manual verification of the applicant's data. These rules are reflected in the figure below, which presents the decision table for whether to pass the provided name for the list checks.
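The thresholds in this example can be read as a simple three-outcome decision; a minimal sketch, assuming scores of exactly 65 or 85 fall into the 'manual' band (the function and outcome names are illustrative, not part of the DMN standard):

```python
def screen_list_check(score):
    """Map a fuzzy-match score against the PEP/Interpol lists to an outcome."""
    if score > 85:
        return "Match"     # treated as a hit: the application is declined
    if score >= 65:
        return "Manual"    # borderline: an account representative reviews it
    return "No match"      # certification can proceed automatically

print(screen_list_check(90), screen_list_check(70), screen_list_check(40))
# Match Manual No match
```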
The client's on-boarding process is driven by what category they fall in. The category is decided by the:
This decision is shown below:
There are 6 business rules that determine the client's category and these are shown in the decision table here:
In this example, the outcome of the 'Verify Account' decision directed the responses of the new account process. The same is true for the 'Classify Customer' decision. By adding or changing the business rules in the tables, one can easily change the criteria for these decisions and control the process differently.
Modeling is a critical aspect of improving an existing process or business challenge. Modeling is generally done by a team of business analysts, IT personnel, and modeling experts. The expressive modeling capabilities of BPMN allows business analyst to understand the functions of the activities of the process. Now with the addition of DMN, business analysts can construct an understandable model of complex decisions. Combining BPMN and DMN yields a very powerful combination of models that work synergistically to simplify processes.
Automated discovery techniques that infer decision models from process execution data have been proposed as well.[11]Here, a DMN decision model is derived from a data-enrichedevent log, along with the process that uses the decisions. In doing so, decision mining complementsprocess miningwith traditionaldata miningapproaches.
Constraint Decision Model and Notation (cDMN) is a formal notation for expressing knowledge in a tabular, intuitive format.[12]It extends DMN with constraint reasoning and related concepts while aiming to retain the user-friendliness of the original.
cDMN is also meant to express other problems besides business modeling, such as complex component design.[13]
It extends DMN in four ways:
Due to these additions, cDMN models can express more complex problems.[12]Furthermore, they can also express some DMN models in more compact, less-convoluted ways.[12]Unlike DMN, cDMN is not deterministic, in the sense that a set of input values could have multiple different solutions.
Indeed, where a DMN model always defines a single solution, a cDMN model defines asolution space.
Usage of cDMN models can also be integrated inBusiness Process Model and Notationprocess models, just like DMN.
As an example, consider the well-known map coloring orGraph coloringproblem.
Here, we wish to color a map in such a way that no bordering countries share the same color.
The constraint table shown in the figure (as denoted by itsE*hit policy in the top-left corner) expresses this logic.
It is read as "For each country c1, country c2 holds thatifthey are different countries which border,thenthe color of c1 is not the color of c2.
Here, the first two columns introduce two quantifiers, both of type country, which serve asuniversal quantifier.
In the third column, the 2-ary predicatebordersis used to express when two countries have a shared border.
Finally, the last column uses the 1-ary functioncolor of, which maps each country on a color.
|
https://en.wikipedia.org/wiki/Decision_Model_and_Notation
|
A comparison sort is a type of sorting algorithm that only reads the list elements through a single abstract comparison operation (often a "less than or equal to" operator or a three-way comparison) that determines which of two elements should occur first in the final sorted list. The only requirement is that the operator forms a total preorder over the data, with: if a ≤ b and b ≤ c then a ≤ c (transitivity), and for all a and b, a ≤ b or b ≤ a (connexity).
It is possible that botha≤bandb≤a; in this case either may come first in the sorted list. In astable sort, the input order determines the sorted order in this case.
Comparison sorts studied in the literature are "comparison-based".[1]Elementsaandbcan be swapped or otherwise re-arranged by the algorithm only when the order between these elements has been established based on the outcomes of prior comparisons. This is the case when the order betweenaandbcan be derived via thetransitive closureof these prior comparison outcomes.
For comparison-based sorts the decision to execute basic operations other than comparisons is based on the outcome of comparisons. Hence in a time analysis the number of executed comparisons is used to determine upper bound estimates for the number of executed basic operations such as swaps or assignments.[1]
A metaphor for thinking about comparison sorts is that someone has a set of unlabelled weights and abalance scale. Their goal is to line up the weights in order by their weight without any information except that obtained by placing two weights on the scale and seeing which one is heavier (or if they weigh the same).
Some of the most well-known comparison sorts include:
There are fundamental limits on the performance of comparison sorts. A comparison sort must have an average-case lower bound ofΩ(nlogn) comparison operations,[2]which is known aslinearithmictime. This is a consequence of the limited information available through comparisons alone — or, to put it differently, of the vague algebraic structure of totally ordered sets. In this sense, mergesort, heapsort, and introsort areasymptotically optimalin terms of the number of comparisons they must perform, although this metric neglects other operations. Non-comparison sorts (such as the examples discussed below) can achieveO(n) performance by using operations other than comparisons, allowing them to sidestep this lower bound (assuming elements are constant-sized).
Comparison sorts may run faster on some lists; manyadaptive sortssuch asinsertion sortrun in O(n) time on an already-sorted or nearly-sorted list. TheΩ(nlogn) lower bound applies only to the case in which the input list can be in any possible order.
Real-world measures of sorting speed may need to take into account the ability of some algorithms to optimally use relatively fast cachedcomputer memory, or the application may benefit from sorting methods where sorted data begins to appear to the user quickly (and then user's speed of reading will be the limiting factor) as opposed to sorting methods where no output is available until the whole list is sorted.
Despite these limitations, comparison sorts offer the notable practical advantage that control over the comparison function allows sorting of many different datatypes and fine control over how the list is sorted. For example, reversing the result of the comparison function allows the list to be sorted in reverse; and one can sort a list oftuplesinlexicographic orderby just creating a comparison function that compares each part in sequence:
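For instance, a minimal Python sketch (illustrative; it uses the standard functools.cmp_to_key helper to turn a comparator into a sort key):

```python
import functools

def compare_tuples(a, b):
    """Compare two tuples part by part; the first differing part decides."""
    for x, y in zip(a, b):
        if x != y:
            return -1 if x < y else 1
    return len(a) - len(b)   # if all shared parts tie, the shorter tuple sorts first

pairs = [(2, "b"), (1, "z"), (1, "a")]
print(sorted(pairs, key=functools.cmp_to_key(compare_tuples)))
# [(1, 'a'), (1, 'z'), (2, 'b')]

# Reversing the comparator's result sorts in the opposite order:
print(sorted(pairs, key=functools.cmp_to_key(lambda a, b: -compare_tuples(a, b))))
# [(2, 'b'), (1, 'z'), (1, 'a')]
```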
Comparison sorts generally adapt more easily to complex orders such as the order offloating-point numbers. Additionally, once a comparison function is written, any comparison sort can be used without modification; non-comparison sorts typically require specialized versions for each datatype.
This flexibility, together with the efficiency of the above comparison sorting algorithms on modern computers, has led to widespread preference for comparison sorts in most practical work.
Some sorting problems admit a strictly faster solution than theΩ(nlogn)bound for comparison sorting by usingnon-comparison sorts; an example isinteger sorting, where all keys are integers. When the keys form a small (compared ton) range,counting sortis an example algorithm that runs in linear time. Other integer sorting algorithms, such asradix sort, are not asymptotically faster than comparison sorting, but can be faster in practice.
The problem ofsorting pairs of numbers by their sumis not subject to theΩ(n² logn)bound either (the square resulting from the pairing up); the best known algorithm still takesO(n² logn)time, but onlyO(n²)comparisons.
The number of comparisons that a comparison sort algorithm requires increases in proportion tonlog(n){\displaystyle n\log(n)}, wherenis the number of elements to sort. This bound isasymptotically tight.
Given a list of distinct numbers (we can assume this because this is a worst-case analysis), there arenfactorialpermutations exactly one of which is the list in sorted order. The sort algorithm must gain enough information from the comparisons to identify the correct permutation. If the algorithm always completes after at mostf(n){\displaystyle f(n)}steps, it cannot distinguish more than2f(n){\displaystyle 2^{f(n)}}cases because the keys are distinct and each comparison has only two possible outcomes. Therefore,
By looking at the firstn/2{\displaystyle n/2}factors ofn!=n(n−1)⋯1{\displaystyle n!=n(n-1)\cdots 1}, we obtain
This provides the lower-bound part of the claim. A more precise bound can be given viaStirling's approximation. An upper bound of the same form, with the same leading term as the bound obtained from Stirling's approximation, follows from the existence of the algorithms that attain this bound in the worst case, likemerge sort.
The above argument provides anabsolute, rather than only asymptotic lower bound on the number of comparisons, namely⌈log2(n!)⌉{\displaystyle \left\lceil \log _{2}(n!)\right\rceil }comparisons. This lower bound is fairly good (it can be approached within a linear tolerance by a simple merge sort), but it is known to be inexact. For example,⌈log2(13!)⌉=33{\displaystyle \left\lceil \log _{2}(13!)\right\rceil =33}, but the minimal number of comparisons to sort 13 elements has been proved to be 34.
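The value quoted above can be reproduced directly; a small Python sketch:

```python
import math

def comparison_lower_bound(n):
    """Ceiling of log2(n!), the information-theoretic comparison lower bound."""
    return math.ceil(math.log2(math.factorial(n)))

print(comparison_lower_bound(13))    # 33, while sorting 13 elements actually needs 34
print([comparison_lower_bound(n) for n in range(2, 8)])  # [1, 3, 5, 7, 10, 13]
```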
Determining theexactnumber of comparisons needed to sort a given number of entries is a computationally hard problem even for smalln, and no simple formula for the solution is known. For some of the few concrete values that have been computed, seeOEIS:A036604.
A similar bound applies to the average number of comparisons. Assuming that
it is impossible to determine which order the input is in with fewer thanlog2(n!)comparisons on average.
This can be most easily seen using concepts frominformation theory. TheShannon entropyof such a random permutation islog2(n!)bits. Since a comparison can give only two results, the maximum amount of information it provides is 1 bit. Therefore, afterkcomparisons the remaining entropy of the permutation, given the results of those comparisons, is at leastlog2(n!) −kbits on average. To perform the sort, complete information is needed, so the remaining entropy must be 0. It follows thatkmust be at leastlog2(n!)on average.
The lower bound derived via information theory is phrased as the 'information-theoretic lower bound'. The information-theoretic lower bound is correct, but it is not necessarily the strongest lower bound. In some cases, the information-theoretic lower bound of a problem may even be far from the true lower bound. For example, the information-theoretic lower bound of selection is ⌈log2(n)⌉{\displaystyle \left\lceil \log _{2}(n)\right\rceil } whereas n−1{\displaystyle n-1} comparisons are needed by an adversarial argument. The interplay between the information-theoretic lower bound and the true lower bound is much like that of a real-valued function lower-bounding an integer function. However, this is not exactly correct when the average case is considered.
To see what happens when analyzing the average case, the key question is what 'average' refers to: averaged over what? The information-theoretic lower bound averages over the set of all permutations as a whole. But any computer algorithm (under what is currently believed) must treat each permutation as an individual instance of the problem. Hence, the average lower bound sought here is averaged over all individual cases.
To search for the lower bound relating to the non-achievability by computers, we adopt the decision tree model. In the decision tree model, the lower bound to be shown is a lower bound on the average length of root-to-leaf paths of an n!{\displaystyle n!}-leaf binary tree (in which each leaf corresponds to a permutation). The minimum average length of a binary tree with a given number of leaves is achieved by a balanced full binary tree, because any other binary tree can have its path length reduced by moving a pair of leaves to a higher position. With some careful calculations, for a balanced full binary tree with n!{\displaystyle n!} leaves, the average length of root-to-leaf paths is given by ⌈log2(n!)⌉ + 1 − 2^⌈log2(n!)⌉/n!{\displaystyle \left\lceil \log _{2}(n!)\right\rceil +1-{\frac {2^{\left\lceil \log _{2}(n!)\right\rceil }}{n!}}}.
For example, for n = 3, the information-theoretic lower bound for the average case is approximately 2.58, while the average lower bound derived via the decision tree model is 8/3, approximately 2.67.
In the case that multiple items may have the same key, there is no obvious statistical interpretation for the term "average case", so an argument like the above cannot be applied without making specific assumptions about the distribution of keys.
This bound can be computed easily for a concrete algorithm such as bottom-up merge sort: the array is treated as n sorted blocks of size 1, which are merged pairwise into sorted blocks of size 2, then 4, and so on.
If a list is already close to sorted, according to some measure of sortedness, the number of comparisons required to sort it can be smaller. Anadaptive sorttakes advantage of this "presortedness" and runs more quickly on nearly-sorted inputs, often while still maintaining anO(nlogn){\displaystyle O(n\log n)}worst case time bound. An example isadaptive heap sort, a sorting algorithm based onCartesian trees. It takes timeO(nlogk){\displaystyle O(n\log k)}, wherekis the average, over all valuesxin the sequence, of the number of times the sequence jumps from belowxto abovexor vice versa.[12]
|
https://en.wikipedia.org/wiki/Comparison_sort
|
Intheoretical computer science, theAanderaa–Karp–Rosenberg conjecture(also known as theAanderaa–Rosenberg conjectureor theevasiveness conjecture) is a group of relatedconjecturesabout the number of questions of the form "Is there an edge between vertexu{\displaystyle u}and vertexv{\displaystyle v}?" that have to be answered to determine whether or not anundirected graphhas a particular property such asplanarityorbipartiteness. They are named afterStål Aanderaa,Richard M. Karp, andArnold L. Rosenberg. According to the conjecture, for a wide class of properties, no algorithm can guarantee that it will be able to skip any questions: anyalgorithmfor determining whether the graph has the property, no matter how clever, might need to examine every pair of vertices before it can give its answer. A property satisfying this conjecture is calledevasive.
More precisely, the Aanderaa–Rosenberg conjecture states that anydeterministic algorithmmust test at least a constant fraction of all possible pairs of vertices, in theworst case, to determine any non-trivial monotone graph property. In this context, a property is monotone if it remains true when edges are added; for example, planarity is not monotone, but non-planarity is monotone. A stronger version of this conjecture, called the evasiveness conjecture or the Aanderaa–Karp–Rosenberg conjecture, states that exactly(n2)=n(n−1)/2{\displaystyle {\tbinom {n}{2}}=n(n-1)/2}tests are needed for a graph withn{\displaystyle n}vertices. Versions of the problem forrandomized algorithmsandquantum algorithmshave also been formulated and studied.
The deterministic Aanderaa–Rosenberg conjecture was proven byRivest & Vuillemin (1975), but the stronger Aanderaa–Karp–Rosenberg conjecture remains unproven. Additionally, there is a large gap between the conjectured lower bound and the best proven lower bound for randomized and quantum query complexity.
The property of being non-empty (that is, having at least one edge) is monotone, because adding another edge to a non-empty graph produces another non-empty graph. There is a simple algorithm for testing whether a graph is non-empty: loop through all of the pairs of vertices, testing whether each pair is connected by an edge. If an edge is ever found in this way, break out of the loop, and report that the graph is non-empty, and if the loop completes without finding any edges, then report that the graph is empty. On some graphs (for instance thecomplete graphs) this algorithm will terminate quickly, without testing every pair of vertices, but on theempty graphit tests all possible pairs before terminating. Therefore, the query complexity of this algorithm is(n2)=n(n−1)/2{\displaystyle {\tbinom {n}{2}}=n(n-1)/2}: in the worst case, the algorithm performsn(n−1)/2{\displaystyle n(n-1)/2}tests.
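A minimal Python sketch of this loop against a hypothetical adjacency oracle, counting the queries it makes:

```python
from itertools import combinations

def is_non_empty(n, has_edge):
    """Test non-emptiness of an n-vertex graph by querying pairs of vertices.

    `has_edge(u, v)` is an illustrative oracle answering "is {u, v} an edge?".
    Returns (answer, number of queries made).
    """
    queries = 0
    for u, v in combinations(range(n), 2):
        queries += 1
        if has_edge(u, v):
            return True, queries    # stop early as soon as an edge is found
    return False, queries           # worst case: all n(n-1)/2 pairs queried

# On the empty graph every pair must be queried before answering "empty".
print(is_non_empty(5, lambda u, v: False))   # (False, 10)
```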
The algorithm described above is not the only possible method of testing for non-emptiness, but
the Aanderaa–Karp–Rosenberg conjecture implies that every deterministic algorithm for testing non-emptiness has the same worst-case query complexity,n(n−1)/2{\displaystyle n(n-1)/2}. That is, the property of being non-empty isevasive. For this property, the result is easy to prove directly: if an algorithm does not performn(n−1)/2{\displaystyle n(n-1)/2}tests, it cannot distinguish the empty graph from a graph that has one edge connecting one of the untested pairs of vertices, and must give an incorrect answer on one of these two graphs.
In the context of this article, allgraphswill besimpleandundirected, unless stated otherwise. This means that the edges of the graph form a set (and not amultiset) and each edge is a pair of distinct vertices. Graphs are assumed to have animplicit representationin which each vertex has a unique identifier or label and in which it is possible to test the adjacency of any two vertices, but for which adjacency testing is the only allowed primitive operation.
Informally, agraph propertyis a property of a graph that is independent of labeling. More formally, a graph property is a mapping from the class of all graphs to{0,1}{\displaystyle \{0,1\}}such that isomorphic graphs are mapped to the same value. For example, the property of containing at least one vertex of degree two is a graph property, but the property that the first vertex has degree two is not, because it depends on the labeling of the graph (in particular, it depends on which vertex is the "first" vertex). A graph property is called non-trivial if it does not assign the same value to all graphs. For instance, the property of being a graph is a trivial property, since all graphs possess this property. On the other hand, the property of being empty is non-trivial, because theempty graphpossesses this property, but non-empty graphs do not. A graph property is said to bemonotoneif the addition of edges does not destroy the property. Alternately, if a graph possesses a monotone property, then everysupergraphof this graph on the same vertex set also possesses it. For instance, the property of beingnonplanaris monotone: a supergraph of a nonplanar graph is itself nonplanar. However, the property of beingregularis not monotone.
Thebig O notationis often used for query complexity. In short,f(n){\displaystyle f(n)}isO(g(n)){\displaystyle O(g(n))}(read as "of the order ofg(n){\displaystyle g(n)}") if there exist positive constantsc{\displaystyle c}andN{\displaystyle N}such that, for alln≥N{\displaystyle n\geq N},f(n)≤c⋅g(n){\displaystyle f(n)\leq c\cdot g(n)}. Similarly,f(n){\displaystyle f(n)}isΩ(g(n)){\displaystyle \Omega (g(n))}if there exist positive constantsc{\displaystyle c}andN{\displaystyle N}such that, for alln≥N{\displaystyle n\geq N},f(n)≥c⋅g(n){\displaystyle f(n)\geq c\cdot g(n)}. Finally,f(n){\displaystyle f(n)}isΘ(g(n)){\displaystyle \Theta (g(n))}if it is bothO(g(n)){\displaystyle O(g(n))}andΩ(g(n)){\displaystyle \Omega (g(n))}.
The deterministic query complexity of evaluating a function onn{\displaystyle n}bits (where the bits may be labeled asx1,x2,…xn{\displaystyle x_{1},x_{2},\dots x_{n}}) is the number of bitsxi{\displaystyle x_{i}}that have to be read in the worst case by a deterministic algorithm that computes the function. For instance, if the function takes the value 0 when all bits are 0 and takes value 1 otherwise (this is theORfunction), then its deterministic query complexity is exactlyn{\displaystyle n}. In the worst case, regardless of the order it chooses to examine its input, the firstn−1{\displaystyle n-1}bits read could all be 0, and the value of the function now depends on the last bit. If an algorithm doesn't read this bit, it might output an incorrect answer. (Such arguments are known as adversary arguments.) The number of bits read are also called the number of queries made to the input. One can imagine that the algorithm asks (or queries) the input for a particular bit and the input responds to this query.
The randomized query complexity of evaluating a function is defined similarly, except the algorithm is allowed to be randomized. In other words, it can flip coins and use the outcome of these coin flips to decide which bits to query in which order. However, the randomized algorithm must still output the correct answer for all inputs: it is not allowed to make errors. Such algorithms are calledLas Vegas algorithms. (A different class of algorithms,Monte Carlo algorithms, are allowed to make some error.) Randomized query complexity can be defined for both Las Vegas and Monte Carlo algorithms, but the randomized version of the Aanderaa–Karp–Rosenberg conjecture is about the Las Vegas query complexity of graph properties.
Quantum query complexity is the natural generalization of randomized query complexity, of course allowing quantum queries and responses. Quantum query complexity can also be defined with respect to Monte Carlo algorithms or Las Vegas algorithms, but it is usually taken to mean Monte Carlo quantum algorithms.
In the context of this conjecture, the function to be evaluated is the graph property, and the input can be thought of as a string of sizen(n−1)/2{\displaystyle n(n-1)/2}, describing for each pair of vertices whether there is an edge with that pair as its endpoints. The query complexity of any function on this input is at mostn(n−1)/2{\displaystyle n(n-1)/2}, because an algorithm that makesn(n−1)/2{\displaystyle n(n-1)/2}queries can read the whole input and determine the input graph completely.
For deterministic algorithms, Rosenberg (1973) originally conjectured that for all nontrivial graph properties on n{\displaystyle n} vertices, deciding whether a graph possesses this property requires Ω(n2){\displaystyle \Omega (n^{2})} queries. The non-triviality condition is clearly required because there are trivial properties like "is this a graph?" which can be answered with no queries at all.[1]
The conjecture was disproved by Aanderaa, who exhibited a directed graph property (the property of containing a "sink") which required onlyO(n){\displaystyle O(n)}queries to test. Asink, in a directed graph, is a vertex of indegreen−1{\displaystyle n-1}and outdegree zero. The existence of a sink can be tested with less than3n{\displaystyle 3n}queries.[2]An undirected graph property which can also be tested withO(n){\displaystyle O(n)}queries is the property of being a scorpion graph, first described inBest, van Emde Boas & Lenstra (1974). A scorpion graph is a graph containing a three-vertex path, such that one endpoint of the path is connected to all remaining vertices, while the other two path vertices have no incident edges other than the ones in the path.[2]
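One standard way to achieve the stated bound is an elimination argument: each query discards one candidate vertex, and the single surviving candidate is then verified. A minimal Python sketch, assuming a hypothetical oracle has_arc(u, v) answering directed-edge queries:

```python
def find_sink(n, has_arc):
    """Return the sink of an n-vertex directed graph, or None, using < 3n arc queries.

    `has_arc(u, v)` is an illustrative oracle answering "is there an arc u -> v?".
    """
    candidate = 0
    for v in range(1, n):
        # If the candidate has an outgoing arc it cannot be a sink; otherwise v is
        # missing an incoming arc, so v cannot be a sink.
        if has_arc(candidate, v):
            candidate = v
    # Verify the surviving candidate with at most 2(n - 1) further queries.
    for v in range(n):
        if v == candidate:
            continue
        if has_arc(candidate, v) or not has_arc(v, candidate):
            return None
    return candidate

# Example: vertex 2 is a sink in this 3-vertex graph.
arcs = {(0, 2), (1, 2), (0, 1)}
print(find_sink(3, lambda u, v: (u, v) in arcs))   # 2
```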
Then Aanderaa and Rosenberg formulated a new conjecture (theAanderaa–Rosenberg conjecture) which says that deciding whether a graph possesses a non-trivial monotone graph property requiresΩ(n2){\displaystyle \Omega (n^{2})}queries.[3]This conjecture was resolved byRivest & Vuillemin (1975)by showing that at least116n2{\displaystyle {\tfrac {1}{16}}n^{2}}queries are needed to test for any nontrivial monotone graph property.[4]Through successive improvements this bound was further increased to(13−ε)n2{\displaystyle {\bigl (}{\tfrac {1}{3}}-\varepsilon {\bigr )}n^{2}}.[5]
Richard Karp conjectured the stronger statement (which is now called the evasiveness conjecture or the Aanderaa–Karp–Rosenberg conjecture) that "every nontrivial monotone graph property for graphs on n{\displaystyle n} vertices is evasive."[6] A property is called evasive if determining whether a given graph has this property sometimes requires all n(n−1)/2{\displaystyle n(n-1)/2} possible queries.[7] This conjecture says that the best algorithm for testing any nontrivial monotone property must (in the worst case) query all possible edges. This conjecture is still open, although several special graph properties have been shown to be evasive for all n{\displaystyle n}. The conjecture has been resolved for the case where n{\displaystyle n} is a prime power using a topological approach.[8] The conjecture has also been resolved for all non-trivial monotone properties on bipartite graphs.[9] Minor-closed properties have also been shown to be evasive for large n{\displaystyle n}.[10]
InKahn, Saks & Sturtevant (1984)the conjecture was generalized to properties of other (non-graph) functions too, conjecturing that any non-trivial monotone function that is weakly symmetric is evasive. This case is also solved whenn{\displaystyle n}is a prime power.[11]
Richard Karp also conjectured thatΩ(n2){\displaystyle \Omega (n^{2})}queries are required for testing nontrivial monotone properties even if randomized algorithms are permitted. No nontrivial monotone property is known which requires less than14n2{\displaystyle {\tfrac {1}{4}}n^{2}}queries to test. A linear lower bound (i.e.,Ω(n){\displaystyle \Omega (n)}) on all monotone properties follows from a very generalrelationship between randomized and deterministic query complexities. The first superlinear lower bound for all monotone properties was given byYao (1991)who showed thatΩ(n(logn)1/12){\displaystyle \Omega {\bigl (}n(\log n)^{1/12}{\bigr )}}queries are required. This was further improved byKing (1991)toΩ(n5/4){\displaystyle \Omega (n^{5/4})}, and then byHajnal (1991)toΩ(n4/3){\displaystyle \Omega (n^{4/3})}.This was subsequently improved to the current best known lower bound (among bounds that hold for all monotone properties) ofΩ(n4/3(logn)1/3){\displaystyle \Omega {\bigl (}n^{4/3}(\log n)^{1/3}{\bigr )}}byChakrabarti & Khot (2007).
Some recent results give lower bounds which are determined by the critical probabilityp{\displaystyle p}of the monotone graph property under consideration. The critical probabilityp{\displaystyle p}is defined as the unique numberp{\displaystyle p}in the range[0,1]{\displaystyle [0,1]}such that arandom graphG(n,p){\displaystyle G(n,p)}(obtained by choosing randomly whether each edge exists, independently of the other edges, with probabilityp{\displaystyle p}per edge) possesses this property with probability equal to12{\displaystyle {\tfrac {1}{2}}}.Friedgut, Kahn & Wigderson (2002)showed that any monotone property with critical probabilityp{\displaystyle p}requiresΩ(min{nmin(p,1−p),n2logn}){\displaystyle \Omega \left(\min \left\{{\frac {n}{\min(p,1-p)}},{\frac {n^{2}}{\log n}}\right\}\right)}queries. For the same problem,O'Donnell et al. (2005)showed a lower bound ofΩ(n4/3/p1/3){\displaystyle \Omega (n^{4/3}/p^{1/3})}.
As in the deterministic case, there are many special properties for which anΩ(n2){\displaystyle \Omega (n^{2})}lower bound is known. Moreover, better lower bounds are known for several classes of graph properties. For instance, for testing whether the graph has a subgraph isomorphic to any given graph (the so-calledsubgraph isomorphismproblem), the best known lower bound isΩ(n3/2){\displaystyle \Omega (n^{3/2})}due toGröger (1992).
For bounded-errorquantum query complexity, the best known lower bound isΩ(n2/3(logn)1/6){\displaystyle \Omega {\bigl (}n^{2/3}(\log n)^{1/6}{\bigr )}}as observed by Andrew Yao.[12]It is obtained by combining the randomized lower bound with the quantum adversary method. The best possible lower bound one could hope to achieve isΩ(n){\displaystyle \Omega (n)}, unlike the classical case, due toGrover's algorithmwhich gives anO(n){\displaystyle O(n)}-query algorithm for testing the monotone property of non-emptiness. Similar to the deterministic and randomized case, there are some properties which are known to have anΩ(n){\displaystyle \Omega (n)}lower bound, for example non-emptiness (which follows from the optimality of Grover's algorithm) andthe property of containing a triangle. There are some graph properties which are known to have anΩ(n3/2){\displaystyle \Omega (n^{3/2})}lower bound, and even some properties with anΩ(n2){\displaystyle \Omega (n^{2})}lower bound. For example, the monotone property of nonplanarity requiresΘ(n3/2){\displaystyle \Theta (n^{3/2})}queries,[13]and the monotone property of containing more than half the possible number of edges (also called the majority function) requiresΘ(n2){\displaystyle \Theta (n^{2})}queries.[14]
|
https://en.wikipedia.org/wiki/Aanderaa%E2%80%93Karp%E2%80%93Rosenberg_conjecture
|
Aminimum spanning tree(MST) orminimum weight spanning treeis a subset of the edges of aconnected, edge-weighted undirectedgraphthat connects all theverticestogether, without anycyclesand with the minimum possible total edge weight.[1]That is, it is aspanning treewhose sum of edge weights is as small as possible.[2]More generally, any edge-weighted undirected graph (not necessarily connected) has aminimum spanning forest, which is a union of the minimum spanning trees for itsconnected components.
There are many use cases for minimum spanning trees. One example is a telecommunications company trying to lay cable in a new neighborhood. If it is constrained to bury the cable only along certain paths (e.g. roads), then there would be a graph containing the points (e.g. houses) connected by those paths. Some of the paths might be more expensive, because they are longer, or require the cable to be buried deeper; these paths would be represented by edges with larger weights. Currency is an acceptable unit for edge weight – there is no requirement for edge lengths to obey normal rules of geometry such as thetriangle inequality. Aspanning treefor that graph would be a subset of those paths that has no cycles but still connects every house; there might be several spanning trees possible. Aminimum spanning treewould be one with the lowest total cost, representing the least expensive path for laying the cable.
If there arenvertices in the graph, then each spanning tree hasn− 1edges.
There may be several minimum spanning trees of the same weight; in particular, if all the edge weights of a given graph are the same, then every spanning tree of that graph is minimum.
If each edge has a distinct weight then there will be only one, unique minimum spanning tree. This is true in many realistic situations, such as the telecommunications company example above, where it's unlikely any two paths haveexactlythe same cost. This generalizes to spanning forests as well.
Proof:
More generally, if the edge weights are not all distinct then only the (multi-)set of weights in minimum spanning trees is certain to be unique; it is the same for all minimum spanning trees.[3]
If the weights arepositive, then a minimum spanning tree is, in fact, a minimum-costsubgraphconnecting all vertices, since if a subgraph contains acycle, removing any edge along that cycle will decrease its cost and preserve connectivity.
For any cycleCin the graph, if the weight of an edgeeofCis larger than any of the individual weights of all other edges ofC, then this edge cannot belong to an MST.
Proof:Assume the contrary, i.e. thatebelongs to an MSTT1. Then deletingewill breakT1into two subtrees with the two ends ofein different subtrees. The remainder ofCreconnects the subtrees, hence there is an edgefofCwith ends in different subtrees, i.e., it reconnects the subtrees into a treeT2with weight less than that ofT1, because the weight offis less than the weight ofe.
For anycutCof the graph, if the weight of an edgeein the cut-set ofCis strictly smaller than the weights of all other edges of the cut-set ofC, then this edge belongs to all MSTs of the graph.
Proof:Assumethat there is an MSTTthat does not containe. AddingetoTwill produce a cycle, that crosses the cut once ateand crosses back at another edgee'. Deletinge'we get a spanning treeT∖{e'} ∪ {e}of strictly smaller weight thanT. This contradicts the assumption thatTwas a MST.
By a similar argument, if more than one edge is of minimum weight across a cut, then each such edge is contained in some minimum spanning tree.
If the minimum cost edgeeof a graph is unique, then this edge is included in any MST.
Proof: ifewas not included in the MST, removing any of the (larger cost) edges in the cycle formed after addingeto the MST, would yield a spanning tree of smaller weight.
IfTis a tree of MST edges, then we cancontractTinto a single vertex while maintaining the invariant that the MST of the contracted graph plusTgives the MST for the graph before contraction.[4]
In all of the algorithms below,mis the number of edges in the graph andnis the number of vertices.
The first algorithm for finding a minimum spanning tree was developed by Czech scientistOtakar Borůvkain 1926 (seeBorůvka's algorithm). Its purpose was an efficient electrical coverage ofMoravia. The algorithm proceeds in a sequence of stages. In each stage, calledBoruvka step, it identifies a forestFconsisting of the minimum-weight edge incident to each vertex in the graphG, then forms the graphG1=G\Fas the input to the next step. HereG\Fdenotes the graph derived fromGby contracting edges inF(by theCut property, these edges belong to the MST). Each Boruvka step takes linear time. Since the number of vertices is reduced by at least half in each step, Boruvka's algorithm takesO(mlogn)time.[4]
A second algorithm isPrim's algorithm, which was invented byVojtěch Jarníkin 1930 and rediscovered byPrimin 1957 andDijkstrain 1959. Basically, it grows the MST (T) one edge at a time. Initially,Tcontains an arbitrary vertex. In each step,Tis augmented with a least-weight edge(x,y)such thatxis inTandyis not yet inT. By theCut property, all edges added toTare in the MST. Its run-time is eitherO(mlogn)orO(m+nlogn), depending on the data-structures used.
A third algorithm commonly in use isKruskal's algorithm, which also takesO(mlogn)time.
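A compact Python sketch of Kruskal's algorithm using a union-find structure; the (weight, u, v) edge-tuple input format is an illustrative choice:

```python
def kruskal(n, edges):
    """Minimum spanning forest of an n-vertex graph; edges are (weight, u, v) tuples."""
    parent = list(range(n))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):      # consider edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                   # adding the edge creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
print(kruskal(4, edges))   # [(0, 1, 1), (1, 2, 2), (2, 3, 4)]
```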
A fourth algorithm, not as commonly used, is thereverse-delete algorithm, which is the reverse of Kruskal's algorithm. Its runtime isO(mlogn(log logn)3).
All four of these aregreedy algorithms. Since they run in polynomial time, the problem of finding such trees is inFP, and relateddecision problemssuch as determining whether a particular edge is in the MST or determining if the minimum total weight exceeds a certain value are inP.
Several researchers have tried to find more computationally-efficient algorithms.
In a comparison model, in which the only allowed operations on edge weights are pairwise comparisons,Karger, Klein & Tarjan (1995)found alinear time randomized algorithmbased on a combination of Borůvka's algorithm and the reverse-delete algorithm.[5][6]
The fastest non-randomized comparison-based algorithm with known complexity, byBernard Chazelle, is based on thesoft heap, an approximate priority queue.[7][8]Its running time isO(mα(m,n)), whereαis the classical functionalinverse of the Ackermann function. The functionαgrows extremely slowly, so that for all practical purposes it may be considered a constant no greater than 4; thus Chazelle's algorithm takes very close to linear time.
If the graph is dense (i.e.m/n≥ log log logn), then a deterministic algorithm by Fredman and Tarjan finds the MST in timeO(m).[9]The algorithm executes a number of phases. Each phase executesPrim's algorithmmany times, each for a limited number of steps. The run-time of each phase isO(m+n). If the number of vertices before a phase isn', the number of vertices remaining after a phase is at mostn′2m/n′{\displaystyle {\tfrac {n'}{2^{m/n'}}}}. Hence, at mostlog*nphases are needed, which gives a linear run-time for dense graphs.[4]
There are other algorithms that work in linear time on dense graphs.[7][10]
If the edge weights are integers represented in binary, then deterministic algorithms are known that solve the problem inO(m+n)integer operations.[11]Whether the problem can be solveddeterministicallyfor ageneral graphinlinear timeby a comparison-based algorithm remains an open question.
Given graphGwhere the nodes and edges are fixed but the weights are unknown, it is possible to construct a binarydecision tree(DT) for calculating the MST for any permutation of weights. Each internal node of the DT contains a comparison between two edges, e.g. "Is the weight of the edge betweenxandylarger than the weight of the edge betweenwandz?". The two children of the node correspond to the two possible answers "yes" or "no". In each leaf of the DT, there is a list of edges fromGthat correspond to an MST. The runtime complexity of a DT is the largest number of queries required to find the MST, which is just the depth of the DT. A DT for a graphGis calledoptimalif it has the smallest depth of all correct DTs forG.
For every integerr, it is possible to find optimal decision trees for all graphs onrvertices bybrute-force search. This search proceeds in two steps.
A. Generating all potential DTs
Each internal node compares one of at most r4{\displaystyle r^{4}} pairs of edges, and a decision tree of depth at most r2{\displaystyle r^{2}} has fewer than 2r2{\displaystyle 2^{r^{2}}} internal nodes, so the number of potential DTs to generate is at most
(r4)(2r2)=r2(r2+2).{\displaystyle {(r^{4})}^{(2^{r^{2}})}=r^{2^{(r^{2}+2)}}.}
B. Identifying the correct DTs
To check if a DT is correct, it should be checked on all possible permutations of the edge weights.
Hence, the total time required for finding an optimal DT for all graphs with r vertices is bounded by a function of r alone; although this quantity is enormous, it does not depend on the size of the input graph.[4]
Seth PettieandVijaya Ramachandranhave found a provably optimal deterministic comparison-based minimum spanning tree algorithm.[4]The following is a simplified description of the algorithm.
The runtime of all steps in the algorithm isO(m),except for the step of using the decision trees. The runtime of this step is unknown, but it has been proved that it is optimal - no algorithm can do better than the optimal decision tree. Thus, this algorithm has the peculiar property that it isprovably optimalalthough its runtime complexity isunknown.
Research has also consideredparallel algorithmsfor the minimum spanning tree problem.
With a linear number of processors it is possible to solve the problem inO(logn)time.[12][13]
The problem can also be approached in adistributed manner. If each node is considered a computer and no node knows anything except its own connected links, one can still calculate thedistributed minimum spanning tree.
Alan M. Frieze showed that given a complete graph on n vertices, with edge weights that are independent identically distributed random variables with distribution function F{\displaystyle F} satisfying F′(0)>0{\displaystyle F'(0)>0}, then as n approaches +∞ the expected weight of the MST approaches ζ(3)/F′(0){\displaystyle \zeta (3)/F'(0)}, where ζ{\displaystyle \zeta } is the Riemann zeta function (more specifically, ζ(3){\displaystyle \zeta (3)} is Apéry's constant). Frieze and Steele also proved convergence in probability. Svante Janson proved a central limit theorem for the weight of the MST.
For uniform random weights in[0,1]{\displaystyle [0,1]}, the exact expected size of the minimum spanning tree has been computed for small complete graphs.[14]
There is a fractional variant of the MST, in which each edge is allowed to appear "fractionally". Formally, afractional spanning setof a graph (V,E) is a nonnegative functionfonEsuch that, for every non-trivial subsetWofV(i.e.,Wis neither empty nor equal toV), the sum off(e) over all edges connecting a node ofWwith a node ofV\Wis at least 1. Intuitively,f(e) represents the fraction of e that is contained in the spanning set. Aminimum fractional spanning setis a fractional spanning set for which the sum∑e∈Ef(e)⋅w(e){\displaystyle \sum _{e\in E}f(e)\cdot w(e)}is as small as possible.
If the fractions f(e) are forced to be in {0,1}, then the set T of edges with f(e)=1 is a spanning set, as every node or subset of nodes is connected to the rest of the graph by at least one edge of T. Moreover, if f minimizes ∑e∈Ef(e)⋅w(e){\displaystyle \sum _{e\in E}f(e)\cdot w(e)}, then the resulting spanning set is necessarily a tree, since if it contained a cycle, then an edge could be removed without affecting the spanning condition. So the minimum fractional spanning set problem is a relaxation of the MST problem, and can also be called the fractional MST problem.
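Written out as a linear program, the minimum fractional spanning set problem described above is the following (a sketch that simply restates the definitions; δ(W) denotes the set of edges with exactly one endpoint in W):

```latex
\begin{aligned}
\text{minimize}\quad   & \sum_{e \in E} f(e)\,w(e) \\
\text{subject to}\quad & \sum_{e \in \delta(W)} f(e) \ge 1 & & \text{for every } W \text{ with } \emptyset \ne W \subsetneq V,\\
                       & f(e) \ge 0                        & & \text{for every } e \in E.
\end{aligned}
```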
The fractional MST problem can be solved in polynomial time using theellipsoid method.[15]: 248However, if we add a requirement thatf(e) must be half-integer (that is,f(e) must be in {0, 1/2, 1}), then the problem becomesNP-hard,[15]: 248since it includes as a special case theHamiltonian cycle problem: in ann{\displaystyle n}-vertex unweighted graph, a half-integer MST of weightn/2{\displaystyle n/2}can only be obtained by assigning weight 1/2 to each edge of a Hamiltonian cycle.
Minimum spanning trees have direct applications in the design of networks, including computer networks, telecommunications networks, transportation networks, water supply networks, and electrical grids (which they were first invented for, as mentioned above).[29] They are invoked as subroutines in algorithms for other problems, including the Christofides algorithm for approximating the traveling salesman problem,[30] approximating the multi-terminal minimum cut problem (which is equivalent in the single-terminal case to the maximum flow problem),[31] and approximating the minimum-cost weighted perfect matching.[32]
Other practical applications based on minimal spanning trees include:
|
https://en.wikipedia.org/wiki/Minimum_spanning_tree#Decision_trees
|
Goal structuring notation (GSN) is a graphical diagram notation used to show the elements of an argument and the relationships between those elements in a clearer format than plain text.[1] Often used in safety engineering, GSN was developed at the University of York during the 1990s to present safety cases.[2] The notation gained popularity as a method of presenting safety assurances but can be applied to any type of argument and was standardized in 2011.[1] GSN has been used to track safety assurances in industries such as clinical care,[3] aviation,[4] automotive, rail,[5] traffic management, and nuclear power,[6] and has been used in other contexts such as security cases, patent claims, debate strategy, and legal arguments.[5]
The goal structuring notation was first developed at theUniversity of Yorkduring the ASAM-II (A Safety Argument Manager II) project in the early 1990s, to overcome perceived issues in expressing safety arguments using theToulmin method. The notation was further developed and expanded by Tim Kelly, whose PhD thesis contributed systematic methods for constructing and maintaining GSN diagrams, and the concept of ′safety case patterns′ to promote the re-use of argument fragments.[2]During the late 1990s and early 2000s, the GSN methodology was taught in the Safety Critical Systems Engineering course at York, and various extensions to the GSN methodology were proposed by Kelly and other members of the university's High Integrity Systems Engineering group,[7]led byProf John McDermid.
By 2007, goal structuring notation was sufficiently popular that a group of industry and academic users came together to standardise the notation and its surrounding methodology, resulting in the publication of the GSN Community Standard in 2011. From 2014, maintenance of the GSN standard moved under the auspices of theSCSC'sAssurance Case Working Group.[8]As at 2022, the standard has reached Version 3.[1]
Charles Haddon-Cavein his review of theNimrod accidentcommented that the top goal of a GSN argument can drive a conclusion that is already assumed, such as that a platform is deemed acceptably safe. This could lead to the safety case becoming a "self-fulfilling prophesy", giving a "warm sense of over-confidence" rather than highlighting uncertainties, gaps in knowledge or areas where the mitigation argument was not straightforward.[4]This had already been recognised by Habli and Kelly, who warned that a GSN diagram was just a depiction, not the safety case itself, and likened it to Magritte's paintingThe Treachery of Images.[9]Haddon-Cave also criticised the practice of consultants producing "outsize GSN charts" that could be yards long and became an end in themselves rather than an aid to structured thinking.
|
https://en.wikipedia.org/wiki/Goal_structuring_notation
|
IDEF6, or Integrated Definition for Design Rationale Capture, is a method to facilitate the acquisition, representation, and manipulation of the design rationale used in the development of enterprise systems. The method, which aims to capture the motives that drive the decision-making process, is still in development.[2] Rationale is the reason, justification, underlying motivation, or excuse that moved the designer to select a particular strategy or design feature. More simply, rationale is the answer to the question, "Why is this design being done in this manner?" Most design methods focus on what the design is (i.e., on the final product), rather than on why the design is the way it is.[1]
IDEF6 is part of theIDEFfamily ofmodeling languagesin the field ofsystemsandsoftware engineering.
When explicitly captured,design rationaletypically exists in the form of unstructured textual comments. In addition to making it difficult, if not impossible to find relevant information on demand, lack of a structured method for organizing and providing completeness criteria for design rationale capture makes it unlikely that important information will be documented. Unlike design methods which serve to document WHAT a design is (Design Specification), the IDEF6 Design Rationale Capture Method is targeted at capturing:[3]
IDEF6 was intended to be a method with the representational capability to capture information system design rationale and associate that rationale with the design models and documentation for the end system. Thus, IDEF6 attempts to capture the logic underlying the decisions contributing to, or resulting in, the final design. The explicit capture of design rationale serves to help avoid repeating past mistakes, provides a direct means for determining the impact of proposed design changes, forces the explicit statement of goals and assumptions, and aids in the communication of final system specifications.[3]
IDEF6 will be a method that possesses the conceptual resources and linguistic capabilities needed[4]
The scope of IDEF6 applicability covers all phases of the information system development process, from initial conceptualization through both preliminary and detailed design activities. To the extent that detailed design decisions for software systems are relegated to the coding phase, the IDEF6 technique should be usable during the software construction process as well.[4]
Design rationale becomes important when a design decision is not completely determined by the constraints of the situation. Thus, decision points must be identified, the situations and constraints associated with those decision points must be defined, and if options exist, the rationale for the chosen option and for discarding other options (i.e., those design options not chosen) must be recorded. The task of capturing design rationale serves the following purposes:
Rationale capture is applicable to all phases of the system development process. The intended users of IDEF6 include business system engineers, information systems designers, software designers, systems development project managers, and programmers.
Design rationale (why and how) can be contrasted with the related notions of design specification (what) and design history (steps taken). Design specifications describe what intent should be realized in the final physical artifact. Design rationale describes why the design specification is the way it is. This includes such information as principles and philosophy of operation, models of correct behavior, and models of how the artifact behaves as it fails. The design process history records the steps that were taken, the plans and expectations that led up to these steps, and the results of each step.[1]
In IDEF6, the rationale capture procedure involves partitioning, classification/ specification, assembly, simulation/execution, and re-partitioning activities. The rationale capture procedure normally applied in the simulation/execution activity of the evolving design uses two phases: Phase I describes the problem and Phase II develops a solution strategy.[1]
Design is an iterative procedure involving partitioning, classification/specification, assembly, simulation, and re-partitioning activities, see Figure. First, the design is partitioned into design artifacts. Each artifact is either classified against existing design artifacts or an external specification is developed for it. The external specification enables the internal specification of the design artifact to be delegated and performed concurrently. After classification/specification, the interfaces between the design artifacts are specified in the assembly activity (i.e., static, dynamic, and behavioral models detailing different aspects of the interaction between design artifacts are developed). While the models are developed, it is important to simulate use scenarios or use cases[5] between design artifacts to uncover design flaws. By analyzing these flaws, the designer can re-arrange the existing models and simulate them until the designer is satisfied. The observed design flaws and the actions contemplated and taken for each are the basis of the design rationale capture procedure.[1]
The designer identifies problems in the current design state by stepping through the use cases in the requirements model to validate that the design satisfies requirements and to verify that the design will function as intended. The designer records symptoms or concerns about the current design state. A symptom is an observation of an operational failure or undesirable condition in the existing design. A concern is an observation of an anticipated failure or undesirable condition in the existing design.[1]
The designer then identifies the constraints that the problems violate or potentially violate. These constraints include requirements, goals, physical laws, conventions, assumptions, models, and resources. Because the activities and processes in the use case scenarios map to requirements and goals, the failure of the design in any use case activity or process can be traced directly to requirements statements and goal statements.[1]
The designer then identifies the necessary conditions or needs for solving the problems. A need is a necessary condition that must be met if a particular problem or set of problems is to be solved. It is possible that the needs statement will have to describe the essentiality for relaxing requirements and goal constraints governing the design.[1]
Once the needs for the design transition have been identified, the designer formulates[1]
A requirement is a constraint on either the functional, behavioral, physical, or method of development aspects of a solution. A design goal is a stated aim that the design structure and specifications must support.
Once the requirements and goals have been established, the design team formulates alternative strategies for exploration in the next major transition in the design.[1]
Design strategies can be considered as “meta-plans” for dealing with frequently occurring design situations. They can be viewed as methodizations or organizations of the primitive design activities identified above (i.e., partitioning, classification/specification, assembly, simulation, and re-partitioning). The three types of design strategies considered in the IDEF4 rationale component include:
In summary, design as a cognitive endeavor shares many characteristics with other activities such as planning and diagnosis. But, design is distinguished by the context in which it is performed, the generic activities involved, the strategies employed, and the types of knowledge applied. A major distinguishing characteristic is the focus of the design process on the creation (refinement, analysis, etc.) of a specification of the end product.[1]
|
https://en.wikipedia.org/wiki/IDEF6
|
Problem structuring methods(PSMs) are a group of techniques used tomodelor tomapthe nature or structure of a situation orstate of affairsthat some people want to change.[1]PSMs are usually used by a group of people incollaboration(rather than by a solitary individual) to create aconsensusabout, or at least to facilitatenegotiationsabout, what needs to change.[2]Some widely adopted PSMs[1]include
Unlike someproblem solvingmethods that assume that all the relevant issues and constraints and goals that constitute the problem are defined in advance or are uncontroversial, PSMs assume that there is no single uncontested representation of what constitutes the problem.[6]
PSMs are mostly used with groups of people, but PSMs have also influenced thecoachingandcounselingof individuals.[7]
The term "problem structuring methods" as a label for these techniques began to be used in the 1980s in the field ofoperations research,[8]especially after the publication of the bookRational Analysis for a Problematic World: Problem Structuring Methods for Complexity, Uncertainty and Conflict.[9]Some of the methods that came to be called PSMs had been in use since the 1960s.[2]
Thinkers who later came to be recognized as significant early contributors to the theory and practice of PSMs include:[10]
In discussions of problem structuring methods, it is common to distinguish between two different types of situations that could be considered to be problems.[17]Rittel and Webber's distinction between tame problems andwicked problems(Rittel & Webber 1973) is a well known example of such types.[17]The following table lists similar (but not exactly equivalent) distinctions made by a number of thinkers between two types of "problem" situations, which can be seen as a continuum between a left and right extreme:[18]
Tame problems(or puzzles or technical challenges) have relatively precise, straightforward formulations that are often amenable to solution with some predetermined technical fix or algorithm. It is clear when these situations have changed in such a way that the problem can be called solved.
Wicked problems(or messes or adaptive challenges) have multiple interacting issues with multiplestakeholdersand uncertainties and no definitive formulation. These situations are complex and have nostopping ruleand no ultimate test of a solution.
PSMs were developed for situations that tend toward the wicked or "soft" side, when methods are needed that assistargumentationabout, or that generate mutual understanding of multiple perspectives on, a complex situation.[17]Other problem solving methods are better suited to situations toward the tame or "hard" side where a reliable and optimal solution is needed to a problem that can be clearly and uncontroversially defined.
Problem structuring methods constitute a family of approaches that have differing purposes and techniques, and many of them had been developed independently before people began to notice their family resemblance.[17]Several scholars have noted the common and divergent characteristics among PSMs.
Eden and Ackermann identified four characteristics that problem structuring methods have in common:[19]
Rosenhead provided another list of common characteristics of PSMs, formulated in a more prescriptive style:[20]
An early literature review of problem structuring proposed grouping the texts reviewed into "four streams of thought" that describe some major differences between methods:[21]
Mingers and Rosenhead have noted that there are similarities and differences between PSMs andlarge group methodssuch as Future Search,Open Space Technology, and others.[22]PSMs and large group methods both bring people together to talk about, and to share different perspectives on, a situation or state of affairs that some people want to change. However, PSMs always focus on creating a sufficiently rigorousconceptual modelorcognitive mapof the situation, whereas large group methods do not necessarily emphasize modeling, and PSMs are not necessarily used with large groups of people.[22]
There is significant overlap or shared characteristics between PSMs and some of the techniques used inparticipatory rural appraisal(PRA). Mingers and Rosenhead pointed out that in situations where people have low literacy, the nonliterate (oral and visual) techniques developed in PRA would be a necessary complement to PSMs, and the approaches to modeling in PSMs could be (and have been) used by practitioners of PRA.[23]
In 2004, Mingers and Rosenhead published a literature review of papers that had been published inscholarly journalsand that reported practical applications of PSMs.[24]Their literature survey covered the period up to 1998, which was "relatively early in the development of interest in PSMs",[25]and categorized 51 reported applications under the following application areas: general organizational applications; information systems; technology, resources, planning; health services; and general research. Examples of applications reported included: designing a parliamentary briefing system, modeling theSan Francisco Zoo, developing abusiness strategyandinformation systemstrategy, planning livestock management in Nepal, regional planning in South Africa, modeling hospital outpatient services, and eliciting knowledge about pesticides.[24]
PSMs are a generalmethodologyand are not necessarily dependent on electronicinformation technology,[26]but PSMs do rely on some kind ofshared displayof the models that participants are developing. The shared display could beflip charts, a largewhiteboard,Post-it noteson the meeting room walls, and/or apersonal computerconnected to avideo projector.[26]After PSMs have been used in a group work session, it is normal for a record of the session's display to be shared with participants and with other relevant people.[26]
Software programs for supporting problem structuring include Banxia Decision Explorer and Group Explorer,[27]which implementcognitive mappingfor strategic options development and analysis (SODA), andCompendium, which implementsIBISfordialogue mappingand related methods;[28]a similar program is called Wisdom.[29]Such software can serve a variety of functions, such as simple technical assistance to the group facilitator during a single event, or more long-term online groupdecision support systems.
Some practitioners prefer not to use computers during group work sessions because of the effect they have ongroup dynamics, but such use of computers is standard in some PSMs such as SODA[27]and dialogue mapping,[28]in which computer display of models or maps is intended to guide conversation in the most efficient way.[26]
In some situations additional software that is not used only for PSMs may be incorporated into the problem structuring process; examples includespreadsheetmodeling,system dynamics software[30]orgeographic information systems.[31]Some practitioners, who have focused on buildingsystem dynamicssimulation models with groups of people, have called their workgroup model building(GMB) and have concluded "that GMB is another PSM".[32]GMB has also been used in combination with SODA.[33]
|
https://en.wikipedia.org/wiki/Problem_structuring_methods
|
Justification(also calledepistemic justification) is a property ofbeliefsthat fulfill certain norms about what a person should believe.[1][2]Epistemologistsoften identify justification as a component of knowledge distinguishing it from mere true opinion.[3]They study the reasons why someone holds a belief.[4]Epistemologists are concerned with various features of belief, which include the ideas of warrant (a proper justification for holding a belief),knowledge,rationality, andprobability, among others.
Debates surrounding epistemic justification often involve thestructureof justification, including whether there are foundational justified beliefs or whether merecoherenceis sufficient for a system of beliefs to qualify as justified. Another major subject of debate is the sources of justification, which might includeperceptual experience(the evidence of the senses),reason, and authoritativetestimony, among others.
"Justification" involves the reasons why someone holds abeliefthat oneshouldhold based on one's current evidence.[4]Justification is a property of beliefs insofar as they are held blamelessly. In other words, a justified belief is a belief that a person is entitled to hold.
Many philosophers from Plato onward have treated "justified true belief" (JTB) as constituting knowledge. It is particularly associated with a theory discussed in his dialoguesMenoandTheaetetus. While in fact Plato seems to disavow justified true belief as constituting knowledge at the end ofTheaetetus, the claim that Plato unquestioningly accepted this view of knowledge stuck until the proposal of theGettier problem.[4]
The subject of justification has played a major role in the value of knowledge as "justified true belief".[citation needed]Some contemporary epistemologists, such asJonathan Kvanvig, assert that justification isn't necessary in getting to the truth and avoiding errors. Kvanvig attempts to show that knowledge is no more valuable than true belief, and in the process dismissed the necessity of justification due to justification not being connected to the truth.[citation needed]
William P. Alston identifies two conceptions of justification.[5]: 15–16 One conception is "deontological" justification, which holds that justification evaluates the obligation and responsibility of a person to hold only true beliefs. This conception implies, for instance, that a person who has made his best effort but is incapable of concluding the correct belief from his evidence is still justified. The deontological conception of justification corresponds to epistemic internalism. Another conception is "truth-conducive" justification, which holds that justification is based on having sufficient evidence or reasons that entail that the belief is at least likely to be true. The truth-conducive conception of justification corresponds to epistemic externalism.
There are several different views as to what constitutes justification, mostly focusing on the question "How are beliefs justified?" Different theories of justification require different conditions before a belief can be considered justified. Theories of justification generally include other aspects of epistemology, such as defining knowledge.
Notable theories of justification include:
Robert Fogelinclaims to detect a suspicious resemblance between the theories of justification andAgrippa's five modes leading to the suspension of belief. He concludes that the modern proponents have made no significant progress in responding to the ancient modes ofPyrrhonian skepticism.[6]
William P. Alstoncriticizes the very idea of a theory of justification. He claims: "There isn't any unique, epistemically crucial property of beliefs picked out by 'justified'. Epistemologists who suppose the contrary have been chasing a will-o'-the-wisp. What has really been happening is this. Different epistemologists have been emphasizing, concentrating on, "pushing" different epistemic desiderata, different features of belief that are positively valuable from the standpoint of the aims of cognition."[5]: 22
|
https://en.wikipedia.org/wiki/Theory_of_justification
|
ERIL(Entity-Relationship and Inheritance Language) is avisual languagefor representing the data structure of a computer system.
As its name suggests, ERIL is based onentity-relationshipdiagrams andclass diagrams.
ERIL combines therelationalandobject-orientedapproaches todata modeling.
ERIL can be seen as a set of guidelines aimed at improving the readability of structure diagrams.
These guidelines were borrowed fromDRAKON, a variant offlowchartscreated within the Russian space program.
ERIL itself was developed by Stepan Mitkin.
The ERIL guidelines for drawing diagrams:
A class (table) in ERIL can have several indexes.
Each index in ERIL can include one or more fields, similar to indexes inrelational databases.
ERIL indexes are logical. They can optionally be implemented by real data structures.
Links between classes (tables) in ERIL are implemented by the so-called "link" fields.
Link fields can be of different types according to the link type:
Example: there is a one-to-many link between Documents and Lines. One Document can have many Lines. Then the Document.Lines field is a collection of references to the lines that belong to the document. Line.Document is a reference to the document that contains the line.
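As a purely illustrative sketch (ERIL itself is a notation and prescribes no particular implementation), the Document/Lines link fields from this example could be realized in memory roughly as follows; the class and attribute names simply mirror the example.
```python
# One possible in-memory realization of the Document/Lines one-to-many link:
# Document.lines is a collection of references, Line.document a back-reference.
# ERIL leaves the physical storage of such link fields open.
class Document:
    def __init__(self, number):
        self.number = number
        self.lines = []              # link field: collection of Line references

class Line:
    def __init__(self, document, text):
        self.document = document     # link field: reference to the owning Document
        self.text = text
        document.lines.append(self)  # keep both ends of the link consistent

doc = Document(number=1)
Line(doc, "first line")
Line(doc, "second line")
print([line.text for line in doc.lines], doc.lines[0].document.number)
```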
Link fields are also logical. They may or may not be implemented physically in the system.
ERIL is supposed to model any kind of data regardless of the storage.
The same ERIL diagram can represent data stored in arelational database, in aNoSQLdatabase,XMLfile or in the memory.
ERIL diagrams serve two purposes.
The primary purpose is to explain the data structure of an existing or future system or component.
The secondary purpose is to automatically generate source code from the model.
Code that can be generated includes specialized collection classes, hash and comparison functions, data retrieval and modification procedures,SQL data-definitioncode, etc. Code generated from ERIL diagrams can ensure referential and uniquenessdata integrity.
Serialization code of different kinds can also be automatically generated.
In some ways ERIL can be compared toobject-relational mappingframeworks.
|
https://en.wikipedia.org/wiki/ERIL
|
Dynamics of Markovian particles (DMP) is the basis of a theory for kinetics of particles in open heterogeneous systems. It can be looked upon as an application of the notion of stochastic process conceived as a physical entity; e.g., the particle moves because there is a transition probability acting on it.
Two particular features of DMP might be noticed: (1) an ergodic-like relation between the motion of a particle and the corresponding steady state, and (2) the classic notion of geometric volume appears nowhere (e.g., a concept such as flow of "substance" is not expressed as liters per time unit but as number of particles per time unit).
Although primitive, DMP has been applied for solving a classicparadoxof the absorption ofmercurybyfishand bymollusks. The theory has also been applied for a purelyprobabilisticderivation of the fundamental physical principle:conservation of mass; this might be looked upon as a contribution to the old and ongoing discussion of the relation betweenphysicsandprobability theory.
|
https://en.wikipedia.org/wiki/Dynamics_of_Markovian_particles
|
In numerical methods for stochastic differential equations, the Markov chain approximation method (MCAM) belongs to the several numerical schemes used in stochastic control theory. Simply adapting deterministic schemes, such as the Runge–Kutta method, to stochastic models does not work.
It is a powerful and widely applicable set of ideas for numerical and other approximation problems in stochastic processes.[1][2] These ideas represent counterparts of methods from deterministic control theory, such as optimal control theory.[3]
The basic idea of the MCAM is to approximate the original controlled process by a chosen controlled Markov process on a finite state space. If needed, one must also approximate the cost function by one that matches the Markov chain chosen to approximate the original stochastic process.
|
https://en.wikipedia.org/wiki/Markov_chain_approximation_method
|
Markov chain geostatisticsusesMarkov chainspatial models,simulationalgorithmsand associated spatialcorrelationmeasures (e.g.,transiogram) based on the Markov chain random field theory, which extends a singleMarkov chaininto a multi-dimensional random field forgeostatistical modeling. A Markov chain random field is still a single spatial Markov chain. The spatial Markov chain moves or jumps in a space and decides its state at any unobserved location through interactions with its nearest known neighbors in different directions. The data interaction process can be well explained as a local sequential Bayesian updating process within a neighborhood. Because single-step transition probabilitymatricesare difficult to estimate from sparsesampledata and are impractical in representing the complex spatialheterogeneityof states, thetransiogram, which is defined as atransition probabilityfunctionover the distance lag, is proposed as the accompanying spatial measure of Markov chain random fields.
|
https://en.wikipedia.org/wiki/Markov_chain_geostatistics
|
In probability theory, the mixing time of a Markov chain is the time until the Markov chain is "close" to its steady state distribution.
More precisely, a fundamental result about Markov chains is that a finite state irreducible aperiodic chain has a unique stationary distribution π and, regardless of the initial state, the time-t distribution of the chain converges to π as t tends to infinity. Mixing time refers to any of several variant formalizations of the idea: how large must t be until the time-t distribution is approximately π? One variant, total variation distance mixing time, is defined as the smallest t such that the total variation distance of probability measures is small:
Choosing a different ε, as long as ε < 1/2, can only change the mixing time up to a constant factor (depending on ε), and so one often fixes ε = 1/4 and simply writes t_mix.
This is the sense in which Dave Bayer and Persi Diaconis (1992) proved that the number of riffle shuffles needed to mix an ordinary 52-card deck is 7. Mathematical theory focuses on how mixing times change as a function of the size of the structure underlying the chain. For an n-card deck, the number of riffle shuffles needed grows as 1.5 log₂ n. The most developed theory concerns randomized algorithms for #P-complete algorithmic counting problems such as the number of graph colorings of a given n-vertex graph. Such problems can, for a sufficiently large number of colors, be answered using the Markov chain Monte Carlo method and showing that the mixing time grows only as n log(n) (Jerrum 1995). This example and the shuffling example possess the rapid mixing property, that the mixing time grows at most polynomially fast in log(number of states of the chain). Tools for proving rapid mixing include arguments based on conductance and the method of coupling. In broader uses of the Markov chain Monte Carlo method, rigorous justification of simulation results would require a theoretical bound on mixing time, and many interesting practical cases have resisted such theoretical analysis.
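As a concrete illustration, the total-variation mixing time of a small chain can be computed by brute force: raise the transition matrix to successive powers and stop when the worst-case distance to π drops below ε = 1/4. The three-state transition matrix below is an assumed toy example, not taken from the text.
```python
# Brute-force total-variation mixing time t_mix(eps) for a small chain:
# the smallest t with max_x || P^t(x, .) - pi ||_TV <= eps.
import numpy as np

P = np.array([[0.9, 0.1, 0.0],     # assumed toy transition matrix
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])

# Stationary distribution: left eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

def mixing_time(P, pi, eps=0.25, t_max=10_000):
    Pt = np.eye(len(pi))
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        tv = 0.5 * np.abs(Pt - pi).sum(axis=1).max()   # worst-case start state
        if tv <= eps:
            return t
    return None

print("t_mix(1/4) =", mixing_time(P, pi))
```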
|
https://en.wikipedia.org/wiki/Markov_chain_mixing_time
|
In the mathematical theory of Markov chains, the Markov chain tree theorem is an expression for the stationary distribution of a Markov chain with finitely many states. It sums up terms for the rooted spanning trees of the Markov chain, with a positive combination for each tree. The Markov chain tree theorem is closely related to Kirchhoff's theorem on counting the spanning trees of a graph, from which it can be derived.[1] It was first stated by Hill (1966), for certain Markov chains arising in thermodynamics,[1][2] and proved in full generality by Leighton & Rivest (1986), motivated by an application in limited-memory estimation of the probability of a biased coin.[1][3]
A finite Markov chain consists of a finite set of states, and a transition probability p_{i,j} for changing from state i to state j, such that for each state the outgoing transition probabilities sum to one. From an initial choice of state (which turns out to be irrelevant to this problem), each successive state is chosen at random according to the transition probabilities from the previous state. A Markov chain is said to be irreducible when every state can reach every other state through some sequence of transitions, and aperiodic if, for every state, the possible numbers of steps in sequences that start and end in that state have greatest common divisor one. An irreducible and aperiodic Markov chain necessarily has a stationary distribution, a probability distribution on its states that describes the probability of being in a given state after many steps, regardless of the initial choice of state.[1]
The Markov chain tree theorem considers spanning trees for the states of the Markov chain, defined to be trees, directed toward a designated root, in which all directed edges are valid transitions of the given Markov chain. If a transition from state i to state j has transition probability p_{i,j}, then a tree T with edge set E(T) is defined to have weight equal to the product of its transition probabilities: w(T) = ∏_{(i,j)∈E(T)} p_{i,j}. Let 𝒯_i denote the set of all spanning trees having state i at their root. Then, according to the Markov chain tree theorem, the stationary probability π_i for state i is proportional to the sum of the weights of the trees rooted at i. That is, π_i = (1/Z) ∑_{T∈𝒯_i} w(T), where the normalizing constant Z is the sum of w(T) over all spanning trees.[1]
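The theorem can be checked directly on a tiny chain by enumerating all spanning trees directed toward each root and comparing the normalized tree weights with the stationary distribution obtained from the left eigenvector. The transition matrix below is an assumed example; the brute-force enumeration is only practical for a handful of states.
```python
# Brute-force check of the Markov chain tree theorem on a 3-state chain:
# pi_i is proportional to the total weight of spanning trees rooted at i.
from itertools import product
import numpy as np

P = np.array([[0.5, 0.3, 0.2],    # assumed toy transition matrix
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
n = len(P)

def tree_weight_sum(root):
    total = 0.0
    others = [v for v in range(n) if v != root]
    # Give each non-root state one outgoing edge (its "parent" choice).
    for parents in product(range(n), repeat=len(others)):
        parent = dict(zip(others, parents))
        if any(v == p for v, p in parent.items()):
            continue
        # Every non-root state must reach the root without hitting a cycle.
        ok = True
        for v in others:
            seen, cur = set(), v
            while cur != root:
                if cur in seen:
                    ok = False
                    break
                seen.add(cur)
                cur = parent[cur]
            if not ok:
                break
        if ok:
            w = 1.0
            for v, p in parent.items():
                w *= P[v, p]        # weight = product of transition probabilities
            total += w
    return total

weights = np.array([tree_weight_sum(i) for i in range(n)])
pi_trees = weights / weights.sum()

vals, vecs = np.linalg.eig(P.T)
pi_eig = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi_eig /= pi_eig.sum()
print(pi_trees.round(4), pi_eig.round(4))   # the two vectors should agree
```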
|
https://en.wikipedia.org/wiki/Markov_chain_tree_theorem
|
In mathematics, aMarkov odometeris a certain type oftopological dynamical system. It plays a fundamental role inergodic theoryand especially inorbit theory of dynamical systems, since a theorem ofH. Dyeasserts that everyergodicnonsingular transformationis orbit-equivalent to a Markov odometer.[1]
The basic example of such system is the "nonsingular odometer", which is an additivetopological groupdefined on theproduct spaceofdiscrete spaces, induced by addition defined asx↦x+1_{\displaystyle x\mapsto x+{\underline {1}}}, where1_:=(1,0,0,…){\displaystyle {\underline {1}}:=(1,0,0,\dots )}. This group can be endowed with the structure of adynamical system; the result is aconservative dynamical system.
The general form, which is called "Markov odometer", can be constructed throughBratteli–Vershik diagramto defineBratteli–Vershik compactumspace together with a corresponding transformation.
Several kinds of non-singular odometers may be defined.[2]These are sometimes referred to asadding machines.[3]The simplest is illustrated with theBernoulli process. This is the set of all infinite strings in two symbols, here denoted byΩ={0,1}N{\displaystyle \Omega =\{0,1\}^{\mathbb {N} }}endowed with theproduct topology. This definition extends naturally to a more general odometer defined on theproduct space
for some sequence of integers(kn){\displaystyle (k_{n})}with eachkn≥2.{\displaystyle k_{n}\geq 2.}
The odometer forkn=2{\displaystyle k_{n}=2}for alln{\displaystyle n}is termed thedyadic odometer, thevon Neumann–Kakutani adding machineor thedyadic adding machine.
Thetopological entropyof every adding machine is zero.[3]Any continuous map of an interval with a topological entropy of zero is topologically conjugate to an adding machine, when restricted to its action on the topologically invariant transitive set, with periodic orbits removed.[3]
The set of all infinite strings in strings in two symbolsΩ={0,1}N{\displaystyle \Omega =\{0,1\}^{\mathbb {N} }}has a natural topology, theproduct topology, generated by thecylinder sets. The product topology extends to a Borelsigma-algebra; letB{\displaystyle {\mathcal {B}}}denote that algebra. Individual pointsx∈Ω{\displaystyle x\in \Omega }are denoted asx=(x1,x2,x3,⋯).{\displaystyle x=(x_{1},x_{2},x_{3},\cdots ).}
The Bernoulli process is conventionally endowed with a collection ofmeasures, the Bernoulli measures, given byμp(xn=1)=p{\displaystyle \mu _{p}(x_{n}=1)=p}andμp(xn=0)=1−p{\displaystyle \mu _{p}(x_{n}=0)=1-p}, for some0<p<1{\displaystyle 0<p<1}independent ofn{\displaystyle n}. The value ofp=1/2{\displaystyle p=1/2}is rather special; it corresponds to the special case of theHaar measure, whenΩ{\displaystyle \Omega }is viewed as acompactAbelian group. Note that the Bernoulli measure isnotthe same as the 2-adic measure on thedyadic integers! Formally, one can observe thatΩ{\displaystyle \Omega }is also the base space for the dyadic integers; however, the dyadic integers are endowed with ametric, the p-adic metric, which induces ametric topologydistinct from the product topology used here.
The spaceΩ{\displaystyle \Omega }can be endowed with addition, defined as coordinate addition, with a carry bit. That is, for each coordinate, let(x+y)n=xn+yn+εnmod2{\displaystyle (x+y)_{n}=x_{n}+y_{n}+\varepsilon _{n}\,{\bmod {\,}}2}whereε0=0{\displaystyle \varepsilon _{0}=0}and
inductively. Increment-by-one is then called the (dyadic) odometer. It is the transformation T : Ω → Ω given by T(x) = x + 1, where 1 := (1, 0, 0, …). It is called the odometer due to how it looks when it "rolls over": T is the transformation T(1, …, 1, 0, x_{k+1}, x_{k+2}, …) = (0, …, 0, 1, x_{k+1}, x_{k+2}, …). Note that T⁻¹(0, 0, ⋯) = (1, 1, ⋯) and that T is B-measurable, that is, T⁻¹(σ) ∈ B for all σ ∈ B.
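A minimal sketch of this roll-over behaviour on a finite truncation of a binary sequence (an approximation only, since the odometer is defined on infinite sequences):
```python
# Dyadic odometer T(x) = x + (1, 0, 0, ...) with carry, on a finite truncation
# of a binary sequence; the leading run of 1s rolls over to 0s.
def odometer(x):
    y = list(x)
    for k, bit in enumerate(y):
        if bit == 0:
            y[k] = 1          # first 0 absorbs the carry and becomes 1
            return y
        y[k] = 0              # a 1 plus the carry rolls over to 0
    return y                  # an all-ones truncation wraps to all zeros

x = [1, 1, 0, 1, 0, 0]
for _ in range(3):
    print(x)
    x = odometer(x)           # [1,1,0,...] -> [0,0,1,...] -> [1,0,1,...] -> ...
```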
The transformationT{\displaystyle T}isnon-singularfor everyμp{\displaystyle \mu _{p}}. Recall that a measurable transformationτ:Ω→Ω{\displaystyle \tau :\Omega \to \Omega }is non-singular when, givenσ∈B{\displaystyle \sigma \in {\mathcal {B}}}, one has thatμ(τ−1σ)=0{\displaystyle \mu (\tau ^{-1}\sigma )=0}if and only ifμ(σ)=0{\displaystyle \mu (\sigma )=0}. In this case, one finds
whereφ(x)=min{n∈N∣xn=0}−2{\displaystyle \varphi (x)=\min \left\{n\in \mathbb {N} \mid x_{n}=0\right\}-2}. HenceT{\displaystyle T}is nonsingular with respect toμp{\displaystyle \mu _{p}}.
The transformationT{\displaystyle T}isergodic. This follows because, for everyx∈Ω{\displaystyle x\in \Omega }and natural numbern{\displaystyle n}, the orbit ofx{\displaystyle x}underT0,T1,⋯,T2n−1{\displaystyle T^{0},T^{1},\cdots ,T^{2^{n}-1}}is the set{0,1}n{\displaystyle \{0,1\}^{n}}. This in turn implies thatT{\displaystyle T}isconservative, since every invertible ergodic nonsingular transformation in anonatomic spaceis conservative.
Note that for the special case ofp=1/2{\displaystyle p=1/2}, that(Ω,B,μ1/2,T){\displaystyle \left(\Omega ,{\mathcal {B}},\mu _{1/2},T\right)}is ameasure-preserving dynamical system.
The same construction enables to define such a system for everyproductofdiscrete spaces. In general, one writes
forAn=Z/mnZ={0,1,…,mn−1}{\displaystyle A_{n}=\mathbb {Z} /m_{n}\mathbb {Z} =\{0,1,\dots ,m_{n}-1\}}withmn≥2{\displaystyle m_{n}\geq 2}an integer. The product topology extends naturally to the product Borel sigma-algebraB{\displaystyle {\mathcal {B}}}onΩ{\displaystyle \Omega }. Aproduct measureonB{\displaystyle {\mathcal {B}}}is conventionally defined asμ=∏n∈Nμn,{\displaystyle \textstyle \mu =\prod _{n\in \mathbb {N} }\mu _{n},}given some measureμn{\displaystyle \mu _{n}}onAn{\displaystyle A_{n}}. The corresponding map is defined by
wherek{\displaystyle k}is the smallest index for whichxk≠mk−1{\displaystyle x_{k}\neq m_{k}-1}. This is again a topological group.
A special case of this is theOrnstein odometer, which is defined on the space
with the measure a product of
A concept closely related to the conservative odometer is that of theabelian sandpile model. This model replaces the directed linear sequence of finite groups constructed above by an undirected graph(V,E){\displaystyle (V,E)}of vertexes and edges. At each vertexv∈V{\displaystyle v\in V}one places a finite groupZ/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} }withn=deg(v){\displaystyle n=deg(v)}thedegreeof the vertexv{\displaystyle v}. Transition functions are defined by thegraph Laplacian. That is, one can increment any given vertex by one; when incrementing the largest group element (so that it increments back down to zero), each of the neighboring vertexes are incremented by one.
Sandpile models differ from the above definition of a conservative odometer in three different ways. First, in general, there is no unique vertex singled out as the starting vertex, whereas in the above, the first vertex is the starting vertex; it is the one that is incremented by the transition function. Next, the sandpile models in general use undirected edges, so that the wrapping of the odometer redistributes in all directions. A third difference is that sandpile models are usually not taken on an infinite graph, and that rather, there is one special vertex singled out, the "sink", which absorbs all increments and never wraps. The sink is equivalent to cutting away the infinite parts of an infinite graph, and replacing them by the sink; alternately, as ignoring all changes past that termination point.
LetB=(V,E){\displaystyle B=(V,E)}be an orderedBratteli–Vershik diagram, consists on a set of vertices of the form∐n∈NV(n){\displaystyle \textstyle \coprod _{n\in \mathbb {N} }V^{(n)}}(disjoint union) whereV0{\displaystyle V^{0}}is a singleton and on a set of edges∐n∈NE(n){\displaystyle \textstyle \coprod _{n\in \mathbb {N} }E^{(n)}}(disjoint union).
The diagram includes source surjection-mappingssn:E(n)→V(n−1){\displaystyle s_{n}:E^{(n)}\to V^{(n-1)}}and range surjection-mappingsrn:E(n)→V(n){\displaystyle r_{n}:E^{(n)}\to V^{(n)}}. We assume thate,e′∈E(n){\displaystyle e,e'\in E^{(n)}}are comparable if and only ifrn(e)=rn(e′){\displaystyle r_{n}(e)=r_{n}(e')}.
For such diagram we look at the product spaceE:=∏n∈NE(n){\displaystyle \textstyle E:=\prod _{n\in \mathbb {N} }E^{(n)}}equipped with theproduct topology. Define "Bratteli–Vershik compactum" to be the subspace of infinite paths,
Assume there exists only one infinite pathxmax=(xn)n∈N{\displaystyle x_{\max }=(x_{n})_{n\in \mathbb {N} }}for which eachxn{\displaystyle x_{n}}is maximal and similarly one infinite pathxmin{\displaystyle x_{\text{min}}}. Define the "Bratteli-Vershik map"TB:XB→XB{\displaystyle T_{B}:X_{B}\to X_{B}}byT(xmax)=xmin{\displaystyle T(x_{\max })=x_{\min }}and, for anyx=(xn)n∈N≠xmax{\displaystyle x=(x_{n})_{n\in \mathbb {N} }\neq x_{\max }}defineTB(x1,…,xk,xk+1,…)=(y1,…,yk,xk+1,…){\displaystyle T_{B}(x_{1},\dots ,x_{k},x_{k+1},\dots )=(y_{1},\dots ,y_{k},x_{k+1},\dots )}, wherek{\displaystyle k}is the first index for whichxk{\displaystyle x_{k}}is not maximal and accordingly let(y1,…,yk){\displaystyle (y_{1},\dots ,y_{k})}be the unique path for whichy1,…,yk−1{\displaystyle y_{1},\dots ,y_{k-1}}are all maximal andyk{\displaystyle y_{k}}is the successor ofxk{\displaystyle x_{k}}. ThenTB{\displaystyle T_{B}}ishomeomorphismofXB{\displaystyle X_{B}}.
LetP=(P(1),P(2),…){\displaystyle P=\left(P^{(1)},P^{(2)},\dots \right)}be a sequence ofstochastic matricesP(n)=(p(v,e)∈Vn−1×E(n)(n)){\displaystyle P^{(n)}=\left(p_{(v,e)\in V^{n-1}\times E^{(}n)}^{(n)}\right)}such thatpv,e(n)>0{\displaystyle p_{v,e}^{(n)}>0}if and only ifv=sn(e){\displaystyle v=s_{n}(e)}. Define "Markov measure" on the cylinders ofXB{\displaystyle X_{B}}byμP([e1,…,en])=ps1(e1),e1(1)⋯psn(en),en(n){\displaystyle \mu _{P}([e_{1},\dots ,e_{n}])=p_{s_{1}(e_{1}),e_{1}}^{(1)}\cdots p_{s_{n}(e_{n}),e_{n}}^{(n)}}. Then the system(XB,B,μP,TB){\displaystyle \left(X_{B},{\mathcal {B}},\mu _{P},T_{B}\right)}is called a "Markov odometer".
One can show that the nonsingular odometer is a Markov odometer where all theV(n){\displaystyle V^{(n)}}are singletons.
|
https://en.wikipedia.org/wiki/Markov_odometer
|
In probability theory and ergodic theory, a Markov operator is an operator on a certain function space that conserves the mass (the so-called Markov property). If the underlying measurable space is topologically sufficiently rich, then the Markov operator admits a kernel representation. Markov operators can be linear or non-linear. Closely related to Markov operators is the Markov semigroup.[1]
The definition of Markov operators is not entirely consistent in the literature. Markov operators are named after the Russian mathematicianAndrey Markov.
Let(E,F){\displaystyle (E,{\mathcal {F}})}be ameasurable spaceandV{\displaystyle V}a set of real, measurable functionsf:(E,F)→(R,B(R)){\displaystyle f:(E,{\mathcal {F}})\to (\mathbb {R} ,{\mathcal {B}}(\mathbb {R} ))}.
A linear operatorP{\displaystyle P}onV{\displaystyle V}is aMarkov operatorif the following is true[1]: 9–12
Some authors define the operators on theLpspacesasP:Lp(X)→Lp(Y){\displaystyle P:L^{p}(X)\to L^{p}(Y)}and replace the first condition (bounded, measurable functions on such) with the property[2][3]
LetP={Pt}t≥0{\displaystyle {\mathcal {P}}=\{P_{t}\}_{t\geq 0}}be a family of Markov operators defined on the set of bounded, measurables function on(E,F){\displaystyle (E,{\mathcal {F}})}. ThenP{\displaystyle {\mathcal {P}}}is aMarkov semigroupwhen the following is true[1]: 12
Each Markov semigroupP={Pt}t≥0{\displaystyle {\mathcal {P}}=\{P_{t}\}_{t\geq 0}}induces adual semigroup(Pt∗)t≥0{\displaystyle (P_{t}^{*})_{t\geq 0}}through
Ifμ{\displaystyle \mu }is invariant underP{\displaystyle {\mathcal {P}}}thenPt∗μ=μ{\displaystyle P_{t}^{*}\mu =\mu }.
Let{Pt}t≥0{\displaystyle \{P_{t}\}_{t\geq 0}}be a family of bounded, linear Markov operators on theHilbert spaceL2(μ){\displaystyle L^{2}(\mu )}, whereμ{\displaystyle \mu }is an invariant measure. Theinfinitesimal generatorL{\displaystyle L}of the Markov semigroupP={Pt}t≥0{\displaystyle {\mathcal {P}}=\{P_{t}\}_{t\geq 0}}is defined as
and the domainD(L){\displaystyle D(L)}is theL2(μ){\displaystyle L^{2}(\mu )}-space of all such functions where this limit exists and is inL2(μ){\displaystyle L^{2}(\mu )}again.[1]: 18[4]
The carré du champ operator Γ measures how far L is from being a derivation.
A Markov operatorPt{\displaystyle P_{t}}has a kernel representation
with respect to someprobability kernelpt(x,A){\displaystyle p_{t}(x,A)}, if the underlying measurable space(E,F){\displaystyle (E,{\mathcal {F}})}has the following sufficient topological properties:
If one now defines a σ-finite measure on (E, F), then it is possible to prove that every Markov operator P admits such a kernel representation with respect to k(x, dy).[1]: 7–13
|
https://en.wikipedia.org/wiki/Markov_operator
|
Inphysics,chemistry, and related fields,master equationsare used to describe thetime evolutionof a system that can be modeled as being in aprobabilisticcombination of states at any given time, and the switching between states is determined by atransition rate matrix. The equations are a set ofdifferential equations– over time – of the probabilities that the system occupies each of the different states.
The name was proposed in 1940:[1][2]
When the probabilities of the elementary processes are known, one can write down a continuity equation for W, from which all other equations can be derived and which we will call therefore the "master” equation.
A master equation is a phenomenological set of first-orderdifferential equationsdescribing the time evolution of (usually) theprobabilityof a system to occupy each one of a discretesetofstateswith regard to a continuous time variablet. The most familiar form of a master equation is a matrix form:dP→dt=AP→,{\displaystyle {\frac {d{\vec {P}}}{dt}}=\mathbf {A} {\vec {P}},}whereP→{\displaystyle {\vec {P}}}is a column vector, andA{\displaystyle \mathbf {A} }is the matrix of connections. The way connections among states are made determines the dimension of the problem; it is either
When the connections are time-independent rate constants, the master equation represents akinetic scheme, and the process isMarkovian(any jumping time probability density function for stateiis an exponential, with a rate equal to the value of the connection). When the connections depend on the actual time (i.e. matrixA{\displaystyle \mathbf {A} }depends on the time,A→A(t){\displaystyle \mathbf {A} \rightarrow \mathbf {A} (t)}), the process is not stationary and the master equation readsdP→dt=A(t)P→.{\displaystyle {\frac {d{\vec {P}}}{dt}}=\mathbf {A} (t){\vec {P}}.}
When the connections represent multi exponentialjumping timeprobability density functions, the process issemi-Markovian, and the equation of motion is anintegro-differential equationtermed the generalized master equation:dP→dt=∫0tA(t−τ)P→(τ)dτ.{\displaystyle {\frac {d{\vec {P}}}{dt}}=\int _{0}^{t}\mathbf {A} (t-\tau ){\vec {P}}(\tau )\,d\tau .}
Thetransition rate matrixA{\displaystyle \mathbf {A} }can also representbirth and death, meaning that probability is injected (birth) or taken from (death) the system, and then the process is not in equilibrium.
When the transition rate matrix can be related to the probabilities, one obtains theKolmogorov equations.
LetA{\displaystyle \mathbf {A} }be the matrix describing the transition rates (also known as kinetic rates orreaction rates). As always, the first subscript represents the row, the second subscript the column. That is, the source is given by the second subscript, and the destination by the first subscript. This is the opposite of what one might expect, but is appropriate for conventionalmatrix multiplication.
For each statek, the increase in occupation probability depends on the contribution from all other states tok, and is given by:∑ℓAkℓPℓ,{\displaystyle \sum _{\ell }A_{k\ell }P_{\ell },}wherePℓ{\displaystyle P_{\ell }}is the probability for the system to be in the stateℓ{\displaystyle \ell }, while thematrixA{\displaystyle \mathbf {A} }is filled with a grid of transition-rateconstants. Similarly,Pk{\displaystyle P_{k}}contributes to the occupation of all other statesPℓ,{\displaystyle P_{\ell },}∑ℓAℓkPk,{\displaystyle \sum _{\ell }A_{\ell k}P_{k},}
In probability theory, this identifies the evolution as acontinuous-time Markov process, with the integrated master equation obeying aChapman–Kolmogorov equation.
The master equation can be simplified so that the terms with ℓ = k do not appear in the summation. This allows calculations even if the main diagonal of A is not defined or has been assigned an arbitrary value:
dP_k/dt = ∑_ℓ A_{kℓ} P_ℓ = ∑_{ℓ≠k} A_{kℓ} P_ℓ + A_{kk} P_k = ∑_{ℓ≠k} (A_{kℓ} P_ℓ − A_{ℓk} P_k).
The final equality arises from the fact that
∑_{ℓ,k} A_{ℓk} P_k = (d/dt) ∑_ℓ P_ℓ = 0,
because the summation over the probabilities P_ℓ yields one, a constant function. Since this has to hold for any probability P (and in particular for any probability of the form P_ℓ = δ_{ℓk} for some k), we get
∑_ℓ A_{ℓk} = 0 for all k.
Using this, we can write the diagonal elements as
A_{kk} = −∑_{ℓ≠k} A_{ℓk}, so that A_{kk} P_k = −∑_{ℓ≠k} A_{ℓk} P_k.
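As a small numerical illustration, a master equation with a rate matrix whose columns sum to zero can be integrated with the matrix exponential, P(t) = exp(At) P(0); total probability is then conserved, as derived above. The rates below are assumed toy values.
```python
# Time evolution of a small master equation dP/dt = A P via P(t) = expm(A t) P(0);
# the columns of A sum to zero, so total probability is conserved.
import numpy as np
from scipy.linalg import expm

# Off-diagonal A[k, l] = rate from state l to state k (assumed toy rates);
# diagonals are then fixed by the column-sum-zero condition.
A = np.array([[0.0, 1.0, 0.5],
              [2.0, 0.0, 0.5],
              [1.0, 1.0, 0.0]])
A -= np.diag(A.sum(axis=0))          # A[k, k] = -sum_{l != k} A[l, k]

P0 = np.array([1.0, 0.0, 0.0])       # start fully in state 0
for t in (0.0, 0.5, 2.0, 10.0):
    Pt = expm(A * t) @ P0
    print(f"t={t:>4}: P={Pt.round(4)}, total={Pt.sum():.4f}")
```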
The master equation exhibitsdetailed balanceif each of the terms of the summation disappears separately at equilibrium—i.e. if, for all stateskandℓhaving equilibrium probabilitiesπk{\displaystyle \pi _{k}}andπℓ{\displaystyle \pi _{\ell }},Akℓπℓ=Aℓkπk.{\displaystyle A_{k\ell }\pi _{\ell }=A_{\ell k}\pi _{k}.}
These symmetry relations were proved on the basis of thetime reversibilityof microscopic dynamics (microscopic reversibility) asOnsager reciprocal relations.
Many physical problems inclassical,quantum mechanicsand problems in other sciences, can be reduced to the form of amaster equation, thereby performing a great simplification of the problem (seemathematical model).
TheLindblad equationinquantum mechanicsis a generalization of the master equation describing the time evolution of adensity matrix. Though the Lindblad equation is often referred to as amaster equation, it is not one in the usual sense, as it governs not only the time evolution of probabilities (diagonal elements of the density matrix), but also of variables containing information aboutquantum coherencebetween the states of the system (non-diagonal elements of the density matrix).
Another special case of the master equation is theFokker–Planck equationwhich describes the time evolution of acontinuous probability distribution.[3]Complicated master equations which resist analytic treatment can be cast into this form (under various approximations), by using approximation techniques such as thesystem size expansion.
Stochastic chemical kinetics provide yet another example of the use of the master equation. A master equation may be used to model a set of chemical reactions when the number of molecules of one or more species is small (of the order of 100 or 1000 molecules).[4] The chemical master equation can also be solved for very large models, such as the DNA damage signal from the fungal pathogen Candida albicans.[5]
Aquantum master equationis a generalization of the idea of a master equation. Rather than just a system of differential equations for a set of probabilities (which only constitutes the diagonal elements of adensity matrix), quantum master equations are differential equations for the entire density matrix, including off-diagonal elements. A density matrix with only diagonal elements can be modeled as a classical random process, therefore such an "ordinary" master equation is considered classical. Off-diagonal elements representquantum coherencewhich is a physical characteristic that is intrinsically quantum mechanical.
TheRedfield equationandLindblad equationare examples of approximatequantum master equationsassumed to beMarkovian. More accurate quantum master equations for certain applications include the polaron transformed quantum master equation, and theVPQME(variational polaron transformed quantum master equation).[6]
BecauseA{\displaystyle \mathbf {A} }fulfills∑ℓAℓk=0∀k{\displaystyle \sum _{\ell }A_{\ell k}=0\qquad \forall k}andAℓk≥0∀ℓ≠k,{\displaystyle A_{\ell k}\geq 0\qquad \forall \ell \neq k,}one can show[7]that:
This has important consequences for the time evolution of a state.
|
https://en.wikipedia.org/wiki/Master_equation
|
Inmathematics, thequantum Markov chainis a reformulation of the ideas of a classicalMarkov chain, replacing the classical definitions of probability withquantum probability.
Very roughly, the theory of a quantum Markov chain resembles that of ameasure-many automaton, with some important substitutions: the initial state is to be replaced by adensity matrix, and the projection operators are to be replaced bypositive operator valued measures.
More precisely, a quantum Markov chain is a pair (E, ρ) with ρ a density matrix and E a quantum channel such that
is a completely positive trace-preserving map, and B a C*-algebra of bounded operators. The pair must obey the quantum Markov condition, that
for all b1, b2 ∈ B.
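A minimal numerical sketch of the quantum-channel ingredient (not of the full Markov condition): a completely positive trace-preserving map written in Kraus form and applied repeatedly to a density matrix. The depolarizing channel and its parameter are assumed for illustration.
```python
# A completely positive trace-preserving map in Kraus form,
# E(rho) = sum_i K_i rho K_i^dagger, applied repeatedly to a density matrix.
import numpy as np

p = 0.2                                      # assumed depolarizing strength
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kraus = [np.sqrt(1 - 3 * p / 4) * I2] + [np.sqrt(p / 4) * M for M in (X, Y, Z)]

# Trace preservation: sum_i K_i^dagger K_i = I.
assert np.allclose(sum(K.conj().T @ K for K in kraus), I2)

def channel(rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # pure state |0><0|
for step in range(3):
    print(np.real(np.diag(rho)).round(4))         # diagonal: state probabilities
    rho = channel(rho)
```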
|
https://en.wikipedia.org/wiki/Quantum_Markov_chain
|
Stochastic cellular automata or probabilistic cellular automata (PCA) or random cellular automata or locally interacting Markov chains[1][2] are an important extension of cellular automata. Cellular automata are a discrete-time dynamical system of interacting entities whose states are discrete.
The state of the collection of entities is updated at each discrete time according to some simple homogeneous rule. All entities' states are updated in parallel or synchronously. Stochastic cellular automata are CA whose updating rule is a stochastic one, which means the new entities' states are chosen according to some probability distributions. It is a discrete-time random dynamical system. From the spatial interaction between the entities, despite the simplicity of the updating rules, complex behaviour may emerge, like self-organization. As a mathematical object, it may be considered in the framework of stochastic processes as an interacting particle system in discrete time.
See[3]for a more detailed introduction.
As a discrete-time Markov process, a PCA is defined on a product space {\displaystyle E=\prod _{k\in G}S_{k}} (cartesian product) where {\displaystyle G} is a finite or infinite graph, such as {\displaystyle \mathbb {Z} }, and where {\displaystyle S_{k}} is a finite space, for instance {\displaystyle S_{k}=\{-1,+1\}} or {\displaystyle S_{k}=\{0,1\}}. The transition probability has a product form {\displaystyle P(d\sigma |\eta )=\otimes _{k\in G}p_{k}(d\sigma _{k}|\eta )} where {\displaystyle \eta \in E} and {\displaystyle p_{k}(d\sigma _{k}|\eta )} is a probability distribution on {\displaystyle S_{k}}.
In general some locality is required: {\displaystyle p_{k}(d\sigma _{k}|\eta )=p_{k}(d\sigma _{k}|\eta _{V_{k}})} where {\displaystyle \eta _{V_{k}}=(\eta _{j})_{j\in V_{k}}}, with {\displaystyle V_{k}} a finite neighbourhood of k. See[4] for a more detailed introduction from the probability theory point of view.
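As a concrete sketch (an illustrative rule of my own choosing, not one singled out by the article), the following simulates a simple PCA on a ring: each cell in {0, 1} synchronously adopts the majority of its three-cell neighbourhood with probability 1 − eps and the opposite value with probability eps:

import numpy as np

rng = np.random.default_rng(0)

def pca_step(state, eps=0.1):
    left, right = np.roll(state, 1), np.roll(state, -1)
    majority = ((left + state + right) >= 2).astype(int)
    flip = rng.random(state.size) < eps          # independent noise at each site
    return np.where(flip, 1 - majority, majority)

state = rng.integers(0, 2, size=40)              # random initial configuration
for _ in range(20):                              # synchronous (parallel) updates
    state = pca_step(state)
print(state)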
There is a version of the majority cellular automaton with probabilistic updating rules; see Toom's rule.
PCA may be used to simulate the Ising model of ferromagnetism in statistical mechanics.[5] Some categories of models have been studied from a statistical mechanics point of view.
There is a strong connection[6] between probabilistic cellular automata and the cellular Potts model, in particular when it is implemented in parallel.
The Galves–Löcherbach model is an example of a generalized PCA with a non-Markovian aspect.
|
https://en.wikipedia.org/wiki/Stochastic_cellular_automaton
|
In probability theory, a telescoping Markov chain (TMC) is a vector-valued stochastic process that satisfies a Markov property and admits a hierarchical format through a network of transition matrices with cascading dependence.[1]
For any {\displaystyle N>1} consider the set of spaces {\displaystyle \{{\mathcal {S}}^{\ell }\}_{\ell =1}^{N}}. The hierarchical process {\displaystyle \theta _{k}} defined in the product space {\displaystyle {\mathcal {S}}^{1}\times \cdots \times {\mathcal {S}}^{N}} is said to be a TMC if there is a set of transition probability kernels {\displaystyle \{\Lambda ^{n}\}_{n=1}^{N}} governing the cascading dependence among its components.
|
https://en.wikipedia.org/wiki/Telescoping_Markov_chain
|
In probability theory, odds provide a measure of the probability of a particular outcome. Odds are commonly used in gambling and statistics. For example, for an event that is 40% probable, one could say that the odds are "2 in 5", "2 to 3 in favor", "2 to 3 on", or "3 to 2 against".
When gambling, odds are often given as the ratio of the possible net profit to the possible net loss. However, in many situations, you pay the possible loss ("stake" or "wager") up front and, if you win, you are paid the net win plus you also get your stake returned. So wagering 2 at "3 to 2" pays out 3 + 2 = 5, which is called "5 for 2". When Moneyline odds are quoted as a positive number +X, it means that a wager pays X to 100. When Moneyline odds are quoted as a negative number −X, it means that a wager pays 100 to X.
Odds have a simple relationship with probability. When probability is expressed as a number between 0 and 1, the relationships between probability p and odds are as follows. Note that if probability is to be expressed as a percentage these probability values should be multiplied by 100%.
The numbers for odds can be scaled. If k is any positive number then X to Y is the same as kX to kY, and similarly if "to" is replaced with "in" or "for". For example, "3 to 2 against" is the same as both "1.5 to 1 against" and "6 to 4 against".
When the value of the probability p (between 0 and 1; not a percentage) can be written as a fraction N/D then the odds can be said to be "p/(1−p) to 1 in favor", "(1−p)/p to 1 against", "N in D", "N to D−N in favor", or "D−N to N against", and these can be scaled to equivalent odds. Similarly, fair betting odds can be expressed as "(1−p)/p to 1", "1/p for 1", "+100(1−p)/p", "−100p/(1−p)", "D−N to N", "D for N", "+100(D−N)/N", or "−100N/(D−N)".
The language of odds, such as the use of phrases like "ten to one" for intuitively estimated risks, is found in the sixteenth century, well before the development of probability theory.[1] Shakespeare wrote:
Knew that we ventured on such dangerous seas / That if we wrought out life 'twas ten to one
The sixteenth-century polymath Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes. Implied by this definition is the fact that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes.[2]
In statistics, odds are an expression of relative probabilities, generally quoted as the odds in favor. The odds (in favor) of an event or a proposition is the ratio of the probability that the event will happen to the probability that the event will not happen. Mathematically, this is a Bernoulli trial, as it has exactly two outcomes. In case of a finite sample space of equally probable outcomes, this is the ratio of the number of outcomes where the event occurs to the number of outcomes where the event does not occur; these can be represented as W and L (for Wins and Losses) or S and F (for Success and Failure). For example, the odds that a randomly chosen day of the week is during a weekend are two to five (2:5), as days of the week form a sample space of seven outcomes, and the event occurs for two of the outcomes (Saturday and Sunday), and not for the other five.[3][4] Conversely, given odds as a ratio of integers, this can be represented by a probability space of a finite number of equally probable outcomes. These definitions are equivalent, since dividing both terms in the ratio by the number of outcomes yields the probabilities: {\displaystyle 2:5=(2/7):(5/7).} Conversely, the odds against is the opposite ratio. For example, the odds against a random day of the week being during a weekend are 5:2.
Odds and probability can be expressed in prose via the prepositions to and in: "odds of so many to so many on (or against) [some event]" refers to odds, the ratio of numbers of (equally probable) outcomes in favor and against (or vice versa); "chances of so many [outcomes], in so many [outcomes]" refers to probability, the number of (equally probable) outcomes in favour relative to the number for and against combined. For example, "odds of a weekend are 2 to 5", while "chances of a weekend are 2 in 7". In casual use, the words odds and chances (or chance) are often used interchangeably to vaguely indicate some measure of odds or probability, though the intended meaning can be deduced by noting whether the preposition between the two numbers is to or in.[5][6][7]
Odds can be expressed as a ratio of two numbers, in which case it is not unique: scaling both terms by the same factor does not change the proportions, so 1:1 odds and 100:100 odds are the same (even odds). Odds can also be expressed as a number, by dividing the terms in the ratio; in this case it is unique (different fractions can represent the same rational number). Odds as a ratio, odds as a number, and probability (also a number) are related by simple formulas, and similarly odds in favor and odds against, and probability of success and probability of failure have simple relations. Odds range from 0 to infinity, while probabilities range from 0 to 1, and hence are often represented as a percentage between 0% and 100%: reversing the ratio switches odds for with odds against, and similarly probability of success with probability of failure.
Given odds (in favor) as the ratio W:L (number of outcomes that are wins to number of outcomes that are losses), the odds in favor (as a number) {\displaystyle o_{f}} and odds against (as a number) {\displaystyle o_{a}} can be computed by simply dividing, and are multiplicative inverses:
{\displaystyle o_{f}=W/L,\qquad o_{a}=L/W=1/o_{f}.}
Analogously, given odds as a ratio, the probability of success p or failure q can be computed by dividing, and the probability of success and probability of failure sum to unity (one), as they are the only possible outcomes. In case of a finite number of equally probable outcomes, this can be interpreted as the number of outcomes where the event occurs divided by the total number of events:
{\displaystyle p=W/(W+L),\qquad q=L/(W+L).}
Given a probability p, the odds as a ratio is {\displaystyle p:q} (probability of success to probability of failure), and the odds as numbers can be computed by dividing:
{\displaystyle o_{f}=p/q,\qquad o_{a}=q/p.}
Conversely, given the odds as a number {\displaystyle o_{f}}, this can be represented as the ratio {\displaystyle o_{f}:1}, or equivalently {\displaystyle 1:(1/o_{f})=1:o_{a}}, from which the probability of success or failure can be computed:
{\displaystyle p=o_{f}/(1+o_{f}),\qquad q=1/(1+o_{f}).}
Thus if expressed as a fraction with a numerator of 1, probability and odds differ by exactly 1 in the denominator: a probability of 1 in 100 (1/100 = 1%) is the same as odds of 1 to 99 (1/99 = 0.0101... ≈ 0.01), while odds of 1 to 100 (1/100 = 0.01) is the same as a probability of 1 in 101 (1/101 = 0.00990099... ≈ 0.0099). This is a minor difference if the probability is small (close to zero, or "long odds"), but is a major difference if the probability is large (close to one).
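A short sketch of these conversions (the helper names are my own, not standard terminology):

def odds_in_favor(p):       # o_f = p / (1 - p)
    return p / (1 - p)

def odds_against(p):        # o_a = (1 - p) / p = 1 / o_f
    return (1 - p) / p

def prob_from_odds(o_f):    # p = o_f / (1 + o_f)
    return o_f / (1 + o_f)

print(odds_in_favor(1 / 100))    # probability 1 in 100 -> odds of about 1 to 99 (0.0101...)
print(prob_from_odds(1 / 100))   # odds of 1 to 100 -> probability 1 in 101 (0.0099...)
print(odds_in_favor(0.4))        # 40% probable -> 2 to 3 in favor (0.666...)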
These are worked out for some simple odds:
These transforms have certain special geometric properties: the conversions between odds for and odds against (resp. probability of success with probability of failure) and between odds and probability are all Möbius transformations (fractional linear transformations). They are thus specified by three points (sharply 3-transitive). Swapping odds for and odds against swaps 0 and infinity, fixing 1, while swapping probability of success with probability of failure swaps 0 and 1, fixing 0.5; these are both order 2, hence circular transforms. Converting odds to probability fixes 0, sends infinity to 1, and sends 1 to 0.5 (even odds are 50% probable), and conversely; this is a parabolic transform.
In probability theory and statistics, odds and similar ratios may be more natural or more convenient than probabilities. In some cases the log-odds are used, which is the logit of the probability. Most simply, odds are frequently multiplied or divided, and log converts multiplication to addition and division to subtraction. This is particularly important in the logistic model, in which the log-odds of the target variable are a linear combination of the observed variables.
Similar ratios are used elsewhere in statistics; of central importance is the likelihood ratio in likelihoodist statistics, which is used in Bayesian statistics as the Bayes factor.
Odds are particularly useful in problems of sequential decision making, as for instance in problems of how to stop (online) on a last specific event, which is solved by the odds algorithm.
The odds are a ratio of probabilities; an odds ratio is a ratio of odds, that is, a ratio of ratios of probabilities. Odds ratios are often used in analysis of clinical trials. While they have useful mathematical properties, they can produce counter-intuitive results: an event with an 80% probability of occurring is four times more probable to happen than an event with a 20% probability, but the odds are 16 times higher on the less probable event (4–1 against, or 4) than on the more probable one (1–4 against, 4–1 in favor, 4–1 on, or 0.25).
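The 80% versus 20% comparison can be checked directly (a minimal sketch):

def odds(p):
    return p / (1 - p)

p_high, p_low = 0.8, 0.2
print(p_high / p_low)               # 4.0  -- ratio of probabilities
print(odds(p_high) / odds(p_low))   # 16.0 -- ratio of odds (4 / 0.25)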
Example: a bag contains 2 blue marbles and 13 marbles of other colours; what are the odds of drawing a blue marble? Answer: The odds in favour of a blue marble are 2:13. One can equivalently say that the odds are 13:2 against. There are 2 out of 15 chances in favour of blue, 13 out of 15 against blue.
In probability theory and statistics, where the variable p is the probability in favor of a binary event, and the probability against the event is therefore 1 − p, "the odds" of the event are the quotient of the two, or {\displaystyle {\frac {p}{1-p}}}. That value may be regarded as the relative probability the event will happen, expressed as a fraction (if it is less than 1), or a multiple (if it is equal to or greater than one) of the likelihood that the event will not happen.
In the first example at top, saying the odds of a Sunday are "one to six" or, less commonly, "one-sixth" means the probability of picking a Sunday randomly is one-sixth the probability of not picking a Sunday. While the mathematical probability of an event has a value in the range from zero to one, "the odds" in favor of that same event lie between zero and infinity. The odds against the event with probability given as p are {\displaystyle {\frac {1-p}{p}}}. The odds against Sunday are 6:1 or 6/1 = 6. It is 6 times as probable that a random day is not a Sunday.
On a coin toss or a match race between two evenly matched horses, it is reasonable for two people to wager level stakes. However, in more variable situations, such as a multi-runner horse race or a football match between two unequally matched teams, betting "at odds" provides the possibility to take the respective likelihoods of the possible outcomes into account. The use of odds in gambling facilitates betting on events where the probabilities of different outcomes vary.
In the modern era, most fixed-odds betting takes place between a betting organisation, such as a bookmaker, and an individual, rather than between individuals. Different traditions have grown up in how to express odds to customers.
Favoured by bookmakers in the United Kingdom and Ireland, and also common in horse racing, fractional odds quote the net total that will be paid out to the bettor, should they win, relative to the stake.[8] Odds of 4/1 (4 to 1 against) would imply that the bettor stands to make a £400 profit on a £100 stake. If the odds are 1/4 (1 to 4 against, 4 to 1 in favor, or 4 to 1 on), the bettor will make £25 on a £100 stake. In either case, having won, the bettor always receives the original stake back; so if the odds are 4/1 the bettor receives a total of £500 (£400 plus the original £100). Odds of 1/1 are known as evens or even money.
The numerator and denominator of fractional odds are often integers; thus if the bookmaker's payout was to be £1.25 for every £1 stake, this would be equivalent to £5 for every £4 staked, and the odds would therefore be expressed as 5/4. However, not all fractional odds are traditionally read using the lowest common denominator. For example, given that there is a pattern of odds of 5/4, 7/4, 9/4 and so on, odds which are mathematically 3/2 are more easily compared if expressed in the equivalent form 6/4.
Fractional odds are also known as British odds, UK odds,[9] or, in that country, traditional odds. They are typically represented with a "/" but can also be represented with a "-", e.g. 4/1 or 4–1. Odds with a denominator of 1 are often presented in listings as the numerator only.[citation needed]
A variation of fractional odds is known as Hong Kong odds. Fractional and Hong Kong odds are interchangeable: the only difference is that UK odds are presented as a fraction (e.g. 6/5) whilst Hong Kong odds are decimal (e.g. 1.2). Both express the net return.
The European odds also represent the potential winnings (net returns), but in addition they factor in the stake (e.g. 6/5 or 1.2 plus 1 = 2.2).[10]
Favoured in continental Europe, Australia, New Zealand, Canada, and Singapore, decimal odds quote the ratio of the payout amount, including the original stake, to the stake itself. Therefore, the decimal odds of an outcome are equivalent to the decimal value of the fractional odds plus one.[11] Thus even odds 1/1 are quoted in decimal odds as 2.00. The 4/1 fractional odds discussed above are quoted as 5.00, while the 1/4 odds are quoted as 1.25. This is considered to be ideal for parlay betting, because the odds to be paid out are simply the product of the odds for each outcome wagered on. When looking at decimal odds in betting terms, the underdog has the higher of the two decimals, while the favorite has the lower of the two. To calculate the payout, you can use the equation Payout = Initial Wager × Decimal Value.[12] For example, if you bet €100 on Liverpool to beat Manchester City at 2.00 odds, the payout, including your stake, would be €200 (€100 × 2.00). Decimal odds are favoured by betting exchanges because they are the easiest to work with for trading, as they reflect the reciprocal of the probability of an outcome.[13] For example, quoted odds of 5.00 correspond to a probability of 1 / 5.00, that is 0.20 or 20%.
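A minimal sketch of this arithmetic, mirroring the figures in the text:

def decimal_from_fractional(numerator, denominator):
    return numerator / denominator + 1     # net return plus the stake

def payout(stake, decimal_odds):
    return stake * decimal_odds            # Payout = Initial Wager x Decimal Value

def implied_probability(decimal_odds):
    return 1 / decimal_odds                # reciprocal of the decimal odds

print(decimal_from_fractional(4, 1))   # 5.0
print(decimal_from_fractional(1, 4))   # 1.25
print(payout(100, 2.00))               # 200.0 (stake included)
print(implied_probability(5.00))       # 0.2, i.e. 20%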
Decimal odds are also known as European odds, digital odds or continental odds.[9]
Moneyline odds are favoured by American bookmakers. The figure quoted is either positive or negative.
Moneyline odds are often referred to as American odds. A "moneyline" wager refers to odds on the straight-up outcome of a game with no consideration of a point spread. In most cases, the favorite will have negative moneyline odds (less payoff for a safer bet) and the underdog will have positive moneyline odds (more payoff for a risky bet). However, if the teams are evenly matched, both teams can have a negative line at the same time (e.g. −110 −110 or −105 −115), due to house take.
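A minimal sketch of the moneyline convention described above (+X pays X to 100, −X pays 100 to X), converted to an implied probability:

def implied_probability(moneyline):
    if moneyline > 0:                            # underdog: +X pays X to 100
        return 100 / (moneyline + 100)
    return -moneyline / (-moneyline + 100)       # favourite: -X pays 100 to X

print(implied_probability(400))    # 0.2, the same as fractional odds of 4/1
print(implied_probability(-110))   # about 0.524, a typical line with the house take built in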
Wholesale odds are the "real odds", or 100% probability, of an event occurring. This 100% book is displayed without any bookmaker's profit margin, often referred to as a bookmaker's "overround", built in.
A "wholesale odds" index is an index of all the prices in a probabilistic market operating at 100% competitiveness and displayed without any profit margin factored for market participants.
In gambling, the odds on display do not represent the true chances (as imagined by the bookmaker) that the event will or will not occur, but are the amount that the bookmaker will pay out on a winning bet, together with the required stake. In formulating the odds to display the bookmaker will have included a profit margin which effectively means that the payout to a successful bettor is less than that represented by the true chance of the event occurring. This profit is known as the 'overround' on the 'book' (the 'book' refers to the old-fashioned ledger in which wagers were recorded, and is the derivation of the term 'bookmaker') and relates to the sum of the 'odds' in the following way:
In a 3-horse race, for example, the true probabilities of each of the horses winning based on their relative abilities may be 50%, 40% and 10%. The total of these three percentages is 100%, thus representing a fair 'book'. The true odds against winning for each of the three horses are 1–1, 3–2 and 9–1, respectively.
In order to generate a profit on the wagers accepted, the bookmaker may decide to increase the values to 60%, 50% and 20% for the three horses, respectively. This represents the odds against each, which are 4–6, 1–1 and 4–1, in order. These values now total 130%, meaning that the book has an overround of 30 (130 − 100). This value of 30 represents the amount of profit for the bookmaker if he gets bets in good proportions on each of the horses. For example, if he takes £60, £50, and £20 of stakes, respectively, for the three horses, he receives £130 in wagers but only pays £100 back (including stakes), whichever horse wins. And the expected value of his profit is positive even if everybody bets on the same horse. The art of bookmaking is in setting the odds low enough so as to have a positive expected value of profit while keeping the odds high enough to attract customers, and at the same time attracting enough bets for each outcome to reduce his risk exposure.
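The three-horse example can be reproduced with a short calculation (a sketch using the figures above):

quoted_probs = [0.60, 0.50, 0.20]        # bookmaker's adjusted percentages
stakes = [60, 50, 20]                    # bets taken in the same proportions

book = 100 * sum(quoted_probs)
print(book - 100)                        # overround of 30 (130 - 100)

payouts = [s / q for s, q in zip(stakes, quoted_probs)]
print(sum(stakes), payouts)              # takes 130 in wagers, pays 100 (stake included) whichever horse wins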
A study on soccer betting found that the probability for the home team to win was generally about 3.4% less than the value calculated from the odds (for example, 46.6% for even odds). It was about 3.7% less for wins by the visitors, and 5.7% less for draws.[14]
Making a profit in gambling involves predicting the relationship of the true probabilities to the payout odds. Sports information services are often used by professional and semi-professional sports bettors to help achieve this goal.
The odds or amounts the bookmaker will pay are determined by the total amount that has been bet on all of the possible events. They reflect the balance of wagers on either side of the event, and include the deduction of a bookmaker's brokerage fee ("vig" or vigorish).
Also, depending on the jurisdiction and how betting is regulated there, taxes may be involved for the bookmaker and/or the winning player. This may be taken into account when offering the odds and/or may reduce the amount won by a player.
|
https://en.wikipedia.org/wiki/Odds
|
Clinical trials are prospective biomedical or behavioral research studies on human participants designed to answer specific questions about biomedical or behavioral interventions, including new treatments (such as novel vaccines, drugs, dietary choices, dietary supplements, and medical devices) and known interventions that warrant further study and comparison. Clinical trials generate data on dosage, safety and efficacy.[1][2] They are conducted only after they have received health authority/ethics committee approval in the country where approval of the therapy is sought. These authorities are responsible for vetting the risk/benefit ratio of the trial; their approval does not mean the therapy is 'safe' or effective, only that the trial may be conducted.
Depending on product type and development stage, investigators initially enroll volunteers or patients into small pilot studies, and subsequently conduct progressively larger scale comparative studies. Clinical trials can vary in size and cost, and they can involve a single research center or multiple centers, in one country or in multiple countries. Clinical study design aims to ensure the scientific validity and reproducibility of the results.
Costs for clinical trials can range into the billions of dollars per approved drug,[3] and the complete trial process to approval may require 7–15 years.[4][5] The sponsor may be a governmental organization or a pharmaceutical, biotechnology or medical-device company. Certain functions necessary to the trial, such as monitoring and lab work, may be managed by an outsourced partner, such as a contract research organization or a central laboratory. Only 10 percent of all drugs started in human clinical trials become approved drugs.[6]
Some clinical trials involve healthy subjects with no pre-existing medical conditions. Other clinical trials pertain to people with specific health conditions who are willing to try an experimental treatment. Pilot experiments are conducted to gain insights for design of the clinical trial to follow.[7]
There are two goals to testing medical treatments: to learn whether they work well enough, called "efficacy" or "effectiveness"; and to learn whether they are safe enough, called "safety".[1] Neither is an absolute criterion; both safety and efficacy are evaluated relative to how the treatment is intended to be used, what other treatments are available, and the severity of the disease or condition. The benefits must outweigh the risks.[8][9]: 8 For example, many drugs to treat cancer have severe side effects that would not be acceptable for an over-the-counter pain medication, yet the cancer drugs have been approved since they are used under a physician's care and are used for a life-threatening condition.[10]
In the US the elderly constitute 14% of the population, while they consume over one-third of drugs.[11] People over 55 (or a similar cutoff age) are often excluded from trials because their greater health issues and drug use complicate data interpretation, and because they have different physiological capacity than younger people. Children and people with unrelated medical conditions are also frequently excluded.[12] Pregnant women are often excluded due to potential risks to the fetus.
The sponsor designs the trial in coordination with a panel of expert clinical investigators, including what alternative or existing treatments to compare to the new drug and what type(s) of patients might benefit. If the sponsor cannot obtain enough test subjects at one location, investigators at other locations are recruited to join the study.[13]
During the trial, investigators recruit subjects with the predetermined characteristics, administer the treatment(s) and collect data on the subjects' health for a defined time period. Data include measurements such as vital signs, concentration of the study drug in the blood or tissues, changes to symptoms, and whether improvement or worsening of the condition targeted by the study drug occurs. The researchers send the data to the trial sponsor, who then analyzes the pooled data using statistical tests.[citation needed]
Examples of clinical trial goals include assessing the safety and relative effectiveness of a medication or device:[citation needed]
While most clinical trials test one alternative to the novel intervention, some expand to three or four and may include a placebo.[14]
Except for small, single-location trials, the design and objectives are specified in a document called a clinical trial protocol. The protocol is the trial's "operating manual" and ensures all researchers perform the trial in the same way on similar subjects and that the data is comparable across all subjects.[citation needed]
As a trial is designed to test hypotheses and rigorously monitor and assess outcomes, it can be seen as an application of the scientific method, specifically the experimental step.[citation needed]
The most common clinical trials evaluate new pharmaceutical products, medical devices,biologics,diagnostic assays,psychological therapies, or other interventions.[15]Clinical trials may be required before a nationalregulatory authority[16]approves marketing of the innovation.
Similarly to drugs, manufacturers of medical devices in the United States are required to conduct clinical trials forpremarket approval.[17]Device trials may compare a new device to an established therapy, or may compare similar devices to each other. An example of the former in the field ofvascular surgeryis the Open versus Endovascular Repair (OVER trial) for the treatment ofabdominal aortic aneurysm, which compared the olderopen aortic repairtechnique to the newerendovascular aneurysm repairdevice.[18]An example of the latter are clinical trials on mechanical devices used in the management of adult femaleurinary incontinence.[19]
Similarly to drugs, medical or surgical procedures may be subjected to clinical trials,[20] such as comparing different surgical approaches in treatment of fibroids for subfertility.[21] However, when clinical trials are unethical or logistically impossible in the surgical setting, case-control studies are used instead.[22]
Besides being participants in a clinical trial, members of the public can actively collaborate with researchers in designing and conducting clinical research. This is known as patient and public involvement (PPI). Public involvement involves a working partnership between patients, caregivers, people with lived experience, and researchers to shape and influence what is researched and how.[23] PPI can improve the quality of research and make it more relevant and accessible. People with current or past experience of illness can provide a different perspective than professionals and complement their knowledge. Through their personal knowledge they can identify research topics that are relevant and important to those living with an illness or using a service. They can also help to make the research more grounded in the needs of the specific communities they are part of. Public contributors can also ensure that the research is presented in plain language that is clear to the wider society and the specific groups it is most relevant for.[24]
Although early medical experimentation was performed often, the use of acontrol groupto provide an accurate comparison for the demonstration of the intervention's efficacy was generally lacking. For instance,Lady Mary Wortley Montagu, who campaigned for the introduction ofinoculation(then called variolation) to preventsmallpox, arranged for seven prisoners who had been sentenced to death to undergo variolation in exchange for their life. Although they survived and did not contract smallpox, there was no control group to assess whether this result was due to the inoculation or some other factor. Similar experiments performed byEdward Jennerover hissmallpox vaccinewere equally conceptually flawed.[25]
The first proper clinical trial was conducted by the Scottish physicianJames Lind.[26]The diseasescurvy, now known to be caused by aVitamin Cdeficiency, would often have terrible effects on the welfare of the crew of long-distance ocean voyages. In 1740, the catastrophic result ofAnson'scircumnavigationattracted much attention in Europe; out of 1900 men, 1400 had died, most of them allegedly from having contracted scurvy.[27]John Woodall, an English military surgeon of theBritish East India Company, had recommended the consumption ofcitrus fruitfrom the 17th century, but their use did not become widespread.[28]
Lind conducted the first systematicclinical trialin 1747.[29]He included a dietary supplement of an acidic quality in the experiment after two months at sea, when the ship was already afflicted with scurvy. He divided twelve scorbutic sailors into six groups of two. They all received the same diet but, in addition, group one was given a quart ofciderdaily, group two twenty-five drops of elixir ofvitriol(sulfuric acid), group three six spoonfuls ofvinegar, group four half a pint of seawater, group five received twoorangesand onelemon, and the last group a spicy paste plus a drink ofbarley water. The treatment of group five stopped after six days when they ran out of fruit, but by then one sailor was fit for duty while the other had almost recovered. Apart from that, only group one also showed some effect of its treatment.[30]Each year, May 20 is celebrated as Clinical Trials Day in honor of Lind's research.[31]
After 1750 the discipline began to take its modern shape.[32][33] The English doctor John Haygarth demonstrated the importance of a control group for the correct identification of the placebo effect in his celebrated study of the ineffective remedy called Perkins tractors. Further work in that direction was carried out by the eminent physician Sir William Gull, 1st Baronet in the 1860s.[25]
Frederick Akbar Mahomed(d. 1884), who worked atGuy's HospitalinLondon, made substantial contributions to the process of clinical trials, where "he separated chronicnephritiswithsecondary hypertensionfrom what we now termessential hypertension. He also founded the Collective Investigation Record for theBritish Medical Association; this organization collected data from physicians practicing outside the hospital setting and was the precursor of modern collaborative clinical trials."[34]
Ideas of Sir Ronald A. Fisher still play a role in clinical trials. While working for the Rothamsted experimental station in the field of agriculture, Fisher developed his Principles of experimental design in the 1920s as an accurate methodology for the proper design of experiments. His major ideas include the importance of randomization (the random assignment of individual elements, e.g. crops or patients, to different groups for the experiment);[35] replication (to reduce uncertainty, measurements should be repeated and experiments replicated to identify sources of variation);[36] blocking (arranging experimental units into groups of units that are similar to each other, thus reducing irrelevant sources of variation); and the use of factorial experiments (efficient at evaluating the effects and possible interactions of several independent factors).[25] Of these, blocking and factorial design are seldom applied in clinical trials, because the experimental units are human subjects and there is typically only one independent intervention: the treatment.[citation needed]
The British Medical Research Council officially recognized the importance of clinical trials from the 1930s. The council established the Therapeutic Trials Committee to advise and assist in the arrangement of properly controlled clinical trials on new products that seem likely on experimental grounds to have value in the treatment of disease.[25]
The first randomised curative trial was carried out at the MRC Tuberculosis Research Unit by Sir Geoffrey Marshall (1887–1982). The trial, carried out between 1946 and 1947, aimed to test the efficacy of the chemical streptomycin for curing pulmonary tuberculosis. The trial was both double-blind and placebo-controlled.[37]
The methodology of clinical trials was further developed by SirAustin Bradford Hill, who had been involved in the streptomycin trials. From the 1920s, Hill appliedstatisticsto medicine, attending the lectures of renowned mathematicianKarl Pearson, among others. He became famous for a landmark study carried out in collaboration withRichard Dollon the correlation betweensmokingandlung cancer. They carried out acase-control studyin 1950, which compared lung cancer patients with matched control and also began a sustainedlong-term prospective studyinto the broader issue of smoking and health, which involvedstudying the smoking habits and health of more than 30,000 doctorsover a period of several years. His certificate for election to theRoyal Societycalled him "...the leader in the development in medicine of the precise experimental methods now used nationally and internationally in the evaluation of new therapeutic andprophylactic agents."
International clinical trials day is celebrated on 20 May.[38]
The acronyms used in the titling of clinical trials are often contrived, and have been the subject of derision.[39]
Clinical trials are classified by the research objective created by the investigators.[15]
Trials are classified by their purpose. After approval for human research is granted to the trial sponsor, the U.S. Food and Drug Administration (FDA) organizes and monitors the results of trials according to type:[15]
Clinical trials are conducted typically in four phases, with each phase using different numbers of subjects and having a different purpose, such as focusing on identifying a specific effect.[15]
Clinical trials involving new drugs are commonly classified into five phases. Each phase of the drug approval process is treated as a separate clinical trial. The drug development process will normally proceed through phases I–IV over many years, frequently involving a decade or longer. If the drug successfully passes through phases I, II, and III, it will usually be approved by the national regulatory authority for use in the general population.[15] Phase IV trials are performed after the newly approved drug, diagnostic or device is marketed, providing assessment about risks, benefits, or best uses.[15]
A fundamental distinction in evidence-based practice is between observational studies and randomized controlled trials.[48] Types of observational studies in epidemiology, such as the cohort study and the case-control study, provide less compelling evidence than the randomized controlled trial.[48] In observational studies, the investigators retrospectively assess associations between the treatments given to participants and their health status, with potential for considerable errors in design and interpretation.[49]
A randomized controlled trial can provide compelling evidence that the study treatment causes an effect on human health.[48]
Some Phase II and most Phase III drug trials are designed as randomized, double-blind, and placebo-controlled.[citation needed]
Clinical studies having small numbers of subjects may be "sponsored" by single researchers or a small group of researchers, and are designed to test simple questions or feasibility to expand the research for a more comprehensive randomized controlled trial.[50]
Clinical studies can be "sponsored" (financed and organized) by academic institutions, pharmaceutical companies, government entities and even private groups. Trials are conducted for new drugs, biotechnology, diagnostic assays or medical devices to determine their safety and efficacy prior to being submitted for regulatory review that would determine market approval.[citation needed]
In cases where giving a placebo to a person suffering from a disease may be unethical, "active comparator" (also known as "active control") trials may be conducted instead.[51]In trials with an active control group, subjects are given either the experimental treatment or a previously approved treatment with known effectiveness. In other cases, sponsors may conduct an active comparator trial to establish an efficacy claim relative to the active comparator instead of the placebo inlabeling.[citation needed]
A master protocol includes multiple substudies, which may have different objectives and involve coordinated efforts to evaluate one or more medical products in one or more diseases or conditions within the overall study structure. Trials that could develop a master protocol include the umbrella trial (multiple medical products for a single disease),platform trial(multiple products for a single disease entering and leaving the platform), and basket trial (one medical product for multiple diseases or disease subtypes).[52]
Genetic testingenables researchers to group patients according to their genetic profile, deliver drugs based on that profile to that group and compare the results. Multiple companies can participate, each bringing a different drug. The first such approach targetssquamous cell cancer, which includes varying genetic disruptions from patient to patient. Amgen, AstraZeneca and Pfizer are involved, the first time they have worked together in a late-stage trial. Patients whose genomic profiles do not match any of the trial drugs receive a drug designed to stimulate the immune system to attack cancer.[53]
A clinical trial protocol is a document used to define and manage the trial. It is prepared by a panel of experts. All study investigators are expected to strictly observe the protocol.[citation needed]
The protocol describes the scientific rationale, objective(s), design, methodology, statistical considerations and organization of the planned trial. Details of the trial are provided in documents referenced in the protocol, such as an investigator's brochure.[citation needed]
The protocol contains a precise study plan to assure safety and health of the trial subjects and to provide an exact template for trial conduct by investigators. This allows data to be combined across all investigators/sites. The protocol also informs the study administrators (often acontract research organization).[citation needed]
The format and content of clinical trial protocols sponsored by pharmaceutical, biotechnology or medical device companies in the United States, European Union, or Japan have been standardized to follow Good Clinical Practice guidance[54] issued by the International Conference on Harmonisation (ICH).[55] Regulatory authorities in Canada, China, South Korea, and the UK also follow ICH guidelines. Journals such as Trials encourage investigators to publish their protocols.
Clinical trials recruit study subjects to sign a document representing their "informed consent".[56]The document includes details such as its purpose, duration, required procedures, risks, potential benefits, key contacts and institutional requirements.[57]The participant then decides whether to sign the document. The document is not a contract, as the participant can withdraw at any time without penalty.[citation needed]
Informed consent is a legal process in which a recruit is instructed about key facts before deciding whether to participate.[56]Researchers explain the details of the study in terms the subject can understand. The information is presented in the subject's native language. Generally, children cannot autonomously provide informed consent, but depending on their age and other factors, may be required to provide informed assent.[citation needed]
In any clinical trial, the number of subjects, also called the sample size, has a large impact on the ability to reliably detect and measure the effects of the intervention. This ability is described as its "power", which must be calculated before initiating a study to figure out if the study is worth its costs.[58] In general, a larger sample size increases the statistical power, but also the cost.
The statistical power estimates the ability of a trial to detect a difference of a particular size (or larger) between the treatment and control groups. For example, a trial of a lipid-lowering drug versus placebo with 100 patients in each group might have a power of 0.90 to detect a difference of 10 mg/dL or more between the placebo and treatment groups, but only 0.70 to detect a difference of 6 mg/dL.[citation needed]
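A minimal sketch of such a power calculation using a normal approximation; the 10 mg/dL difference and the group size of 100 come from the example above, while the standard deviation (22 mg/dL) and the two-sided alpha of 0.05 are illustrative assumptions chosen so the numbers roughly match:

from scipy.stats import norm

def power_two_sample(delta, sigma, n_per_group, alpha=0.05):
    se = sigma * (2 / n_per_group) ** 0.5      # standard error of the difference in means
    z_crit = norm.ppf(1 - alpha / 2)
    return 1 - norm.cdf(z_crit - delta / se)

print(power_two_sample(delta=10, sigma=22, n_per_group=100))  # about 0.90
print(power_two_sample(delta=10, sigma=22, n_per_group=50))   # about 0.62 -- power falls with fewer subjects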
Merely giving a treatment can have nonspecific effects. These are controlled for by the inclusion of patients who receive only a placebo. Subjects are assigned randomly without informing them to which group they belong. Many trials are double-blinded so that researchers do not know to which group a subject is assigned.
Assigning a subject to a placebo group can pose an ethical problem if it violates his or her right to receive the best available treatment. The Declaration of Helsinki provides guidelines on this issue.
Clinical trials are only a small part of the research that goes into developing a new treatment. Potential drugs, for example, first have to be discovered, purified, characterized, and tested in labs (in cell and animal studies) before ever undergoing clinical trials. In all, about 1,000 potential drugs are tested before just one reaches the point of being tested in a clinical trial.[59]For example, a new cancer drug has, on average, six years of research behind it before it even makes it to clinical trials. But the major holdup in making new cancer drugs available is the time it takes to complete clinical trials themselves. On average, about eight years pass from the time a cancer drug enters clinical trials until it receives approval from regulatory agencies for sale to the public.[60]Drugs for other diseases have similar timelines.
Some reasons a clinical trial might last several years:
A clinical trial might also include an extended post-study follow-up period from months to years for people who have participated in the trial, a so-called "extension phase", which aims to identify long-term impact of the treatment.[61]
The biggest barrier to completing studies is the shortage of people who take part. All drug and many device trials target a subset of the population, meaning not everyone can participate. Some drug trials require patients to have unusual combinations of disease characteristics. It is a challenge to find the appropriate patients and obtain their consent, especially when they may receive no direct benefit (because they are not paid, the study drug is not yet proven to work, or the patient may receive a placebo). In the case of cancer patients, fewer than 5% of adults with cancer will participate in drug trials. According to the Pharmaceutical Research and Manufacturers of America (PhRMA), about 400 cancer medicines were being tested in clinical trials in 2005. Not all of these will prove to be useful, but those that are may be delayed in getting approved because the number of participants is so low.[62]
For clinical trials involving potential for seasonal influences (such as airborne allergies, seasonal affective disorder, influenza, and skin diseases), the study may be done during a limited part of the year (such as spring for pollen allergies), when the drug can be tested.[63][64]
Clinical trials that do not involve a new drug usually have a much shorter duration. (Exceptions are epidemiological studies, such as the Nurses' Health Study.)
Clinical trials designed by a local investigator, and (in the US) federally funded clinical trials, are almost always administered by the researcher who designed the study and applied for the grant. Small-scale device studies may be administered by the sponsoring company. Clinical trials of new drugs are usually administered by acontract research organization(CRO) hired by the sponsoring company. The sponsor provides the drug and medical oversight. A CRO is contracted to perform all the administrative work on a clinical trial. For PhasesII–IV the CRO recruits participating researchers, trains them, provides them with supplies, coordinates study administration anddata collection, sets up meetings, monitors the sites for compliance with the clinical protocol, and ensures the sponsor receives data from every site. Specialistsite management organizationscan also be hired to coordinate with the CRO to ensure rapid IRB/IEC approval and faster site initiation and patient recruitment. PhaseI clinical trials of new medicines are often conducted in a specialist clinical trial clinic, with dedicated pharmacologists, where the subjects can be observed by full-time staff. These clinics are often run by a CRO which specialises in these studies.
At a participating site, one or more research assistants (often nurses) do most of the work in conducting the clinical trial. The research assistant's job can include some or all of the following: providing the localinstitutional review board(IRB) with the documentation necessary to obtain its permission to conduct the study, assisting with study start-up, identifying eligible patients, obtaining consent from them or their families, administering study treatment(s), collecting and statistically analyzing data, maintaining and updating data files during followup, and communicating with the IRB, as well as the sponsor and CRO.
In the context of a clinical trial, quality typically refers to the absence of errors which can impact decision making, both during the conduct of the trial and in use of the trial results.[65]
An Interactional Justice Model may be used to test the effects of willingness to talk with a doctor about clinical trial enrollment.[66] Results found that potential clinical trial candidates were less likely to enroll in clinical trials if they were more willing to talk with their doctor. The reasoning behind this finding may be that such patients are happy with their current care. Another reason for the negative relationship between perceived fairness and clinical trial enrollment is the lack of independence from the care provider. Conversely, results found a positive relationship between a lack of willingness to talk with their doctor and clinical trial enrollment; this lack of willingness to talk about clinical trials with current care providers may be due to patients' independence from the doctor. Patients who are less likely to talk about clinical trials with their doctor are more willing to use other sources of information to gain a better insight into alternative treatments. Recruitment efforts should therefore make use of websites and television advertising to inform the public about clinical trial enrollment.
The last decade has seen a proliferation of information technology use in the planning and conduct of clinical trials. Clinical trial management systems are often used by research sponsors or CROs to help plan and manage the operational aspects of a clinical trial, particularly with respect to investigational sites. Advanced analytics for identifying researchers and research sites with expertise in a given area utilize public and private information about ongoing research.[67] Web-based electronic data capture (EDC) and clinical data management systems are used in a majority of clinical trials[68] to collect case report data from sites, manage its quality and prepare it for analysis. Interactive voice response systems are used by sites to register the enrollment of patients using a phone and to allocate patients to a particular treatment arm (although phones are being increasingly replaced with web-based (IWRS) tools, which are sometimes part of the EDC system). While patient-reported outcomes were often paper based in the past, measurements are increasingly being collected using web portals or hand-held ePRO (or eDiary) devices, sometimes wireless.[69] Statistical software is used to analyze the collected data and prepare them for regulatory submission. Access to many of these applications is increasingly aggregated in web-based clinical trial portals. In 2011, the FDA approved a Phase I trial that used telemonitoring, also known as remote patient monitoring, to collect biometric data in patients' homes and transmit it electronically to the trial database. This technology provides many more data points and is far more convenient for patients, because they have fewer visits to trial sites. As noted below, decentralized clinical trials are those that do not require patients' physical presence at a site, and instead rely largely on digital health data collection, digital informed consent processes, and so on.
A clinical trial produces data that could reveal quantitative differences between two or more interventions;statistical analysesare used to determine whether such differences are true, result from chance, or are the same as no treatment (placebo).[70][71]Data from a clinical trial accumulate gradually over the trial duration, extending from months to years.[56]Accordingly, results for participants recruited early in the study become available for analysis while subjects are still being assigned to treatment groups in the trial. Early analysis may allow the emerging evidence to assist decisions about whether to stop the study, or to reassign participants to the more successful segment of the trial.[70]Investigators may also want to stop a trial when data analysis shows no treatment effect.[71]
Clinical trials are closely supervised by appropriate regulatory authorities. All studies involving a medical or therapeutic intervention on patients must be approved by a supervising ethics committee before permission is granted to run the trial. The local ethics committee has discretion on how it will supervise noninterventional studies (observational studies or those using already collected data). In the US, this body is called theInstitutional Review Board(IRB); in the EU, they are calledEthics committees. Most IRBs are located at the local investigator's hospital or institution, but some sponsors allow the use of a central (independent/for profit) IRB for investigators who work at smaller institutions.
To be ethical, researchers must obtain the full andinformed consentof participating human subjects. (One of the IRB's main functions is to ensure potential patients are adequately informed about the clinical trial.) If the patient is unable to consent for him/herself, researchers can seek consent from the patient's legally authorized representative. In addition, the clinical trial participants must be made aware that they can withdraw from the clinical trial at any time without any adverse action taken against them.[72]InCalifornia, the state has prioritized the individuals who can serve as the legally authorized representative.[73]
In some US locations, the local IRB must certify researchers and their staff before they can conduct clinical trials. They must understand the federal patient privacy (HIPAA) law and good clinical practice. The International Conference of Harmonisation Guidelines for Good Clinical Practice is a set of standards used internationally for the conduct of clinical trials. The guidelines aim to ensure the "rights, safety and well being of trial subjects are protected".
The notion of informed consent of participating human subjects exists in many countries but its precise definition may still vary.
Informed consent is clearly a 'necessary' condition for ethical conduct but does not 'ensure' ethical conduct. Incompassionate usetrials the latter becomes a particularly difficult problem. The final objective is to serve the community of patients or future patients in a best-possible and most responsible way. See alsoExpanded access. However, it may be hard to turn this objective into a well-defined, quantified, objective function. In some cases this can be done, however, for instance, for questions of when to stop sequential treatments (seeOdds algorithm), and then quantified methods may play an important role.
Additional ethical concerns are present when conductingclinical trials on children(pediatrics), and in emergency or epidemic situations.[74][75]
Ethically balancing the rights of multiple stakeholders may be difficult. For example, when drug trials fail, the sponsors may have a duty to tell current and potential investors immediately, which means both the research staff and the enrolled participants may first hear about the end of a trial through publicbusiness news.[76]
In response to specific cases in which unfavorable data from pharmaceutical company-sponsored research were not published, thePharmaceutical Research and Manufacturers of Americapublished new guidelines urging companies to report all findings and limit the financial involvement in drug companies by researchers.[77]TheUS Congresssigned into law a bill which requires PhaseII and PhaseIII clinical trials to be registered by the sponsor on theclinicaltrials.govwebsite compiled by theNational Institutes of Health.[78]
Drug researchers not directly employed by pharmaceutical companies often seek grants from manufacturers, and manufacturers often look to academic researchers to conduct studies within networks of universities and their hospitals, e.g., fortranslationalcancer research. Similarly, competition for tenured academic positions, government grants and prestige create conflicts of interest among academic scientists.[79]According to one study, approximately 75% of articles retracted for misconduct-related reasons have no declared industry financial support.[80]Seeding trialsare particularly controversial.[81]
In the United States, all clinical trials submitted to the FDA as part of a drug approval process are independently assessed by clinical experts within the Food and Drug Administration,[82]including inspections of primary data collection at selected clinical trial sites.[83]
In 2001, the editors of 12 major journals issued a joint editorial, published in each journal, on the control over clinical trials exerted by sponsors, particularly targeting the use of contracts which allow sponsors to review the studies prior to publication and withhold publication. They strengthened editorial restrictions to counter the effect. The editorial noted thatcontract research organizationshad, by 2000, received 60% of the grants frompharmaceutical companiesin the US. Researchers may be restricted from contributing to the trial design, accessing the raw data, and interpreting the results.[84]
Despite explicit recommendations by stakeholders of measures to improve the standards of industry-sponsored medical research,[85]in 2013,Tohenwarned of the persistence of a gap in the credibility of conclusions arising from industry-funded clinical trials, and called for ensuring strict adherence to ethical standards in industrial collaborations with academia, in order to avoid further erosion of the public's trust.[86]Issues referred for attention in this respect include potential observation bias, duration of the observation time for maintenance studies, the selection of the patient populations, factors that affect placebo response, and funding sources.[87][88][89]
Conducting clinical trials of vaccines during epidemics and pandemics is subject to ethical concerns. For diseases with high mortality rates like Ebola, assigning individuals to a placebo or control group can be viewed as a death sentence. In response to ethical concerns regarding clinical research during epidemics, the National Academy of Medicine authored a report identifying seven ethical and scientific considerations.[90]
Pregnant women and children are typically excluded from clinical trials as vulnerable populations, though the data to support excluding them is not robust. By excluding them from clinical trials, information about the safety and effectiveness of therapies for these populations is often lacking. During the early history of theHIV/AIDSepidemic, a scientist noted that by excluding these groups from potentially life-saving treatment, they were being "protected to death". Projects such as Research Ethics for Vaccines, Epidemics, and New Technologies (PREVENT) have advocated for the ethical inclusion of pregnant women in vaccine trials. Inclusion of children in clinical trials has additional moral considerations, as children lack decision-making autonomy. Trials in the past had been criticized for using hospitalized children or orphans; these ethical concerns effectively stopped future research. In efforts to maintain effective pediatric care, several European countries and the US have policies to entice or compel pharmaceutical companies to conduct pediatric trials. International guidance recommends ethical pediatric trials by limiting harm, considering varied risks, and taking into account the complexities of pediatric care.[90]
Responsibility for the safety of the subjects in a clinical trial is shared between the sponsor, the local site investigators (if different from the sponsor), the various IRBs that supervise the study, and (in some cases, if the study involves a marketable drug or device), the regulatory agency for the country where the drug or device will be sold.
A systematic concurrent safety review is frequently employed to assure research participant safety. The conduct and on-going review is designed to be proportional to the risk of the trial. Typically this role is filled by aData and Safety Committee, an externally appointed Medical Safety Monitor,[91]anIndependent Safety Officer, or for small or low-risk studies the principal investigator.[92]
For safety reasons, many clinical trials of drugs[93]are designed to exclude women of childbearing age, pregnant women, or women who become pregnant during the study. In some cases, the male partners of these women are also excluded or required to take birth control measures.
Throughout the clinical trial, the sponsor is responsible for accurately informing the local site investigators of the true historical safety record of the drug, device or other medical treatments to be tested, and of any potential interactions of the study treatment(s) with already approved treatments. This allows the local investigators to make an informed judgment on whether to participate in the study or not. The sponsor is also responsible for monitoring the results of the study as they come in from the various sites as the trial proceeds. In larger clinical trials, a sponsor will use the services of a data monitoring committee (DMC, known in the US as a data safety monitoring board). This independent group of clinicians and statisticians meets periodically to review the unblinded data the sponsor has received so far. The DMC has the power to recommend termination of the study based on their review, for example if the study treatment is causing more deaths than the standard treatment, or seems to be causing unexpected and study-related serious adverse events. The sponsor is responsible for collecting adverse event reports from all site investigators in the study, and for informing all the investigators of the sponsor's judgment as to whether these adverse events were related or not related to the study treatment.
The sponsor and the local site investigators are jointly responsible for writing a site-specific informed consent that accurately informs the potential subjects of the true risks and potential benefits of participating in the study, while at the same time presenting the material as briefly as possible and in ordinary language. FDA regulations state that participating in clinical trials is voluntary, with the subject having the right not to participate or to end participation at any time.[94]
The ethical principle of primum non nocere ("first, do no harm") guides the trial, and if an investigator believes the study treatment may be harming subjects in the study, the investigator can stop participating at any time. On the other hand, investigators often have a financial interest in recruiting subjects, and could act unethically to obtain and maintain their participation.
The local investigators are responsible for conducting the study according to the study protocol, and supervising the study staff throughout the duration of the study. The local investigator or his/her study staff are also responsible for ensuring the potential subjects in the study understand the risks and potential benefits of participating in the study. In other words, they (or their legally authorized representatives) must give truly informed consent.
Local investigators are responsible for reviewing all adverse event reports sent by the sponsor. These adverse event reports contain the opinions of both the investigator (at the site where the adverse event occurred) and the sponsor, regarding the relationship of the adverse event to the study treatments. Local investigators also are responsible for making an independent judgment of these reports, and promptly informing the local IRB of all serious and study treatment-related adverse events.
When a local investigator is the sponsor, there may not be formal adverse event reports, but study staff at all locations are responsible for informing the coordinating investigator of anything unexpected. The local investigator is responsible for being truthful to the local IRB in all communications relating to the study.
Approval by an Institutional Review Board (IRB), or Independent Ethics Committee (IEC), is necessary before all but the most informal research can begin. In commercial clinical trials, the study protocol is not approved by an IRB before the sponsor recruits sites to conduct the trial. However, the study protocol and procedures have been tailored to fit generic IRB submission requirements. In this case, and where there is no independent sponsor, each local site investigator submits the study protocol, the consent(s), the data collection forms, and supporting documentation to the local IRB. Universities and most hospitals have in-house IRBs. Other researchers (such as in walk-in clinics) use independent IRBs.
The IRB scrutinizes the study both for medical safety and for protection of the patients involved in the study, before it allows the researcher to begin the study. It may require changes in study procedures or in the explanations given to the patient. A required yearly "continuing review" report from the investigator updates the IRB on the progress of the study and any new safety information related to the study.
In the US, the FDA can audit the files of local site investigators after they have finished participating in a study, to see if they were correctly following study procedures. This audit may be random, or for cause (because the investigator is suspected of fraudulent data). Avoiding an audit is an incentive for investigators to follow study procedures. A 'covered clinical study' refers to a trial submitted to the FDA as part of a marketing application (for example, as part of an NDA or 510(k)), about which the FDA may require disclosure of financial interest of the clinical investigator in the outcome of the study. For example, the applicant must disclose whether an investigator owns equity in the sponsor, or owns proprietary interest in the product under investigation. The FDA defines a covered study as "...any study of a drug, biological product or device in humans submitted in a marketing application or reclassification petition that the applicant or FDA relies on to establish that the product is effective (including studies that show equivalence to an effective product) or any study in which a single investigator makes a significant contribution to the demonstration of safety."[95]
Alternatively, many American pharmaceutical companies have moved some clinical trials overseas. Benefits of conducting trials abroad include lower costs (in some countries) and the ability to run larger trials in shorter timeframes, whereas a potential disadvantage exists in lower-quality trial management.[96]Different countries have different regulatory requirements and enforcement abilities. An estimated 40% of all clinical trials now take place in Asia, Eastern Europe, and Central and South America. "There is no compulsory registration system for clinical trials in these countries and many do not follow European directives in their operations", says Jacob Sijtsma of the Netherlands-based WEMOS, an advocacy health organisation tracking clinical trials in developing countries.[97]
Beginning in the 1980s, harmonization of clinical trial protocols was shown to be feasible across countries of the European Union. At the same time, coordination between Europe, Japan and the United States led to a joint regulatory-industry initiative on international harmonization named after 1990 as the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH).[98] Currently, most clinical trial programs follow ICH guidelines, aimed at "ensuring that good quality, safe and effective medicines are developed and registered in the most efficient and cost-effective manner. These activities are pursued in the interest of the consumer and public health, to prevent unnecessary duplication of clinical trials in humans and to minimize the use of animal testing without compromising the regulatory obligations of safety and effectiveness."[99]
Aggregating safety data across clinical trials during drug development is important because trials are generally designed to focus on determining how well the drug works. The safety data collected and aggregated across multiple trials as the drug is developed allows the sponsor, investigators and regulatory agencies to monitor the aggregate safety profile of experimental medicines as they are developed. The value of assessing aggregate safety data is: a) decisions based on aggregate safety assessment during development of the medicine can be made throughout the medicine's development and b) it sets up the sponsor and regulators well for assessing the medicine's safety after the drug is approved.[100][101][102][103][104]
Clinical trial costs vary depending on trial phase, type of trial, and disease studied. A study of clinical trials conducted in the United States from 2004 to 2012 found the average cost of Phase I trials to be between $1.4 million and $6.6 million, depending on the type of disease. Phase II trials ranged from $7 million to $20 million, and Phase III trials from $11 million to $53 million.[105]
The cost of a study depends on many factors, especially the number of sites conducting the study, the number of patients involved, and whether the study treatment is already approved for medical use.
The expenses incurred by a pharmaceutical company in administering a Phase III or IV clinical trial may include, among others:
These expenses are incurred over several years.
In the US, sponsors may receive a 50 percent tax credit for clinical trials conducted on drugs being developed for the treatment of orphan diseases.[106] National health agencies, such as the US National Institutes of Health, offer grants to investigators who design clinical trials that attempt to answer research questions of interest to the agency. In these cases, the investigator who writes the grant and administers the study acts as the sponsor, and coordinates data collection from any other sites. These other sites may or may not be paid for participating in the study, depending on the amount of the grant and the amount of effort expected from them. Using internet resources can, in some cases, reduce the economic burden.[107]
Investigators are often compensated for their work in clinical trials. These amounts can be small, just covering a partial salary for research assistants and the cost of any supplies (usually the case with national health agency studies), or be substantial and include "overhead" that allows the investigator to pay the research staff during times between clinical trials.[citation needed]
Participants in Phase I drug trials do not gain any direct health benefit from taking part. They are generally paid a fee for their time, with payments regulated and not related to any risk involved. The motivations of healthy volunteers are not limited to financial reward and may include other motives such as contributing to science.[108] In later-phase trials, subjects may not be paid, so that their motivation for participating is the potential health benefit or the contribution to medical knowledge. Small payments may be made for study-related expenses such as travel or as compensation for their time in providing follow-up information about their health after the trial treatment ends.
Phase 0 and Phase I drug trials seek healthy volunteers. Most other clinical trials seek patients who have a specific disease or medical condition. The diversity observed in society should be reflected in clinical trials through the appropriate inclusion of ethnic minority populations.[109] Patient recruitment or participant recruitment plays a significant role in the activities and responsibilities of sites conducting clinical trials.[110]
All volunteers being considered for a trial are required to undertake a medical screening. Requirements differ according to the trial needs, but typically volunteers would be screened in a medical laboratory for:[111]
It has been observed that participants in clinical trials are disproportionately white.[112][113] Often, minorities are not informed about clinical trials.[114] One recent systematic review of the literature found that race/ethnicity and sex were not well represented, and at times not even tracked, among participants in a large number of clinical trials of hearing loss management in adults.[115] This may reduce the validity of findings in respect of non-white patients,[116] since the larger population is not adequately represented.
Depending on the kind of participants required, sponsors of clinical trials, or contract research organizations working on their behalf, try to find sites with qualified personnel as well as access to patients who could participate in the trial. Working with those sites, they may use various recruitment strategies, including patient databases, newspaper and radio advertisements, flyers, posters in places the patients might go (such as doctor's offices), and personal recruitment of patients by investigators.
Volunteers with specific conditions or diseases have additional online resources to help them locate clinical trials. For example, the Fox Trial Finder connects Parkinson's disease trials around the world to volunteers who have a specific set of criteria such as location, age, and symptoms.[117] Other disease-specific services exist for volunteers to find trials related to their condition.[118] Volunteers may search directly on ClinicalTrials.gov to locate trials using a registry run by the U.S. National Institutes of Health and National Library of Medicine. There also is software that allows clinicians to find trial options for an individual patient based on data such as genomic data.[119]
The risk information seeking and processing (RISP) model analyzes social implications that affect attitudes and decision making pertaining to clinical trials.[120]People who hold a higher stake or interest in the treatment provided in a clinical trial showed a greater likelihood of seeking information about clinical trials. Cancer patients reported more optimistic attitudes towards clinical trials than the general population. Having a more optimistic outlook on clinical trials also leads to greater likelihood of enrolling.[120]
Matching involves a systematic comparison of a patient's clinical and demographic information against the eligibility criteria of various trials. Methods include:
Although trials are commonly conducted at major medical centers, some participants are excluded due to the distance and expenses required for travel, leading to hardship, disadvantage, and inequity for participants, especially those in rural and underserved communities. Therefore, the concept of a "decentralized clinical trial" that minimizes or eliminates the need for patients to travel to sites,[125]is now more widespread, a capability improved bytelehealthandwearable technologies.[126]
|
https://en.wikipedia.org/wiki/Clinical_trial
|
Expanded access or compassionate use is the use of an unapproved drug or medical device under special forms of investigational new drug applications (IND) or IDE application for devices, outside of a clinical trial, by people with serious or life-threatening conditions who do not meet the enrollment criteria for the clinical trial in progress.
These programs go under various names, including early access, special access, or managed access program, compassionate use, compassionate access, named-patient access, temporary authorization for use, cohort access, and pre-approval access.[1][2][3]
In general the person and their doctor must apply for access to the investigational product, the company has to choose to cooperate, and the medicine's regulatory agency needs to agree that the risks and possible benefits of the drug or device are understood well enough to determine if putting the person at risk has sufficient potential benefit. In some countries the government will pay for the drug or device, but in many countries the person must pay for the drug or device, as well as medical services necessary to receive it.
In the US, compassionate use started with the provision of investigational medicine to certain patients in the late 1970s, and a formal program was established in 1987 in response to HIV/AIDS patients requesting access to drugs in development. An important legal case wasAbigail Alliance v. von Eschenbach, in which the Abigail Alliance, a group that advocates for access to investigational drugs for people who are terminally ill, tried to establish such access as a legal right. The Supreme Court declined to hear the case, effectively upholding previous cases that have maintained that there is not a constitutional right to unapproved medical products.
As of 2016, regulation of access to pharmaceuticals that were not approved for marketing was handled on a country-by-country basis, including in the European Union, where the European Medicines Agency issued guidelines for national regulatory agencies to follow. In the US, Europe, and the EU, no company could be compelled to provide a drug or device that it was developing.[1]
Companies sometimes provide drugs under these programs to people who were in clinical trials and who responded to the drug, after the clinical trial ends.[2][3]
In the US as of 2018, people could try to obtain unapproved drugs or medical devices that were in development under specific conditions.[4][5]
These conditions were:[5]
Drugs can be made available to individuals, small groups, or large groups.[5]
In the US, actual provision of the drug depends on the manufacturer's willingness to provide it, as well as the person's ability to pay for it; it is the company's decision whether to require payment or to provide the drug or device for free.[1] The manufacturer can only charge direct costs for individual INDs; it can add some but not all indirect costs for small group or larger expanded access programs.[6] To the extent that a doctor or clinic is required for use of the drug or device, they too may require payment.[1]
In some cases, it may be in the manufacturer's commercial interest to provide access under an EA program; this is a way, for example, for a company to make money before the drug or device is approved. Companies must provide data collected from people getting the drug or device under EA programs to the FDA annually; this data may be helpful with regard to getting the drug or device approved, or may be harmful, should unexpected adverse events occur. The manufacturer remains legally liable as well. If the manufacturer chooses to charge for the investigational product, that price influences later discussions about the price if the product is approved for marketing.[1]
As of February 2019, 41 states have passed right-to-try laws that permit manufacturers to provide experimental medicines to terminally ill people without US FDA authorization.[7] Legal, medical, and bioethics scholars, including Jonathan Darrow and Arthur Caplan, have argued that these state laws have little practical significance because people can already obtain pre-approval access through the FDA's expanded access program, and because the FDA is generally not the limiting factor in obtaining pre-approval access.[8][9]
In Europe, the European Medicines Agency issued guidelines that members may follow. Each country has its own regulations, and they vary. In the UK, for example, the program is called the "early access to medicines scheme" (EAMS) and was established in 2014. If a company wants to provide a drug under EAMS, it must submit its Phase I data to the Medicines and Healthcare products Regulatory Agency and apply for what is called a "promising innovative medicine" (PIM) designation. If that designation is approved, the data is reviewed; if that review is positive, the National Health Service is obligated to pay for people who fit the criteria to have access to the drug. As of 2016, governments also paid for early access to drugs in Austria, Germany, Greece, and Spain.[1] Since 2021, France has had a system of early and expanded access separated into two schemes: AAC and AAP.[10]
Companies sometimes make use of expanded programs in Europe even after they receive EMA approval to market a drug, because drugs also must go through regulatory processes in each member state, and in some countries this process can take nearly a year; companies can start making sales earlier under these programs.[1]
In the Philippines, the usage of unregistered drugs may be allowed through a doctor, a specialist, or health institution or society obtaining a specific compassionate use permit (CSP) from the country's Food and Drug Administration for the treatment of their terminally or seriously ill patients. The issuance of CSP is stated under Department of Health Administrative Order No. 4 of 1992.[11]
Those seeking a CSP are required to provide the following information: the estimated amount of the unregistered drug needed by the patient, the "licensed drug/device establishment through which the unregistered drug may be procured", and "the names and address of the specialists qualified and authorized to use the product." A CSP may also be obtained for processed medical cannabis, despite cannabis in general being illegal in the Philippines.[11][12]
In the US, one of the earliest expanded access programs was a compassionate use IND that was established in 1978, which allowed a limited number of people to use medical cannabis grown at the University of Mississippi, under the direction of Marijuana Research Project Director Dr. Mahmoud ElSohly. It is administered by the National Institute on Drug Abuse.[citation needed]
The program was started after Robert C. Randall brought a lawsuit (Randall v. United States)[13] against the FDA, the Drug Enforcement Administration, the National Institute on Drug Abuse, the Department of Justice, and the Department of Health, Education & Welfare. Randall, who had glaucoma, had successfully used the Common Law doctrine of necessity to argue against criminal charges of marijuana cultivation that had been brought against him, because his use of cannabis was deemed a medical necessity (U.S. v. Randall).[13] On November 24, 1976, federal Judge James Washington ruled in his favor.[14]: 142[15]
The settlement in Randall v. U.S. became the legal basis for the FDA's compassionate IND program.[13] Only people who had certain conditions, like glaucoma, known to be alleviated with cannabis were allowed to use cannabis under the program. The scope was later expanded to include people with AIDS in the mid-1980s. At its peak, fifteen people received the drug. 43 people were approved for the program, but 28 of the people whose doctors completed the necessary paperwork never received any cannabis.[16][14] The program stopped accepting new people in 1992 after public health authorities concluded there was no scientific value to it, and due to the policies of President George H. W. Bush's administration. As of 2011, four people continued to receive cannabis from the government under the program.[17]
The closure of the program during the height of the AIDS epidemic led to the formation of the medical cannabis movement in the United States, a movement which initially sought to provide cannabis for treating anorexia and wasting syndrome in people with AIDS.[18]
In November 2001 the Abigail Alliance for Better Access to Developmental Drugs was established by Frank Burroughs in memory of his daughter, Abigail.[19] The Alliance seeks broader availability of investigational drugs on behalf of people with terminal illnesses. It is best known for a legal case, which it lost, Abigail Alliance v. von Eschenbach, in which it was represented by the Washington Legal Foundation. On August 7, 2007, in an 8–2 ruling, the U.S. Court of Appeals for the District of Columbia Circuit reversed an earlier ruling in favor of the Alliance.[20] In 2008, the Supreme Court of the United States declined to hear their appeal. This decision left standing the appellate court decision that terminally ill patients have no legal right to demand "a potentially toxic drug with no proven therapeutic benefit".[21]
In March 2014, Josh Hardy, a 7-year-old boy from Virginia, made national headlines and sparked a conversation on pediatric access to investigational drugs when his family's request for brincidofovir was declined by the drug manufacturer, Chimerix.[22] The company reversed its decision after pressure from cancer advocacy organizations, and Josh received the drug that saved his life.[23][24] Hardy later died in September 2016 due to complications related to his underlying cancer diagnosis.[25] In 2016 Kids v Cancer, a pediatric cancer advocacy organization, launched the Compassionate Use Navigator to assist physicians and guide families through the application process.[26] Since then, the FDA has simplified the application process, but stressed that it cannot require a manufacturer to provide a product.[27][28] The FDA receives about 1,500 expanded access requests per year and authorizes about 99% of them.[29]
|
https://en.wikipedia.org/wiki/Expanded_access
|
In mathematics, Sperner's lemma is a combinatorial result on colorings of triangulations, analogous to the Brouwer fixed point theorem, which is equivalent to it.[1] It states that every Sperner coloring (described below) of a triangulation of an n-dimensional simplex contains a cell whose vertices all have different colors.
The initial result of this kind was proved by Emanuel Sperner, in relation with proofs of invariance of domain. Sperner colorings have been used for effective computation of fixed points and in root-finding algorithms, and are applied in fair division (cake cutting) algorithms.
According to the Soviet Mathematical Encyclopaedia (ed. I. M. Vinogradov), a related 1929 theorem (of Knaster, Borsuk and Mazurkiewicz) had also become known as the Sperner lemma; this point is discussed in the English translation (ed. M. Hazewinkel). It is now commonly known as the Knaster–Kuratowski–Mazurkiewicz lemma.
In one dimension, Sperner's lemma can be regarded as a discrete version of the intermediate value theorem. In this case, it essentially says that if a discrete function takes only the values 0 and 1, begins at the value 0 and ends at the value 1, then it must switch values an odd number of times.
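A minimal sketch of the one-dimensional case (not part of the original article; the particular 0/1 sequence below is an arbitrary example) counts the value switches of a sequence that starts at 0 and ends at 1:

```python
# One-dimensional Sperner: a 0/1 sequence starting at 0 and ending at 1
# must switch values an odd number of times.
values = [0, 0, 1, 0, 0, 1, 1, 1]                       # illustrative discrete function
switches = sum(a != b for a, b in zip(values, values[1:]))
print(switches)                                          # 3, an odd number
assert values[0] == 0 and values[-1] == 1 and switches % 2 == 1
```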
The two-dimensional case is the one referred to most frequently. It is stated as follows:
Subdivide a triangle ABC arbitrarily into a triangulation consisting of smaller triangles meeting edge to edge. Then a Sperner coloring of the triangulation is defined as an assignment of three colors to the vertices of the triangulation such that each of the three vertices A, B, and C of the initial triangle receives a distinct color (say 1, 2, and 3), and every vertex lying on an edge of ABC receives one of the two colors of that edge's endpoints; vertices in the interior may be colored arbitrarily.
Then every Sperner coloring of every triangulation has at least one "rainbow triangle", a smaller triangle in the triangulation that has its vertices colored with all three different colors. More precisely, there must be an odd number of rainbow triangles.
In the general case the lemma refers to an n-dimensional simplex
𝒜 = A_1 A_2 … A_{n+1}.
Consider any triangulation T, a disjoint division of 𝒜 into smaller n-dimensional simplices, again meeting face-to-face. Denote the coloring function as f : S → {1, 2, …, n + 1}, where S is the set of vertices of T. A coloring function defines a Sperner coloring when the vertices of T located on any k-dimensional subface of the large simplex
A_{i_1} A_{i_2} … A_{i_{k+1}}
are colored only with the colors
i_1, i_2, …, i_{k+1}.
Then every Sperner coloring of every triangulation of the n-dimensional simplex has an odd number of instances of a rainbow simplex, meaning a simplex whose vertices are colored with all n + 1 colors. In particular, there must be at least one rainbow simplex.
We shall first address the two-dimensional case. Consider a graph G built from the triangulation T as follows: the vertices of G are the members of T plus the area outside the triangle, and two vertices are connected with an edge if their corresponding areas share a common border segment with one endpoint colored 1 and the other colored 2.
Note that on the interval AB there is an odd number of borders colored 1-2 (simply because A is colored 1, B is colored 2; and as we move along AB, there must be an odd number of color changes in order to get different colors at the beginning and at the end). On the intervals BC and CA, there are no borders colored 1-2 at all. Therefore, the vertex of G corresponding to the outer area has an odd degree. But it is known (the handshaking lemma) that in a finite graph there is an even number of vertices with odd degree. Therefore, the remaining graph, excluding the outer area, has an odd number of vertices with odd degree corresponding to members of T.
It can be easily seen that the only possible degree of a triangle from T is 0, 1, or 2, and that degree 1 corresponds to a triangle colored with the three colors 1, 2, and 3.
Thus we have obtained a slightly stronger conclusion, which says that in a triangulation T there is an odd number (and at least one) of full-colored triangles.
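The parity statement is easy to check by hand on a tiny example. The sketch below (illustrative only; the triangulation is the midpoint subdivision of ABC and the coloring is one arbitrary Sperner coloring of it) counts the rainbow triangles directly:

```python
# Midpoint subdivision of triangle ABC into four small triangles.
triangles = [
    ("A", "MAB", "MCA"),      # corner triangle at A
    ("MAB", "B", "MBC"),      # corner triangle at B
    ("MCA", "MBC", "C"),      # corner triangle at C
    ("MAB", "MBC", "MCA"),    # central triangle
]

# A Sperner coloring: corners get distinct colors; each midpoint uses one of
# the two colors of the endpoints of the boundary edge it lies on.
color = {"A": 1, "B": 2, "C": 3, "MAB": 2, "MBC": 2, "MCA": 1}

rainbow = [t for t in triangles if {color[v] for v in t} == {1, 2, 3}]
print(len(rainbow), rainbow)      # 1 rainbow triangle, an odd number as guaranteed
assert len(rainbow) % 2 == 1
```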
A multidimensional case can be proved by induction on the dimension of a simplex. We apply the same reasoning as in the two-dimensional case to conclude that in an n-dimensional triangulation there is an odd number of full-colored simplices.
Here is an elaboration of the proof given previously, for a reader new to graph theory.
This diagram numbers the colors of the vertices of the example given previously. The small triangles whose vertices all have different numbers are shaded in the graph. Each small triangle becomes a node in the new graph derived from the triangulation. The small letters identify the areas, eight inside the figure, and area i designates the space outside of it.
As described previously, those nodes that share an edge whose endpoints are numbered 1 and 2 are joined in the derived graph. For example, node d shares an edge with the outer area i, and its vertices all have different numbers, so it is also shaded. Node b is not shaded because two vertices have the same number, but it is joined to the outer area.
One could add a new full-numbered triangle, say by inserting a node numbered 3 into the edge between 1 and 1 of node a, and joining that node to the other vertex of a. Doing so would have to create a pair of new nodes, like the situation with nodes f and g.
Andrew McLennan and Rabee Tourky presented a different proof, using the volume of a simplex. It proceeds in one step, with no induction.[2][3]
Suppose there is a d-dimensional simplex of side-length N, and it is triangulated into sub-simplices of side-length 1. There is a function that, given any vertex of the triangulation, returns its color. The coloring is guaranteed to satisfy Sperner's boundary condition. How many times do we have to call the function in order to find a rainbow simplex? Obviously, we can go over all the triangulation vertices, whose number is O(N^d), which is polynomial in N when the dimension is fixed. But can it be done in time O(poly(log N)), which is polynomial in the binary representation of N?
This problem was first studied by Christos Papadimitriou. He introduced a complexity class called PPAD, which contains this as well as related problems (such as finding a Brouwer fixed point). He proved that finding a Sperner simplex is PPAD-complete even for d = 3. Some 15 years later, Chen and Deng proved PPAD-completeness even for d = 2.[4] It is believed that PPAD-hard problems cannot be solved in time O(poly(log N)).
Suppose that each vertex of the triangulation may be labeled with multiple colors, so that the coloring function is f : S → 2^[n+1].
For every sub-simplex, the set of labelings on its vertices is a set-family over the set of colors [n + 1]. This set-family can be seen as a hypergraph.
If, for every vertex v on a face of the simplex, the colors in f(v) are a subset of the set of colors on the face endpoints, then there exists a sub-simplex with a balanced labeling – a labeling in which the corresponding hypergraph admits a perfect fractional matching. To illustrate, here are some balanced labeling examples for n = 2:
This was proved by Shapley in 1973.[5] It is a combinatorial analogue of the KKMS lemma.
Suppose that we have a d-dimensional polytope P with n vertices. P is triangulated, and each vertex of the triangulation is labeled with a label from {1, …, n}. Every main vertex i is labeled i. A sub-simplex is called fully-labeled if it is d-dimensional, and each of its d + 1 vertices has a different label. If every vertex in a face F of P is labeled with one of the labels on the endpoints of F, then there are at least n – d fully-labeled simplices. Some special cases are:
The general statement was conjectured by Atanassov in 1996, who proved it for the case d = 2.[6] The proof of the general case was first given by de Loera, Peterson, and Su in 2002.[7] They provide two proofs: the first is non-constructive and uses the notion of pebble sets; the second is constructive and is based on arguments of following paths in graphs.
Meunier[8] extended the theorem from polytopes to polytopal bodies, which need not be convex or simply-connected. In particular, if P is a polytope, then the set of its faces is a polytopal body. In every Sperner labeling of a polytopal body with vertices v1, …, vn, there are at least:
fully-labeled simplices such that any pair of these simplices receives two different labelings. The degree deg_B(P)(v_i) is the number of edges of B(P) to which v_i belongs. Since the degree is at least d, the lower bound is at least n – d. But it can be larger. For example, for the cyclic polytope in 4 dimensions with n vertices, the lower bound is:
Musin[9] further extended the theorem to d-dimensional piecewise-linear manifolds, with or without a boundary.
Asada, Frick, Pisharody, Polevy, Stoner, Tsang and Wellner[10] further extended the theorem to pseudomanifolds with boundary, and improved the lower bound on the number of facets with pairwise-distinct labels.
Suppose that, instead of a simplex triangulated into sub-simplices, we have an n-dimensional cube partitioned into smaller n-dimensional cubes.
Harold W. Kuhn[11] proved the following lemma. Suppose the cube [0, M]^n, for some integer M, is partitioned into M^n unit cubes. Suppose each vertex of the partition is labeled with a label from {1, …, n + 1}, such that for every vertex v: (1) if v_i = 0 then the label on v is at most i; (2) if v_i = M then the label on v is not i. Then there exists a unit cube with all the labels {1, …, n + 1} (some of them more than once). The special case n = 2 is: suppose a square is partitioned into sub-squares, and each vertex is labeled with a label from {1, 2, 3}. The left edge is labeled with 1 (= at most 1); the bottom edge is labeled with 1 or 2 (= at most 2); the top edge is labeled with 1 or 3 (= not 2); and the right edge is labeled with 2 or 3 (= not 1). Then there is a square labeled with 1, 2, 3.
Another variant, related to the Poincaré–Miranda theorem,[12] is as follows. Suppose the cube [0, M]^n is partitioned into M^n unit cubes. Suppose each vertex is labeled with a binary vector of length n, such that for every vertex v: (1) if v_i = 0 then coordinate i of the label on v is 0; (2) if v_i = M then coordinate i of the label on v is 1; (3) if two vertices are neighbors, then their labels differ by at most one coordinate. Then there exists a unit cube in which all 2^n labels are different. In two dimensions, another way to formulate this theorem is:[13] in any labeling that satisfies conditions (1) and (2), there is at least one cell in which the sum of labels is 0 [a 1-dimensional cell with (1, 1) and (−1, −1) labels, or a 2-dimensional cell with all four different labels].
Wolsey[14] strengthened these two results by proving that the number of completely-labeled cubes is odd.
Musin[13] extended these results to general quadrangulations.
Suppose that, instead of a single labeling, we have n different Sperner labelings. We consider pairs (simplex, permutation) such that the label of each vertex of the simplex is chosen from a different labeling (so for each simplex, there are n! different pairs). Then there are at least n! fully labeled pairs. This was proved by Ravindra Bapat[15] for any triangulation. A simpler proof, which only works for specific triangulations, was presented later by Su.[16]
Another way to state this lemma is as follows. Suppose there are n people, each of whom produces a different Sperner labeling of the same triangulation. Then there exists a simplex, and a matching of the people to its vertices, such that each vertex is labeled by its owner differently (one person labels its vertex by 1, another person labels its vertex by 2, etc.). Moreover, there are at least n! such matchings. This can be used to find an envy-free cake-cutting with connected pieces.
Asada, Frick, Pisharody, Polevy, Stoner, Tsang and Wellner[10] extended this theorem to pseudomanifolds with boundary.
More generally, suppose we have m different Sperner labelings, where m may be different from n. Then:[17]: Thm 2.1
Both versions reduce to Sperner's lemma when m = 1, or when all m labelings are identical.
See [18] for similar generalizations.
Brown and Cairns[19] strengthened Sperner's lemma by considering the orientation of simplices. Each sub-simplex has an orientation that can be either +1 or −1 (if it is fully-labeled), or 0 (if it is not fully-labeled). They proved that the sum of all orientations of simplices is +1. In particular, this implies that there is an odd number of fully-labeled simplices.
As an example for n = 3, suppose a triangle is triangulated and labeled with {1, 2, 3}. Consider the cyclic sequence of labels on the boundary of the triangle. Define the degree of the labeling as the number of switches from 1 to 2, minus the number of switches from 2 to 1. Note that the degree is the same if we count switches from 2 to 3 minus 3 to 2, or from 3 to 1 minus 1 to 3.
Musin proved that the number of fully labeled triangles is at least the degree of the labeling.[20] In particular, if the degree is nonzero, then there exists at least one fully labeled triangle.
If a labeling satisfies the Sperner condition, then its degree is exactly 1: there are 1-2 and 2-1 switches only in the side between vertices 1 and 2, and the number of 1-2 switches must be one more than the number of 2-1 switches (when walking from vertex 1 to vertex 2). Therefore, the original Sperner lemma follows from Musin's theorem.
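A hedged sketch of the degree computation described above (the boundary label sequences are arbitrary illustrative examples):

```python
# Degree of a cyclic boundary labeling:
# (# of 1->2 switches) - (# of 2->1 switches) read around the boundary.
def degree(boundary_labels):
    n = len(boundary_labels)
    pairs = [(boundary_labels[i], boundary_labels[(i + 1) % n]) for i in range(n)]
    return pairs.count((1, 2)) - pairs.count((2, 1))

# Boundary of a Sperner-labeled triangle, read cyclically: side 1-2, side 2-3, side 3-1.
print(degree([1, 1, 2, 2, 3, 3]))   # 1, consistent with the original lemma
print(degree([1, 2, 1, 2, 3, 3]))   # also 1: the extra 1-2 and 2-1 switches cancel
```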
There is a similar lemma about finite and infinite trees and cycles.[21]
Mirzakhani and Vondrak[22] study a weaker variant of a Sperner labeling, in which the only requirement is that label i is not used on the face opposite to vertex i. They call it a Sperner-admissible labeling. They show that there are Sperner-admissible labelings in which every cell contains at most 4 labels. They also prove an optimal lower bound on the number of cells that must have at least two different labels in each Sperner-admissible labeling, and they prove that, for any Sperner-admissible partition of the regular simplex, the total area of the boundary between the parts is minimized by the Voronoi partition.
Sperner colorings have been used for effective computation of fixed points. A Sperner coloring can be constructed such that fully labeled simplices correspond to fixed points of a given function. By making a triangulation smaller and smaller, one can show that the limit of the fully labeled simplices is exactly the fixed point. Hence, the technique provides a way to approximate fixed points.
A related application is the numerical detection of periodic orbits and symbolic dynamics.[23] Sperner's lemma can also be used in root-finding algorithms and fair division algorithms; see Simmons–Su protocols.
Sperner's lemma is one of the key ingredients of the proof of Monsky's theorem, that a square cannot be cut into an odd number of equal-area triangles.[24]
Sperner's lemma can be used to find a competitive equilibrium in an exchange economy, although there are more efficient ways to find it.[25]: 67
Fifty years after first publishing it, Sperner presented a survey on the development, influence and applications of his combinatorial lemma.[26]
There are several fixed-point theorems which come in three equivalent variants: an algebraic topology variant, a combinatorial variant and a set-covering variant. Each variant can be proved separately using totally different arguments, but each variant can also be reduced to the other variants in its row. Additionally, each result in the top row can be deduced from the one below it in the same column.[27]
|
https://en.wikipedia.org/wiki/Sperner%27s_lemma
|
In mathematics, the discrete exterior calculus (DEC) is the extension of the exterior calculus to discrete spaces including graphs, finite element meshes, and lately also general polygonal meshes[1] (non-flat and non-convex). DEC methods have proved to be very powerful in improving and analyzing finite element methods: for instance, DEC-based methods allow the use of highly non-uniform meshes to obtain accurate results. Non-uniform meshes are advantageous because they allow the use of large elements where the process to be simulated is relatively simple, as opposed to a fine resolution where the process may be complicated (e.g., near an obstruction to a fluid flow), while using less computational power than if a uniformly fine mesh were used.
Stokes' theorem relates the integral of a differential (n − 1)-form ω over the boundary ∂M of an n-dimensional manifold M to the integral of dω (the exterior derivative of ω, and a differential n-form on M) over M itself:
∫_M dω = ∫_{∂M} ω.
One could think of differential k-forms as linear operators that act on k-dimensional "bits" of space, in which case one might prefer to use the bracket notation for a dual pairing. In this notation, Stokes' theorem reads as
⟨dω, M⟩ = ⟨ω, ∂M⟩.
In finite element analysis, the first stage is often the approximation of the domain of interest by a triangulation, T. For example, a curve would be approximated as a union of straight line segments; a surface would be approximated by a union of triangles, whose edges are straight line segments, which themselves terminate in points. Topologists would refer to such a construction as a simplicial complex. The boundary operator on this triangulation/simplicial complex T is defined in the usual way: for example, if L is a directed line segment from one point, a, to another, b, then the boundary ∂L of L is the formal difference b − a.
A k-form on T is a linear operator acting on k-dimensional subcomplexes of T; e.g., a 0-form assigns values to points, and extends linearly to linear combinations of points; a 1-form assigns values to line segments in a similarly linear way. If ω is a k-form on T, then the discrete exterior derivative dω of ω is the unique (k + 1)-form defined so that Stokes' theorem holds:
⟨dω, S⟩ = ⟨ω, ∂S⟩
for every (k + 1)-dimensional subcomplex S of T.
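For a concrete feel of this definition, the following sketch (an illustrative toy example, not a full DEC implementation) applies the discrete exterior derivative to a 0-form on a small oriented 1-complex and checks the defining Stokes relation on one edge:

```python
# A tiny oriented 1-complex: three directed edges on the points {a, b, c}.
edges = [("a", "b"), ("b", "c"), ("a", "c")]
omega0 = {"a": 1.0, "b": 4.0, "c": 6.0}            # a 0-form: values on points

def d0(omega, edges):
    # The unique 1-form with <d(omega), L> = <omega, boundary of L> for each
    # directed segment L = (tail, head), whose boundary is head - tail.
    return {(tail, head): omega[head] - omega[tail] for tail, head in edges}

domega = d0(omega0, edges)
print(domega)          # {('a', 'b'): 3.0, ('b', 'c'): 2.0, ('a', 'c'): 5.0}
# Stokes check on the edge (a, c): <d(omega), (a, c)> equals omega(c) - omega(a).
assert domega[("a", "c")] == omega0["c"] - omega0["a"]
```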
Other operators and operations such as the discrete wedge product,[2] Hodge star, or Lie derivative can also be defined.
|
https://en.wikipedia.org/wiki/Discrete_exterior_calculus
|
In mathematics, topological graph theory is a branch of graph theory. It studies the embedding of graphs in surfaces, spatial embeddings of graphs, and graphs as topological spaces.[1] It also studies immersions of graphs.
Embedding a graph in a surface means that we want to draw the graph on a surface, a sphere for example, without two edges intersecting. A basic embedding problem often presented as a mathematical puzzle is the three utilities problem. Other applications can be found in printing electronic circuits, where the aim is to print (embed) a circuit (the graph) on a circuit board (the surface) without two connections crossing each other and resulting in a short circuit.
To an undirected graph we may associate an abstract simplicial complex C with a single-element set per vertex and a two-element set per edge. The geometric realization |C| of the complex consists of a copy of the unit interval [0, 1] per edge, with the endpoints of these intervals glued together at vertices. In this view, embeddings of graphs into a surface or as subdivisions of other graphs are both instances of topological embedding, homeomorphism of graphs is just the specialization of topological homeomorphism, the notion of a connected graph coincides with topological connectedness, and a connected graph is a tree if and only if its fundamental group is trivial.
Other simplicial complexes associated with graphs include the Whitney complex or clique complex, with a set per clique of the graph, and the matching complex, with a set per matching of the graph (equivalently, the clique complex of the complement of the line graph). The matching complex of a complete bipartite graph is called a chessboard complex, as it can also be described as the complex of sets of nonattacking rooks on a chessboard.[2]
John Hopcroft and Robert Tarjan[3] derived a means of testing the planarity of a graph in time linear in the number of edges. Their algorithm does this by constructing a graph embedding which they term a "palm tree". Efficient planarity testing is fundamental to graph drawing.
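For readers who want to experiment, linear-time planarity testing is available in common graph libraries. The sketch below assumes the third-party networkx package; its check_planarity routine is based on the left-right planarity criterion rather than the Hopcroft–Tarjan palm-tree algorithm itself, but it likewise returns an embedding when one exists:

```python
import networkx as nx

# K4 is planar; K3,3 (the three-utilities graph) is not.
is_planar, embedding = nx.check_planarity(nx.complete_graph(4))
print(is_planar)                 # True: an embedding certificate is returned

is_planar, _ = nx.check_planarity(nx.complete_bipartite_graph(3, 3))
print(is_planar)                 # False: no planar embedding exists
```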
Fan Chung et al.[4] studied the problem of embedding a graph into a book with the graph's vertices in a line along the spine of the book. Its edges are drawn on separate pages in such a way that edges residing on the same page do not cross. This problem abstracts layout problems arising in the routing of multilayer printed circuit boards.
Graph embeddings are also used to prove structural results about graphs, via graph minor theory and the graph structure theorem.
|
https://en.wikipedia.org/wiki/Topological_graph_theory
|
In mathematics, combinatorial topology was an older name for algebraic topology, dating from the time when topological invariants of spaces (for example the Betti numbers) were regarded as derived from combinatorial decompositions of spaces, such as decomposition into simplicial complexes. After the proof of the simplicial approximation theorem this approach provided rigour.
The change of name reflected the move to organise topological classes such as cycles-modulo-boundaries explicitly into abelian groups. This point of view is often attributed to Emmy Noether,[1] and so the change of title may reflect her influence. The transition is also attributed to the work of Heinz Hopf,[2] who was influenced by Noether, and to Leopold Vietoris and Walther Mayer, who independently defined homology.[3]
A fairly precise date can be supplied in the internal notes of the Bourbaki group. While this kind of topology was still "combinatorial" in 1942, it had become "algebraic" by 1944.[4] This corresponds also to the period where homological algebra and category theory were introduced for the study of topological spaces, and largely supplanted combinatorial methods.
More recently, the term combinatorial topology has been revived for investigations carried out by treating topological objects as composed of pieces, as in the older combinatorial topology, an approach that has again been found useful.
Azriel Rosenfeld (1973) proposed digital topology for a type of image processing that can be considered as a new development of combinatorial topology. The digital forms of the Euler characteristic theorem and the Gauss–Bonnet theorem were obtained by Li Chen and Yongwu Rong.[5][6] A 2D grid cell topology already appeared in the Alexandrov–Hopf book Topologie I (1935).
Gottfried Wilhelm Leibniz had envisioned a form of combinatorial topology as early as 1679 in his work Characteristica Geometrica.[7]
|
https://en.wikipedia.org/wiki/Combinatorial_topology
|
In mathematics, a finite topological space is a topological space for which the underlying point set is finite. That is, it is a topological space which has only finitely many elements.
Finite topological spaces are often used to provide examples of interesting phenomena or counterexamples to plausible-sounding conjectures. William Thurston has called the study of finite topologies in this sense "an oddball topic that can lend good insight to a variety of questions".[1]
Let X be a finite set. A topology on X is a subset τ of P(X) (the power set of X) such that both ∅ and X belong to τ, and τ is closed under unions and intersections.
In other words, a subset τ of P(X) is a topology if τ contains both ∅ and X and is closed under arbitrary unions and intersections. Elements of τ are called open sets. The general description of topological spaces requires that a topology be closed under arbitrary (finite or infinite) unions of open sets, but only under intersections of finitely many open sets. Here, that distinction is unnecessary. Since the power set of a finite set is finite, there can be only finitely many open sets (and only finitely many closed sets).
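Because everything is finite, these axioms can be checked by brute force. The sketch below (an illustrative example; the set X and the candidate families are arbitrary choices) tests whether a family of subsets is a topology:

```python
# Check the finite-topology axioms: contains the empty set and X, and is
# closed under pairwise unions and intersections (enough in the finite case).
def is_topology(tau, X):
    tau = set(tau)
    if frozenset() not in tau or frozenset(X) not in tau:
        return False
    return all(A | B in tau and A & B in tau for A in tau for B in tau)

X = frozenset("abc")
chain_family = [frozenset(), frozenset("a"), frozenset("ab"), X]
broken_family = [frozenset(), frozenset("a"), frozenset("b"), X]

print(is_topology(chain_family, X))    # True: a nested chain of open sets
print(is_topology(broken_family, X))   # False: {a} | {b} = {a, b} is missing
```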
A topology on a finite set can also be thought of as a sublattice of (P(X), ⊂) which includes both the bottom element ∅ and the top element X.
There is a unique topology on the empty set ∅. The only open set is the empty one. Indeed, this is the only subset of ∅.
Likewise, there is a unique topology on a singleton set {a}. Here the open sets are ∅ and {a}. This topology is both discrete and trivial, although in some ways it is better to think of it as a discrete space since it shares more properties with the family of finite discrete spaces.
For any topological space X there is a unique continuous function from ∅ to X, namely the empty function. There is also a unique continuous function from X to the singleton space {a}, namely the constant function to a. In the language of category theory the empty space serves as an initial object in the category of topological spaces while the singleton space serves as a terminal object.
Let X = {a, b} be a set with 2 elements. There are four distinct topologies on X: the trivial topology {∅, {a, b}}; the topology {∅, {a}, {a, b}}; the topology {∅, {b}, {a, b}}; and the discrete topology {∅, {a}, {b}, {a, b}}.
The second and third topologies above are easily seen to be homeomorphic. The function from X to itself which swaps a and b is a homeomorphism. A topological space homeomorphic to one of these is called a Sierpiński space. So, in fact, there are only three inequivalent topologies on a two-point set: the trivial one, the discrete one, and the Sierpiński topology.
The specialization preorder on the Sierpiński space {a, b} with {b} open is given by: a ≤ a, b ≤ b, and a ≤ b.
Let X = {a, b, c} be a set with 3 elements. There are 29 distinct topologies on X but only 9 inequivalent topologies:
The last 5 of these are all T0. The first one is trivial, while in 2, 3, and 4 the points a and b are topologically indistinguishable.
Let X = {a, b, c, d} be a set with 4 elements. There are 355 distinct topologies on X but only 33 inequivalent topologies:
The last 16 of these are all T0.
Topologies on a finite set X are in one-to-one correspondence with preorders on X. Recall that a preorder on X is a binary relation on X which is reflexive and transitive.
Given a (not necessarily finite) topological space X we can define a preorder on X by declaring x ≤ y if and only if x ∈ cl{y},
where cl{y} denotes the closure of the singleton set {y}. This preorder is called the specialization preorder on X. Every open set U of X will be an upper set with respect to ≤ (i.e. if x ∈ U and x ≤ y then y ∈ U). Now if X is finite, the converse is also true: every upper set is open in X. So for finite spaces, the topology on X is uniquely determined by ≤.
Going in the other direction, suppose (X, ≤) is a preordered set. Define a topology τ on X by taking the open sets to be the upper sets with respect to ≤. Then the relation ≤ will be the specialization preorder of (X, τ). The topology defined in this way is called the Alexandrov topology determined by ≤.
The equivalence between preorders and finite topologies can be interpreted as a version of Birkhoff's representation theorem, an equivalence between finite distributive lattices (the lattice of open sets of the topology) and partial orders (the partial order of equivalence classes of the preorder). This correspondence also works for a larger class of spaces called finitely generated spaces. Finitely generated spaces can be characterized as the spaces in which an arbitrary intersection of open sets is open. Finite topological spaces are a special class of finitely generated spaces.
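The correspondence is easy to make concrete. The following sketch (illustrative only; the preorder below is an arbitrary example) builds the Alexandrov topology of a preorder as its family of upper sets and then recovers the preorder as the specialization preorder:

```python
from itertools import chain, combinations

X = ["a", "b", "c"]
# An illustrative preorder: reflexive pairs plus a <= b and a <= c.
le = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "b"), ("a", "c")}

def upper_sets(X, le):
    # Open sets of the Alexandrov topology: subsets S with x in S, x <= y => y in S.
    subsets = chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))
    return [frozenset(S) for S in subsets
            if all(y in S for x in S for y in X if (x, y) in le)]

def specialization(X, tau):
    # x <= y iff every open set containing x also contains y (x in cl{y}).
    return {(x, y) for x in X for y in X
            if all(y in U for U in tau if x in U)}

tau = upper_sets(X, le)
print(sorted(map(sorted, tau)))        # [], ['a','b','c'], ['b'], ['b','c'], ['c'] (in some order)
assert specialization(X, tau) == le    # the round trip recovers the preorder
```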
Every finite topological space is compact since any open cover must already be finite. Indeed, compact spaces are often thought of as a generalization of finite spaces since they share many of the same properties.
Every finite topological space is also second-countable (there are only finitely many open sets) and separable (since the space itself is countable).
If a finite topological space is T1 (in particular, if it is Hausdorff) then it must, in fact, be discrete. This is because the complement of a point is a finite union of closed points and therefore closed. It follows that each point must be open.
Therefore, any finite topological space which is not discrete cannot be T1, Hausdorff, or anything stronger.
However, it is possible for a non-discrete finite space to be T0. In general, two points x and y are topologically indistinguishable if and only if x ≤ y and y ≤ x, where ≤ is the specialization preorder on X. It follows that a space X is T0 if and only if the specialization preorder ≤ on X is a partial order. There are numerous partial orders on a finite set. Each defines a unique T0 topology.
Similarly, a space is R0 if and only if the specialization preorder is an equivalence relation. Given any equivalence relation on a finite set X the associated topology is the partition topology on X. The equivalence classes will be the classes of topologically indistinguishable points. Since the partition topology is pseudometrizable, a finite space is R0 if and only if it is completely regular.
Non-discrete finite spaces can also be normal. The excluded point topology on any finite set is a completely normal T0 space which is non-discrete.
Connectivity in a finite space X is best understood by considering the specialization preorder ≤ on X. We can associate to any preordered set X a directed graph Γ by taking the points of X as vertices and drawing an edge x → y whenever x ≤ y. The connectivity of a finite space X can be understood by considering the connectivity of the associated graph Γ.
In any topological space, if x ≤ y then there is a path from x to y. One can simply take f(0) = x and f(t) = y for t > 0. It is easy to verify that f is continuous. It follows that the path components of a finite topological space are precisely the (weakly) connected components of the associated graph Γ. That is, there is a topological path from x to y if and only if there is an undirected path between the corresponding vertices of Γ.
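Accordingly, path components can be computed as the weakly connected components of Γ, for example by an undirected flood fill, as in this illustrative sketch (the space and preorder are arbitrary examples):

```python
# Weakly connected components of the comparability graph of a preorder.
X = ["a", "b", "c", "d"]
le = {("a", "b"), ("c", "d")}            # illustrative non-reflexive part of <=

def components(X, le):
    adj = {x: set() for x in X}          # undirected adjacency from comparability
    for x, y in le:
        adj[x].add(y)
        adj[y].add(x)
    seen, comps = set(), []
    for x in X:
        if x in seen:
            continue
        stack, comp = [x], set()
        while stack:                     # depth-first flood fill
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

print(components(X, le))                 # two components: {a, b} and {c, d}
```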
Every finite space is locally path-connected since the set of all points y with x ≤ y (equivalently, the intersection of all open sets containing x) is a path-connected open neighborhood of x that is contained in every other neighborhood. In other words, this single set forms a local base at x.
Therefore, a finite space is connected if and only if it is path-connected. The connected components are precisely the path components. Each such component is both closed and open in X.
Finite spaces may have stronger connectivity properties. A finite space X is
For example, the particular point topology on a finite space is hyperconnected while the excluded point topology is ultraconnected. The Sierpiński space is both.
A finite topological space is pseudometrizable if and only if it is R0. In this case, one possible pseudometric is given by d(x, y) = 0 if x ≡ y and d(x, y) = 1 otherwise, where x ≡ y means x and y are topologically indistinguishable. A finite topological space is metrizable if and only if it is discrete.
Likewise, a topological space is uniformizable if and only if it is R0. The uniform structure will be the pseudometric uniformity induced by the above pseudometric.
Perhaps surprisingly, there are finite topological spaces with nontrivial fundamental groups. A simple example is the pseudocircle, which is a space X with four points, two of which are open and two of which are closed. There is a continuous map from the unit circle S1 to X which is a weak homotopy equivalence (i.e. it induces an isomorphism of homotopy groups). It follows that the fundamental group of the pseudocircle is infinite cyclic.
More generally it has been shown that for any finite abstract simplicial complex K, there is a finite topological space XK and a weak homotopy equivalence f : |K| → XK, where |K| is the geometric realization of K. It follows that the homotopy groups of |K| and XK are isomorphic. In fact, the underlying set of XK can be taken to be K itself, with the topology associated to the inclusion partial order.
As discussed above, topologies on a finite set are in one-to-one correspondence with preorders on the set, and T0 topologies are in one-to-one correspondence with partial orders. Therefore, the number of topologies on a finite set is equal to the number of preorders and the number of T0 topologies is equal to the number of partial orders.
The table below lists the number of distinct (T0) topologies on a set with n elements. It also lists the number of inequivalent (i.e. nonhomeomorphic) topologies.
Let T(n) denote the number of distinct topologies on a set with n points. There is no known simple formula to compute T(n) for arbitrary n. The Online Encyclopedia of Integer Sequences presently lists T(n) for n ≤ 18.
The number of distinct T0 topologies on a set with n points, denoted T0(n), is related to T(n) by the formula
T(n) = Σ_{k=0}^{n} S(n, k) T0(k),
where S(n, k) denotes the Stirling number of the second kind.
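Both counts can be verified by brute force for very small n, since a topology is just a family of subsets satisfying the finite axioms above. The sketch below (illustrative only, and practical only for n ≤ 3 or so) enumerates all families, checks the T0 condition, and matches the known counts and the Stirling-number relation:

```python
from itertools import chain, combinations

def subsets(iterable):
    s = list(iterable)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_topology(tau, points):
    if frozenset() not in tau or frozenset(points) not in tau:
        return False
    return all(A | B in tau and A & B in tau for A in tau for B in tau)

def is_T0(tau, points):
    # T0: every pair of distinct points is separated by some open set.
    return all(any((x in U) != (y in U) for U in tau)
               for i, x in enumerate(points) for y in points[i + 1:])

def count_topologies(n):
    points = list(range(n))
    candidates = subsets(subsets(points))        # every family of subsets of X
    tops = [tau for tau in candidates if is_topology(tau, points)]
    return len(tops), sum(is_T0(tau, points) for tau in tops)

print([count_topologies(n) for n in range(4)])   # [(1, 1), (1, 1), (4, 3), (29, 19)]
# Consistency with the formula above, e.g. for n = 3:
# T(3) = S(3,1)*T0(1) + S(3,2)*T0(2) + S(3,3)*T0(3) = 1*1 + 3*3 + 1*19 = 29.
```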
|
https://en.wikipedia.org/wiki/Finite_topological_space
|
Recurrent neural networks (RNNs) are a class of artificial neural networks designed for processing sequential data, such as text, speech, and time series,[1] where the order of elements is important. Unlike feedforward neural networks, which process inputs independently, RNNs utilize recurrent connections, where the output of a neuron at one time step is fed back as input to the network at the next time step. This enables RNNs to capture temporal dependencies and patterns within sequences.
The fundamental building block of RNNs is the recurrent unit, which maintains a hidden state, a form of memory that is updated at each time step based on the current input and the previous hidden state. This feedback mechanism allows the network to learn from past inputs and incorporate that knowledge into its current processing. RNNs have been successfully applied to tasks such as unsegmented, connected handwriting recognition,[2] speech recognition,[3][4] natural language processing, and neural machine translation.[5][6]
However, traditional RNNs suffer from thevanishing gradient problem, which limits their ability to learn long-range dependencies. This issue was addressed by the development of thelong short-term memory(LSTM) architecture in 1997, making it the standard RNN variant for handling long-term dependencies. Later,Gated Recurrent Units(GRUs) were introduced as a more computationally efficient alternative.
In recent years,transformers, which rely on self-attention mechanisms instead of recurrence, have become the dominant architecture for many sequence-processing tasks, particularly in natural language processing, due to their superior handling of long-range dependencies and greater parallelizability. Nevertheless, RNNs remain relevant for applications where computational efficiency, real-time processing, or the inherent sequential nature of data is crucial.
One origin of RNNs was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex formed by parallel fibers, Purkinje cells, and granule cells.[7][8] In 1933, Lorente de Nó discovered "recurrent, reciprocal connections" by Golgi's method, and proposed that excitatory loops explain certain aspects of the vestibulo-ocular reflex.[9][10] During the 1940s, multiple people proposed the existence of feedback in the brain, in contrast to the previous understanding of the neural system as a purely feedforward structure. Hebb considered the "reverberating circuit" as an explanation for short-term memory.[11] The McCulloch and Pitts paper (1943), which proposed the McCulloch–Pitts neuron model, considered networks that contain cycles; the current activity of such networks can be affected by activity indefinitely far in the past.[12] They were both interested in closed loops as possible explanations for, e.g., epilepsy and causalgia.[13][14] Recurrent inhibition was proposed in 1946 as a negative feedback mechanism in motor control. Neural feedback loops were a common topic of discussion at the Macy conferences.[15] See[16] for an extensive review of recurrent neural network models in neuroscience.
Frank Rosenblattin 1960 published "close-loop cross-coupled perceptrons", which are 3-layeredperceptronnetworks whose middle layer contains recurrent connections that change by aHebbian learningrule.[18]: 73–75Later, inPrinciples of Neurodynamics(1961), he described "closed-loop cross-coupled" and "back-coupled" perceptron networks, and made theoretical and experimental studies for Hebbian learning in these networks,[17]: Chapter 19, 21and noted that a fully cross-coupled perceptron network is equivalent to an infinitely deep feedforward network.[17]: Section 19.11
Similar networks were published by Kaoru Nakano in 1971,[19][20]Shun'ichi Amariin 1972,[21]andWilliam A. Little[de]in 1974,[22]who was acknowledged by Hopfield in his 1982 paper.
Another origin of RNN wasstatistical mechanics. TheIsing modelwas developed byWilhelm Lenz[23]andErnst Ising[24]in the 1920s[25]as a simple statistical mechanical model of magnets at equilibrium.Glauberin 1963 studied the Ising model evolving in time, as a process towards equilibrium (Glauber dynamics), adding in the component of time.[26]
TheSherrington–Kirkpatrick modelof spin glass, published in 1975,[27]is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In the 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions.[28]In a 1984 paper he extended this to continuous activation functions.[29]It became a standard model for the study of neural networks through statistical mechanics.[30][31]
Modern RNN networks are mainly based on two architectures: LSTM and BRNN.[32]
At the resurgence of neural networks in the 1980s, recurrent networks were studied again. They were sometimes called "iterated nets".[33]Two early influential works were theJordan network(1986) and theElman network(1990), which applied RNN to studycognitive psychology. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequentlayersin an RNN unfolded in time.[34]
Long short-term memory (LSTM) networks were invented by Hochreiter and Schmidhuber in 1995 and set accuracy records in multiple application domains.[35][36] LSTM became the default choice of RNN architecture.
Bidirectional recurrent neural networks (BRNN) use two RNNs that process the same input in opposite directions.[37] These two are often combined, giving the bidirectional LSTM architecture.
Around 2006, bidirectional LSTM started to revolutionize speech recognition, outperforming traditional models in certain speech applications.[38][39] They also improved large-vocabulary speech recognition[3][4] and text-to-speech synthesis[40] and were used in Google voice search and dictation on Android devices.[41] They broke records for improved machine translation,[42] language modeling,[43] and multilingual language processing.[44] Also, LSTM combined with convolutional neural networks (CNNs) improved automatic image captioning.[45]
The idea of encoder–decoder sequence transduction had been developed in the early 2010s. The papers most commonly cited as the originators of seq2seq are two papers from 2014.[46][47] A seq2seq architecture employs two RNNs, typically LSTMs, an "encoder" and a "decoder", for sequence transduction, such as machine translation. They became state of the art in machine translation, and were instrumental in the development of attention mechanisms and transformers.
An RNN-based model can be factored into two parts: configuration and architecture. Multiple RNNs can be combined in a data flow, and the data flow itself is the configuration. Each RNN itself may have any architecture, including LSTM, GRU, etc.
RNNs come in many variants. Abstractly speaking, an RNN is a functionfθ{\displaystyle f_{\theta }}of type(xt,ht)↦(yt,ht+1){\displaystyle (x_{t},h_{t})\mapsto (y_{t},h_{t+1})}, where
In words, it is a neural network that maps an inputxt{\displaystyle x_{t}}into an outputyt{\displaystyle y_{t}}, with the hidden vectorht{\displaystyle h_{t}}playing the role of "memory", a partial record of all previous input-output pairs. At each step, it transforms input to an output, and modifies its "memory" to help it to better perform future processing.
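To make the abstract map (x_t, h_t) ↦ (y_t, h_{t+1}) concrete, the following is a minimal NumPy sketch of a vanilla (Elman-style) recurrent unit; the weight names, sizes, and random initialization are illustrative assumptions rather than any particular published implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 3, 5, 2     # illustrative sizes

# Parameters of f_theta (randomly initialized for the sketch)
W_xh = rng.normal(scale=0.1, size=(d_hid, d_in))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(d_hid, d_hid))  # hidden -> hidden (the recurrence)
W_hy = rng.normal(scale=0.1, size=(d_out, d_hid))  # hidden -> output
b_h = np.zeros(d_hid)
b_y = np.zeros(d_out)

def rnn_step(x_t, h_t):
    """One application of f_theta: (x_t, h_t) -> (y_t, h_next)."""
    h_next = np.tanh(W_xh @ x_t + W_hh @ h_t + b_h)
    y_t = W_hy @ h_next + b_y
    return y_t, h_next

# Unroll over a sequence: the same parameters are reused at every time step.
xs = rng.normal(size=(7, d_in))          # a length-7 input sequence
h = np.zeros(d_hid)                      # initial hidden state ("memory")
outputs = []
for x_t in xs:
    y_t, h = rnn_step(x_t, h)
    outputs.append(y_t)
print(np.stack(outputs).shape)           # (7, 2): one output per time step
```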
The illustration to the right may be misleading to many because practical neural network topologies are frequently organized in "layers" and the drawing gives that appearance. However, what appears to belayersare, in fact, different steps in time, "unfolded" to produce the appearance oflayers.
A stacked RNN, or deep RNN, is composed of multiple RNNs stacked one above the other.
Each layer operates as a stand-alone RNN, and each layer's output sequence is used as the input sequence to the layer above. There is no conceptual limit to the depth of stacked RNN.
A bidirectional RNN (biRNN) is composed of two RNNs, one processing the input sequence in one direction and another in the opposite direction; each produces its own output sequence.
The two output sequences are then concatenated to give the total output:((y0,y0′),(y1,y1′),…,(yN,yN′)){\displaystyle ((y_{0},y_{0}'),(y_{1},y_{1}'),\dots ,(y_{N},y_{N}'))}.
Bidirectional RNN allows the model to process a token both in the context of what came before it and what came after it. By stacking multiple bidirectional RNNs together, the model can process a token increasingly contextually. TheELMomodel (2018)[48]is a stacked bidirectionalLSTMwhich takes character-level as inputs and produces word-level embeddings.
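A minimal sketch of the bidirectional idea, assuming simple tanh cells with randomly initialized weights (both illustrative choices): one cell reads the sequence forward, the other backward, and their per-position outputs are concatenated as in the formula above.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid = 3, 4

def make_cell():
    """Return a simple tanh RNN cell with its own randomly initialized weights."""
    W_xh = rng.normal(scale=0.1, size=(d_hid, d_in))
    W_hh = rng.normal(scale=0.1, size=(d_hid, d_hid))
    def step(x_t, h_t):
        return np.tanh(W_xh @ x_t + W_hh @ h_t)
    return step

def run(cell, xs):
    """Run a cell over a sequence, returning the hidden state at every step."""
    h = np.zeros(d_hid)
    hs = []
    for x_t in xs:
        h = cell(x_t, h)
        hs.append(h)
    return np.stack(hs)

forward_cell, backward_cell = make_cell(), make_cell()
xs = rng.normal(size=(6, d_in))

h_fwd = run(forward_cell, xs)               # processes x_0 ... x_5
h_bwd = run(backward_cell, xs[::-1])[::-1]  # processes x_5 ... x_0, then re-aligned

# Each position gets the concatenation (y_t, y_t') of both directions.
bi_outputs = np.concatenate([h_fwd, h_bwd], axis=-1)
print(bi_outputs.shape)  # (6, 8)
```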
Two RNNs can be run front-to-back in anencoder-decoderconfiguration. The encoder RNN processes an input sequence into a sequence of hidden vectors, and the decoder RNN processes the sequence of hidden vectors to an output sequence, with an optionalattention mechanism. This was used to construct state of the artneural machine translatorsduring the 2014–2017 period. This was an instrumental step towards the development oftransformers.[49]
An RNN may process data with more than one dimension. PixelRNN processes two-dimensional data, with many possible directions.[50]For example, the row-by-row direction processes ann×n{\displaystyle n\times n}grid of vectorsxi,j{\displaystyle x_{i,j}}in the following order:x1,1,x1,2,…,x1,n,x2,1,x2,2,…,x2,n,…,xn,n{\displaystyle x_{1,1},x_{1,2},\dots ,x_{1,n},x_{2,1},x_{2,2},\dots ,x_{2,n},\dots ,x_{n,n}}Thediagonal BiLSTMuses two LSTMs to process the same grid. One processes it from the top-left corner to the bottom-right, such that it processesxi,j{\displaystyle x_{i,j}}depending on its hidden state and cell state on the top and the left side:hi−1,j,ci−1,j{\displaystyle h_{i-1,j},c_{i-1,j}}andhi,j−1,ci,j−1{\displaystyle h_{i,j-1},c_{i,j-1}}. The other processes it from the top-right corner to the bottom-left.
Fully recurrent neural networks(FRNN) connect the outputs of all neurons to the inputs of all neurons. In other words, it is afully connected network. This is the most general neural network topology, because all other topologies can be represented by setting some connection weights to zero to simulate the lack of connections between those neurons.
The Hopfield network is an RNN in which all connections across layers are equally sized. It requires stationary inputs and is thus not a general RNN, as it does not process sequences of patterns. However, it is guaranteed to converge. If the connections are trained using Hebbian learning, then the Hopfield network can perform as robust content-addressable memory, resistant to connection alteration.
AnElmannetworkis a three-layer network (arranged horizontally asx,y, andzin the illustration) with the addition of a set of context units (uin the illustration). The middle (hidden) layer is connected to these context units fixed with a weight of one.[51]At each time step, the input is fed forward and alearning ruleis applied. The fixed back-connections save a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform tasks such as sequence-prediction that are beyond the power of a standardmultilayer perceptron.
Jordannetworksare similar to Elman networks. The context units are fed from the output layer instead of the hidden layer. The context units in a Jordan network are also called the state layer. They have a recurrent connection to themselves.[51]
Elman and Jordan networks are also known as "Simple recurrent networks" (SRN).
Long short-term memory(LSTM) is the most widely used RNN architecture. It was designed to solve thevanishing gradient problem. LSTM is normally augmented by recurrent gates called "forget gates".[54]LSTM prevents backpropagated errors from vanishing or exploding.[55]Instead, errors can flow backward through unlimited numbers of virtual layers unfolded in space. That is, LSTM can learn tasks that require memories of events that happened thousands or even millions of discrete time steps earlier. Problem-specific LSTM-like topologies can be evolved.[56]LSTM works even given long delays between significant events and can handle signals that mix low and high-frequency components.
Many applications use stacks of LSTMs,[57] an arrangement called "deep LSTM". LSTM can learn to recognize context-sensitive languages, unlike previous models based on hidden Markov models (HMM) and similar concepts.[58]
Gated recurrent units (GRUs), introduced in 2014, were designed as a simplification of LSTM. They are used in the full form and in several further simplified variants.[59][60] They have fewer parameters than LSTM, as they lack an output gate.[61]
Their performance on polyphonic music modeling and speech signal modeling was found to be similar to that of long short-term memory.[62] There does not appear to be a particular performance difference between LSTM and GRU.[62][63]
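For illustration, here is a minimal sketch of a GRU cell following one commonly published formulation (update gate, reset gate, candidate state, no output gate); the weight names and the exact gating convention are assumptions, since several equivalent variants exist in the literature.

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_hid = 3, 4

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Three weight pairs (update gate z, reset gate r, candidate state); no output gate.
Wz, Uz = rng.normal(scale=0.1, size=(d_hid, d_in)), rng.normal(scale=0.1, size=(d_hid, d_hid))
Wr, Ur = rng.normal(scale=0.1, size=(d_hid, d_in)), rng.normal(scale=0.1, size=(d_hid, d_hid))
Wh, Uh = rng.normal(scale=0.1, size=(d_hid, d_in)), rng.normal(scale=0.1, size=(d_hid, d_hid))

def gru_step(x_t, h_t):
    """One GRU update (one common convention; sign conventions vary)."""
    z = sigmoid(Wz @ x_t + Uz @ h_t)             # update gate
    r = sigmoid(Wr @ x_t + Ur @ h_t)             # reset gate
    h_cand = np.tanh(Wh @ x_t + Uh @ (r * h_t))  # candidate state
    return (1.0 - z) * h_t + z * h_cand          # interpolate old and candidate state

h = np.zeros(d_hid)
for x_t in rng.normal(size=(5, d_in)):
    h = gru_step(x_t, h)
print(h)
```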
Introduced by Bart Kosko,[64]a bidirectional associative memory (BAM) network is a variant of a Hopfield network that stores associative data as a vector. The bidirectionality comes from passing information through a matrix and itstranspose. Typically, bipolar encoding is preferred to binary encoding of the associative pairs. Recently, stochastic BAM models usingMarkovstepping were optimized for increased network stability and relevance to real-world applications.[65]
A BAM network has two layers, either of which can be driven as an input to recall an association and produce an output on the other layer.[66]
Echo state networks(ESN) have a sparsely connected random hidden layer. The weights of output neurons are the only part of the network that can change (be trained). ESNs are good at reproducing certaintime series.[67]A variant forspiking neuronsis known as aliquid state machine.[68]
Arecursive neural network[69]is created by applying the same set of weightsrecursivelyover a differentiable graph-like structure by traversing the structure intopological order. Such networks are typically also trained by the reverse mode ofautomatic differentiation.[70][71]They can processdistributed representationsof structure, such aslogical terms. A special case of recursive neural networks is the RNN whose structure corresponds to a linear chain. Recursive neural networks have been applied tonatural language processing.[72]The Recursive Neural Tensor Network uses atensor-based composition function for all nodes in the tree.[73]
Neural Turing machines(NTMs) are a method of extending recurrent neural networks by coupling them to externalmemoryresources with which they interact. The combined system is analogous to aTuring machineorVon Neumann architecturebut isdifferentiableend-to-end, allowing it to be efficiently trained withgradient descent.[74]
Differentiable neural computers (DNCs) are an extension of Neural Turing machines, allowing for the usage of fuzzy amounts of each memory address and a record of chronology.[75]
Neural network pushdown automata (NNPDA) are similar to NTMs, but tapes are replaced by analog stacks that are differentiable and trained. In this way, they are similar in complexity to recognizers ofcontext free grammars(CFGs).[76]
Recurrent neural networks areTuring completeand can run arbitrary programs to process arbitrary sequences of inputs.[77]
An RNN can be trained into a conditionallygenerative modelof sequences, akaautoregression.
Concretely, let us consider the problem of machine translation, that is, given a sequence(x1,x2,…,xn){\displaystyle (x_{1},x_{2},\dots ,x_{n})}of English words, the model is to produce a sequence(y1,…,ym){\displaystyle (y_{1},\dots ,y_{m})}of French words. It is to be solved by aseq2seqmodel.
Now, during training, the encoder half of the model would first ingest(x1,x2,…,xn){\displaystyle (x_{1},x_{2},\dots ,x_{n})}, then the decoder half would start generating a sequence(y^1,y^2,…,y^l){\displaystyle ({\hat {y}}_{1},{\hat {y}}_{2},\dots ,{\hat {y}}_{l})}. The problem is that if the model makes a mistake early on, say aty^2{\displaystyle {\hat {y}}_{2}}, then subsequent tokens are likely to also be mistakes. This makes it inefficient for the model to obtain a learning signal, since the model would mostly learn to shifty^2{\displaystyle {\hat {y}}_{2}}towardsy2{\displaystyle y_{2}}, but not the others.
Teacher forcingmakes it so that the decoder uses the correct output sequence for generating the next entry in the sequence. So for example, it would see(y1,…,yk){\displaystyle (y_{1},\dots ,y_{k})}in order to generatey^k+1{\displaystyle {\hat {y}}_{k+1}}.
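A schematic sketch of teacher forcing with a toy decoder; the decoder step, the token ids, and the start-of-sequence convention are hypothetical stand-ins, the point being only the marked line where the reference token, rather than the model's own prediction, is fed back at each step.

```python
import numpy as np

rng = np.random.default_rng(3)
vocab, d_hid = 10, 8

# A toy decoder step: (previous token id, hidden state) -> (logits over vocab, new state).
E = rng.normal(scale=0.1, size=(vocab, d_hid))      # token embeddings
W = rng.normal(scale=0.1, size=(d_hid, d_hid))
V = rng.normal(scale=0.1, size=(vocab, d_hid))

def decoder_step(token_id, h):
    h = np.tanh(E[token_id] + W @ h)
    return V @ h, h

target = [4, 2, 7, 1]            # the reference output sequence y_1..y_4
h = np.zeros(d_hid)              # pretend this came from the encoder

# Teacher forcing: at step k+1 the decoder is conditioned on the *correct*
# previous token y_k, not on its own (possibly wrong) prediction.
loss = 0.0
prev = 0                          # an assumed start-of-sequence token id
for y_k in target:
    logits, h = decoder_step(prev, h)
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    loss -= log_probs[y_k]        # cross-entropy against the reference token
    prev = y_k                    # <- teacher forcing (free running would use argmax(logits))
print("teacher-forced loss:", loss)
```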
Gradient descent is afirst-orderiterativeoptimizationalgorithmfor finding the minimum of a function. In neural networks, it can be used to minimize the error term by changing each weight in proportion to the derivative of the error with respect to that weight, provided the non-linearactivation functionsaredifferentiable.
The standard method for training RNN by gradient descent is the "backpropagation through time" (BPTT) algorithm, which is a special case of the general algorithm ofbackpropagation. A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL,[78][79]which is an instance ofautomatic differentiationin the forward accumulation mode with stacked tangent vectors. Unlike BPTT, this algorithm is local in time but not local in space.
In this context, local in space means that a unit's weight vector can be updated using only information stored in the connected units and the unit itself such that update complexity of a single unit is linear in the dimensionality of the weight vector. Local in time means that the updates take place continually (on-line) and depend only on the most recent time step rather than on multiple time steps within a given time horizon as in BPTT. Biological neural networks appear to be local with respect to both time and space.[80][81]
For recursively computing the partial derivatives, RTRL has a time-complexity of O(number of hidden x number of weights) per time step for computing theJacobian matrices, while BPTT only takes O(number of weights) per time step, at the cost of storing all forward activations within the given time horizon.[82]An online hybrid between BPTT and RTRL with intermediate complexity exists,[83][84]along with variants for continuous time.[85]
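The following is a minimal sketch of backpropagation through time for a vanilla tanh RNN with a loss on the final state only; as described above, it stores all forward activations and then sweeps backward through the unfolded time steps. The sizes and the quadratic loss are illustrative assumptions, and a finite-difference check is included to show that the unrolled gradient is consistent.

```python
import numpy as np

rng = np.random.default_rng(4)
d_in, d_hid, T = 3, 4, 6
W_xh = rng.normal(scale=0.5, size=(d_hid, d_in))
W_hh = rng.normal(scale=0.5, size=(d_hid, d_hid))

xs = rng.normal(size=(T, d_in))
target = rng.normal(size=d_hid)

# Forward pass: store every hidden state (this is the memory cost of BPTT).
hs = [np.zeros(d_hid)]
for x_t in xs:
    hs.append(np.tanh(W_xh @ x_t + W_hh @ hs[-1]))
loss = 0.5 * np.sum((hs[-1] - target) ** 2)   # loss on the final state only

# Backward pass: walk the unfolded network from t = T back to t = 1.
gW_xh = np.zeros_like(W_xh)
gW_hh = np.zeros_like(W_hh)
dh = hs[-1] - target                           # dL/dh_T
for t in range(T, 0, -1):
    da = dh * (1.0 - hs[t] ** 2)               # backprop through tanh
    gW_xh += np.outer(da, xs[t - 1])
    gW_hh += np.outer(da, hs[t - 1])
    dh = W_hh.T @ da                           # propagate to the previous time step

# Finite-difference check of one weight, to confirm the gradient is consistent.
eps = 1e-6
W_hh[0, 0] += eps
hs2 = [np.zeros(d_hid)]
for x_t in xs:
    hs2.append(np.tanh(W_xh @ x_t + W_hh @ hs2[-1]))
loss2 = 0.5 * np.sum((hs2[-1] - target) ** 2)
print(gW_hh[0, 0], (loss2 - loss) / eps)       # the two numbers should be close
```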
A major problem with gradient descent for standard RNN architectures is thaterror gradients vanishexponentially quickly with the size of the time lag between important events.[55][86]LSTM combined with a BPTT/RTRL hybrid learning method attempts to overcome these problems.[36]This problem is also solved in the independently recurrent neural network (IndRNN)[87]by reducing the context of a neuron to its own past state and the cross-neuron information can then be explored in the following layers. Memories of different ranges including long-term memory can be learned without the gradient vanishing and exploding problem.
The on-line algorithm called causal recursive backpropagation (CRBP) implements and combines the BPTT and RTRL paradigms for locally recurrent networks.[88] It works with the most general locally recurrent networks. The CRBP algorithm can minimize the global error term. This fact improves the stability of the algorithm, providing a unifying view of gradient calculation techniques for recurrent networks with local feedback.
One approach to gradient information computation in RNNs with arbitrary architectures is based on signal-flow graphs diagrammatic derivation.[89]It uses the BPTT batch algorithm, based on Lee's theorem for network sensitivity calculations.[90]It was proposed by Wan and Beaufays, while its fast online version was proposed by Campolucci, Uncini and Piazza.[90]
Theconnectionist temporal classification(CTC)[91]is a specialized loss function for training RNNs for sequence modeling problems where the timing is variable.[92]
Training the weights in a neural network can be modeled as a non-linearglobal optimizationproblem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence. Typically, the sum-squared difference between the predictions and the target values specified in the training sequence is used to represent the error of the current weight vector. Arbitrary global optimization techniques may then be used to minimize this target function.
The most common global optimization method for training RNNs isgenetic algorithms, especially in unstructured networks.[93][94][95]
Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner, where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness of each chromosome is evaluated by setting the network's weights from its genes, running the network on the training sequence, and taking the reciprocal of the resulting mean-squared error.
Many chromosomes make up the population; therefore, many different neural networks are evolved until a stopping criterion is satisfied, such as reaching a sufficiently low mean-squared error on the training data or exceeding a maximum number of generations.
The fitness function evaluates the stopping criterion as it receives the mean-squared error reciprocal from each network during training. Therefore, the goal of the genetic algorithm is to maximize the fitness function, reducing the mean-squared error.
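A toy sketch of this scheme, using the reciprocal of the mean-squared error as the fitness function as described above; the population size, mutation scale, and truncation selection are illustrative assumptions, not a recommended configuration.

```python
import numpy as np

rng = np.random.default_rng(5)
d_hid, T = 4, 20
xs = np.sin(np.linspace(0, 4 * np.pi, T + 1))        # toy task: predict the next value
n_weights = d_hid * 1 + d_hid * d_hid + 1 * d_hid    # W_xh, W_hh, W_hy flattened

def unpack(w):
    W_xh = w[:d_hid].reshape(d_hid, 1)
    W_hh = w[d_hid:d_hid + d_hid * d_hid].reshape(d_hid, d_hid)
    W_hy = w[-d_hid:].reshape(1, d_hid)
    return W_xh, W_hh, W_hy

def mse(w):
    """Run the tiny RNN encoded by chromosome w and return its mean-squared error."""
    W_xh, W_hh, W_hy = unpack(w)
    h, err = np.zeros(d_hid), 0.0
    for t in range(T):
        h = np.tanh(W_xh @ np.array([xs[t]]) + W_hh @ h)
        err += ((W_hy @ h)[0] - xs[t + 1]) ** 2
    return err / T

def fitness(w):                     # reciprocal of the mean-squared error, as in the text
    return 1.0 / (mse(w) + 1e-12)

# A deliberately small evolutionary loop: truncation selection plus Gaussian mutation.
pop = rng.normal(scale=0.5, size=(30, n_weights))     # each chromosome = one weight vector
for generation in range(40):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the fittest chromosomes
    children = parents[rng.integers(0, 10, size=20)] + rng.normal(scale=0.05, size=(20, n_weights))
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(w) for w in pop])]
print("best MSE after evolution:", mse(best))
```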
Other global (and/or evolutionary) optimization techniques may be used to seek a good set of weights, such assimulated annealingorparticle swarm optimization.
The independently recurrent neural network (IndRNN)[87]addresses the gradient vanishing and exploding problems in the traditional fully connected RNN. Each neuron in one layer only receives its own past state as context information (instead of full connectivity to all other neurons in this layer) and thus neurons are independent of each other's history. The gradient backpropagation can be regulated to avoid gradient vanishing and exploding in order to keep long or short-term memory. The cross-neuron information is explored in the next layers. IndRNN can be robustly trained with non-saturated nonlinear functions such as ReLU. Deep networks can be trained using skip connections.
The neural history compressor is an unsupervised stack of RNNs.[96]At the input level, it learns to predict its next input from the previous inputs. Only unpredictable inputs of some RNN in the hierarchy become inputs to the next higher level RNN, which therefore recomputes its internal state only rarely. Each higher level RNN thus studies a compressed representation of the information in the RNN below. This is done such that the input sequence can be precisely reconstructed from the representation at the highest level.
The system effectively minimizes the description length or the negativelogarithmof the probability of the data.[97]Given a lot of learnable predictability in the incoming data sequence, the highest level RNN can use supervised learning to easily classify even deep sequences with long intervals between important events.
It is possible to distill the RNN hierarchy into two RNNs: the "conscious" chunker (higher level) and the "subconscious" automatizer (lower level).[96]Once the chunker has learned to predict and compress inputs that are unpredictable by the automatizer, then the automatizer can be forced in the next learning phase to predict or imitate through additional units the hidden units of the more slowly changing chunker. This makes it easy for the automatizer to learn appropriate, rarely changing memories across long intervals. In turn, this helps the automatizer to make many of its once unpredictable inputs predictable, such that the chunker can focus on the remaining unpredictable events.[96]
Agenerative modelpartially overcame thevanishing gradient problem[55]ofautomatic differentiationorbackpropagationin neural networks in 1992. In 1993, such a system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.[34]
Second-order RNNs use higher-order weights wijk{\displaystyle w{}_{ijk}} instead of the standard wij{\displaystyle w{}_{ij}} weights, and states can be a product. This allows a direct mapping to a finite-state machine in training, stability, and representation.[98][99] Long short-term memory is an example of this but has no such formal mappings or proof of stability.
Hierarchical recurrent neural networks (HRNN) connect their neurons in various ways to decompose hierarchical behavior into useful subprograms.[96][100]Such hierarchical structures of cognition are present in theories of memory presented by philosopherHenri Bergson, whose philosophical views have inspired hierarchical models.[101]
Hierarchical recurrent neural networks are useful inforecasting, helping to predict disaggregated inflation components of theconsumer price index(CPI). The HRNN model leverages information from higher levels in the CPI hierarchy to enhance lower-level predictions. Evaluation of a substantial dataset from the US CPI-U index demonstrates the superior performance of the HRNN model compared to various establishedinflationprediction methods.[102]
Generally, a recurrent multilayer perceptron network (RMLP network) consists of cascaded subnetworks, each containing multiple layers of nodes. Each subnetwork is feed-forward except for the last layer, which can have feedback connections. Each of these subnets is connected only by feed-forward connections.[103]
A multiple timescales recurrent neural network (MTRNN) is a neural-based computational model that can simulate the functional hierarchy of the brain through self-organization depending on the spatial connection between neurons and on distinct types of neuron activities, each with distinct time properties.[104][105]With such varied neuronal activities, continuous sequences of any set of behaviors are segmented into reusable primitives, which in turn are flexibly integrated into diverse sequential behaviors. The biological approval of such a type of hierarchy was discussed in thememory-predictiontheory of brain function byHawkinsin his bookOn Intelligence.[citation needed]Such a hierarchy also agrees with theories of memory posited by philosopherHenri Bergson, which have been incorporated into an MTRNN model.[101][106]
Greg Snider of HP Labs describes a system of cortical computing with memristive nanodevices.[107] The memristors (memory resistors) are implemented by thin film materials in which the resistance is electrically tuned via the transport of ions or oxygen vacancies within the film. DARPA's SyNAPSE project has funded IBM Research and HP Labs, in collaboration with the Boston University Department of Cognitive and Neural Systems (CNS), to develop neuromorphic architectures that may be based on memristive systems. Memristive networks are a particular type of physical neural network that have very similar properties to (Little-)Hopfield networks, as they have continuous dynamics, a limited memory capacity and natural relaxation via the minimization of a function which is asymptotic to the Ising model. In this sense, the dynamics of a memristive circuit have the advantage, compared to a resistor–capacitor network, of exhibiting a more interesting non-linear behavior. From this point of view, engineering analog memristive networks is a peculiar type of neuromorphic engineering in which the device behavior depends on the circuit wiring or topology.
The evolution of these networks can be studied analytically using variations of theCaravelli–Traversa–Di Ventraequation.[108]
A continuous-time recurrent neural network (CTRNN) uses a system ofordinary differential equationsto model the effects on a neuron of the incoming inputs. They are typically analyzed bydynamical systems theory. Many RNN models in neuroscience are continuous-time.[16]
For a neuron i{\displaystyle i} in the network with activation yi{\displaystyle y_{i}}, the rate of change of activation is given by:

{\displaystyle \tau _{i}{\dot {y}}_{i}=-y_{i}+\sum _{j=1}^{n}w_{ji}\,\sigma (y_{j}-\Theta _{j})+I_{i}(t)}

where τ_i is the time constant of the postsynaptic node, y_i is the activation of the postsynaptic node, w_{ji} is the weight of the connection from the presynaptic to the postsynaptic node, σ is a sigmoid activation function, y_j is the activation of the presynaptic node, Θ_j is the bias of the presynaptic node, and I_i(t) is the external input (if any) to the node.
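Assuming the standard CTRNN equation given above, the following is a sketch of simulating such a network with an explicit Euler step; the network size, weights, and input signal are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5                                   # number of neurons
tau = np.ones(n)                        # time constants tau_i
w = rng.normal(scale=1.0, size=(n, n))  # connection weights, stored as w[i, j]
theta = np.zeros(n)                     # biases Theta_j
y = np.zeros(n)                         # activations y_i

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def external_input(t):
    return np.where(np.arange(n) == 0, np.sin(t), 0.0)   # drive neuron 0 only

dt = 0.01
for step in range(1000):
    t = step * dt
    dydt = (-y + w @ sigma(y - theta) + external_input(t)) / tau
    y = y + dt * dydt                   # explicit Euler step of the ODE above
print(y)
```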
CTRNNs have been applied toevolutionary roboticswhere they have been used to address vision,[109]co-operation,[110]and minimal cognitive behaviour.[111]
Note that, by theShannon sampling theorem, discrete-time recurrent neural networks can be viewed as continuous-time recurrent neural networks where the differential equations have transformed into equivalentdifference equations.[112]This transformation can be thought of as occurring after the post-synaptic node activation functionsyi(t){\displaystyle y_{i}(t)}have been low-pass filtered but prior to sampling.
Recurrent neural networks are in fact recursive neural networks with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.
From a time-series perspective, RNNs can appear as nonlinear versions offinite impulse responseandinfinite impulse responsefilters and also as anonlinear autoregressive exogenous model(NARX).[113]RNN has infinite impulse response whereasconvolutional neural networkshavefinite impulseresponse. Both classes of networks exhibit temporaldynamic behavior.[114]A finite impulse recurrent network is adirected acyclic graphthat can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is adirected cyclic graphthat cannot be unrolled.
The effect of memory-based learning for the recognition of sequences can also be implemented by a more biological-based model which uses the silencing mechanism exhibited in neurons with a relatively high frequency spiking activity.[115]
Additional stored states and the storage under direct control by the network can be added to bothinfinite-impulseandfinite-impulsenetworks. Another network or graph can also replace the storage if that incorporates time delays or has feedback loops. Such controlled states are referred to as gated states or gated memory and are part oflong short-term memorynetworks (LSTMs) andgated recurrent units. This is also called Feedback Neural Network (FNN).
Modern libraries provide runtime-optimized implementations of the above functionality or allow the slow loop to be sped up by just-in-time compilation.
Applications of recurrent neural networks include handwriting recognition, speech recognition, machine translation, time-series prediction, and image captioning.
|
https://en.wikipedia.org/wiki/Recurrent_neural_networks
|
Ininformation geometry, theFisher information metric[1]is a particularRiemannian metricwhich can be defined on a smoothstatistical manifold,i.e., asmooth manifoldwhose points areprobability distributions. It can be used to calculate the distance between probability distributions.[2]
The metric is interesting in several aspects. ByChentsov’s theorem, the Fisher information metric on statistical models is the only Riemannian metric (up to rescaling) that is invariant undersufficient statistics.[3][4]
It can also be understood to be the infinitesimal form of the relative entropy (i.e., theKullback–Leibler divergence); specifically, it is theHessianof the divergence. Alternately, it can be understood as the metric induced by the flat spaceEuclidean metric, after appropriate changes of variable. When extended to complexprojective Hilbert space, it becomes theFubini–Study metric; when written in terms ofmixed states, it is the quantumBures metric.[clarification needed]
Considered purely as a matrix, it is known as theFisher information matrix. Considered as a measurement technique, where it is used to estimate hidden parameters in terms of observed random variables, it is known as theobserved information.
Given a statistical manifold with coordinatesθ=(θ1,θ2,…,θn){\displaystyle \theta =(\theta _{1},\theta _{2},\ldots ,\theta _{n})}, one writesp(x∣θ){\displaystyle p(x\mid \theta )}for the likelihood, that is the probability density of x as a function ofθ{\displaystyle \theta }. Herex{\displaystyle x}is drawn from the value spaceRfor a (discrete or continuous)random variableX. The likelihood is normalized overx{\displaystyle x}but notθ{\displaystyle \theta }:∫Rp(x∣θ)dx=1{\displaystyle \int _{R}p(x\mid \theta )\,dx=1}.
The Fisher information metric then takes the form:[clarification needed]

{\displaystyle g_{jk}(\theta )=\int _{R}{\frac {\partial \log p(x\mid \theta )}{\partial \theta _{j}}}\,{\frac {\partial \log p(x\mid \theta )}{\partial \theta _{k}}}\,p(x\mid \theta )\,dx.}
The integral is performed over all valuesxinR. The variableθ{\displaystyle \theta }is now a coordinate on aRiemann manifold. The labelsjandkindex the local coordinate axes on the manifold.
When the probability is derived from theGibbs measure, as it would be for anyMarkovian process, thenθ{\displaystyle \theta }can also be understood to be aLagrange multiplier; Lagrange multipliers are used to enforce constraints, such as holding theexpectation valueof some quantity constant. If there arenconstraints holdingndifferent expectation values constant, then the dimension of the manifold isndimensions smaller than the original space. In this case, the metric can be explicitly derived from thepartition function; a derivation and discussion is presented there.
Substituting i(x∣θ) = −log p(x∣θ){\displaystyle i(x\mid \theta )=-\log {}p(x\mid \theta )} from information theory, an equivalent form of the above definition is:

{\displaystyle g_{jk}(\theta )=\int _{R}{\frac {\partial ^{2}i(x\mid \theta )}{\partial \theta _{j}\,\partial \theta _{k}}}\,p(x\mid \theta )\,dx.}

To show that the equivalent form equals the above definition, note that

{\displaystyle \int _{R}{\frac {\partial \log p(x\mid \theta )}{\partial \theta _{j}}}\,p(x\mid \theta )\,dx=0,}

which follows from differentiating the normalization condition, and apply ∂/∂θ_k{\displaystyle {\frac {\partial }{\partial \theta _{k}}}} on both sides.
The Fisher information metric is particularly simple for theexponential family, which hasp(x∣θ)=exp[η(θ)⋅T(x)−A(θ)+B(x)]{\displaystyle p(x\mid \theta )=\exp \!{\bigl [}\ \eta (\theta )\cdot T(x)-A(\theta )+B(x)\ {\bigr ]}}The metric isgjk(θ)=∂2A(θ)∂θj∂θk−∂2η(θ)∂θj∂θk⋅E[T(x)]{\displaystyle g_{jk}(\theta )={\frac {\partial ^{2}A(\theta )}{\partial \theta _{j}\,\partial \theta _{k}}}-{\frac {\partial ^{2}\eta (\theta )}{\partial \theta _{j}\,\partial \theta _{k}}}\cdot \mathrm {E} [T(x)]}The metric has a particularly simple form if we are using thenatural parameters. In this case,η(θ)=θ{\displaystyle \eta (\theta )=\theta }, so the metric is just∇θ2A{\displaystyle \nabla _{\theta }^{2}A}.
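As a numerical illustration of the identity g = ∇²_θ A in natural parameters, consider a Bernoulli distribution with log-odds θ (an example chosen here for illustration, not taken from the text), for which A(θ) = log(1 + e^θ); the sketch compares A″(θ) with a Monte Carlo estimate of the expected squared score.

```python
import numpy as np

rng = np.random.default_rng(7)
theta = 0.3                                  # natural parameter (log-odds) of a Bernoulli

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# log p(x | theta) = theta * x - A(theta), with A(theta) = log(1 + e^theta), x in {0, 1}
A_second_derivative = sigmoid(theta) * (1.0 - sigmoid(theta))   # the Hessian of A(theta)

# Monte Carlo estimate of E[(d/dtheta log p(x|theta))^2]
x = rng.binomial(1, sigmoid(theta), size=200_000)
score = x - sigmoid(theta)                   # d/dtheta [theta*x - A(theta)] = x - sigmoid(theta)
fisher_mc = np.mean(score ** 2)

print(A_second_derivative, fisher_mc)        # the two numbers should agree closely
```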
Multivariate normal distributionN(μ,Σ){\displaystyle {\mathcal {N}}(\mu ,\Sigma )}−lnp(x|μ,Σ)=12(x−μ)TΣ−1(x−μ)+12lndet(Σ)+C{\displaystyle -\ln p(x|\mu ,\Sigma )={\frac {1}{2}}(x-\mu )^{T}\Sigma ^{-1}(x-\mu )+{\frac {1}{2}}\ln \det(\Sigma )+C}LetT=Σ−1{\displaystyle T=\Sigma ^{-1}}be the precision matrix.
The metric splits to a mean part and a precision/variance part, becausegμ,Σ=0{\displaystyle g_{\mu ,\Sigma }=0}. The mean part is the precision matrix:gμi,μj=Tij{\displaystyle g_{\mu _{i},\mu _{j}}=T_{ij}}. The precision part isgT,T=−12∇T2lndetT{\displaystyle g_{T,T}=-{\frac {1}{2}}\nabla _{T}^{2}\ln \det T}.
In particular, for single variable normal distribution,g=[t00(2t2)−1]=σ−2[1002]{\displaystyle g={\begin{bmatrix}t&0\\0&(2t^{2})^{-1}\end{bmatrix}}=\sigma ^{-2}{\begin{bmatrix}1&0\\0&2\end{bmatrix}}}. Letx=μ/2,y=σ{\displaystyle x=\mu /{\sqrt {2}},y=\sigma }, thends2=2dx2+dy2y2{\displaystyle ds^{2}=2{\frac {dx^{2}+dy^{2}}{y^{2}}}}. This is thePoincaré half-plane model.
The shortest paths (geodesics) between two univariate normal distributions are either parallel to theσ{\displaystyle \sigma }axis, or half circular arcs centered on theμ/2{\displaystyle \mu /{\sqrt {2}}}-axis.
The geodesic connectingδμ0,δμ1{\displaystyle \delta _{\mu _{0}},\delta _{\mu _{1}}}has formulaϕ↦N(μ0+μ12+μ1−μ02cosϕ,σ2sin2ϕ){\displaystyle \phi \mapsto {\mathcal {N}}\left({\frac {\mu _{0}+\mu _{1}}{2}}+{\frac {\mu _{1}-\mu _{0}}{2}}\cos \phi ,\sigma ^{2}\sin ^{2}\phi \right)}whereσ=μ1−μ022{\displaystyle \sigma ={\frac {\mu _{1}-\mu _{0}}{2{\sqrt {2}}}}}, and the arc-length parametrization iss=2lntan(ϕ/2){\displaystyle s={\sqrt {2}}\ln \tan(\phi /2)}.
Alternatively, the metric can be obtained as the second derivative of the relative entropy or Kullback–Leibler divergence.[5] To obtain this, one considers two probability distributions P(θ){\displaystyle P(\theta )} and P(θ0){\displaystyle P(\theta _{0})}, which are infinitesimally close to one another, so that

{\displaystyle P(\theta )=P(\theta _{0})+\sum _{j}\Delta \theta ^{j}\,{\frac {\partial P}{\partial \theta ^{j}}}}

with Δθ^j{\displaystyle \Delta \theta ^{j}} an infinitesimally small change of θ{\displaystyle \theta } in the j direction. Then, since the Kullback–Leibler divergence D_KL[P(θ0)‖P(θ)]{\displaystyle D_{\mathrm {KL} }[P(\theta _{0})\|P(\theta )]} has an absolute minimum of 0 when P(θ) = P(θ0){\displaystyle P(\theta )=P(\theta _{0})}, one has an expansion up to second order about θ = θ0{\displaystyle \theta =\theta _{0}} of the form

{\displaystyle f_{\theta _{0}}(\theta ):=D_{\mathrm {KL} }[P(\theta _{0})\|P(\theta )]={\frac {1}{2}}\sum _{jk}\Delta \theta ^{j}\,\Delta \theta ^{k}\,g_{jk}(\theta _{0})+\mathrm {O} \!\left(\Delta \theta ^{3}\right).}
The symmetric matrixgjk{\displaystyle g_{jk}}is positive (semi) definite and is theHessian matrixof the functionfθ0(θ){\displaystyle f_{\theta _{0}}(\theta )}at the extremum pointθ0{\displaystyle \theta _{0}}. This can be thought of intuitively as: "The distance between two infinitesimally close points on a statistical differential manifold is the informational difference between them."
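A numerical check of this Hessian interpretation for the univariate normal family discussed above: differentiating the closed-form Gaussian KL divergence twice at θ = θ₀ should reproduce the metric diag(σ⁻², 2σ⁻²). The finite-difference step size and parameter values are illustrative.

```python
import numpy as np

mu0, sigma0 = 1.0, 2.0

def kl(mu, sigma):
    """Closed-form KL divergence D_KL[ N(mu0, sigma0^2) || N(mu, sigma^2) ]."""
    return (np.log(sigma / sigma0)
            + (sigma0 ** 2 + (mu0 - mu) ** 2) / (2.0 * sigma ** 2)
            - 0.5)

eps = 1e-4
def hess(f, p):
    """Numerical Hessian of f at the point p, by central differences."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            H[i, j] = (f(*(p + e_i + e_j)) - f(*(p + e_i - e_j))
                       - f(*(p - e_i + e_j)) + f(*(p - e_i - e_j))) / (4 * eps ** 2)
    return H

print(hess(kl, [mu0, sigma0]))
print(np.diag([1 / sigma0 ** 2, 2 / sigma0 ** 2]))   # the Fisher metric quoted above
```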
TheRuppeiner metricandWeinhold metricare the Fisher information metric calculated forGibbs distributionsas the ones found in equilibrium statistical mechanics.[6][7]
The action of a curve on a Riemannian manifold is given by

{\displaystyle A={\frac {1}{2}}\int _{a}^{b}{\frac {\partial \theta ^{j}}{\partial t}}\,g_{jk}(\theta )\,{\frac {\partial \theta ^{k}}{\partial t}}\,dt.}
The path parameter here is time t; this action can be understood to give the change in free entropy of a system as it is moved from time a to time b.[7] This observation has resulted in practical applications in chemical and processing industry[citation needed]: in order to minimize the change in free entropy of a system, one should follow the minimum geodesic path between the desired endpoints of the process. The geodesic minimizes the entropy, due to the Cauchy–Schwarz inequality, which states that the action is bounded below by the length of the curve, squared.
The Fisher metric also allows the action and the curve length to be related to the Jensen–Shannon divergence.[7] Specifically, the action along a path is proportional to the integrated infinitesimal change dJSD in the Jensen–Shannon divergence along the path taken, and similarly for the curve length. That is, the square root of the Jensen–Shannon divergence is just the Fisher metric (divided by the square root of 8).
For adiscrete probability space, that is, a probability space on a finite set of objects, the Fisher metric can be understood to simply be theEuclidean metricrestricted to a positiveorthant(e.g. "quadrant" inR2{\displaystyle \mathbb {R} ^{2}}) of a unit sphere, after appropriate changes of variable.[8]
Consider a flat, Euclidean space, of dimension N+1, parametrized by points y = (y_0, ⋯, y_n). The metric for Euclidean space is given by

{\displaystyle h^{\mathrm {flat} }=\sum _{i}dy_{i}\,dy_{i}}

where the dy_i are 1-forms; they are the basis vectors for the cotangent space. Writing ∂/∂y_j{\displaystyle \textstyle {\frac {\partial }{\partial y_{j}}}} as the basis vectors for the tangent space, so that

{\displaystyle dy_{i}\!\left({\frac {\partial }{\partial y_{j}}}\right)=\delta _{ij},}

the Euclidean metric may be written as

{\displaystyle h_{jk}^{\mathrm {flat} }=h^{\mathrm {flat} }\!\left({\frac {\partial }{\partial y_{j}}},\,{\frac {\partial }{\partial y_{k}}}\right)=\delta _{jk}.}
The superscript 'flat' is there to remind that, when written in coordinate form, this metric is with respect to the flat-space coordinatey{\displaystyle y}.
An N-dimensional unit sphere embedded in (N + 1)-dimensional Euclidean space may be defined as

{\displaystyle \sum _{i}y_{i}^{2}=1.}
This embedding induces a metric on the sphere, it is inherited directly from the Euclidean metric on the ambient space. It takes exactly the same form as the above, taking care to ensure that the coordinates are constrained to lie on the surface of the sphere. This can be done, e.g. with the technique ofLagrange multipliers.
Consider now the change of variable p_i = y_i^2{\displaystyle p_{i}=y_{i}^{2}}. The sphere condition now becomes the probability normalization condition

{\displaystyle \sum _{i}p_{i}=1}

while the metric becomes

{\displaystyle h=\sum _{i}dy_{i}\,dy_{i}=\sum _{i}d{\sqrt {p_{i}}}\,d{\sqrt {p_{i}}}={\frac {1}{4}}\sum _{i}{\frac {dp_{i}\,dp_{i}}{p_{i}}}.}
The last can be recognized as one-fourth of the Fisher information metric. To complete the process, recall that the probabilities are parametric functions of the manifold variables θ{\displaystyle \theta }, that is, one has p_i = p_i(θ){\displaystyle p_{i}=p_{i}(\theta )}. Thus, the above induces a metric on the parameter manifold:

{\displaystyle h^{\mathrm {fisher} }={\frac {1}{4}}\sum _{i}{\frac {dp_{i}(\theta )\,dp_{i}(\theta )}{p_{i}(\theta )}}}

or, in coordinate form, the Fisher information metric is:

{\displaystyle g_{jk}(\theta )=4\,h_{jk}^{\mathrm {fisher} }=\sum _{i}{\frac {1}{p_{i}(\theta )}}\,{\frac {\partial p_{i}(\theta )}{\partial \theta _{j}}}\,{\frac {\partial p_{i}(\theta )}{\partial \theta _{k}}},}

where, as before,

{\displaystyle dp_{i}(\theta )=\sum _{j}{\frac {\partial p_{i}(\theta )}{\partial \theta _{j}}}\,d\theta _{j}.}
The superscript 'fisher' is present to remind that this expression is applicable for the coordinatesθ{\displaystyle \theta }; whereas the non-coordinate form is the same as the Euclidean (flat-space) metric. That is, the Fisher information metric on a statistical manifold is simply (four times) the Euclidean metric restricted to the positive orthant of the sphere, after appropriate changes of variable.
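A quick numerical check of this statement for a three-outcome distribution parametrized by a softmax (an illustrative choice of parametrization, not taken from the text): the Fisher metric in the θ coordinates equals four times the Euclidean metric pulled back through y_i = √p_i.

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - np.max(theta))
    return z / z.sum()

def jacobian(f, theta, eps=1e-6):
    """Numerical Jacobian of f at theta (column j = partial derivative w.r.t. theta_j)."""
    theta = np.asarray(theta, dtype=float)
    cols = []
    for j in range(len(theta)):
        e = np.zeros_like(theta); e[j] = eps
        cols.append((f(theta + e) - f(theta - e)) / (2 * eps))
    return np.stack(cols, axis=1)

theta = np.array([0.2, -0.5, 0.9])
p = softmax(theta)

Jp = jacobian(softmax, theta)                        # d p_i / d theta_j
Jy = jacobian(lambda t: np.sqrt(softmax(t)), theta)  # d y_i / d theta_j with y_i = sqrt(p_i)

g_fisher = Jp.T @ np.diag(1.0 / p) @ Jp              # sum_i (1/p_i) d_j p_i d_k p_i
g_sphere = Jy.T @ Jy                                 # Euclidean metric pulled back to theta

print(np.allclose(g_fisher, 4.0 * g_sphere, atol=1e-6))   # expected: True
```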
When the random variablep{\displaystyle p}is not discrete, but continuous, the argument still holds. This can be seen in one of two different ways. One way is to carefully recast all of the above steps in an infinite-dimensional space, being careful to define limits appropriately, etc., in order to make sure that all manipulations are well-defined, convergent, etc. The other way, as noted byGromov,[8]is to use acategory-theoreticapproach; that is, to note that the above manipulations remain valid in the category of probabilities. Here, one should note that such a category would have theRadon–Nikodym property, that is, theRadon–Nikodym theoremholds in this category. This includes theHilbert spaces; these are square-integrable, and in the manipulations above, this is sufficient to safely replace the sum over squares by an integral over squares.
The above manipulations deriving the Fisher metric from the Euclidean metric can be extended to complexprojective Hilbert spaces. In this case, one obtains theFubini–Study metric.[9]This should perhaps be no surprise, as the Fubini–Study metric provides the means of measuring information in quantum mechanics. TheBures metric, also known as theHelstrom metric, is identical to the Fubini–Study metric,[9]although the latter is usually written in terms ofpure states, as below, whereas the Bures metric is written formixed states. By setting the phase of the complex coordinate to zero, one obtains exactly one-fourth of the Fisher information metric, exactly as above.
One begins with the same trick, of constructing a probability amplitude, written in polar coordinates, so:

{\displaystyle \psi (x;\theta )={\sqrt {p(x;\theta )}}\;e^{i\alpha (x;\theta )}.}
Here,ψ(x;θ){\displaystyle \psi (x;\theta )}is a complex-valuedprobability amplitude;p(x;θ){\displaystyle p(x;\theta )}andα(x;θ){\displaystyle \alpha (x;\theta )}are strictly real. The previous calculations are obtained by
setting α(x;θ) = 0{\displaystyle \alpha (x;\theta )=0}. The usual condition that probabilities lie within a simplex, namely that

{\displaystyle \int _{X}p(x;\theta )\,dx=1,}

is equivalently expressed by the idea that the square amplitude be normalized:

{\displaystyle \int _{X}\vert \psi (x;\theta )\vert ^{2}\,dx=1.}
Whenψ(x;θ){\displaystyle \psi (x;\theta )}is real, this is the surface of a sphere.
The Fubini–Study metric, written in infinitesimal form, using quantum-mechanical bra–ket notation, is

{\displaystyle ds^{2}={\frac {\langle \delta \psi \mid \delta \psi \rangle }{\langle \psi \mid \psi \rangle }}-{\frac {\langle \delta \psi \mid \psi \rangle \,\langle \psi \mid \delta \psi \rangle }{\langle \psi \mid \psi \rangle ^{2}}}.}
In this notation, one has that ⟨x∣ψ⟩ = ψ(x;θ){\displaystyle \langle x\mid \psi \rangle =\psi (x;\theta )} and integration over the entire measure space X is written as

{\displaystyle \langle \phi \mid \psi \rangle =\int _{X}\phi ^{*}(x;\theta )\,\psi (x;\theta )\,dx.}
The expression|δψ⟩{\displaystyle \vert \delta \psi \rangle }can be understood to be an infinitesimal variation; equivalently, it can be understood to be a1-formin thecotangent space. Using the infinitesimal notation, the polar form of the probability above is simply
Inserting the above into the Fubini–Study metric gives:
Settingδα=0{\displaystyle \delta \alpha =0}in the above makes it clear that the first term is (one-fourth of) the Fisher information metric. The full form of the above can be made slightly clearer by changing notation to that of standard Riemannian geometry, so that the metric becomes a symmetric2-formacting on thetangent space. The change of notation is done simply replacingδ→d{\displaystyle \delta \to d}andds2→h{\displaystyle ds^{2}\to h}and noting that the integrals are just expectation values; so:
The imaginary term is asymplectic form, it is theBerry phaseorgeometric phase. In index notation, the metric is:
Again, the first term can be clearly seen to be (one fourth of) the Fisher information metric, by settingα=0{\displaystyle \alpha =0}. Equivalently, the Fubini–Study metric can be understood as the metric on complex projective Hilbert space that is induced by the complex extension of the flat Euclidean metric. The difference between this, and the Bures metric, is that the Bures metric is written in terms of mixed states.
A slightly more formal, abstract definition can be given, as follows.[10]
LetXbe anorientable manifold, and let(X,Σ,μ){\displaystyle (X,\Sigma ,\mu )}be ameasureonX. Equivalently, let(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}be aprobability spaceonΩ=X{\displaystyle \Omega =X}, withsigma algebraF=Σ{\displaystyle {\mathcal {F}}=\Sigma }and probabilityP=μ{\displaystyle P=\mu }.
Thestatistical manifoldS(X) ofXis defined as the space of all measuresμ{\displaystyle \mu }onX(with the sigma-algebraΣ{\displaystyle \Sigma }held fixed). Note that this space is infinite-dimensional, and is commonly taken to be aFréchet space. The points ofS(X) are measures.
Pick a point μ ∈ S(X){\displaystyle \mu \in S(X)} and consider the tangent space T_μS{\displaystyle T_{\mu }S}. The Fisher information metric is then an inner product on the tangent space. With some abuse of notation, one may write this as

{\displaystyle g(\sigma _{1},\sigma _{2})=\int _{X}{\frac {d\sigma _{1}}{d\mu }}\,{\frac {d\sigma _{2}}{d\mu }}\,d\mu .}
Here,σ1{\displaystyle \sigma _{1}}andσ2{\displaystyle \sigma _{2}}are vectors in the tangent space; that is,σ1,σ2∈TμS{\displaystyle \sigma _{1},\sigma _{2}\in T_{\mu }S}. The abuse of notation is to write the tangent vectors as if they are derivatives, and to insert the extraneousdin writing the integral: the integration is meant to be carried out using the measureμ{\displaystyle \mu }over the whole spaceX. This abuse of notation is, in fact, taken to be perfectly normal inmeasure theory; it is the standard notation for theRadon–Nikodym derivative.
In order for the integral to be well-defined, the spaceS(X) must have theRadon–Nikodym property, and more specifically, the tangent space is restricted to those vectors that aresquare-integrable. Square integrability is equivalent to saying that aCauchy sequenceconverges to a finite value under theweak topology: the space contains its limit points. Note thatHilbert spacespossess this property.
This definition of the metric can be seen to be equivalent to the previous, in several steps. First, one selects asubmanifoldofS(X) by considering only those measuresμ{\displaystyle \mu }that are parameterized by some smoothly varying parameterθ{\displaystyle \theta }. Then, ifθ{\displaystyle \theta }is finite-dimensional, then so is the submanifold; likewise, the tangent space has the same dimension asθ{\displaystyle \theta }.
With some additional abuse of language, one notes that theexponential mapprovides a map from vectors in a tangent space to points in an underlying manifold. Thus, ifσ∈TμS{\displaystyle \sigma \in T_{\mu }S}is a vector in the tangent space, thenp=exp(σ){\displaystyle p=\exp(\sigma )}is the corresponding probability associated with pointp∈S(X){\displaystyle p\in S(X)}(after theparallel transportof the exponential map toμ{\displaystyle \mu }.) Conversely, given a pointp∈S(X){\displaystyle p\in S(X)}, the logarithm gives a point in the tangent space (roughly speaking, as again, one must transport from the origin to pointμ{\displaystyle \mu }; for details, refer to original sources). Thus, one has the appearance of logarithms in the simpler definition, previously given.
|
https://en.wikipedia.org/wiki/Fisher_information_metric
|
Actuarial scienceis the discipline that appliesmathematicalandstatisticalmethods toassess riskininsurance,pension,finance,investmentand other industries and professions.
Actuaries are professionals trained in this discipline. In many countries, actuaries must demonstrate their competence by passing a series of rigorous professional examinations focused on fields such as probability and predictive analysis.
Actuarial science includes a number of interrelated subjects, including mathematics,probability theory, statistics, finance,economics,financial accountingandcomputer science. Historically, actuarial science used deterministic models in the construction of tables and premiums. The science has gone through revolutionary changes since the 1980s due to the proliferation of high speed computers and the union ofstochasticactuarial models with modern financial theory.[1]
Many universities have undergraduate and graduate degree programs in actuarial science. In 2010,[needs update]a study published by job search website CareerCast ranked actuary as the #1 job in the United States.[2]The study used five key criteria to rank jobs: environment, income, employment outlook, physical demands, and stress. In 2024,U.S. News & World Reportranked actuary as the third-best job in the business sector and the eighth-best job inSTEM.[3]
Actuarial science became a formal mathematical discipline in the late 17th century with the increased demand for long-term insurance coverage such as burial,life insurance, andannuities. These long term coverages required that money be set aside to pay future benefits, such as annuity and death benefits many years into the future. This requires estimating future contingent events, such as the rates of mortality by age, as well as the development of mathematical techniques for discounting the value of funds set aside and invested. This led to the development of an important actuarial concept, referred to as thepresent valueof a future sum. Certain aspects of the actuarial methods for discountingpension fundshave come under criticism from modernfinancial economics.[citation needed]
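As a minimal illustration of the present-value concept mentioned above (with purely hypothetical numbers), a sum payable in the future is discounted back at an assumed fixed annual interest rate:

```python
def present_value(future_amount, annual_rate, years):
    """Value today of a sum payable `years` from now, discounted at `annual_rate`."""
    return future_amount / (1.0 + annual_rate) ** years

# Hypothetical example: a 10,000 benefit payable in 20 years, discounted at 4% per year.
print(round(present_value(10_000, 0.04, 20), 2))   # about 4,563.87
```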
Actuarial science is also applied to property, casualty, liability, and general insurance. In these forms of insurance, coverage is generally provided on a renewable period (such as yearly). Coverage can be cancelled at the end of the period by either party.[citation needed]
Propertyandcasualty insurancecompanies tend to specialize because of the complexity and diversity of risks.[citation needed]One division is to organize around personal and commercial lines of insurance. Personal lines of insurance are for individuals and include fire, auto, homeowners, theft and umbrella coverages. Commercial lines address the insurance needs of businesses and include property, business continuation, product liability, fleet/commercial vehicle, workers compensation, fidelity and surety, andD&Oinsurance. The insurance industry also provides coverage for exposures such as catastrophe, weather-related risks, earthquakes, patent infringement and other forms of corporate espionage, terrorism, and "one-of-a-kind" (e.g., satellite launch). Actuarial science provides data collection, measurement, estimating, forecasting, and valuation tools to provide financial and underwriting data for management to assess marketing opportunities and the nature of the risks. Actuarial science often helps to assess the overall risk from catastrophic events in relation to its underwriting capacity or surplus.[citation needed]
In thereinsurancefields, actuarial science can be used to design and price reinsurance and retrocession arrangements, and to establish reserve funds for known claims and future claims and catastrophes.[citation needed]
There is an increasing trend to recognize that actuarial skills can be applied to a range of applications outside the traditional fields of insurance, pensions, etc. One notable example is the use in some US states of actuarial models to set criminal sentencing guidelines. These models attempt to predict the chance of re-offending according to rating factors which include the type of crime, age, educational background and ethnicity of the offender.[7]However, these models have been open to criticism as providing justification for discrimination against specific ethnic groups by law enforcement personnel. Whether this is statistically correct or a self-fulfilling correlation remains under debate.[8]
Another example is the use of actuarial models to assess the risk of sex offense recidivism. Actuarial models and associated tables, such as the MnSOST-R, Static-99, and SORAG, have been used since the late 1990s to determine the likelihood that a sex offender will re-offend and thus whether he or she should be institutionalized or set free.[9]
Traditional actuarial science and modern financial economics in the US have different practices, a divergence caused by different ways of calculating funding and investment strategies, and by different regulations.[citation needed]
Relevant regulations stem from the Armstrong investigation of 1905, the Glass–Steagall Act of 1932, the adoption of the Mandatory Security Valuation Reserve by the National Association of Insurance Commissioners, which cushioned market fluctuations, and the Financial Accounting Standards Board (FASB) in the US and Canada, which regulates pension valuations and funding.[citation needed]
Historically, much of the foundation of actuarial theory predated modern financial theory. In the early twentieth century, actuaries were developing many techniques that can be found in modern financial theory, but for various historical reasons, these developments did not achieve much recognition.[10]
As a result, actuarial science developed along a different path, becoming more reliant on assumptions, as opposed to thearbitrage-freerisk-neutral valuationconcepts used in modern finance. The divergence is not related to the use of historical data and statistical projections of liability cash flows, but is instead caused by the manner in which traditional actuarial methods apply market data with those numbers. For example, one traditional actuarial method suggests that changing theasset allocationmix of investments can change the value of liabilities and assets (by changing thediscount rateassumption). This concept is inconsistent withfinancial economics.[citation needed]
The potential of modern financial economics theory to complement existing actuarial science was recognized by actuaries in the mid-twentieth century.[11]In the late 1980s and early 1990s, there was a distinct effort for actuaries to combine financial theory and stochastic methods into their established models.[12]Ideas from financial economics became increasingly influential in actuarial thinking, and actuarial science has started to embrace more sophisticated mathematical modelling of finance.[13]Today, the profession, both in practice and in the educational syllabi of many actuarial organizations, is cognizant of the need to reflect the combined approach of tables, loss models, stochastic methods, and financial theory.[14]However, assumption-dependent concepts are still widely used (such as the setting of the discount rate assumption as mentioned earlier), particularly in North America.[citation needed]
Product design adds another dimension to the debate. Financial economists argue that pension benefits are bond-like and should not be funded with equity investments without reflecting the risks of not achieving expected returns. But some pension products do reflect the risks of unexpected returns. In some cases, the pension beneficiary assumes the risk, or the employer assumes the risk. The current debate now seems to be focusing on four principles.
Essentially, financial economists state that pension assets should not be invested in equities for a variety of theoretical and practical reasons.[15]
Elementarymutual aidagreements and pensions arose in antiquity.[16]Early in theRoman empire, associations were formed to meet the expenses of burial, cremation, and monuments—precursors toburial insuranceandfriendly societies. A small sum was paid into a communal fund on a weekly basis, and upon the death of a member, the fund would cover the expenses of rites and burial. These societies sometimes sold shares in the building ofcolumbāria, or burial vaults, owned by the fund—the precursor tomutual insurance companies.[17]Other early examples of mutualsuretyand assurance pacts can be traced back to various forms of fellowship within the Saxon clans of England and their Germanic forebears, and to Celtic society.[18]However, many of these earlier forms of surety and aid would often fail due to lack of understanding and knowledge.[19]
The 17th century was a period of advances in mathematics in Germany, France and England. At the same time there was a rapidly growing desire and need to place the valuation of personal risk on a more scientific basis. Independently of each other,compound interestwas studied andprobability theoryemerged as a well-understood mathematical discipline. Another important advance came in 1662 from a Londondraper, the father ofdemography,John Graunt, who showed that there were predictable patterns of longevity and death in a group, orcohort, of people of the same age, despite the uncertainty of the date of death of any one individual. This study became the basis for the originallife table. One could now set up an insurance scheme to provide life insurance or pensions for a group of people, and to calculate with some degree of accuracy how much each person in the group should contribute to a common fund assumed to earn a fixed rate of interest. The first person to demonstrate publicly how this could be done wasEdmond Halley(ofHalley's cometfame). Halley constructed his own life table, and showed how it could be used to calculate thepremiumamount someone of a given age should pay to purchase a life annuity.[20]
James Dodson's pioneering work on long-term insurance contracts under which the same premium is charged each year led to the formation of the Society for Equitable Assurances on Lives and Survivorship (now commonly known asEquitable Life) in London in 1762.[21]William Morganis often considered the father of modern actuarial science for his work in the field in the 1780s and 1790s. Many other life insurance companies and pension funds were created over the following 200 years. Equitable Life was the first to use the word "actuary" for its chief executive officer in 1762.[22]Previously, "actuary" meant an official who recorded the decisions, or "acts", of ecclesiastical courts.[19]Other companies that did not use such mathematical and scientific methods most often failed or were forced to adopt the methods pioneered by Equitable.[23]
In the 18th and 19th centuries, calculations were performed without computers. The computations of life insurance premiums and reserving requirements are rather complex, and actuaries developed techniques to make the calculations as easy as possible, for example "commutation functions" (essentially precalculated columns of summations over time of discounted values of survival and death probabilities).[24]Actuarial organizations were founded to support and further both actuaries and actuarial science, and to protect the public interest by promoting competency and ethical standards.[25]However, calculations remained cumbersome, and actuarial shortcuts were commonplace. Non-life actuaries followed in the footsteps of their life insurance colleagues during the 20th century. The 1920 rate revision for the New York-based National Council on Workmen's Compensation Insurance took over two months of around-the-clock work by day and night teams of actuaries.[26]In the 1930s and 1940s, the mathematical foundations forstochasticprocesses were developed.[27]Actuaries could now begin to estimate losses using models of random events, instead of thedeterministicmethods they had used in the past. The introduction and development of the computer further revolutionized the actuarial profession. From pencil-and-paper to punchcards to current high-speed devices, the modeling and forecasting ability of the actuary has rapidly improved, while still being heavily dependent on the assumptions input into the models, and actuaries have needed to adjust to this new world.[28]
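A minimal sketch of commutation functions, using a hypothetical life table and interest rate, shows how the precalculated columns reduce an annuity valuation to a single division. The column names D and N follow standard actuarial notation, but the numbers are invented for illustration.

```python
# Minimal sketch of commutation functions (hypothetical life table and 4% interest).
v = 1.0 / 1.04                                   # annual discount factor
life_table = {60: 1000, 61: 970, 62: 935, 63: 895, 64: 850, 65: 800}

# D_x = v^x * l_x; N_x = D_x + D_{x+1} + ... summed to the end of the table
D = {x: (v ** x) * lx for x, lx in life_table.items()}
N = {x: sum(D[y] for y in D if y >= x) for x in D}

# With these precalculated columns, the expected present value of an
# annuity-due of 1 per year to a life aged x reduces to a single division.
x = 60
annuity_due = N[x] / D[x]
print(round(annuity_due, 3))
```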
|
https://en.wikipedia.org/wiki/Actuarial_science
|
Artificial intelligence in healthcareis theapplication of artificial intelligence(AI) to analyze and understand complex medical and healthcare data. In some cases, it can exceed or augment human capabilities by providing better or faster ways to diagnose, treat, or prevent disease.[1][2][3]
As the widespread use of AI in healthcare is still relatively new, research is ongoing into its applications across various medical subdisciplines and related industries. AI programs are being applied to practices such asdiagnostics,[4]treatment protocoldevelopment,[5]drug development,[6]personalized medicine,[7]andpatient monitoringand care.[8]Sinceradiographsare the most commonly performed imaging tests in radiology, the potential for AI to assist with triage and interpretation of radiographs is particularly significant.[9]
Using AI also presents unprecedented ethical concerns related to issues such asdata privacy, automation of jobs, and amplifying already existingbiases.[10]Furthermore, new technologies such as AI are often resisted by healthcare leaders, leading to slow and erratic adoption.[11]In contrast, there are also several cases where AI has been put to use in healthcare without proper testing.[12][13][14][15]A systematic review and thematic analysis in 2023 showed that most stakeholders including health professionals, patients, and the general public doubted that care involving AI could be empathetic.[16]Moreover, meta-studies have found that the scientific literature on AI in healthcare often suffers from a lack ofreproducibility.[17][18][19][20]
Accurate and early diagnosis of diseases is still a challenge in healthcare. Recognizing medical conditions and their symptoms is a complex problem. AI can assist clinicians with its data processing capabilities to save time and improve accuracy.[21]Through the use of machine learning, artificial intelligence can substantially aid doctors in patient diagnosis through the analysis of masselectronic health records(EHRs).[22]AI can help early prediction, for example, ofAlzheimer's diseaseanddementias, by looking through large numbers of similar cases and possible treatments.[23]
Doctors' decision making could also be supported by AI in urgent situations, for example in theemergency department. Here AI algorithms can help prioritize more serious cases and reduce waiting time.Decision support systemsaugmented with AI can offer real-time suggestions and faster data interpretation to aid the decisions made by healthcare professionals.[21]
In 2023 a study reported higher satisfaction rates withChatGPT-generated responses compared with those from physicians for medical questions posted onReddit’s r/AskDocs.[24]Evaluators preferred ChatGPT's responses to physician responses in 78.6% of 585 evaluations, noting better quality and empathy. The authors noted that these were isolated questions taken from an online forum, not in the context of an established patient-physician relationship.[24]Moreover, responses were not graded on the accuracy of medical information, and some have argued that the experiment was not properlyblinded, with the evaluators being coauthors of the study.[25][26][27]
Recent developments instatistical physics,machine learning, andinferencealgorithms are also being explored for their potential in improving medical diagnostic approaches.[28]Also, the establishment of largehealthcare-related data warehousesof sometimes hundreds of millions of patients provides extensive training data for AI models.[29]
Electronic health records (EHR) are crucial to the digitalization and information spread of the healthcare industry. Now that around 80% of medical practices use EHR, some anticipate the use of artificial intelligence to interpret the records and provide new information to physicians.[30]
One application uses natural language processing (NLP) to make more succinct reports that limit the variation between medical terms by matching similar medical terms.[30]For example, the terms heart attack andmyocardial infarctionmean the same thing, but physicians may use one over the other based on personal preferences.[30]NLP algorithms consolidate these differences so that larger datasets can be analyzed.[30]Another use of NLP identifies phrases that are redundant due to repetition in a physician's notes and keeps the relevant information to make it easier to read.[30]Other applications useconcept processingto analyze the information entered by the current patient's doctor to present similar cases and help the physician remember to include all relevant details.[31]
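A toy sketch of this kind of term consolidation is shown below; the dictionary and function are invented for illustration, whereas production systems rely on full NLP pipelines and clinical coding systems rather than a hand-written synonym map.

```python
# Toy sketch: map synonymous clinical phrases to one canonical concept so that
# records can be aggregated. Real systems use NLP pipelines and coding systems;
# this dictionary is purely illustrative.
CANONICAL = {
    "heart attack": "myocardial infarction",
    "mi": "myocardial infarction",
}

def normalize(term):
    t = term.lower().strip()
    return CANONICAL.get(t, t)       # fall back to the cleaned term itself

print(normalize("Heart attack"), "|", normalize("MI"), "|", normalize("myocardial infarction"))
# all three resolve to the same canonical concept
```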
Beyond making content edits to an EHR, there are AI algorithms that evaluate an individual patient's record andpredict a riskfor a disease based on their previous information and family history.[32]One general algorithm is a rule-based system that makes decisions similarly to how humans use flow charts.[33]This system takes in large amounts of data and creates a set of rules that connect specific observations to concluded diagnoses.[33]Thus, the algorithm can take in a new patient's data and try to predict the likelihood that they will have a certain condition or disease.[33]Since the algorithms can evaluate a patient's information based on collective data, they can find any outstanding issues to bring to a physician's attention and save time.[32]One study conducted by the Centerstone research institute found that predictive modeling of EHR data has achieved 70–72% accuracy in predicting individualized treatment response.[34]These methods are helpful because the amount of online health record data doubles roughly every five years.[32]Physicians do not have the bandwidth to process all this data manually, and AI can leverage this data to assist physicians in treating their patients.[32]
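A highly simplified sketch of such a rule-based flagger is shown below. The field names, thresholds, and rules are invented for illustration rather than drawn from any clinical guideline; a real system would derive and validate its rules from large datasets and clinical expertise.

```python
# Highly simplified sketch of a rule-based risk flagger over EHR-style records.
# Field names and thresholds are hypothetical; flags are for physician review only.
def diabetes_risk_flags(record):
    flags = []
    if record.get("fasting_glucose_mg_dl", 0) >= 126:
        flags.append("fasting glucose in diabetic range")
    if record.get("bmi", 0) >= 30 and record.get("family_history_diabetes", False):
        flags.append("obesity with family history")
    if record.get("age", 0) >= 45 and record.get("hba1c_percent", 0) >= 5.7:
        flags.append("age and HbA1c suggest prediabetes screening")
    return flags

patient = {"age": 52, "bmi": 31.4, "fasting_glucose_mg_dl": 118,
           "hba1c_percent": 6.0, "family_history_diabetes": True}
print(diabetes_risk_flags(patient))   # rules a physician could review and override
```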
Improvements innatural language processingled to the development of algorithms to identifydrug-drug interactionsin medical literature.[35][36][37][38]Drug-drug interactions pose a threat to those taking multiple medications simultaneously, and the danger increases with the number of medications being taken.[39]To address the difficulty of tracking all known or suspected drug-drug interactions, machine learning algorithms have been created to extract information on interacting drugs and their possible effects from medical literature. Efforts were consolidated in 2013 in the DDIExtraction Challenge, in which a team of researchers atCarlos III Universityassembled a corpus of literature on drug-drug interactions to form a standardized test for such algorithms.[40]Competitors were tested on their ability to accurately determine, from the text, which drugs were shown to interact and what the characteristics of their interactions were.[41]Researchers continue to use this corpus to standardize the measurement of the effectiveness of their algorithms.[35][36][38]
Other algorithms identify drug-drug interactions from patterns inuser-generated content, especially electronic health records and/or adverse event reports.[36][37]Organizations such as theFDA Adverse Event Reporting System(FAERS) and the World Health Organization'sVigiBaseallow doctors to submit reports of possible negative reactions to medications. Deep learning algorithms have been developed to parse these reports and detect patterns that imply drug-drug interactions.[42]
The growth oftelemedicine, the remote treatment of patients, has opened up possibilities for AI applications.[43]AI can assist in caring for patients remotely by monitoring their information through sensors.[44]A wearable device may allow for constant monitoring of a patient and the ability to notice changes that may be less distinguishable by humans. The information can be compared to other data that has already been collected using artificial intelligence algorithms that alert physicians if there are any issues to be aware of.[44]
Another application of artificial intelligence is chat-bot therapy. Some researchers, however, charge that reliance onchatbots for mental healthcaredoes not offer the reciprocity and accountability of care that should exist in the relationship between the consumer of mental healthcare and the care provider (be it a chat-bot or a psychologist).[45]Some examples of these chatbots include Woebot, Earkick and Wysa.[46][47][48]
Since the average age has risen due to a longer life expectancy, artificial intelligence could be useful in helping take care of older populations.[49]Tools such as environment and personal sensors can identify a person's regular activities and alert a caretaker if a behavior or a measured vital is abnormal.[49]Although the technology is useful, there are also discussions about limitations of monitoring in order to respect a person's privacy since there are technologies that are designed to map out home layouts and detect human interactions.[49]
AI has the potential to streamline care coordination and reduce the workload. AI algorithms can automate administrative tasks, prioritize patient needs and facilitate seamless communication in a healthcare team.[50]This enables healthcare providers to focus more on direct patient care and ensures the efficient and coordinated delivery of healthcare services.
Artificial intelligence algorithms have shown promising results in accurately diagnosing and risk stratifying patients with concern for coronary artery disease, showing potential as an initial triage tool.[51][52]Other algorithms have been used in predicting patient mortality, medication effects, and adverse events following treatment foracute coronary syndrome.[51]Wearables, smartphones, and internet-based technologies have also shown the ability to monitor patients' cardiac data points, expanding the amount of data and the various settings AI models can use and potentially enabling earlier detection of cardiac events occurring outside of the hospital.[53]A 2019 study found that AI can be used to predict heart attacks with up to 90% accuracy.[54]Another growing area of research is the utility of AI in classifyingheart soundsand diagnosingvalvular disease.[55]Challenges of AI in cardiovascular medicine have included the limited data available to train machine learning models, such as limited data onsocial determinants of healthas they pertain tocardiovascular disease.[56]
A key limitation of early studies evaluating AI was the omission of data comparing algorithmic performance to that of humans. Examples of studies that do assess AI performance relative to physicians include findings that AI is non-inferior to humans in the interpretation of cardiac echocardiograms[57]and that AI can diagnose heart attack better than human physicians in the emergency setting, reducing both low-value testing and missed diagnoses.[58]
In cardiovasculartissue engineeringandorganoidstudies, AI is increasingly used to analyze microscopy images and integrate electrophysiological readouts.[59]
Medical imaging(such as X-ray and photography) is a commonly used tool indermatology[60]and thedevelopment of deep learninghas been strongly tied toimage processing. Therefore, there is a natural fit between dermatology and deep learning. Machine learning holds great potential to process these images for better diagnoses.[61]Han et al. showed keratinocytic skin cancer detection from face photographs.[62]Esteva et al. demonstrated dermatologist-level classification of skin cancer from lesion images.[63]Noyan et al. demonstrated aconvolutional neural networkthat achieved 94% accuracy at identifying skin cells from microscopicTzanck smearimages.[64]A concern raised with this work is that it has not engaged with disparities related to skin color or differential treatment of patients with non-white skin tones.[65]
According to some researchers, AI algorithms have been shown to be more effective than dermatologists at identifying cancer.[66]However, a 2021 review article found that a majority of papers analyzing the performance of AI algorithms designed for skin cancer classification failed to use external test sets.[67]Only four research studies were found in which the AI algorithms were tested on clinics, regions, or populations distinct from those it was trained on, and in each of those four studies, the performance of dermatologists was found to be on par with that of the algorithm. Moreover, only one study[68]was set in the context of a full clinical examination; others were based on interaction through web-apps or online questionnaires, with most based entirely on context-free images of lesions. In this study, it was found that dermatologists significantly outperformed the algorithms. Many articles claiming superior performance of AI algorithms also fail to distinguish between trainees and board-certified dermatologists in their analyses.[67]
It has also been suggested that AI could be used to automatically evaluate the outcome ofmaxillo-facial surgeryorcleft palatetherapy in regard to facial attractiveness or age appearance.[69][70]
AI can play a role in various facets of the field ofgastroenterology.Endoscopicexams such asesophagogastroduodenoscopies(EGD) andcolonoscopiesrely on rapid detection of abnormal tissue. By enhancing these endoscopic procedures with AI, clinicians can more rapidly identify diseases, determine their severity, and visualize blind spots. Early trials in using AI detection systems of earlystomach cancerhave shownsensitivityclose to expert endoscopists.[71]
AI can assist doctors treatingulcerative colitisin detecting the microscopic activity of the disease in people and predicting when flare-ups will happen. For example, an AI-powered tool was developed to analyse digitised bowel samples (biopsies). The tool was able to distinguish with 80% accuracy between samples that showremissionof colitis and those with active disease. It also predicted the risk of a flare-up happening with the same accuracy. These rates of successfully using microscopic disease activity to predict disease flare are similar to the accuracy ofpathologists.[72][73]
Artificial intelligence utilises massive amounts of data to help with illness prediction, prevention, and diagnosis, as well as patient monitoring. In obstetrics, artificial intelligence is utilized in magnetic resonance imaging, ultrasound, and foetal cardiotocography. AI contributes to the resolution of a variety of obstetrical diagnostic issues.[74]
AI has shown potential in both the laboratory and clinical spheres ofinfectious diseasemedicine.[75]During theCOVID-19 pandemic, AI has been used for early detection, tracking virus spread and analysing virus behaviour, among other things.[76]However, there were only a few examples of AI being used directly in clinical practice during the pandemic itself.[77]
Other applications of AI around infectious diseases includesupport-vector machinesidentifyingantimicrobial resistance, machine learning analysis of blood smears to detectmalaria, and improved point-of-care testing ofLyme diseasebased on antigen detection. Additionally, AI has been investigated for improving diagnosis ofmeningitis,sepsis, andtuberculosis, as well as predicting treatment complications inhepatitis Bandhepatitis Cpatients.[75]
AI has been used to identify causes of knee pain that doctors miss and that disproportionately affect Black patients.[78]Underserved populations experience higher levels of pain. These disparities persist even after controlling for the objective severity of diseases like osteoarthritis, as graded by human physicians using medical images, raising the possibility that underserved patients’ pain stems from factors external to the knee, such as stress. Researchers have conducted a study using a machine-learning algorithm to show that standard radiographic measures of severity overlook objective but undiagnosed features that disproportionately affect diagnosis and management of underserved populations with knee pain. They proposed that a new algorithmic measure, ALG-P, could potentially enable expanded access to treatments for underserved patients.[79]
The use of AI technologies has been explored for use in the diagnosis and prognosis ofAlzheimer's disease(AD). For diagnostic purposes, machine learning models have been developed that rely on structural MRI inputs.[80]The input datasets for these models are drawn from databases such as the Alzheimer's Disease Neuroimaging Initiative.[81]Researchers have developed models that rely onconvolutional neural networkswith the aim of improving early diagnostic accuracy.[82]Generative adversarial networksare a form ofdeep learningthat have also performed well in diagnosing AD.[83]There have also been efforts to develop machine learning models into forecasting tools that can predict the prognosis of patients with AD. Forecasting patient outcomes through generative models has been proposed by researchers as a means of synthesizing training and validation sets.[84]They suggest that generated patient forecasts could be used to provide future models larger training datasets than current open access databases.
AI has been explored for use incancerdiagnosis, risk stratification, molecular characterization of tumors, and cancer drug discovery. A particular challenge in oncologic care that AI is being developed to address is the ability to accurately predict which treatment protocols will be best suited for each patient based on their individual genetic, molecular, and tumor-based characteristics.[85]AI has been trialed in cancer diagnostics with the reading of imaging studies andpathologyslides.[86]
In January 2020,Google DeepMindannounced an algorithm capable of surpassing human experts inbreast cancer detectionin screening scans.[87][88]A number of researchers, includingTrevor Hastie,Joelle Pineau, andRobert Tibshiraniamong others, published a reply claiming that DeepMind's research publication inNaturelacked key details on methodology and code, "effectively undermin[ing] its scientific value" and making it impossible for the scientific community to confirm the work.[89]In theMIT Technology Review, author Benjamin Haibe-Kains characterized DeepMind's work as "an advertisement" having little to do with science.[90]
In July 2020, it was reported that an AI algorithm developed by the University of Pittsburgh achieved the highest accuracy to date inidentifyingprostate cancer, with 98% sensitivity and 97% specificity.[91][92]In 2023 a study reported the use of AI forCT-basedradiomicsclassification at grading the aggressiveness of retroperitonealsarcomawith 82% accuracy compared with 44% for lab analysis of biopsies.[93][94]
Artificial intelligence-enhanced technology is being used as an aid in the screening of eye disease and prevention of blindness.[95]In 2018, the U.S. Food and Drug Administration authorized the marketing of the first medical device to diagnose a specific type of eye disease, diabetic retinopathy, using an artificial intelligence algorithm.[96]Moreover, AI technology may be used to further improve "diagnosis rates" because of the potential to decrease detection time.[97]
For many diseases,pathologicalanalysis of cells and tissues is considered to be the gold standard of disease diagnosis. Methods ofdigital pathologyallow microscopy slides to be scanned and digitally analyzed. AI-assisted pathology tools have been developed to assist with the diagnosis of a number of diseases, including breast cancer, hepatitis B,gastric cancer, andcolorectal cancer. AI has also been used to predict genetic mutations and prognosticate disease outcomes.[71]AI is well-suited for use in low-complexity pathological analysis of large-scalescreeningsamples, such as colorectal orbreast cancerscreening, thus lessening the burden on pathologists and allowing for faster turnaround of sample analysis.[99]Several deep learning and artificialneural networkmodels have shown accuracy similar to that of human pathologists,[99]and a study of deep learning assistance in diagnosingmetastaticbreast cancer in lymph nodes showed that the accuracy of humans with the assistance of a deep learning program was higher than either the humans alone or the AI program alone.[100]Additionally, implementation of digital pathology is predicted to save over $12 million for a university center over the course of five years,[101]though savings attributed to AI specifically have not yet been widely researched. The use ofaugmentedandvirtual realitycould prove to be a stepping stone to wider implementation of AI-assisted pathology, as they can highlight areas of concern on a pathology sample and present them in real-time to a pathologist for more efficient review.[99]AI also has the potential to identifyhistologicalfindings at levels beyond what the human eye can see,[99]and has shown the ability to usegenotypicandphenotypicdata to more accurately detect the tumor of origin for metastatic cancer.[102]One of the major current barriers to widespread implementation of AI-assisted pathology tools is the lack of prospective, randomized, multi-center controlledtrialsin determining the true clinical utility of AI for pathologists and patients, highlighting a current area of need in AI and healthcare research.[99]
Primary care has become one key development area for AI technologies.[103][104]AI in primary care has been used for supporting decision making, predictive modeling, and business analytics.[105]Only a few AI decision support systems have been prospectively assessed for clinical efficacy when used in practice by physicians, but in some cases their use has yielded a positive effect on physicians' treatment choices.[106]
As of 2022, AIrobotshad been helpful in elder care, providing entertainment and company to older residents in assisted living. These bots allow staff in the home to have more one-on-one time with each resident, and they are also programmed with broader abilities, such as knowing different languages and adjusting the type of care to the patient's condition. Like other machine learning systems, the bots are trained with algorithms that parse the given data, learn from it, and predict an outcome for the situation at hand.[107]
In psychiatry, AI applications are still at the proof-of-concept stage.[108]Areas where the evidence is widening quickly include predictive modelling of diagnosis and treatment outcomes[109]and chatbots, conversational agents that imitate human behaviour and have been studied for anxiety and depression.[110]
Challenges include the fact that many applications in the field are developed and proposed by private corporations, such as the screening for suicidal ideation implemented by Facebook in 2017.[111]Such applications outside the healthcare system raise various professional, ethical and regulatory questions.[112]Another common issue is the validity and interpretability of the models. Small training datasets can contain bias that is inherited by the models, compromising their generalizability and stability. Such models may also have the potential to be discriminatory against minority groups that are underrepresented in samples.[113]
In 2023, US-basedNational Eating Disorders Associationreplaced its humanhelplinestaff with achatbotbut had to take it offline after users reported receiving harmful advice from it.[114][115][116]
AI is being studied within the field ofradiologyto detect and diagnose diseases throughcomputerized tomography(CT) andmagnetic resonance(MR) imaging.[117]It may be particularly useful in settings where demand for human expertise exceeds supply, or where data is too complex to be efficiently interpreted by human readers.[118]Several deep learning models have shown the capability to be roughly as accurate as healthcare professionals in identifying diseases through medical imaging, though few of the studies reporting these findings have been externally validated.[119]AI can also provide non-interpretive benefit to radiologists, such as reducing noise in images, creating high-quality images from lower doses of radiation, enhancing MR image quality,[120]and automatically assessing image quality.[121]Further research investigating the use of AI innuclear medicinefocuses on image reconstruction, anatomical landmarking, and the enablement of lower doses in imaging studies.[122]The analysis of images for supervised AI applications in radiology encompasses two primary techniques at present: (1)convolutional neural network-basedanalysis; and (2) utilization ofradiomics.[118]
AI is also used in breast imaging for analyzing screening mammograms and can contribute to improving the breast cancer detection rate[123]as well as reducing radiologists' reading workload.
The trend of large health companies merging allows for greater health data accessibility. Greater health data lays the groundwork for the implementation of AI algorithms.
A large part of industry focus of implementation of AI in the healthcare sector is in theclinical decision support systems. As more data is collected, machine learning algorithms adapt and allow for more robust responses and solutions.[117]Numerous companies are exploring the possibilities of the incorporation ofbig datain the healthcare industry. Many companies investigate the market opportunities through the realms of "data assessment, storage, management, and analysis technologies" which are all crucial parts of the healthcare industry.[131]With the market for AI expanding constantly, large tech companies such as Apple, Google, Amazon, and Baidu all have their own AI research divisions, as well as millions of dollars allocated for acquisition of smaller AI based companies.[131]The following are examples of large companies that are contributing to AI algorithms for use in healthcare:
Ava Industries Ltd., a Canadian healthcare technology firm, is developing integrated AI tools to support clinical efficiency. Ava has implemented an embedded AI medical scribe within its electronic medical record (EMR) system and is further developing tools such as an AI chart summarizer and an AI document classifier.[1]The company has received support through grants fromCanada Health Infowayfor its work in advancing digital health solutions.[2]
Tencentis working on several medical systems and services. These includeAI Medical Innovation System (AIMIS), an AI-powered diagnostic medical imaging service; WeChat Intelligent Healthcare; and Tencent Doctorwork.
Digital consultant apps use AI to give medical consultations based on personal medical history and common medical knowledge. Users report their symptoms into the app, which uses speech recognition to compare against a database of illnesses. Babylon's app, for example, then offers a recommended action, taking into account the user's medical history. Entrepreneurs in healthcare have been effectively using seven business model archetypes to take AI solutions to the marketplace. These archetypes depend on the value generated for the target user (e.g. patient focus vs. healthcare provider and payer focus) and value-capturing mechanisms (e.g. providing information or connecting stakeholders).
IFlyteklaunched a service robot "Xiao Man", which integrated artificial intelligence technology to identify the registered customer and provide personalized recommendations in medical areas.[citation needed]It also works in the field of medical imaging. Similar robots are also being made by companies such as UBTECH ("Cruzr") andSoftbankRobotics ("Pepper").[citation needed]
The Indian startupHaptikdeveloped aWhatsAppchatbot in 2021 which answers questions associated with the deadlycoronavirusinIndia. Similarly, a software platformChatBotin partnership withmedtechstartupInfermedica launchedCOVID-19Risk Assessment ChatBot.[135]
Many automobile manufacturers are beginning to use machine learning healthcare in their cars as well.[131]Companies such asBMW,GE,Tesla,Toyota, andVolvoall have new research campaigns to find ways of learning a driver's vital statistics to ensure they are awake, paying attention to the road, and not under the influence of substances.[131]
Artificial intelligence continues to expand in its abilities to diagnose more people accurately in nations where fewer doctors are accessible to the public. Many new technology companies such asSpaceXand theRaspberry Pi Foundationhave enabled more developing countries to have access to computers and the internet than ever before.[136]With the increasing capabilities of AI over the internet, advanced machine learning algorithms can allow patients to get accurately diagnosed when they would previously have no way of knowing if they had a life-threatening disease or not.[136]
Using AI in developing nations that lack resources can diminish the need for outsourcing and can improve patient care. AI can not only allow for the diagnosis of patients in areas where healthcare is scarce, but also support a good patient experience by drawing on patient files to find the best treatment.[137]The ability of AI to adjust course as it goes also allows a patient's treatment to be modified based on what works for them, a level of individualized care that is nearly non-existent in developing countries.[137]
Challenges of the clinical use of AI have brought about a potential need forregulations. AI studies need to be completely and transparently reported to have value to inform regulatory approval. Depending on the phase of study, international consensus-based reporting guidelines (TRIPOD+AI,[138]DECIDE-AI,[139]CONSORT-AI[140]) have been developed to provide recommendations on the key details that need to be reported.
While regulations exist pertaining to the collection of patient data such as the Health Insurance Portability and Accountability Act in the US (HIPAA) and the European General Data Protection Regulation (GDPR) pertaining to patients within the EU, health care AI is "severely under-regulated worldwide" as of 2025.[132]It is unclear whether healthcare AI should be classified merely assoftwareor as amedical device.[132]
The jointITU-WHOFocus Group on Artificial Intelligence for Health(FG-AI4H) has built a platform, known as the ITU-WHOAI for Health Framework, for the testing and benchmarking of AI applications in the health domain. As of November 2018, eight use cases were being benchmarked, including assessing breast cancer risk from histopathological imagery, guiding anti-venom selection from snake images, and diagnosing skin lesions.
In 2015, theOffice for Civil Rights(OCR) issued rules and regulations to protect the privacy of individuals’ health information, requiring healthcare providers to follow certain privacy rules when using AI, to keep a record of how they use AI and to ensure that their AI systems are secure.[142]
In May 2016, theWhite Houseannounced its plan to host a series of workshops and the formation of theNational Science and Technology Council(NSTC) Subcommittee on Machine Learning and Artificial Intelligence.[citation needed]In October 2016, the group published The National Artificial Intelligence Research and Development Strategic Plan, outlining its proposed priorities for Federally-funded AI research and development (within government and academia). The report notes that a strategic R&D plan for the subfield ofhealth information technologywas in the development stage.[citation needed]
In January 2021, the USFDApublished a new Action Plan, entitled Artificial Intelligence (AI)/Machine Learning (ML)-Based Software as a Medical Device (SaMD) Action Plan.[143]It laid out the FDA's future plans for regulating medical devices that include artificial intelligence in their software, with five main actions: 1. Tailored Regulatory Framework for AI/ML-based SaMD, 2. Good Machine Learning Practice (GMLP), 3. Patient-Centered Approach Incorporating Transparency to Users, 4. Regulatory Science Methods Related to Algorithm Bias & Robustness, and 5. Real-World Performance (RWP). This plan was in direct response to stakeholders' feedback on a 2019 discussion paper also published by the FDA.[143]
UnderPresident Biden, the Department of Health and Human Services (HHS) and the National Institute of Standards and Technology were instructed to develop regulation of healthcare AI.[132]According to theU.S. Department of Health and Human Services, the OCR issued guidance on theethical use of AIin healthcare in 2021. It outlined four core ethical principles that must be followed: respect forautonomy,beneficence (ethics),non-maleficence, and justice. Respect for autonomy requires that individuals have control over their own data and decisions. Beneficence requires that AI be used to do good, such as improving the quality of care and reducing health disparities. Non-maleficence requires that AI be used to do no harm, such as avoiding discrimination in decisions. Finally, justice requires that AI be used fairly, such as using the same standards for decisions no matter a person's race, gender, or income level. As of March 2021, the OCR had hired a Chief Artificial Intelligence Officer (OCAIO) to pursue the "implementation of the HHS AI strategy".[144]
Under thesecond Trump administration, deregulation of health AI began on January 20, 2025: standards for collecting and sharing data became merely voluntary, statutory definitions for algorithmic discrimination, automation bias, and equity were cancelled, cuts were made toNIST, and 19% of the FDA workforce was eliminated.[132]
Other countries have implemented data protection regulations, more specifically with company privacy invasions. In Denmark, the Danish Expert Group ondata ethicshas adopted recommendations on 'Data for the Benefit of the People'. These recommendations are intended to encourage the responsible use of data in the business sector, with a focus on data processing. The recommendations include a focus on equality and non-discrimination with regard to bias in AI, as well ashuman dignitywhich is to outweigh profit and must be respected in all data processes.[145]
The European Union has implemented theGeneral Data Protection Regulation(GDPR) to protect citizens' personal data, which applies to the use of AI in healthcare. In addition, the European Commission has established guidelines to ensure the ethical development of AI, including the use of algorithms to ensure fairness and transparency.[146]With GDPR, the European Union was the first to regulate AI through data protection legislation. The Union regards privacy as a fundamental human right and wants to prevent unconsented and secondary uses of data by private or public health facilities. At the same time, by streamlining access to personal data for health research, it seeks to uphold the right to, and importance of, patient privacy.[146]In the United States, the Health Insurance Portability and Accountability Act (HIPAA) requires organizations to protect the privacy and security of patient information. The Centers for Medicare and Medicaid Services have also released guidelines for the development of AI-based medical applications.[147]
In 2025, Europe was leading the USA on AI regulation while lagging in innovation, and at least one California-based biotech company was "engaging theEuropean Medicines Agencyearlier in development than previously anticipated to mitigate concerns about the FDA's ability to meet development timelines."[132]
While research on the use of AI in healthcare aims to validate its efficacy in improving patient outcomes before its broader adoption, its use may introduce several new types of risk to patients and healthcare providers, such asalgorithmic bias,Do not resuscitateimplications, and othermachine moralityissues. AI may also compromise the protection of patients' rights, such as the right to informed consent and the right to medical data protection.[148]
In order to effectively train machine learning models and use AI in healthcare, massive amounts of data must be gathered. Acquiring this data, however, comes at the cost of patient privacy (i.e.,autonomy) in most cases and is not well received publicly. For example, a survey conducted in the UK estimated that 63% of the population is uncomfortable with sharing their personal data in order to improve artificial intelligence technology.[149]The scarcity of real, accessible patient data is a hindrance that deters the progress of developing and deploying more artificial intelligence in healthcare.
The lack of regulations surrounding AI in the United States has generated concerns about mismanagement of patient data, such as with corporations utilizing patient data for financial gain. For example, as of 2020Roche, a Swiss healthcare company, was found to have purchased healthcare data for approximately 2 million cancer patients at an estimated total cost of $1.9 billion.[150]Naturally, this generates questions of ethical concern; Is there a monetary price that can be set for data, and should it depend on its perceived value or contributions to science? Is it fair to patients to sell their data? These concerns were addressed in a survey conducted by thePew Research Centerin 2022 that asked Americans for their opinions about the increased presence of AI in their daily lives, and the survey estimated that 37% of Americans were more concerned than excited about such increased presence, with 8% of participants specifically associating their concern with "people misusing AI".[151]Ultimately, the current potential of artificial intelligence in healthcare is additionally hindered by concerns about mismanagement of data collected, especially in the United States.
A systematic review and thematic analysis in 2023 showed that most stakeholders including health professionals, patients, and the general public doubted that care involving AI could beempathetic, or fulfillbeneficence.[16]
According to a 2019 study, AI could replace up to 35% of jobs in the UK within the next 10 to 20 years.[152]However, it was concluded that AI has not eliminated any healthcare jobs so far. If AI were to automate healthcare-related jobs, the jobs most susceptible to automation would be those dealing with digital information, radiology, and pathology, as opposed to those involving doctor-to-patient interaction.[152]
Outputs can be incorrect or incomplete, and erroneous diagnoses and recommendations can harm people.[132]
Since AI makes decisions solely on the data it receives as input, it is important that this data represents accurate patient demographics. In a hospital setting, patients do not have full knowledge of how predictive algorithms are created or calibrated. Therefore, these medical establishments can unfairly code their algorithms todiscriminateagainst minorities and prioritize profits rather than providing optimal care, i.e. violating the ethical principle of social justice ornon-maleficence.[153]A recent scoping review identified 18 equity challenges along with 15 strategies that can be implemented to help address them when AI applications are developed usingmany-to-manymapping.[154]
There can be unintended bias in algorithms that can exacerbate social and healthcare inequities.[153]Since AI's decisions are a direct reflection of its input data, the data it receives must have accurate representation of patient demographics. For instance, if populations are less represented in healthcare data it is likely to create bias in AI tools that lead to incorrect assumptions of a demographic and impact the ability to provide appropriate care.[155]White males are overly represented in medical data sets.[156]Therefore, having minimal patient data on minorities can lead to AI making more accurate predictions for majority populations, leading to unintended worse medical outcomes for minority populations.[157]Collecting data from minority communities can also lead to medical discrimination. For instance, HIV is a prevalent virus among minority communities and HIV status can be used to discriminate against patients.[156]In addition to biases that may arise from sample selection, different clinical systems used to collect data may also impact AI functionality. For example, radiographic systems and their outcomes (e.g., resolution) vary by provider. Moreover, clinician work practices, such as the positioning of the patient for radiography, can also greatly influence the data and make comparability difficult.[158]However, these biases can be mitigated through careful implementation and the methodical collection of representative data.
A final source ofalgorithmic bias, which has been called "label choice bias", arises when proxy measures used to train algorithms build in bias against certain groups. For example, a widely used algorithm predicted health care costs as a proxy for health care needs, and used these predictions to allocate resources to help patients with complex health needs. This introduced bias because Black patients have lower costs, even when they are just as unhealthy as White patients.[159]Solutions to "label choice bias" aim to match the actual target (what the algorithm is predicting) more closely to the ideal target (what researchers want the algorithm to predict), so for the prior example, instead of predicting cost, researchers would focus directly on the variable of healthcare need. Adjusting the target led to almost double the number of Black patients being selected for the program.
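The effect of the proxy choice can be illustrated with a toy numerical sketch, in which all values are synthetic and not taken from the cited study: ranking patients by a cost proxy selects a different, less needy group than ranking by health need.

```python
# Toy illustration of "label choice bias": ranking patients by a cost proxy
# versus by an (idealized) measure of health need. All numbers are synthetic.
patients = [
    # (id, group, health_need, observed_cost) -- group B incurs lower cost
    # than group A at the same level of need
    ("a1", "A", 8, 8000), ("a2", "A", 5, 5000),
    ("b1", "B", 9, 4500), ("b2", "B", 6, 3000),
]

def top_k(records, key_index, k=2):
    """Return the ids of the k records with the highest value at key_index."""
    return [r[0] for r in sorted(records, key=lambda r: r[key_index], reverse=True)[:k]]

print("selected by cost proxy:", top_k(patients, key_index=3))   # ['a1', 'a2'] -- misses b1
print("selected by health need:", top_k(patients, key_index=2))  # ['b1', 'a1']
```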
Research in the 1960s and 1970s produced the first problem-solving program, orexpert system, known asDendral.[160][161]While it was designed for applications in organic chemistry, it provided the basis for a subsequent systemMYCIN,[162]considered one of the most significant early uses of artificial intelligence in medicine.[162][163]MYCIN and other systems such as INTERNIST-1 and CASNET did not achieve routine use by practitioners, however.[164]
The 1980s and 1990s brought the proliferation of the microcomputer and new levels of network connectivity. During this time, there was a recognition by researchers and developers that AI systems in healthcare must be designed to accommodate the absence of perfect data and build on the expertise of physicians.[165]Approaches involvingfuzzy settheory,[166]Bayesian networks,[167]andartificial neural networks,[168][169]have been applied to intelligent computing systems in healthcare.
Medical and technological advancements occurring over this half-century period that have enabled the growth of healthcare-related applications of AI include:
|
https://en.wikipedia.org/wiki/Artificial_intelligence_in_healthcare
|
Analytical proceduresare one of manyfinancial auditprocedures which help anauditorunderstand an entity's business and changes in the business, and to identify potentialriskareas to plan other audit procedures. It can also be anaudit substantive testinvolving the evaluation of financial information made by a study of plausible relationships among both financial and non-financial data. Analytical procedures also encompass such investigation as is necessary of identified fluctuations or relationships that are inconsistent with other relevant information or that differ from expected values by a significant amount.[1]
Analytical procedures are performed at three stages of the audit: at the start, in the middle and at the end of the audit. These three stages arerisk assessmentprocedures,substantiveanalytical procedures, and final analytical procedures.[2]
Analytical procedures include comparison of financial information (data infinancial statements) with prior periods,budgets,forecasts, similarindustriesand so on. They also include consideration of predictable relationships, such asgross profittosales,payrollcosts toemployees, and financial information and non-financial information, for example the CEO's reports and industry news. Possible sources of information about the client include interim financial information,budgets, management accounts, non-financial information, bank and cash records,VATreturns, board minutes, and discussion or correspondence with the client at the year-end.
When designing and performing substantive analytical procedures, the auditor:[1]
If the difference between the expectation and the amount recorded by the entity exceeds the threshold, then the auditor investigates such differences.[1]
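A minimal sketch of this comparison is shown below. The figures, the payroll-based expectation, and the threshold are illustrative only; in practice the expectation and the threshold are developed with reference to materiality and the assessed risk.

```python
# Minimal sketch of a substantive analytical procedure: compare a recorded
# balance with the auditor's independent expectation and flag differences
# above a chosen threshold. All figures are hypothetical.
def needs_investigation(expected, recorded, threshold):
    difference = abs(recorded - expected)
    return difference, difference > threshold

# e.g. a payroll expectation built from headcount x average salary
expected_payroll = 120 * 42_000          # 5,040,000
recorded_payroll = 5_390_000
threshold = 250_000                      # set with reference to materiality

diff, investigate = needs_investigation(expected_payroll, recorded_payroll, threshold)
print(f"difference = {diff:,}; investigate = {investigate}")   # difference = 350,000; investigate = True
```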
In June 2024, thePCAOBproposed a new AS 2305, Designing and Performing Substantive Analytical Procedures, to better align with the auditor’s risk assessment and to address the increasing use of technology tools in performing these procedures.[4]
|
https://en.wikipedia.org/wiki/Analytical_procedures_(finance_auditing)
|
Computational sociologyis a branch ofsociologythat uses computationally intensive methods to analyze and model social phenomena. Usingcomputer simulations,artificial intelligence, complex statistical methods, and analytic approaches likesocial network analysis, computational sociology develops and tests theories of complex social processes through bottom-up modeling of social interactions.[1]
It involves the understanding of social agents, the interaction among these agents, and the effect of these interactions on the social aggregate.[2]Although the subject matter and methodologies insocial sciencediffer from those innatural scienceorcomputer science, several of the approaches used in contemporarysocial simulationoriginated from fields such asphysicsandartificial intelligence.[3][4]Some of the approaches that originated in this field have been imported into the natural sciences, such as measures ofnetwork centralityfrom the fields ofsocial network analysisandnetwork science.
In relevant literature, computational sociology is often related to the study ofsocial complexity.[5]Social complexity concepts such ascomplex systems,non-linearinterconnection among macro and micro process, andemergence, have entered the vocabulary of computational sociology.[6]A practical and well-known example is the construction of a computational model in the form of an "artificial society", by which researchers can analyze the structure of asocial system.[2][7]
In the past four decades, computational sociology has been introduced and has gained popularity[according to whom?]. It has been used primarily for modeling or building explanations of social processes, and depends on the emergence of complex behavior from simple activities.[8]The idea behind emergence is that the properties of a larger system do not always have to be properties of the components that the system is made of.[9]Alexander, Morgan, and Broad, classical emergentists, introduced the idea of emergence in the early 20th century. The aim of this method was to find a good enough accommodation between two different and extreme ontologies, reductionist materialism and dualism.[8]
While emergence has played a valuable and important role in the foundation of computational sociology, not everyone agrees. One major leader in the field, Epstein, doubted its usefulness because there were aspects that are unexplainable. Epstein argued against emergentism, saying that it "is precisely the generative sufficiency of the parts that constitutes the whole's explanation".[8]
Agent-based models have had a historical influence on computational sociology. These models first appeared in the 1960s and were used to simulate control and feedback processes in organizations, cities, etc. During the 1970s, applications began to use individuals as the main units of analysis and employed bottom-up strategies for modeling behaviors. The last wave occurred in the 1980s. At this time, the models were still bottom-up; the only difference was that the agents interacted interdependently.[8]
In the post-war era,Vannevar Bush'sdifferential analyser,John von Neumann'scellular automata,Norbert Wiener'scybernetics, andClaude Shannon'sinformation theorybecame influential paradigms for modeling and understanding complexity in technical systems. In response, scientists in disciplines such as physics, biology, electronics, and economics began to articulate ageneral theory of systemsin which all natural and physical phenomena are manifestations of interrelated elements in a system that has common patterns and properties. FollowingÉmile Durkheim's call to analyze complex modern societysui generis,[10]post-war structural functionalist sociologists such asTalcott Parsonsseized upon these theories of systematic and hierarchical interaction among constituent components to attempt to generate grand unified sociological theories, such as theAGIL paradigm.[11]Sociologists such asGeorge Homansargued that sociological theories should be formalized into hierarchical structures of propositions and precise terminology from which other propositions and hypotheses could be derived and operationalized into empirical studies.[12]Because computer algorithms and programs had been used as early as 1956 to test and validate mathematical theorems, such as thefour color theorem,[13]some scholars anticipated that similar computational approaches could "solve" and "prove" analogously formalized problems and theorems of social structures and dynamics.
By the late 1960s and early 1970s, social scientists used increasingly available computing technology to perform macro-simulations of control and feedback processes in organizations, industries, cities, and global populations. These models used differential equations to predict population distributions as holistic functions of other systematic factors such as inventory control, urban traffic, migration, and disease transmission.[14][15]Although simulations of social systems received substantial attention in the mid-1970s after theClub of Romepublished reports predicting that policies promoting exponential economic growth would eventually bring global environmental catastrophe,[16]the inconvenient conclusions led many authors to seek to discredit the models, attempting to make the researchers themselves appear unscientific.[2][17]Hoping to avoid the same fate, many social scientists turned their attention toward micro-simulation models to make forecasts and study policy effects by modeling aggregate changes in state of individual-level entities rather than the changes in distribution at the population level.[18]However, these micro-simulation models did not permit individuals to interact or adapt and were not intended for basic theoretical research.[1]
The 1970s and 1980s were also a time when physicists and mathematicians were attempting to model and analyze how simple component units, such as atoms, give rise to global properties, such as complex material properties at low temperatures, in magnetic materials, and within turbulent flows.[19]Using cellular automata, scientists were able to specify systems consisting of a grid of cells in which each cell only occupied some finite states and changes between states were solely governed by the states of immediate neighbors. Along with advances inartificial intelligenceandmicrocomputerpower, these methods contributed to the development of "chaos theory" and "complexity theory" which, in turn, renewed interest in understanding complex physical and social systems across disciplinary boundaries.[2]Research organizations explicitly dedicated to the interdisciplinary study of complexity were also founded in this era: theSanta Fe Institutewas established in 1984 by scientists based atLos Alamos National Laboratoryand the BACH group at theUniversity of Michiganlikewise started in the mid-1980s.
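A minimal sketch of such a cellular automaton is shown below: each cell holds one of two states, and its next state is determined solely by itself and its two immediate neighbours. Wolfram's rule 110 and the wrap-around boundary are chosen only as an example.

```python
# Minimal one-dimensional cellular automaton: each cell is 0 or 1, and its next
# state depends only on itself and its two immediate neighbours (rule 110,
# wrap-around boundaries). Purely illustrative.
RULE = 110

def step(cells):
    n = len(cells)
    nxt = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right     # value 0..7
        nxt.append((RULE >> neighbourhood) & 1)                 # look up the rule bit
    return nxt

cells = [0] * 31
cells[15] = 1                       # single live cell in the middle
for _ in range(10):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```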
This cellular automata paradigm gave rise to a third wave of social simulation emphasizing agent-based modeling. Like micro-simulations, these models emphasized bottom-up designs but adopted four key assumptions that diverged from microsimulation: autonomy, interdependency, simple rules, and adaptive behavior.[1]Agent-based models are less concerned with predictive accuracy and instead emphasize theoretical development.[20]In 1981, mathematician and political scientistRobert Axelrodand evolutionary biologistW.D. Hamiltonpublished a major paper inSciencetitled "The Evolution of Cooperation" which used an agent-based modeling approach to demonstrate how social cooperation based upon reciprocity can be established and stabilized in aprisoner's dilemmagame when agents followed simple rules of self-interest.[21]Axelrod and Hamilton demonstrated that individual agents following a simple rule set of (1) cooperate on the first turn and (2) thereafter replicate the partner's previous action were able to develop "norms" of cooperation and sanctioning in the absence of canonical sociological constructs such as demographics, values, religion, and culture as preconditions or mediators of cooperation.[4]Throughout the 1990s, scholars likeWilliam Sims Bainbridge,Kathleen Carley,Michael Macy, andJohn Skvoretzdeveloped multi-agent-based models ofgeneralized reciprocity,prejudice,social influence, and organizationalinformation processing (psychology). In 1999,Nigel Gilbertpublished the first textbook on Social Simulation:Simulation for the social scientistand established its most relevant journal: theJournal of Artificial Societies and Social Simulation.
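The rule set Axelrod and Hamilton describe corresponds to the tit-for-tat strategy, which can be sketched in a few lines. The payoff values below use the conventional T=5, R=3, P=1, S=0 ordering, and the simulation is an illustrative simplification rather than the original tournament code.

```python
# Sketch of agents playing an iterated prisoner's dilemma with simple strategies.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    return "C" if not their_history else their_history[-1]   # copy partner's last move

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=20):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))       # mutual cooperation: (60, 60)
print(play(tit_for_tat, always_defect))     # (19, 24): defection gains little over time
```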
Independent from developments in computational models of social systems, social network analysis emerged in the 1970s and 1980s from advances in graph theory, statistics, and studies of social structure as a distinct analytical method and was articulated and employed by sociologists likeJames S. Coleman,Harrison White,Linton Freeman,J. Clyde Mitchell,Mark Granovetter,Ronald Burt, andBarry Wellman.[22]The increasing pervasiveness of computing and telecommunication technologies throughout the 1980s and 1990s demanded analytical techniques, such asnetwork analysisandmultilevel modeling, that could scale to increasingly complex and large data sets. The most recent wave of computational sociology, rather than employing simulations, uses network analysis and advanced statistical techniques to analyze large-scale computer databases of electronic proxies for behavioral data. Electronic records such as email and instant message records, hyperlinks on theWorld Wide Web, mobile phone usage, and discussion onUsenetallow social scientists to directly observe and analyze social behavior at multiple points in time and multiple levels of analysis without the constraints of traditional empirical methods such as interviews, participant observation, or survey instruments.[23]Continued improvements inmachine learningalgorithms likewise have permitted social scientists and entrepreneurs to use novel techniques to identify latent and meaningful patterns of social interaction and evolution in large electronic datasets.[24][25]
The automatic parsing of textual corpora has enabled the extraction of actors and their relational networks on a vast scale, turning textual data into network data. The resulting networks, which can contain thousands of nodes, are then analysed with tools from network theory to identify the key actors, the key communities or parties, and general properties such as the robustness or structural stability of the overall network, or the centrality of certain nodes.[27]This automates the approach introduced by quantitative narrative analysis,[28]whereby subject-verb-object triplets are identified with pairs of actors linked by an action, or pairs formed by actor-object.[26]
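A minimal sketch of this pipeline, assuming the subject-verb-object triplets have already been extracted (the triplets and actor names below are invented for illustration), turns the triplets into a graph with the networkx library and ranks actors by degree centrality.

```python
import networkx as nx

# Hypothetical subject-verb-object triplets extracted from a text corpus.
triplets = [
    ("union", "criticises", "government"),
    ("government", "negotiates_with", "union"),
    ("press", "quotes", "union"),
    ("government", "funds", "agency"),
    ("agency", "reports_to", "government"),
]

G = nx.DiGraph()
for subject, verb, obj in triplets:
    G.add_edge(subject, obj, relation=verb)  # actors as nodes, actions as edges

# Rank actors by normalised degree centrality to surface the key players.
for actor, score in sorted(nx.degree_centrality(G).items(),
                           key=lambda item: item[1], reverse=True):
    print(f"{actor:12s} {score:.2f}")
```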
Content analysishas been a traditional part of social sciences and media studies for a long time. The automation of content analysis has allowed a "big data" revolution to take place in that field, with studies in social media and newspaper content that include millions of news items.Gender bias,readability, content similarity, reader preferences, and even mood have been analyzed based ontext miningmethods over millions of documents.[29][30][31][32][33]The analysis of readability, gender bias and topic bias was demonstrated in Flaounas et al.[34]showing how different topics have different gender biases and levels of readability; the possibility to detect mood shifts in a vast population by analysing Twitter content was demonstrated as well.[35]
The analysis of vast quantities of historical newspaper content was pioneered by Dzogang et al.,[36]who showed how periodic structures can be automatically discovered in historical newspapers. A similar analysis of social media again revealed strongly periodic structures.[37]
Computational sociology, like any field of study, faces a set of challenges.[38]These challenges need to be addressed meaningfully so that the field can have the maximum impact on society.
Societies exist at one level or another, and there are tendencies for interaction both within and across these levels. Levels need not be only micro or macro in nature; there can be intermediate levels at which a society exists, such as groups, networks, and communities.[38]
The question, however, is how to identify these levels and how they come into existence. And once they exist, how do they interact within themselves and with other levels?
If we view entities (agents) as nodes and the connections between them as edges, we see the formation of networks. The connections in these networks do not arise solely from objective relationships between the entities; rather, they are shaped by factors chosen by the participating entities.[39]The challenge is that it is difficult to identify when a set of entities will form a network. These networks may be trust networks, co-operation networks, dependence networks, and so on. In some cases, heterogeneous sets of entities have been shown to form strong and meaningful networks among themselves.[40][41]
As discussed previously, societies exist at multiple levels, and at one of them, the individual level, a micro-macro link[42]refers to the interactions that create higher levels. A set of questions needs to be answered about these micro-macro links: How are they formed? When do they converge? What feedback is pushed to the lower levels, and how is it pushed?
Another major challenge in this category concerns the validity of information and of its sources. Recent years have seen a boom in information gathering and processing, but little attention has been paid to the spread of false information across societies. Tracing such information back to its sources and establishing its ownership is difficult.
The evolution of networks and levels in society brings about cultural diversity.[43]A question that arises, however, is this: if people interact and become more accepting of other cultures and beliefs, why does diversity persist? Why is there no convergence? A major challenge is how to model these diversities. Are there external factors, such as mass media or the locality of societies, that influence the evolution or persistence of cultural diversity?[citation needed]
Any study or model, when combined with experimentation, needs to be able to address the questions being asked.Computational social sciencedeals with large-scale data, and the challenge becomes more pronounced as the scale grows. How would one design informative simulations on a large scale? And even if a large-scale simulation is built, how should it be evaluated?
Another challenge is identifying the models that best fit the data, and managing the complexity of those models. Such models would help us predict how societies might evolve over time and provide possible explanations of how things work.[44]
Generative models help us perform extensive qualitative analysis in a controlled fashion. One model proposed by Epstein is agent-based simulation, which involves identifying an initial set of heterogeneous entities (agents) and observing their evolution and growth according to simple local rules.[45]
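As one illustration of this kind of generative, agent-based simulation, the sketch below implements a Schelling-style segregation model (a standard textbook example chosen here for concreteness, not a model named in the source): agents of two types follow a single local rule, relocating when fewer than half of their occupied neighbours share their type, and large homogeneous clusters emerge from purely local decisions.

```python
import random

SIZE, EMPTY_FRAC, THRESHOLD, STEPS = 20, 0.2, 0.5, 30
random.seed(1)

# 0 = empty cell, 1 and 2 are the two agent types.
grid = [[0 if random.random() < EMPTY_FRAC else random.choice((1, 2))
         for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(r, c):
    """An agent is unhappy if under THRESHOLD of its occupied neighbours share its type."""
    same = other = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            neighbour = grid[(r + dr) % SIZE][(c + dc) % SIZE]
            if neighbour == 0:
                continue
            if neighbour == grid[r][c]:
                same += 1
            else:
                other += 1
    return same + other > 0 and same / (same + other) < THRESHOLD

for _ in range(STEPS):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] and unhappy(r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] == 0]
    random.shuffle(movers)
    for r, c in movers:             # each unhappy agent moves to a random empty cell
        if not empties:
            break
        nr, nc = empties.pop(random.randrange(len(empties)))
        grid[nr][nc], grid[r][c] = grid[r][c], 0
        empties.append((r, c))

for row in grid:                    # '.' empty, 'A'/'B' the two agent types
    print("".join(".AB"[cell] for cell in row))
```

Despite each agent following the same one-line rule, the final grid typically shows large single-type clusters, illustrating how macro-level patterns can emerge without any global coordination.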
But what are these local rules? How does one identify them for a set of heterogeneous agents? Evaluating these rules and their impact poses a whole new set of difficulties.
Integrating simple models that perform well on individual tasks into a hybrid model is an approach worth exploring.[46]Such models can offer better performance and a better understanding of the data. However, the trade-off is that one needs to identify, and deeply understand, the interactions between the simple models in order to produce a single combined, well-performing model. Developing tools and applications to help analyse and visualize data based on these hybrid models is a further challenge.
Computational sociology can bring impacts to science, technology and society.[38]
For the study of computational sociology to be effective, there have to be valuable innovations. These innovations can take the form of new data analytics tools, better models, and better algorithms. The advent of such innovations would be a boon for the scientific community at large.[citation needed]
One of the major challenges of computational sociology is the modelling of social processes.[citation needed]Law and policy makers would be able to see efficient and effective paths to issuing new guidelines, and the general public would be able to evaluate and gain a fair understanding of the options presented to them, enabling an open and well-balanced decision process.[citation needed]
|
https://en.wikipedia.org/wiki/Computational_sociology
|
Criminal Reduction Utilising Statistical History(CRUSH) is an IBM predictive analytics system that attempts to predict the location of future crimes.[1]It was developed as part of the Blue CRUSH program in conjunction with the Memphis Police Department and the University of Memphis Criminology and Research department.[2]In Memphis it was "credited as a key factor behind a 31 per cent fall in crime and 15 per cent drop in violent crime."[3]
As of July 2010[update], it was being trialed by two British police forces.[1]
In 2014 a modified version of the system, called CRASH (Crash Reduction Analysing Statistical History) became operational inTennesseeaimed at preventing vehicle accidents.[4]
|
https://en.wikipedia.org/wiki/Criminal_Reduction_Utilising_Statistical_History
|
Decision managementrefers to the process of designing, building, and managing automated decision-making systems that support or replace human decision-making in organizations.[1]It integrates business rules, predictive analytics, and decision modeling to streamline and automate operational decisions.[1]These systems combine business rules and potentially machine learning to automate routine business decisions[1]and are typically embedded in business operations where large volumes of routine decisions are made, such as fraud detection, customer service routing, and claims processing.[1]
Decision management differs fromdecision support systemsin that its primary focus is on automatingoperationaldecisions, rather than solely providing information to assist human decision-makers. It incorporates technologies designed for real-time decision-making with minimal human intervention.[2]
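As a rough sketch of how business rules and a predictive model can be combined to automate an operational decision such as claims triage, consider the hypothetical Python function below. The field names, thresholds and the stubbed fraud score are invented for illustration and do not describe any particular product or the systems cited above.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float
    claimant_tenure_years: float
    prior_claims: int

def fraud_score(claim: Claim) -> float:
    """Stand-in for a trained predictive model returning a risk score in [0, 1]."""
    score = 0.1
    score += 0.3 if claim.amount > 10_000 else 0.0
    score += 0.2 if claim.prior_claims >= 3 else 0.0
    score += 0.2 if claim.claimant_tenure_years < 1 else 0.0
    return min(score, 1.0)

def decide(claim: Claim) -> str:
    """Operational decision: explicit business rules first, then the model's score."""
    if claim.amount <= 500:               # rule: trivial claims are auto-approved
        return "auto-approve"
    if fraud_score(claim) >= 0.5:         # analytics: high risk goes to investigators
        return "refer-to-investigation"
    return "auto-approve"

print(decide(Claim(amount=300, claimant_tenure_years=5, prior_claims=0)))
print(decide(Claim(amount=20_000, claimant_tenure_years=0.5, prior_claims=4)))
```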
The roots of decision management can be traced back to theexpert systemsand management science/operations researchpractices developed in the mid-20th century.[3]These early systems aimed to replicate human reasoning using predefined logic. As technology advanced, decision management evolved to incorporate data-driven analytics and visual analytics tools. For instance, the Decision Exploration Lab introduced visual analytics solutions to help understand and refine decision logic, streamlining business decision-making.[3]This historical context helps place current decision management strategies within their evolutionary framework.
A key distinction within decision management is its focus onoperational decisionsrather thanstrategic decisions.[4]Operational decisions are typically high-volume, repeatable, and highly structured, and are made frequently in the course of day-to-day operations.
Strategic decisions, in contrast, are generally unique, complex, less structured, and made less frequently by senior management. Decision management primarily targets the automation and improvement of high-volume operational decisions.[4]
Modern decision management systems integrate a combination of rule engines, data analytics, and increasingly, AI models.[5]These components help organizations formalize decision logic, improve the quality and speed of decisions, and enhance agility in response to changing business environments.
Key components include business rule engines for encoding explicit decision logic, predictive analytics and machine-learning models for data-driven scoring, and decision modeling standards such as DMN for documenting and managing the decision logic.
Artificial Intelligence(AI) is increasingly integrated into decision management, leading to "AI-enhanced hybrid decision management".[5]AI technologies, particularly machine learning, enhance decision-making by enabling systems to learn from vast amounts of data.[7]
Combining AI with established decision modeling standards like DMN facilitates the creation of more sophisticated, dynamic, and context-aware automated decision systems.[5]
Organizations adopt decision management to improve the consistency, quality and speed of routine decisions and to enhance agility in response to changing business environments.
Chief Information Officers (CIOs) often drive adoption to overcome challenges associated with outdated or hard-coded rule engines and to empower business users to manage their own decision logic.[8]
Decision management is applied across various industries to automate operational decisions, for example in fraud detection, customer service routing, and claims processing.[1][2]
Decision management systems frequently utilize aservice-oriented architecturewhere decision logic is encapsulated within distinct "decision services". This architectural pattern, often aligned with frameworks likeThe Decision Model,[6]advocates for decoupling the business decision logic from the core business processes and application code. This separation enhances maintainability, scalability, and the reusability of decision logic across different applications.[6]
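The decoupling described above can be sketched as follows, with all names invented for illustration: the decision logic sits behind a small "decision service", and the business-process code only calls that interface, so the rules can be changed or redeployed without touching the process code.

```python
class DiscountDecisionService:
    """Encapsulates the decision logic; only this class changes when rules change."""
    def decide(self, order_total: float, loyalty_years: int) -> float:
        if loyalty_years >= 5 and order_total > 100:
            return 0.10
        if order_total > 250:
            return 0.05
        return 0.0

def checkout(order_total: float, loyalty_years: int,
             decisions: DiscountDecisionService) -> float:
    """Business-process code: orchestrates the flow, delegates the decision."""
    discount = decisions.decide(order_total, loyalty_years)
    return order_total * (1 - discount)

print(checkout(300.0, 6, DiscountDecisionService()))  # 270.0
```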
|
https://en.wikipedia.org/wiki/Decision_management
|
Disease surveillanceis anepidemiologicalpractice by which the spread ofdiseaseis monitored in order to establish patterns of progression. The main role of disease surveillance is to predict, observe, and minimize the harm caused byoutbreak,epidemic, andpandemicsituations, as well as increase knowledge about which factors contribute to such circumstances. A key part of modern disease surveillance is the practice ofdisease case reporting.[1]
In modern times, the reporting of disease outbreaks has been transformed from manual record keeping to instant worldwide internet communication.
In the past, the number of cases could only be gathered from hospitals – which would be expected to see most of the occurrences – collated, and eventually made public. With the advent of moderncommunication technology, this has changed dramatically. Organizations like theWorld Health Organization(WHO) and theCenters for Disease Control and Prevention(CDC) now can report cases and deaths from significant diseases within days – sometimes within hours – of the occurrence. Further, there is considerable public pressure to make this information available quickly and accurately.[2][failed verification]
Formal reporting ofnotifiableinfectious diseases is a requirement placed upon health care providers by many regional and national governments, and upon national governments by the World Health Organization to monitor spread as a result of thetransmissionof infectious agents. Since 1969, WHO has required that all cases of the following diseases be reported to the organization:cholera,plague,yellow fever,smallpox,relapsing feverandtyphus. In 2005, the list was extended to includepolioandSARS. Regional and national governments typically monitor a larger set of (around 80 in the U.S.) communicable diseases that can potentially threaten the general population.Tuberculosis,HIV,botulism,hantavirus,anthrax, andrabiesare examples of such diseases. The incidence counts of diseases are often used ashealth indicatorsto describe the overall health of a population.[citation needed]
TheWorld Health Organization(WHO) is the lead agency for coordinating global response to major diseases. The WHO maintains Websites for a number of diseases and has active teams in many countries where these diseases occur.[3]
During the SARS outbreak in early 2004, for example, theBeijingstaff of the WHO produced updates every few days for the duration of the outbreak.[2]Beginning in January 2004, the WHO has produced similar updates forH5N1.[4]These results arewidely reportedand closely watched.[citation needed]
WHO's Epidemic and Pandemic Alert and Response (EPR) to detect, verify rapidly and respond appropriately to epidemic-prone and emerging disease threats covers the following diseases:[5]
As the lead organization in global public health, the WHO occupies a delicate role inglobal politics. It must maintain good relationships with each of the many countries in which it is active. As a result, it may only report results within a particular country with the agreement of the country's government. Because some governments regard the release ofanyinformation on disease outbreaks as a state secret, this can place the WHO in a difficult position.[citation needed]
The WHO coordinatedInternational Outbreak Alert and Responseis designed to ensure "outbreaks of potential international importance are rapidly verified and information is quickly shared within the Network" but not necessarily by the public; integrate and coordinate "activities to support national efforts" rather than challenge national authority within that nation in order to "respect the independence and objectivity of all partners". The commitment that "All Network responses will proceed with full respect for ethical standards, human rights, national and local laws, cultural sensitivities and tradition" ensures each nation that its security, financial, and other interests will be given full weight.[6]
Testing for a disease can be expensive, and distinguishing between two diseases can be prohibitively difficult in many countries. One standard means of determining if a person has had a particular disease is to test for the presence ofantibodiesthat are particular to this disease. In the case of H5N1, for example, there is a low pathogenic H5N1 strain in wild birds in North America that a human could conceivably have antibodies against. It would be extremely difficult to distinguish between antibodies produced by this strain, and antibodies produced byAsian lineage HPAI A(H5N1). Similar difficulties are common, and make it difficult to determine how widely a disease may have spread.[citation needed]
There is currently little available data on the spread of H5N1 in wild birds in Africa and Asia. Without such data, predicting how the disease might spread in the future is difficult. Information that scientists and decision makers need to make useful medical products and informed decisions for health care, but currently lack include:[citation needed]
Surveillance ofH5N1in humans, poultry, wild birds, cats and other animals remains very weak in many parts of Asia and Africa. Much remains unknown about the exact extent of its spread.[citation needed]
H5N1 in China is less than fully reported. Blogs have described many discrepancies between official China government announcements concerning H5N1 and what people in China see with their own eyes. Many reports of total H5N1 cases have excluded China due to widespread disbelief in China's official numbers.[7][8][9][10](SeeDisease surveillance in China.)
"Only half the world's human bird flu cases are being reported to the World Health Organization within two weeks of being detected, a response time that must be improved to avert a pandemic, a senior WHO official said Saturday.Shigeru Omi, WHO's regional director for the Western Pacific, said it is estimated that countries would have only two to three weeks to stamp out, or at least slow, a pandemic flu strain after it began spreading in humans."[11]
David Nabarro, chief avian flu coordinator for theUnited Nations, says avian flu has too many unanswered questions.[12][13]
CIDRAP reported on 25 August 2006 on a new US government Website[14]that allows the public to view current information about testing of wild birds for H5N1 avian influenza, which is part of a national wild-bird surveillance plan that "includes five strategies for early detection of highly pathogenic avian influenza. Sample numbers from three of these will be available onHEDDS: live wild birds, subsistence hunter-killed birds, and investigations of sick and dead wild birds. The other two strategies involve domestic bird testing and environmental sampling of water and wild-bird droppings. [...] A map on the newUSGSsite shows that 9,327 birds from Alaska have been tested so far this year, with only a few from most other states. Last year, officials tested just 721 birds from Alaska and none from most other states, another map shows. The goal of the surveillance program for 2006 is to collect 75,000 to 100,000 samples from wild birds and 50,000 environmental samples, officials have said".[15]
|
https://en.wikipedia.org/wiki/Disease_surveillance
|
Learning analyticsis the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs.[1]The growth ofonline learningsince the 1990s, particularly inhigher education, has contributed to the advancement of Learning Analytics as student data can be captured and made available for analysis.[2][3][4]When learners use anLMS,social media, or similar online tools, their clicks, navigation patterns, time on task,social networks,information flow, and concept development through discussions can be tracked. The rapid development ofmassive open online courses(MOOCs) offers additional data for researchers to evaluate teaching and learning in online environments.[5]
Although a majority of Learning Analytics literature has started to adopt the aforementioned definition, the definition and aims of Learning Analytics are still contested.
One earlier definition discussed by the community suggested that Learning Analytics is the use of intelligent data, learner-produced data, and analysis models to discover information and social connections for predicting and advising people's learning.[6]But this definition has been criticised byGeorge Siemens[7][non-primary source needed]andMike Sharkey.[8][non-primary source needed]
Dr. Wolfgang GrellerandDr. Hendrik Drachslerdefined learning analytics holistically as a framework. They proposed that it is a generic design framework that can act as a useful guide for setting up analytics services in support of educational practice and learner guidance, in quality assurance, curriculum development, and in improving teacher effectiveness and efficiency. It uses ageneral morphological analysis(GMA) to divide the domain into six "critical dimensions".[9]
The broader term "Analytics" has been defined as the science of examining data to draw conclusions and, when used indecision-making, to present paths or courses of action.[10]From this perspective, Learning Analytics has been defined as a particular case ofAnalytics, in whichdecision-makingaims to improve learning and education.[11]During the 2010s, this definition of analytics has gone further to incorporate elements ofoperations researchsuch asdecision treesandstrategy mapsto establishpredictive modelsand to determine probabilities for certain courses of action.[10]
Another approach for defining Learning Analytics is based on the concept ofAnalyticsinterpreted as theprocessof developing actionable insights through problem definition and the application ofstatistical modelsand analysis against existing and/or simulated future data.[12][13]From this point of view, Learning Analytics emerges as a type ofAnalytics(as aprocess), in which the data, the problem definition and the insights are learning-related.
In 2016, a study jointly conducted by the New Media Consortium (NMC) and the EDUCAUSE Learning Initiative (ELI) – anEDUCAUSEprogram – described six areas of emerging technology that would have a significant impact onhigher educationand creative expression by the end of 2020. As a result of this research, Learning Analytics was defined as an educational application ofweb analyticsaimed at learner profiling, a process of gathering and analyzing details of individual student interactions inonline learningactivities.[14]
In 2017,Gašević,Коvanović, andJoksimovićproposed a consolidated model of learning analytics.[15]The model posits that learning analytics is defined at the intersection of three disciplines: data science, theory, and design. Data science offers computational methods and techniques for data collection, pre-processing, analysis, and presentation. Theory is typically drawn from the literature in the learning sciences, education, psychology, sociology, and philosophy. The design dimension of the model includes: learning design, interaction design, and study design.
In 2015,Gašević,Dawson, andSiemensargued that computational aspects of learning analytics need to be linked with the existing educational research in order for Learning Analytics to deliver its promise to understand and optimize learning.[16]
Differentiating the fields ofeducational data mining(EDM) and learning analytics (LA) has been a concern of several researchers.George Siemenstakes the position that educational data mining encompasses both learning analytics andacademic analytics,[17]the latter of which is aimed at governments, funding agencies, and administrators instead of learners and faculty. Baepler and Murdoch defineacademic analyticsas an area that "...combines select institutional data, statistical analysis, and predictive modeling to create intelligence upon which learners, instructors, or administrators can change academic behavior".[18]They go on to attempt to disambiguate educational data mining from academic analytics based on whether the process is hypothesis driven or not, though Brooks[19]questions whether this distinction exists in the literature. Brooks[19]instead proposes that a better distinction between the EDM and LA communities lies in the roots from which each community originated, with authorship in the EDM community being dominated by researchers coming from intelligent tutoring paradigms, and learning analytics researchers being more focused on enterprise learning systems (e.g. learning content management systems).
Regardless of the differences between the LA and EDM communities, the two areas have significant overlap both in the objectives of investigators as well as in the methods and techniques that are used in the investigation. In theMSprogram offering in learning analytics atTeachers College, Columbia University, students are taught both EDM and LA methods.[20]
Learning Analytics, as a field, has multiple disciplinary roots. While the fields ofartificial intelligence (AI),statistical analysis,machine learning, andbusiness intelligenceoffer an additional narrative, the main historical roots of analytics are the ones directly related tohuman interactionand theeducation system.[5]More in particular, the history of Learning Analytics is tightly linked to the development of fourSocial Sciences' fields that have converged throughout time. These fields pursued, and still do, four goals:
A diversity of disciplines and research activities have influenced these four aspects over the past decades, contributing to the gradual development of learning analytics. Some of the most influential disciplines areSocial Network Analysis,User Modelling,Cognitive modelling,Data MiningandE-Learning. The history of Learning Analytics can be understood through the rise and development of these fields.[5]
Social network analysis(SNA) is the process of investigating social structures through the use ofnetworksandgraph theory.[21]It characterizes networked structures in terms ofnodes(individual actors, people, or things within the network) and theties,edges, orlinks(relationships or interactions) that connect them.[citation needed]Social network analysisis prominent inSociology, and its development has had a key role in the emergence of Learning Analytics.
One of the first examples or attempts to provide a deeper understanding of interactions is by Austrian-American SociologistPaul Lazarsfeld. In 1944, Lazarsfeld made the statement of "who talks to whom about what and to what effect".[22]That statement forms what today is still the area of interest or the target within social network analysis, which tries to understand how people are connected and what insights can be derived as a result of their interactions, a core idea of Learning Analytics.[5]
Citation analysis
American linguistEugene Garfieldwas an early pioneer in analytics in science. In 1955, Garfield led the first attempt to analyse the structure of science regarding how developments in science can be better understood by tracking the associations (citations) between articles (how they reference one another, the importance of the resources that they include, citation frequency, etc). Through tracking citations, scientists can observe how research is disseminated and validated. This was the basic idea of what eventually became a "page rank", which in the early days ofGoogle(beginning of the 21st century) was one of the key ways of understanding the structure of a field by looking at page connections and the importance of those connections. The algorithmPageRank-the first search algorithm used by Google- was based on this principle.[23][24]Americancomputer scientistLarry Page, Google's co-founder, defined PageRank as "an approximation of the importance" of a particular resource.[25]Educationally, citation orlink analysisis important for mappingknowledge domains.[5]
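A compact sketch of the idea behind PageRank, applied to a toy citation graph (the papers and links are invented): each node's importance is repeatedly redistributed along its outgoing links until the scores stabilise. This simplified version omits the handling of nodes with no outgoing links, which all the toy nodes here happen to have.

```python
# Toy citation graph: each paper lists the papers it cites.
cites = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    rank = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
        for node, targets in graph.items():
            share = rank[node] / len(targets) if targets else 0
            for target in targets:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

for paper, score in sorted(pagerank(cites).items(), key=lambda kv: -kv[1]):
    print(paper, round(score, 3))
```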
The essential idea behind these attempts is the realization that, as data increases, individuals, researchers or business analysts need to understand how to track the underlying patterns behind the data and how to gain insight from them. And this is also a core idea in Learning Analytics.[5]
Digitalization of Social network analysis
During the early 1970s,pushed by the rapid evolution in technology,Social network analysistransitioned into analysis of networks in digital settings.[5]
During the first decade of the century, ProfessorCaroline Haythornthwaiteexplored the impact ofmedia typeon the development ofsocial ties, observing thathuman interactionscan be analyzed to gain novel insight not fromstrong interactions(i.e. people that are strongly related to the subject) but, rather, fromweak ties. This provides Learning Analytics with a central idea: apparently unrelated data may hide crucial information. As an example of this phenomenon, an individual looking for a job will have a better chance of finding new information through weak connections than through strong ones.[31]
Her research also focused on the way that differenttypes of mediacan impact theformation of networks. Her work highly contributed to the development ofsocial network analysisas a field. Important ideas were inherited by Learning Analytics, such that a range of metrics and approaches can define the importance of a particular node, the value ofinformation exchange, the way that clusters are connected to one another, structural gaps that might exist within those networks, etc.[5]
The application of social network analysis in digital learning settings has been pioneered by ProfessorShane P. Dawson. He has developed a number of software tools, such as Social Networks Adapting Pedagogical Practice (SNAPP) for evaluating the networks that form in [learning management systems] when students engage in forum discussions.[32]
The main goal ofuser modellingis the customization andadaptation of systemsto the user's specific needs, especially in theirinteraction with computing systems. The importance of computers being able to respond to people as individuals began to be understood in the 1970s. DrElaine Richpredicted in 1979 that "computers are going to treat their users as individuals with distinct personalities, goals, and so forth".[33]This is a central idea not only educationally but also in general web use, in whichpersonalizationis an important goal.[5]
User modellinghas become important in research inhuman-computer interactionsas it helps researchers to design better systems by understanding how users interact with software.[34]Recognizing unique traits, goals, and motivations of individuals remains an important activity in learning analytics.[5]
Personalization andadaptation of learningcontent is an important present and future direction oflearning sciences, and its history within education has contributed to the development of learning analytics.[5]Hypermediais a nonlinear medium of information that includes graphics, audio, video, plain text andhyperlinks. The term was first used in a 1965 article written by American sociologistTed Nelson.[35]Adaptive hypermediabuilds onuser modellingby increasing the personalization of content and interaction. In particular, adaptive hypermedia systems build a model of the goals, preferences and knowledge of each user in order to adapt to that user's needs. From the end of the 20th century onwards, the field grew rapidly, mainly because theinternetboosted research into adaptivity and because of the accumulation and consolidation of research experience in the field. In turn, Learning Analytics has been influenced by this strong development.[36]
Education/cognitive modellinghas been applied to tracing how learners develop knowledge, and computers have been used in education as learning tools for decades. In 1989,Hugh Burnsargued for the adoption and development ofintelligent tutor systemsthat would ultimately pass three levels of "intelligence":domain knowledge, learner knowledge evaluation, andpedagogicalintervention. During the 21st century, these three levels have remained relevant for researchers and educators.[37]
In the 1990s, academic activity around cognitive models focused on attempting to develop systems that possess a computational model capable of solving the problems given to students in the ways students are expected to solve them.[38]Cognitive modelling has contributed to the rise in popularity of intelligent orcognitive tutors. Once cognitive processes can be modelled, software (tutors) can be developed to support learners in the learning process. The research base in this field became significantly relevant for learning analytics during the 21st century.[5][39][40]
While big data analytics has been more and more widely applied in education, Wise and Shaffer[41]addressed the importance of a theory-based approach in the analysis. Epistemic Frame Theory conceptualizes the "ways of thinking, acting, and being in the world" in a collaborative learning environment. Specifically, the framework is based on the context of aCommunity of Practice(CoP), a group of learners with common goals, standards and prior knowledge and skills who work to solve a complex problem. Because of this, it is important to study the connections between elements (learners, knowledge, concepts, skills and so on). To identify these connections, the co-occurrences of elements in learners' data are identified and analyzed.
Shaffer and Ruis[42]pointed out the concept of closing the interpretive loop by emphasizing the transparency and validation of the model, the interpretation, and the original data. The loop can be closed by theoretically sound analytics approaches such asEpistemic Network Analysis.
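A very reduced sketch of the co-occurrence step described above: coded elements that appear together in the same learner utterance are counted as connected, giving a weighted network for that learner. The codes and utterances below are invented for illustration; real epistemic network analysis adds normalisation, comparison and statistical testing on top of this counting core.

```python
from collections import Counter
from itertools import combinations

# Hypothetical coded utterances from one learner's discussion posts:
# each utterance is the set of knowledge/skill codes identified in it.
coded_utterances = [
    {"data", "modelling"},
    {"modelling", "justification"},
    {"data", "modelling", "justification"},
    {"collaboration"},
]

cooccurrence = Counter()
for codes in coded_utterances:
    for a, b in combinations(sorted(codes), 2):   # every pair co-occurring in an utterance
        cooccurrence[(a, b)] += 1

for (a, b), weight in cooccurrence.most_common():
    print(f"{a} -- {b}: {weight}")
```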
In a discussion of the history of analytics,Adam Cooperhighlights a number of communities from which learning analytics has drawn techniques, mainly during the first decades of the 21st century, including:[43]
The first graduate program focused specifically on learning analytics was created byRyan S. Bakerand launched in the Fall 2015 semester atTeachers College,Columbia University. The program description states that
"(...)data about learning and learners are being generated today on an unprecedented scale. The fields of learning analytics (LA) andeducational data mining(EDM) have emerged with the aim of transforming this data into new insights that can benefit students, teachers, and administrators. As one of world's leading teaching and research institutions in education, psychology, and health, we are proud to offer an innovative graduate curriculum dedicated to improving education through technology anddata analysis."[44]
Masters programs are now offered at several other universities as well, including the University of Texas at Arlington, the University of Wisconsin, and the University of Pennsylvania.
Methods for learning analytics include:
Learning analytics can be, and has been, applied in a wide range of contexts.
Analytics have been used for:
There is broad awareness of analytics across educational institutions and their various stakeholders,[10]but the way learning analytics is defined and implemented may vary, including:[13]
Some motivations and implementations of analytics may come into conflict with others, for example highlighting potential conflict between analytics for individual learners and organisational stakeholders.[13]
Much of the software that is currently used for learning analytics duplicates functionality of web analytics software, but applies it to learner interactions with content. Social network analysis tools are commonly used to map social connections and discussions. Some examples of learning analytics software tools include:
The ethics of data collection, analytics, reporting and accountability has been raised as a potential concern for learning analytics,[9][57][58]with concerns raised regarding:
As Kay, Kom and Oppenheim point out, the range of data is wide, potentially derived from:[60]
Thus the legal and ethical situation is challenging and different from country to country, raising implications for:[60]
In some prominent cases, such as the inBloom disaster,[61]even fully functional systems have been shut down due to a lack of trust in the data collection by governments, stakeholders and civil rights groups. Since then, the learning analytics community has extensively studied legal conditions in a series of expert workshops on "Ethics & Privacy 4 Learning Analytics" that underpin the use of trusted learning analytics.[62][non-primary source needed]Drachsler and Greller released an eight-point checklist named DELICATE, based on the intensive studies in this area, to demystify the ethics and privacy discussions around learning analytics.[63]
It shows ways to design and provide privacy conform learning analytics that can benefit all stakeholders. The full DELICATE checklist is publicly available.[64]
Privacy management practices of students have shown discrepancies between one's privacy beliefs and one's privacy related actions.[65]Learning analytic systems can have default settings that allow data collection of students if they do not choose to opt-out.[65]Some online education systems such asedXorCourserado not offer a choice to opt-out of data collection.[65]In order for certain learning analytics to function properly, these systems utilize cookies to collect data.[65]
In 2012, a systematic overview of learning analytics and its key concepts was provided by ProfessorMohamed Chattiand colleagues through a reference model based on four dimensions, namely data and environments (what?), stakeholders (who?), objectives (why?), and methods (how?).
Chatti, Muslim and Schroeder[68]note that the aim of open learning analytics (OLA) is to improve learning effectiveness in lifelong learning environments. The authors refer to OLA as an ongoing analytics process that encompasses diversity at all four dimensions of the learning analytics reference model.[66]
For general audience introductions, see:
|
https://en.wikipedia.org/wiki/Learning_analytics
|
Statistical inferenceis the process of usingdata analysisto infer properties of an underlyingprobability distribution.[1]Inferential statistical analysisinfers properties of apopulation, for example bytesting hypothesesand deriving estimates. It is assumed that the observed data set issampledfrom a larger population.
Inferential statisticscan be contrasted withdescriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population. Inmachine learning, the terminferenceis sometimes used instead to mean "make a prediction, by evaluating an already trained model";[2]in this context inferring properties of the model is referred to astrainingorlearning(rather thaninference), and using a model for prediction is referred to asinference(instead ofprediction); see alsopredictive inference.
Statistical inference makes propositions about a population, using data drawn from the population with some form ofsampling. Given a hypothesis about a population, for which we wish to draw inferences, statistical inference consists of (first)selectingastatistical modelof the process that generates the data and (second) deducing propositions from the model.[3]
Konishi and Kitagawa state "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling".[4]Relatedly,Sir David Coxhas said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".[5]
Theconclusionof a statistical inference is a statisticalproposition.[6]Some common forms of statistical proposition are a point estimate, an interval estimate (for example, a confidence interval), a credible interval, rejection of a hypothesis, and the clustering or classification of data points into groups.
Any statistical inference requires some assumptions. Astatistical modelis a set of assumptions concerning the generation of the observed data and similar data. Descriptions of statistical models usually emphasize the role of population quantities of interest, about which we wish to draw inference.[7]Descriptive statisticsare typically used as a preliminary step before more formal inferences are drawn.[8]
Statisticians distinguish between three levels of modeling assumptions: fully parametric, semi-parametric, and non-parametric.
Whatever level of assumption is made, correctly calibrated inference, in general, requires these assumptions to be correct; i.e. that the data-generating mechanisms really have been correctly specified.
Incorrect assumptions of'simple' random samplingcan invalidate statistical inference.[10]More complex semi- and fully parametric assumptions are also cause for concern. For example, incorrectly assuming the Cox model can in some cases lead to faulty conclusions.[11]Incorrect assumptions of Normality in the population also invalidates some forms of regression-based inference.[12]The use ofanyparametric model is viewed skeptically by most experts in sampling human populations: "most sampling statisticians, when they deal with confidence intervals at all, limit themselves to statements about [estimators] based on very large samples, where the central limit theorem ensures that these [estimators] will have distributions that are nearly normal."[13]In particular, a normal distribution "would be a totally unrealistic and catastrophically unwise assumption to make if we were dealing with any kind of economic population."[13]Here, the central limit theorem states that the distribution of the sample mean "for very large samples" is approximately normally distributed, if the distribution is not heavy-tailed.
Given the difficulty in specifying exact distributions of sample statistics, many methods have been developed for approximating these.
With finite samples,approximation resultsmeasure how close a limiting distribution approaches the statistic'ssample distribution: For example, with 10,000 independent samples thenormal distributionapproximates (to two digits of accuracy) the distribution of thesample meanfor many population distributions, by theBerry–Esseen theorem.[14]Yet for many practical purposes, the normal approximation provides a good approximation to the sample-mean's distribution when there are 10 (or more) independent samples, according to simulation studies and statisticians' experience.[14]Following Kolmogorov's work in the 1950s, advanced statistics usesapproximation theoryandfunctional analysisto quantify the error of approximation. In this approach, themetric geometryofprobability distributionsis studied; this approach quantifies approximation error with, for example, theKullback–Leibler divergence,Bregman divergence, and theHellinger distance.[15][16][17]
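The practical point about the normal approximation to the sample mean can be checked by simulation. The sketch below draws many samples from a skewed (exponential) population and compares the spread of their means with the normal-theory prediction sigma/sqrt(n); the sample sizes and population are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
population_sd = 1.0                      # exponential(1) has mean = sd = 1

for n in (10, 100, 1000):
    # 20,000 simulated samples of size n, reduced to their sample means.
    means = rng.exponential(scale=1.0, size=(20_000, n)).mean(axis=1)
    print(f"n={n:5d}  sd of sample means={means.std():.4f}  "
          f"normal theory sigma/sqrt(n)={population_sd / np.sqrt(n):.4f}")
```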
With indefinitely large samples,limiting resultslike thecentral limit theoremdescribe the sample statistic's limiting distribution if one exists. Limiting results are not statements about finite samples, and indeed are irrelevant to finite samples.[18][19][20]However, the asymptotic theory of limiting distributions is often invoked for work with finite samples. For example, limiting results are often invoked to justify thegeneralized method of momentsand the use ofgeneralized estimating equations, which are popular ineconometricsandbiostatistics. The magnitude of the difference between the limiting distribution and the true distribution (formally, the 'error' of the approximation) can be assessed using simulation.[21]The heuristic application of limiting results to finite samples is common practice in many applications, especially with low-dimensionalmodelswithlog-concavelikelihoods(such as with one-parameterexponential families).
For a given dataset that was produced by a randomization design, the randomization distribution of a statistic (under the null-hypothesis) is defined by evaluating the test statistic for all of the plans that could have been generated by the randomization design. In frequentist inference, the randomization allows inferences to be based on the randomization distribution rather than a subjective model, and this is important especially in survey sampling and design of experiments.[22][23]Statistical inference from randomized studies is also more straightforward than many other situations.[24][25][26]InBayesian inference, randomization is also of importance: insurvey sampling, use ofsampling without replacementensures theexchangeabilityof the sample with the population; in randomized experiments, randomization warrants amissing at randomassumption forcovariateinformation.[27]
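For a two-group randomized experiment, the randomization distribution can be approximated directly by re-randomizing the observed responses many times rather than enumerating every possible assignment, as in the sketch below; the outcome values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
treatment = np.array([7.1, 6.8, 8.2, 7.9, 7.5])   # hypothetical outcomes
control = np.array([6.5, 6.9, 6.2, 7.0, 6.4])

observed = treatment.mean() - control.mean()
pooled = np.concatenate([treatment, control])

diffs = []
for _ in range(10_000):                  # re-run the random assignment
    rng.shuffle(pooled)
    diffs.append(pooled[:5].mean() - pooled[5:].mean())

# Two-sided p-value: how often a re-randomization gives a difference at least as extreme.
p_value = np.mean(np.abs(diffs) >= abs(observed))
print(f"observed difference = {observed:.3f}, permutation p-value = {p_value:.4f}")
```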
Objective randomization allows properly inductive procedures.[28][29][30][31][32]Many statisticians prefer randomization-based analysis of data that was generated by well-defined randomization procedures.[33](However, it is true that in fields of science with developed theoretical knowledge and experimental control, randomized experiments may increase the costs of experimentation without improving the quality of inferences.[34][35]) Similarly, results fromrandomized experimentsare recommended by leading statistical authorities as allowing inferences with greater reliability than do observational studies of the same phenomena.[36]However, a good observational study may be better than a bad randomized experiment.
The statistical analysis of a randomized experiment may be based on the randomization scheme stated in the experimental protocol and does not need a subjective model.[37][38]
However, at any time, some hypotheses cannot be tested using objective statistical models, which accurately describe randomized experiments or random samples. In some cases, such randomized studies are uneconomical or unethical.
It is standard practice to refer to a statistical model, e.g., a linear or logistic model, when analyzing data from randomized experiments.[39]However, the randomization scheme guides the choice of a statistical model. It is not possible to choose an appropriate model without knowing the randomization scheme.[23]Seriously misleading results can be obtained by analyzing data from randomized experiments while ignoring the experimental protocol; common mistakes include forgetting the blocking used in an experiment and confusing repeated measurements on the same experimental unit with independent replicates of the treatment applied to different experimental units.[40]
Model-free techniques provide a complement to model-based methods, which employ reductionist strategies of reality-simplification. The former combine, evolve, ensemble and train algorithms dynamically adapting to the contextual affinities of a process and learning the intrinsic characteristics of the observations.[41][42]
For example, model-free simple linear regression is based either on a random design, in which the pairs of observations are independent and identically distributed, or on a deterministic design, in which the values of the explanatory variable are fixed but the corresponding responses remain independent random variables.
In either case, the model-free randomization inference for features of the common conditional distribution D_x(·) relies on some regularity conditions, e.g. functional smoothness. For instance, model-free randomization inference for the population featureconditional mean, μ(x) = E(Y | X = x), can be consistently estimated via local averaging or local polynomial fitting, under the assumption that μ(x) is smooth. Also, relying on asymptotic normality or resampling, we can construct confidence intervals for the population feature, in this case, theconditional mean, μ(x).[43]
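A minimal sketch of local-averaging estimation of the conditional mean μ(x) = E(Y | X = x): the estimate at a point is a kernel-weighted average of the observed responses near that point. The simulated data, the Gaussian kernel and the bandwidth are illustrative choices, not prescriptions from the source.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=500)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)   # true mu(x) = sin(x)

def local_average(x0, x, y, bandwidth=0.5):
    """Nadaraya-Watson estimate of E(Y | X = x0) with a Gaussian kernel."""
    weights = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    return np.sum(weights * y) / np.sum(weights)

for x0 in (1.0, 3.0, 5.0):
    print(f"x0={x0:.1f}  estimate={local_average(x0, x, y):.3f}  true={np.sin(x0):.3f}")
```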
Different schools of statistical inference have become established. These schools—or "paradigms"—are not mutually exclusive, and methods that work well under one paradigm often have attractive interpretations under other paradigms.
Bandyopadhyay and Forster describe four paradigms: The classical (orfrequentist) paradigm, theBayesianparadigm, thelikelihoodistparadigm, and theAkaikean-Information Criterion-based paradigm.[44]
This paradigm calibrates the plausibility of propositions by considering (notional) repeated sampling of a population distribution to produce datasets similar to the one at hand. By considering the dataset's characteristics under repeated sampling, the frequentist properties of a statistical proposition can be quantified—although in practice this quantification may be challenging.
One interpretation offrequentist inference(or classical inference) is that it is applicable only in terms offrequency probability; that is, in terms of repeated sampling from a population. However, the approach of Neyman[45]develops these procedures in terms of pre-experiment probabilities. That is, before undertaking an experiment, one decides on a rule for coming to a conclusion such that the probability of being correct is controlled in a suitable way: such a probability need not have a frequentist or repeated sampling interpretation. In contrast, Bayesian inference works in terms of conditional probabilities (i.e. probabilities conditional on the observed data), compared to the marginal (but conditioned on unknown parameters) probabilities used in the frequentist approach.
The frequentist procedures of significance testing and confidence intervals can be constructed without regard toutility functions. However, some elements of frequentist statistics, such asstatistical decision theory, do incorporateutility functions.[citation needed]In particular, frequentist developments of optimal inference (such asminimum-variance unbiased estimators, oruniformly most powerful testing) make use ofloss functions, which play the role of (negative) utility functions. Loss functions need not be explicitly stated for statistical theorists to prove that a statistical procedure has an optimality property.[46]However, loss-functions are often useful for stating optimality properties: for example, median-unbiased estimators are optimal underabsolute valueloss functions, in that they minimize expected loss, andleast squaresestimators are optimal under squared error loss functions, in that they minimize expected loss.
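A loose numerical illustration of the link between loss functions and familiar estimators: for a fixed sample, the constant that minimizes total squared error is the sample mean, and the constant that minimizes total absolute error is the sample median. The data are arbitrary, and this is an analogy to, not a proof of, the optimality statements above.

```python
import numpy as np

data = np.array([1.0, 2.0, 2.5, 4.0, 10.0])
candidates = np.linspace(0, 12, 2401)          # grid of candidate constants

squared_loss = [(np.sum((data - c) ** 2), c) for c in candidates]
absolute_loss = [(np.sum(np.abs(data - c)), c) for c in candidates]

print(f"argmin of squared loss : {min(squared_loss)[1]:.3f}   sample mean  : {data.mean():.3f}")
print(f"argmin of absolute loss: {min(absolute_loss)[1]:.3f}   sample median: {np.median(data):.3f}")
```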
While statisticians using frequentist inference must choose for themselves the parameters of interest, and theestimators/test statisticto be used, the absence of obviously explicit utilities and prior distributions has helped frequentist procedures to become widely viewed as 'objective'.[47]
The Bayesian calculus describes degrees of belief using the 'language' of probability; beliefs are positive, integrate to one, and obey the probability axioms. Bayesian inference uses the available posterior beliefs as the basis for making statistical propositions.[48]There areseveral different justificationsfor using the Bayesian approach.
Many informal Bayesian inferences are based on "intuitively reasonable" summaries of the posterior. For example, the posterior mean, median and mode, highest posterior density intervals, and Bayes Factors can all be motivated in this way. While a user'sutility functionneed not be stated for this sort of inference, these summaries do all depend (to some extent) on stated prior beliefs, and are generally viewed as subjective conclusions. (Methods of prior construction which do not require external input have beenproposedbut not yet fully developed.)
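A small worked example of such posterior summaries, assuming a Beta(1, 1) prior on a success probability and 7 successes in 20 hypothetical trials: the posterior is Beta(8, 14) by standard conjugacy, and the sketch summarizes it by Monte Carlo. The prior and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
successes, failures = 7, 13                  # hypothetical data: 7 of 20 trials succeed
posterior = rng.beta(1 + successes, 1 + failures, size=200_000)  # draws from Beta(8, 14)

print(f"posterior mean   : {posterior.mean():.3f}")      # analytically 8/22, about 0.364
print(f"posterior median : {np.median(posterior):.3f}")
print(f"95% credible interval: "
      f"({np.percentile(posterior, 2.5):.3f}, {np.percentile(posterior, 97.5):.3f})")
```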
Formally, Bayesian inference is calibrated with reference to an explicitly stated utility, or loss function; the 'Bayes rule' is the one which maximizes expected utility, averaged over the posterior uncertainty. Formal Bayesian inference therefore automatically providesoptimal decisionsin adecision theoreticsense. Given assumptions, data and utility, Bayesian inference can be made for essentially any problem, although not every statistical inference need have a Bayesian interpretation. Analyses which are not formally Bayesian can be (logically)incoherent; a feature of Bayesian procedures which use proper priors (i.e. those integrable to one) is that they are guaranteed to becoherent. Some advocates ofBayesian inferenceassert that inferencemusttake place in this decision-theoretic framework, and thatBayesian inferenceshould not conclude with the evaluation and summarization of posterior beliefs.
Likelihood-based inference is a paradigm used to estimate the parameters of a statistical model based on observed data.Likelihoodismapproaches statistics by using thelikelihood function, denoted as L(x | θ), which quantifies the probability of observing the given data x under a specific set of parameter values θ. In likelihood-based inference, the goal is to find the set of parameter values that maximizes the likelihood function, or equivalently, maximizes the probability of observing the given data.
The process of likelihood-based inference usually involves specifying a statistical model for the data, writing down the likelihood of the observed data as a function of the unknown parameters, and maximizing that function (often via its logarithm) to obtain the parameter estimates.
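A minimal numerical illustration of the maximization step, using hypothetical coin-flip data: the log-likelihood is evaluated over a grid of parameter values and the maximizer is reported, which for Bernoulli data coincides with the sample proportion.

```python
import numpy as np

flips = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])   # hypothetical Bernoulli data

thetas = np.linspace(0.001, 0.999, 999)
# Bernoulli log-likelihood: k*log(theta) + (n - k)*log(1 - theta)
log_likelihood = (flips.sum() * np.log(thetas)
                  + (flips.size - flips.sum()) * np.log(1 - thetas))

theta_hat = thetas[np.argmax(log_likelihood)]
print(f"MLE over grid: {theta_hat:.3f}   sample proportion: {flips.mean():.3f}")
```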
TheAkaike information criterion(AIC) is anestimatorof the relative quality ofstatistical modelsfor a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means formodel selection.
AIC is founded oninformation theory: it offers an estimate of the relative information lost when a given model is used to represent the process that generated the data. (In doing so, it deals with the trade-off between thegoodness of fitof the model and the simplicity of the model.)
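A brief sketch of how AIC trades goodness of fit against model complexity, comparing a one-parameter and a two-parameter Gaussian model on the same simulated data; AIC = 2k - 2 ln L, and the lower value is preferred. The data and models are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(loc=2.0, scale=1.5, size=200)

def gaussian_log_likelihood(data, mean, sd):
    return np.sum(-0.5 * np.log(2 * np.pi * sd**2) - (data - mean) ** 2 / (2 * sd**2))

# Model 1: mean fixed at 0, only the standard deviation estimated (k = 1).
sd1 = np.sqrt(np.mean(data**2))
aic1 = 2 * 1 - 2 * gaussian_log_likelihood(data, 0.0, sd1)

# Model 2: both mean and standard deviation estimated (k = 2).
mean2, sd2 = data.mean(), data.std()
aic2 = 2 * 2 - 2 * gaussian_log_likelihood(data, mean2, sd2)

print(f"AIC (fixed mean 0): {aic1:.1f}")
print(f"AIC (free mean)   : {aic2:.1f}   <- lower, so preferred here")
```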
The minimum description length (MDL) principle has been developed from ideas ininformation theory[49]and the theory ofKolmogorov complexity.[50]The (MDL) principle selects statistical models that maximally compress the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" orprobability modelsfor the data, as might be done in frequentist or Bayesian approaches.
However, if a "data generating mechanism" does exist in reality, then according toShannon'ssource coding theoremit provides the MDL description of the data, on average and asymptotically.[51]In minimizing description length (or descriptive complexity), MDL estimation is similar tomaximum likelihood estimationandmaximum a posteriori estimation(usingmaximum-entropyBayesian priors). However, MDL avoids assuming that the underlying probability model is known; the MDL principle can also be applied without assumptions that e.g. the data arose from independent sampling.[51][52]
The MDL principle has been applied in communication-coding theoryininformation theory, inlinear regression,[52]and indata mining.[50]
The evaluation of MDL-based inferential procedures often uses techniques or criteria fromcomputational complexity theory.[53]
Fiducial inferencewas an approach to statistical inference based onfiducial probability, also known as a "fiducial distribution". In subsequent work, this approach has been called ill-defined, extremely limited in applicability, and even fallacious.[54][55]However this argument is the same as that which shows[56]that a so-calledconfidence distributionis not a validprobability distributionand, since this has not invalidated the application ofconfidence intervals, it does not necessarily invalidate conclusions drawn from fiducial arguments. An attempt was made to reinterpret the early work of Fisher'sfiducial argumentas a special case of an inference theory usingupper and lower probabilities.[57]
Developing ideas of Fisher and of Pitman from 1938 to 1939,[58]George A. Barnarddeveloped "structural inference" or "pivotal inference",[59]an approach usinginvariant probabilitiesongroup families. Barnard reformulated the arguments behind fiducial inference on a restricted class of models on which "fiducial" procedures would be well-defined and useful.Donald A. S. Fraserdeveloped a general theory for structural inference[60]based ongroup theoryand applied this to linear models.[61]The theory formulated by Fraser has close links to decision theory and Bayesian statistics and can provide optimal frequentist decision rules if they exist.[62]
The topics below are usually included in the area ofstatistical inference.
Predictive inference is an approach to statistical inference that emphasizes thepredictionof future observations based on past observations.
Initially, predictive inference was based onobservableparameters and it was the main purpose of studyingprobability,[citation needed]but it fell out of favor in the 20th century due to a new parametric approach pioneered byBruno de Finetti. The approach modeled phenomena as a physical system observed with error (e.g.,celestial mechanics). De Finetti's idea ofexchangeability—that future observations should behave like past observations—came to the attention of the English-speaking world with the 1974 translation from French of his 1937 paper,[63]and has since been propounded by such statisticians asSeymour Geisser.[64]
|
https://en.wikipedia.org/wiki/Predictive_inference
|
Predictive policingis the usage of mathematics,predictive analytics, and other analytical techniques inlaw enforcementto identify potential criminal activity.[1][2][3]A report published by theRAND Corporationidentified four general categories predictive policing methods fall into: methods for predicting crimes, methods for predicting offenders, methods for predicting perpetrators' identities, and methods for predicting victims of crime.[4]
Predictive policing uses data on the times, locations and nature of past crimes to provide insight to police strategists concerning where, and at what times,police patrolsshould patrol, or maintain a presence, in order to make the best use of resources or to have the greatest chance of deterring or preventing future crimes. This type of policing detects signals and patterns in crime reports to anticipate if crime will spike, when a shooting may occur, where the next car will be broken into, and who the next crimevictimwill be.Algorithmsare produced by taking into account these factors, which consist of large amounts of data that can be analyzed.[5][6]The use of algorithms creates a more effective approach that speeds up the process of predictive policing, since it can quickly factor in different variables to produce an automated outcome. The predictions an algorithm generates should be coupled with a prevention strategy, which typically sends an officer to the predicted time and place of the crime.[7]Automated predictive policing supplies a more accurate and efficient process when looking at future crimes because there is data to back up decisions, rather than just the instincts of police officers. By using information from predictive policing, police are able to anticipate the concerns of communities, allocate resources wisely to times and places, and prevent victimization.[8]
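A heavily simplified sketch of the place-based part of this idea: past incidents (the coordinates here are invented) are binned into grid cells, and the cells with the highest historical counts are flagged as candidate patrol locations. Real systems weight recency, crime type and many other variables; this shows only the counting core.

```python
from collections import Counter

# Hypothetical past incidents as (x, y) coordinates within a city area.
incidents = [(1.2, 3.4), (1.3, 3.6), (1.1, 3.5), (4.8, 0.9),
             (4.9, 1.1), (7.2, 7.9), (1.4, 3.3), (4.7, 1.0)]

CELL = 1.0   # grid cell size, in the same (arbitrary) units as the coordinates

# Count how many past incidents fall into each grid cell.
counts = Counter((int(x // CELL), int(y // CELL)) for x, y in incidents)

print("Top candidate hot spots (cell, past incident count):")
for cell, count in counts.most_common(3):
    print(cell, count)
```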
Police may also use data accumulated on shootings and the sounds of gunfire to identify locations of shootings. The city of Chicago uses data blended from population mapping and crime statistics to improve monitoring and identify patterns.[9]
Rather than only predicting crime, predictive policing can be used to prevent it. The "AI Ethics of Care" approach recognizes that some locations have greater crime rates as a result of negative environmental conditions. Artificial intelligence can be used to minimize crime by addressing the identified demands.[10]
At the conclusion of intense combat operations in April 2003, Improvised Explosive Devices (IEDs) were dispersed throughout Iraq’s streets. These devices were deployed to monitor and counteract U.S. military activities using predictive policing tactics. However, the extensive areas covered by these IEDs made it impractical for Iraqi forces to respond to every American presence within the region. This challenge led to the concept of Actionable Hot Spots—zones experiencing high levels of activity yet too vast for effective control. This situation presented difficulties for the Iraqi military in selecting optimal locations for surveillance, sniper placements, and route patrols along areas monitored by IEDs.[citation needed]
The roots of predictive policing can be traced to the policy approach of social governance, which Chinese Communist Party leader Xi Jinping announced at a security conference in 2016 as the Chinese regime's agenda to promote a harmonious and prosperous country through an extensive use of information systems.[11] A common instance of social governance is the development of the social credit system, in which big data is used to digitize identities and quantify trustworthiness. There is no other comparably comprehensive and institutionalized system of citizen assessment in the West.[12]
The increase in collecting and assessing aggregate public and private information by China's police force to analyze past crime and forecast future criminal activity is part of the government's mission to promote social stability by converting intelligence-led policing (i.e., effectively using information) into the informatization (i.e., the use of information technologies) of policing.[11] The increased employment of big data through the police geographical information system (PGIS) is part of China's promise to better coordinate information resources across departments and regions in order to transform the analysis of past crime patterns and trends into automated prevention and suppression of crime.[13][14] PGIS was first introduced in the 1970s and was originally used by internal government management and research institutions for city surveying and planning. Since the mid-1990s, PGIS has been introduced into the Chinese public security industry to empower law enforcement by promoting police collaboration and resource sharing.[13][15] The current applications of PGIS are still confined to public map services, spatial queries, and hot spot mapping. Its application to crime trajectory analysis and prediction is still in the exploratory stage; however, the promotion of the informatization of policing has encouraged cloud-based upgrades to PGIS design, the fusion of multi-source spatiotemporal data, and developments in police spatiotemporal big data analysis and visualization.[16]
Although there is no nationwide police prediction program in China, local projects were undertaken between 2015 and 2018 in regions such as Zhejiang, Guangdong, Suzhou, and Xinjiang that are either advertised as, or are building blocks toward, a predictive policing system.[11][17]
Zhejiang and Guangdong have established prediction and prevention of telecommunication fraud through the real-time collection and surveillance of suspicious online or telecommunication activities, and through collaboration with private companies such as the Alibaba Group to identify potential suspects.[18] The predictive policing and crime prevention operation involves forewarning specific victims: the Zhongshan police force made 9,120 warning calls in 2018 and directly intercepted over 13,000 telephone calls and over 30,000 text messages in 2017.[11]
Substance-related crime is also investigated in Guangdong, specifically by the Zhongshan police force, which in 2017 made Zhongshan the first city to use wastewater analysis and data models that included water and electricity usage to locate hotspots for drug crime. This method led to the arrest of 341 suspects in 45 different criminal investigations by 2019.[19]
In China, the Suzhou Police Bureau has used predictive policing since 2013, and during 2015–2018 several other Chinese cities adopted it as well.[20] China has used predictive policing to identify and target people to be sent to Xinjiang internment camps.[21][22]
The Integrated Joint Operations Platform (IJOP) predictive policing system is operated by the Central Political and Legal Affairs Commission.[23]
In Europe there has been significant pushback against predictive policing and the broader use of artificial intelligence in policing on both a national and European Union level.[24]
The Danish POL-INTEL project has been operational since 2017 and is based on the Gotham system from Palantir Technologies. The Gotham system has also been used by German state police and Europol.[24]
Predictive policing has also been used in the Netherlands.[24]
In the United States, the practice of predictive policing has been implemented by police departments in several states such as California, Washington, South Carolina, Alabama, Arizona, Tennessee, New York, and Illinois.[25][26]
In New York, the NYPD has begun implementing a new crime tracking program called Patternizr. The goal of Patternizr is to help police officers identify commonalities in crimes committed by the same offender or group of offenders. With the help of Patternizr, officers can save time and work more efficiently, as the program generates possible "patterns" of related crimes. The officer then manually searches through the possible patterns to see whether the generated crimes are related to the current suspect. If the crimes do match, the officer launches a deeper investigation into the pattern crimes.[27]
In India, various state police forces have adopted AI technologies to enhance their law enforcement capabilities. For instance, the Maharashtra Police have launched Maharashtra Advanced Research and Vigilance for Enhanced Law Enforcement (MARVEL), the country's first state-level police AI system, to improve crime prediction and detection.[28] Additionally, the Uttar Pradesh Police utilize the AI-powered mobile application 'Trinetra' for facial recognition and criminal tracking.[29]
Predictive policing faces issues that affect its effectiveness. Obioha mentions several concerns raised about predictive policing. High costs limit more widespread adoption, especially among poorer countries. Another issue is that predictive policing relies on human input to determine patterns: flawed data can lead to biased and possibly racist results.[30] The technology cannot predict crime; it can only weaponize proximity to policing. Though the data is claimed to be unbiased, communities of color and low-income communities are the most heavily targeted.[31] It should also be noted that not all crime is reported, making the data faulty[further explanation needed] and inaccurate.[citation needed]
In 2020, following protests against police brutality, a group of mathematicians published a letter in Notices of the American Mathematical Society urging colleagues to stop work on predictive policing. Over 1,500 other mathematicians joined the proposed boycott.[32]
Some applications of predictive policing have targeted minority neighborhoods and lack feedback loops.[33]
Cities throughout the United States are enacting legislation to restrict the use of predictive policing technologies and other “invasive” intelligence-gathering techniques within their jurisdictions.
After introducing predictive policing as a crime reduction strategy based on the results of an algorithm created with the software PredPol, the city of Santa Cruz, California experienced a decline in burglaries of almost 20% in the first six months the program was in place. Despite this, in late June 2020, in the aftermath of the murder of George Floyd in Minneapolis, Minnesota and a growing call for increased accountability among police departments, the Santa Cruz City Council voted in favor of a complete ban on the use of predictive policing technology.[34]
Accompanying the ban on predictive policing was a similar prohibition of facial recognition technology. Facial recognition technology has been criticized for its reduced accuracy on darker skin tones, which can contribute to cases of mistaken identity and, potentially, wrongful convictions.[35]
In 2019, Michael Oliver, of Detroit, Michigan, was wrongfully accused of larceny when the DataWorks Plus software registered his face as a "match" to the suspect identified in a video taken by the victim of the alleged crime. Oliver spent months going to court arguing for his innocence, and once the judge supervising the case viewed the video footage of the crime, it was clear that Oliver was not the perpetrator. In fact, the perpetrator and Oliver did not resemble each other at all, except that both are African-American, a group for which the facial recognition technology is more likely to make an identification error.[35]
With regard to predictive policing technology, the mayor of Santa Cruz, Justin Cummings, is quoted as saying, "this is something that targets people who are like me," referencing the patterns of racial bias and discrimination that predictive policing can continue rather than stop.[36]
For example, as Dorothy Roberts explains in her academic journal article "Digitizing the Carceral State", the data entered into predictive policing algorithms to predict where crimes will occur or who is likely to commit criminal activity tends to contain information that has been impacted by racism. The inclusion of arrest or incarceration history, neighborhood of residence, level of education, membership in gangs or organized crime groups, and 911 call records, among other features, can produce algorithms that suggest the over-policing of minority or low-income communities.[35]
|
https://en.wikipedia.org/wiki/Predictive_policing
|
In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support-vector machine (SVM). These methods involve using linear classifiers to solve nonlinear problems.[1] The general task of pattern analysis is to find and study general types of relations (for example clusters, rankings, principal components, correlations, classifications) in datasets. For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed into feature vector representations via a user-specified feature map; in contrast, kernel methods require only a user-specified kernel, i.e., a similarity function over all pairs of data points computed using inner products. The feature map in kernel machines may be infinite-dimensional, but by the representer theorem computations require only a finite-dimensional matrix of kernel evaluations over the training data. Kernel machines are slow to compute for datasets larger than a couple of thousand examples without parallel processing.
Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, implicit feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the "kernel trick".[2] Kernel functions have been introduced for sequence data, graphs, text, images, as well as vectors.
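A minimal NumPy sketch of the cost argument above, using the homogeneous polynomial kernel k(x, x') = (x·x')^2 (the kernel choice, dimensions, and random data are assumptions for illustration): evaluating the kernel costs one d-dimensional dot product, whereas the equivalent explicit feature map enumerates all d^2 degree-2 monomials.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100
x, y = rng.standard_normal(d), rng.standard_normal(d)

# Kernel trick: one O(d) dot product, no explicit feature space needed.
k_xy = x.dot(y) ** 2

# Explicit alternative: map to all d*d degree-2 monomials (O(d^2) coordinates),
# then take an ordinary inner product in that space.
def phi(v):
    return np.outer(v, v).ravel()

explicit = phi(x).dot(phi(y))

print(np.isclose(k_xy, explicit))  # True: same value, very different cost
print(d, phi(x).size)              # 100 input dimensions vs. 10000 feature dimensions
```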
Algorithms capable of operating with kernels include the kernel perceptron, support-vector machines (SVM), Gaussian processes, principal components analysis (PCA), canonical correlation analysis, ridge regression, spectral clustering, linear adaptive filters and many others.
Most kernel algorithms are based on convex optimization or eigenproblems and are statistically well-founded. Typically, their statistical properties are analyzed using statistical learning theory (for example, using Rademacher complexity).
Kernel methods can be thought of as instance-based learners: rather than learning some fixed set of parameters corresponding to the features of their inputs, they instead "remember" the $i$-th training example $(\mathbf{x}_i, y_i)$ and learn for it a corresponding weight $w_i$. Prediction for unlabeled inputs, i.e., those not in the training set, is treated by the application of a similarity function $k$, called a kernel, between the unlabeled input $\mathbf{x}'$ and each of the training inputs $\mathbf{x}_i$. For instance, a kernelized binary classifier typically computes a weighted sum of similarities $$\hat{y} = \operatorname{sgn} \sum_{i=1}^{n} w_i y_i k(\mathbf{x}_i, \mathbf{x}'),$$ where $\hat{y} \in \{-1, +1\}$ is the predicted label, $k$ is the kernel function, the $(\mathbf{x}_i, y_i)$ for $i = 1, \dots, n$ are the labeled training examples with $y_i \in \{-1, +1\}$, and the $w_i$ are weights learned during training.
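The prediction rule above can be sketched directly in NumPy. This is an illustrative toy rather than a trained model: the Gaussian (RBF) kernel, the bandwidth gamma, the tiny training set, and the uniform weights are all assumptions; in a real SVM the weights w_i would come from solving the training optimization problem.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian (RBF) kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

# Toy training set: two points of class +1 near the origin, two of class -1 far away.
X_train = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.2, 2.9]])
y_train = np.array([+1, +1, -1, -1])
w = np.ones(len(X_train))  # placeholder weights; a trained SVM would learn these

def predict(x_new):
    # Weighted sum of similarities to every training example, then take the sign.
    score = sum(w_i * y_i * rbf_kernel(x_i, x_new)
                for x_i, y_i, w_i in zip(X_train, y_train, w))
    return int(np.sign(score))

print(predict(np.array([0.1, 0.0])))  # +1: most similar to the positive examples
print(predict(np.array([3.1, 3.0])))  # -1: most similar to the negative examples
```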
Kernel classifiers were described as early as the 1960s, with the invention of the kernel perceptron.[3] They rose to great prominence with the popularity of the support-vector machine (SVM) in the 1990s, when the SVM was found to be competitive with neural networks on tasks such as handwriting recognition.
The kernel trick avoids the explicit mapping that is needed to get linear learning algorithms to learn a nonlinear function or decision boundary. For all $\mathbf{x}$ and $\mathbf{x}'$ in the input space $\mathcal{X}$, certain functions $k(\mathbf{x}, \mathbf{x}')$ can be expressed as an inner product in another space $\mathcal{V}$. The function $k \colon \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is often referred to as a kernel or a kernel function. The word "kernel" is used in mathematics to denote a weighting function for a weighted sum or integral.
Certain problems in machine learning have more structure than an arbitrary weighting function $k$. The computation is made much simpler if the kernel can be written in the form of a "feature map" $\varphi \colon \mathcal{X} \to \mathcal{V}$ which satisfies $$k(\mathbf{x}, \mathbf{x}') = \langle \varphi(\mathbf{x}), \varphi(\mathbf{x}') \rangle_{\mathcal{V}}.$$ The key restriction is that $\langle \cdot, \cdot \rangle_{\mathcal{V}}$ must be a proper inner product. On the other hand, an explicit representation for $\varphi$ is not necessary, as long as $\mathcal{V}$ is an inner product space. The alternative follows from Mercer's theorem: an implicitly defined function $\varphi$ exists whenever the space $\mathcal{X}$ can be equipped with a suitable measure ensuring the function $k$ satisfies Mercer's condition.
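As a concrete, hedged illustration of the identity $k(\mathbf{x}, \mathbf{x}') = \langle \varphi(\mathbf{x}), \varphi(\mathbf{x}') \rangle$ (the kernel and feature map below are a standard textbook choice, assumed here for the example): for the inhomogeneous quadratic kernel $k(\mathbf{x}, \mathbf{x}') = (\mathbf{x} \cdot \mathbf{x}' + 1)^2$ on $\mathbb{R}^2$, one valid feature map into $\mathbb{R}^6$ can be written out explicitly and checked numerically.

```python
import numpy as np

def k(x, y):
    """Inhomogeneous quadratic kernel on R^2."""
    return (x.dot(y) + 1.0) ** 2

def phi(x):
    """One explicit feature map into R^6 that realizes k as an inner product."""
    x1, x2 = x
    s = np.sqrt(2.0)
    return np.array([x1 * x1, x2 * x2, s * x1 * x2, s * x1, s * x2, 1.0])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

print(k(x, y))                                  # 4.0: (1*3 + 2*(-1) + 1)^2
print(phi(x).dot(phi(y)))                       # 4.0: same value via the explicit map
print(np.isclose(k(x, y), phi(x).dot(phi(y))))  # True
```

The point of the kernel trick is that the left-hand computation never needs the map phi at all; writing phi out is only feasible for low-degree polynomial kernels like this one.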
Mercer's theorem is similar to a generalization of the result from linear algebra that associates an inner product to any positive-definite matrix. In fact, Mercer's condition can be reduced to this simpler case. If we choose as our measure the counting measure $\mu(T) = |T|$ for all $T \subset X$, which counts the number of points inside the set $T$, then the integral in Mercer's theorem reduces to a summation $$\sum_{i=1}^{n} \sum_{j=1}^{n} k(\mathbf{x}_i, \mathbf{x}_j) c_i c_j \geq 0.$$ If this summation holds for all finite sequences of points $(\mathbf{x}_1, \dotsc, \mathbf{x}_n)$ in $\mathcal{X}$ and all choices of $n$ real-valued coefficients $(c_1, \dots, c_n)$ (cf. positive definite kernel), then the function $k$ satisfies Mercer's condition.
Some algorithms that depend on arbitrary relationships in the native space $\mathcal{X}$ would, in fact, have a linear interpretation in a different setting: the range space of $\varphi$. The linear interpretation gives us insight about the algorithm. Furthermore, there is often no need to compute $\varphi$ directly during computation, as is the case with support-vector machines. Some cite this running-time shortcut as the primary benefit. Researchers also use it to justify the meanings and properties of existing algorithms.
Theoretically, a Gram matrix $\mathbf{K} \in \mathbb{R}^{n \times n}$ with respect to $\{\mathbf{x}_1, \dotsc, \mathbf{x}_n\}$ (sometimes also called a "kernel matrix"[4]), where $K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j)$, must be positive semi-definite (PSD).[5] Empirically, for machine learning heuristics, choices of a function $k$ that do not satisfy Mercer's condition may still perform reasonably if $k$ at least approximates the intuitive idea of similarity.[6] Regardless of whether $k$ is a Mercer kernel, $k$ may still be referred to as a "kernel".
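The finite-sample version of this requirement is easy to check numerically. The sketch below (data, kernel, and bandwidth are assumptions for illustration) builds the Gram matrix of a Gaussian kernel on a handful of random points and confirms that its eigenvalues are non-negative, i.e., that the matrix is positive semi-definite up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((8, 3))  # 8 points in R^3, chosen arbitrarily
gamma = 0.5                      # assumed RBF bandwidth

# Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-gamma * sq_dists)

# A valid (Mercer) kernel yields a symmetric PSD Gram matrix:
eigenvalues = np.linalg.eigvalsh(K)   # eigvalsh assumes a symmetric matrix
print(np.allclose(K, K.T))            # True
print(eigenvalues.min() >= -1e-10)    # True: non-negative up to round-off
```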
If the kernel function $k$ is also a covariance function as used in Gaussian processes, then the Gram matrix $\mathbf{K}$ can also be called a covariance matrix.[7]
Application areas of kernel methods are diverse and include geostatistics,[8] kriging, inverse distance weighting, 3D reconstruction, bioinformatics, cheminformatics, information extraction and handwriting recognition.
|
https://en.wikipedia.org/wiki/Kernel_trick
|